Part 2. Brisbane Airport
Dr Bill Johnston
Using paired and un-paired t-tests to compare long timeseries of data observed in parallel by instruments housed in the same or different Stevenson screens at one site, or in screens located at different sites, is problematic. Part of the problem is that both tests assume the air being monitored is the control variable: that the air inside the screen is spatially and temporally homogeneous, which for a changeable, turbulent medium is not the case.
Irrespective of whether data are measured on the same day, paired t-tests require the same parcels of air to be monitored by both instruments 100% of the time. As instruments co-located in the same Stevenson screen occupy different positions, their data cannot be considered ‘paired’ in the sense required by the test. Likewise for instruments in separate screens, and especially if temperature at one site is compared with daily values measured some distance away at another.
As paired t-tests ascribe all variation to the subjects (the instruments), and none to the response variable (the air), test outcomes are seriously biased compared with un-paired tests, where variation is ascribed more generally to both the subjects and the response.
The paired t-test compares the mean of the differences between subjects with zero, whereas the un-paired test compares subject means with each other. If a test finds a low probability (P) that the mean difference is zero, or that subject means are the same, typically less than 0.05 (P < 0.05, 5% or 1 in 20), it can be concluded that subjects differ in their response (i.e., the difference is significant). Should the probability be less than 0.01 (P < 0.01, 1% or 1 in 100) the between-subject difference is highly significant. However, significance itself does not ensure that the size of the difference is meaningful in the overall scheme of things.
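As a hedged illustration of how the two tests differ, the following Python sketch applies `scipy.stats.ttest_rel` (paired) and `scipy.stats.ttest_ind` (un-paired) to two hypothetical instrument series that share the same day-to-day "air" signal. The data are simulated, not the Brisbane or Townsville records:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated daily Tmax for two co-located instruments:
# a shared "air" signal plus independent instrument noise.
air = rng.normal(25.0, 3.0, 365)
inst1 = air + rng.normal(0.0, 0.3, 365)
inst2 = air + 0.1 + rng.normal(0.0, 0.3, 365)  # 0.1 °C offset

# Paired test: compares the mean of the differences with zero.
t_p, p_paired = stats.ttest_rel(inst1, inst2)

# Un-paired test: compares the two group means with each other.
t_u, p_unpaired = stats.ttest_ind(inst1, inst2)

print(f"paired   P = {p_paired:.4g}")
print(f"unpaired P = {p_unpaired:.4g}")
```

Because the shared day-to-day variation cancels in the differences, the paired test returns a far smaller P-value for the same data, which is exactly the sensitivity gap the text describes.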
All statistical tests are based on underlying assumptions that ensure results are trustworthy and unbiased. The main assumption is that the data are independent: the differences in the case of paired tests, and the data sequenced within treatment groups for unpaired tests, meaning that data for one time are not serially correlated with data for other times. As timeseries embed seasonal cycles and in some cases trends, steps must be taken to identify and mitigate autocorrelation before undertaking either test.
A second assumption, which is less important for large datasets, is that data are distributed within a bell-shaped normal-distribution envelope, with most observations clustered around the mean and the remainder diminishing in number towards the tails.
Finally, a problem unique to large datasets is that the denominator in the t-test equation becomes vanishingly small as the number of daily samples increases. Consequently, the t‑statistic becomes extremely large, together with the likelihood of finding significant differences that are too small to be meaningful. In statistical parlance this is a Type I error: the fallacy of declaring significance for differences that do not matter. Such differences could be due to single aberrations or outliers, for instance.
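This sample-size effect can be demonstrated with a simulation. The sketch below (hypothetical numbers, not station data) holds a trifling 0.1 °C offset fixed while the number of data-pairs grows:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_diff = 0.1  # trivially small offset (°C)
sd = 2.5         # day-to-day spread (°C)

pvals = {}
for n in (100, 1000, 10000, 100000):
    a = rng.normal(20.0, sd, n)
    b = rng.normal(20.0 + true_diff, sd, n)
    t, p = stats.ttest_ind(a, b)
    pvals[n] = p
    print(f"n={n:>6}  t={t:7.2f}  P={p:.4g}")
# The offset never changes, yet as n grows the pooled standard
# error shrinks as 1/sqrt(n), so the same trifling difference
# eventually tests as "highly significant".
```

The difference is identical in every run; only the sample size changes, and with it the verdict of the test.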
A protocol
Using a parallel dataset related to a site move at Townsville airport in December 1994, a protocol has been developed to help avoid pitfalls in applying t-tests to timeseries of parallel data.
At the outset, an estimate of effect size, determined as the raw-data difference divided by the standard deviation (Cohen's d), assesses whether the difference between instruments/sites is likely to be meaningful. An Excel workbook was provided with step-by-step instructions for calculating day-of-year (1-366) averages that define the annual cycle, constructing a look-up table and deducting the respective values from the data, thereby producing de-seasoned anomalies. Anomalies are differenced as an additional variable (Site2 minus Site1, Site1 being the control).
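The same day-of-year climatology steps can be sketched in Python (the original used an Excel workbook; the data here are hypothetical):

```python
import numpy as np
import pandas as pd

# Hypothetical parallel daily Tmax series indexed by date.
dates = pd.date_range("1990-01-01", "1994-12-31", freq="D")
rng = np.random.default_rng(0)
season = 10 * np.sin(2 * np.pi * (dates.dayofyear - 30) / 365.25)
df = pd.DataFrame({
    "site1": 25.0 + season + rng.normal(0, 2, len(dates)),
    "site2": 25.2 + season + rng.normal(0, 2, len(dates)),
}, index=dates)

# Day-of-year (1-366) averages define the annual cycle: the look-up table.
doy = df.index.dayofyear
lookup = df.groupby(doy).mean()

# Deduct the respective day-of-year average: de-seasoned anomalies.
anom = df - lookup.loc[doy].to_numpy()

# Difference series: Site2 minus Site1 (Site1 is the control).
anom["diff"] = anom["site2"] - anom["site1"]
print(anom.describe().round(2))
```

By construction the anomalies average zero over the record, so the seasonal cycle no longer drives serial correlation in the series being tested.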
Having prepared the data, graphical analysis of their properties, including autocorrelation function (ACF) plots, daily data distributions, probability density function (PDF) plots, and inspection of anomaly differences, assists in determining which data to compare (raw data or anomaly data). The dataset that most closely matches the underlying assumptions of independence and normality should be chosen; where autocorrelation is unavoidable, randomised data subsets offer a way forward. (Randomisation may be done in Excel and subsets of increasing size used in the analysis.)
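A minimal sketch of both ideas, assuming a strongly autocorrelated AR(1)-type series as a stand-in for raw daily Tmax, shows why random subsets help: sampling days in random order breaks the serial dependence that invalidates the t-test.

```python
import numpy as np

def acf(x, max_lag=10):
    """Sample autocorrelation of a 1-D series at lags 1..max_lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom
                     for k in range(1, max_lag + 1)])

rng = np.random.default_rng(3)
# An AR(1)-like series: strongly autocorrelated, like raw daily data.
n = 2000
e = rng.normal(0, 1, n)
x = np.empty(n)
x[0] = e[0]
for i in range(1, n):
    x[i] = 0.8 * x[i - 1] + e[i]

print("lag-1 ACF, full series  :", round(acf(x, 1)[0], 2))

# Randomised subset: drawing days at random scrambles the serial
# order, so successive values are effectively independent.
subset = rng.choice(x, size=500, replace=False)
print("lag-1 ACF, random subset:", round(acf(subset, 1)[0], 2))
```

The full series shows a lag-1 autocorrelation near 0.8, while the randomised subset's autocorrelation collapses towards zero, which is the property the protocol relies on.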
Most analyses can be undertaken using the freely available statistical application PAST from the University of Oslo (https://www.nhm.uio.no/english/research/resources/past/). Specific stages of the analysis have been referenced to pages in the PAST manual.
The Brisbane Study
The Brisbane study replicates the previous Townsville study, with the aim of showing that the protocols are robust. While the Townsville study compared thermometer and automatic weather station (AWS) maxima measured in 60-litre screens located 172 m apart, the Brisbane study compared Tmax for two AWSs, each with a 60-litre screen, 3.2 km apart, increasing the likelihood that site-related differences would be significant.
While the effect size for Brisbane was triflingly small (Cohen's d = 0.07), and the difference between data-pairs stabilised at about 940 sub-samples, a significant difference between sites of 0.25 °C was found when the number of random sample-pairs exceeded about 1,600. Illustrating the statistical fallacy of excessive sample numbers, differences became significant because the denominator in the test equation (the pooled standard error) declined as sample size increased, not because the difference widened. PDF plots suggested it was not until the effect size exceeded 0.2 that simulated distributions showed a clear separation, such that the difference between Series1 and Series2 of 0.62 °C could be regarded as both significant and meaningful in the overall scheme of things.
Importantly, the trade-off between significance and effect size is central to avoiding the trap of drawing conclusions based on statistical tests alone.
Dr Bill Johnston
4 June 2023
Two important links – find out more
First Link: The page you have just read is the basic cover story for the full paper. If you are stimulated to find out more, please link through to the full paper – a scientific Report in
downloadable pdf format. This Report contains far more detail including photographs, diagrams, graphs and data and will make compelling reading for those truly interested in the issue.
Click here to access a full pdf report containing detailed analysis and graphs
Second Link: This link will take you to a downloadable Excel spreadsheet containing the large volume of data used in researching this paper. The data support the Full Report.
Click here to download a full Excel data pack containing the data used in this research
Fake-news, flash-bangs
and why statistical tests matter
Dr Bill Johnston
Main Points
Comparing instruments using paired t-tests, versus unpaired tests, on daily data is inappropriate. Failing to verify assumptions, particularly that data are independent (not autocorrelated), and not considering the effect of sample size on significance levels creates the illusion that differences between instruments are significant or highly significant when they are not. Using the wrong test and naïvely or bullishly disregarding test assumptions plays to tribalism, not trust.
Investigators must justify the tests they use, validate that assumptions are not violated, show that differences are meaningful, and thereby demonstrate that their conclusions are sound.
Paired or repeated-measures t-tests are commonly used to determine the effect of an intervention by observing the same subjects before and after (e.g., 10 subjects before and after a treatment). As
within-subjects variation is controlled, differences are attributable to the treatment. In contrast, un-paired or independent t‑tests compare the means of two groups of subjects, each having received
one of two interventions (10 subjects that received one or no treatment vs. 10 that were treated). As variation between subjects contributes variation to the response, un-paired t-tests are less
sensitive than paired tests.
Extended to a timeseries of sequential observations by different instruments (Figure 1), the paired t-test evaluates the probability that the mean of the difference between data-pairs (calculated as
the target series minus the control) is zero. If the t‑statistic indicates the mean of the differences is not zero, the alternative hypothesis that the two instruments are different prevails. In this
usage, significant means there is a low likelihood, typically less than 0.05, 5% or one in 20, that the mean of the difference equals zero. Should the P-value be less than 0.01, 0.001, or smaller,
the difference is regarded as highly significant. Importantly, significant and highly significant are statistical terms that reflect the probability of an effect, not whether the size of an effect is meaningful.
To reiterate, paired tests compare the mean of the difference between instruments with zero, while un-paired t‑tests evaluate whether Tmax measured by each instrument is the same.
While this may sound pedantic, the two tests applied to the same data produce strikingly different outcomes, with the paired test more likely to show significance. Close attention to detail and applying the right test are therefore vitally important.
Figure 1. Inside the current 60-litre Stevenson screen at Townsville airport. At the front are dry- and wet-bulb thermometers; behind are maximum (mercury) and minimum (alcohol) thermometers, held horizontally to minimise “wind-shake”, which can cause them to re-set; and at the rear, which faces north, are dry- and wet-bulb AWS sensors. Cooled by a small patch of muslin tied by a cotton wick that dips into the water reservoir, the wet-bulb depression is used to estimate relative humidity and dew point temperature. (BoM photograph).
Thermometers vs. PRT probes
Comparisons of thermometers and PRT probes co-located in the same screen, or in different screens, rely on the air being measured each day as the test or control variable, thereby presuming that differences are attributable to the instruments. However, visualise conditions in a laboratory versus those in a screen, where the response medium is constantly circulating and changing throughout the day at different rates. While differences in the lab are strictly attributable, in a screen a portion of the instrument response is due to the air being monitored. As shown in Figure 1, instruments that are not accessed each day are more conveniently located behind those that are, thereby resulting in spatial bias. The paired t-test, which apportions all variation to the instruments, is the wrong test under the circumstances.
Test assumptions are important
The validity of statistical tests depends on assumptions, the most important of which for paired t-tests is that differences at one time are not influenced by differences at previous times. Similarly for unpaired tests, where observations within groups must not be correlated with those that came before. Although data should ideally be distributed within a bell-shaped normal-distribution envelope, normality is less important if data are random and the number of paired observations exceeds about 60. Serial dependence or autocorrelation reduces the denominator in the t-test equation, which increases the likelihood of significant outcomes (false positives) and fatally compromises the test.
As autocorrelation is primarily caused by seasonal cycles, the appropriate adjustment for daily timeseries is to deduct day-of-year averages from the respective day-of-year data and conduct the right test on the seasonally adjusted anomalies.
Covariables on which the response variable depends are also problematic. These include heating of the landscape over previous days to weeks, and the effects of rainfall and evaporation that may linger for months and seasons. Removing cycles, understanding the data, and using sampling strategies and P-level adjustments so outcomes are not biased may offer solutions.
Significance of differences vs. meaningful differences
A problem with using t-tests on long timeseries is that as the number of data-pairs increases, the denominator in the t-test equation, which measures variation in the data, becomes increasingly small. Thus, the ratio of signal (the instrument difference) to noise (the standard error, pooled in the case of un-paired tests) increases. The t‑value consequently becomes extremely large, the P-level declines to the millionth decimal place, and the test finds trifling differences to be highly significant when they are not meaningful. So the significance level needs to be considered relative to the size of the effect.
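The arithmetic behind this is simple. For an un-paired test of two equal-sized groups, the pooled standard error of the mean difference is roughly s·√(2/n), so with a fixed difference d and spread s the t-value grows in proportion to √n. A short sketch with hypothetical figures:

```python
import math

d = 0.1   # fixed instrument difference (°C), hypothetical
s = 2.5   # daily standard deviation (°C), hypothetical

t_values = {}
for n in (100, 1000, 10000, 100000):
    se = s * math.sqrt(2.0 / n)   # pooled standard error of the mean difference
    t_values[n] = d / se
    print(f"n={n:>6}  SE={se:.4f}  t={t_values[n]:6.2f}")
# d never changes, yet t grows as sqrt(n): a 1000-fold
# increase in n inflates t by sqrt(1000), about 31.6.
```

The signal (d) is constant; only the noise estimate shrinks, which is why significance alone says nothing about whether the difference matters.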
For instance, a highly significant difference that is less than the uncertainty of comparing two observations (±0.6 °C) could be an aberration caused by averaging beyond the precision of the experiment (i.e., averaging imprecise data to two, three or more decimal places).
The ratio of the difference to the average variation in the data [i.e., (PRT[average] minus thermometer[average]) divided by the average standard deviation], which is known as Cohen's d, or the effect size, also provides a first-cut empirical measure that can be calculated from data summaries to guide subsequent analysis.
Cohen's d indicates whether a difference is likely to be negligible (less than 0.2 SD units), small (>0.2), medium (>0.5) or large (>0.8), which identifies traps to avoid, particularly the trap of unduly weighting significance levels that are unimportant in the overall scheme of things.
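Cohen's d is straightforward to compute. The sketch below uses the pooled standard deviation, one common convention for the "average standard deviation" in the definition above, on hypothetical PRT and thermometer series:

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d: difference of means over the pooled standard deviation."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n1, n2 = len(a), len(b)
    pooled_var = ((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) \
                 / (n1 + n2 - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(7)
prt = rng.normal(25.12, 3.0, 5000)     # hypothetical PRT-probe Tmax
thermo = rng.normal(25.00, 3.0, 5000)  # hypothetical thermometer Tmax

d = cohens_d(prt, thermo)
print(f"Cohen's d = {d:.3f}")  # well below the 0.2 "small" threshold
```

A mean offset of about 0.12 °C against a 3 °C spread yields d of roughly 0.04, squarely in the "negligible" band even though a t-test on 5,000 pairs may well declare it significant.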
The Townsville case study
T-tests of raw data were invalidated by autocorrelation, while those involving seasonally adjusted anomalies showed no difference. Randomly sampled raw data showed that significance levels depended on sample size, not the difference itself, thus exposing the fallacy of using t‑tests on excessively large numbers of data-pairs. Irrespective of the tests, the effect size of 0.12 SD units calculated from the data summary is trivial and unimportant.
Using paired versus unpaired t-tests on timeseries of daily data inappropriately, not verifying assumptions, and not assessing the effect size of the outcome creates division and undermines trust. As illustrated by Townsville, it also distracts from real issues. Using the wrong test and naïvely or bullishly disregarding test assumptions plays to tribalism, not trust.
A protocol is advanced whereby autocorrelation and effect size are examined at the outset. It is imperative that this be carried out before undertaking t-tests of daily temperatures measured
in-parallel by different instruments.
The overarching fatal error is using invalid tests to create headlines and ruckus about thin-things that make no difference, while ignoring thick-things that would impact markedly on the global
warming debate.
Two important links – find out more
First Link: The page you have just read is the basic cover story for the full paper. If you are stimulated to find out more, please link through to the full paper – a scientific Report in
downloadable pdf format. This Report contains far more detail including photographs, diagrams, graphs and data and will make compelling reading for those truly interested in the issue.
Click here to download the full paper Statistical_Tests_TownsvilleCaseStudy_03June23
Second Link: This link will take you to a downloadable Excel spreadsheet containing a vast number of data points related to the Townsville Case Study and which were used in the analysis of the Full Report.
Click here to access the full data used in this post Statistical tests Townsville_DataPackage
Day/Night temperature spread fails to confirm IPCC prediction
By David Mason-Jones,
Research by Dr. Lindsay Moore
The work of citizen scientist, Dr. Lindsay Moore, has failed to confirm an important IPCC prediction about what will happen to the spread between maximum and minimum temperatures due to the Enhanced
Greenhouse Effect. The IPCC’s position is that this spread will narrow as a result of global warming.
Moore’s work focuses on the remote weather station at Giles in Western Australia, which is run by Australia’s peak weather monitoring body, the Bureau of Meteorology (BoM).
Why Giles?
Giles is the most remote weather station in mainland Australia, and its isolation in a desert makes it an ideal place to study the issue of temperature spread. It is virtually in the middle of the continent, far from influencing factors such as the Urban Heat Island effect, land-use changes, encroachment by shading vegetation, shading by buildings and so on, that can potentially corrupt the data. Humidity is usually low and stable, and it is far from the sea. In addition, as a sign of its importance in the BoM network, Giles is permanently staffed.
As stated, the IPCC hypothesis is that the ‘gap’ will become steadily smaller as the Enhanced Greenhouse Effect takes hold. As temperature rises the gap will narrow and this will result in an
increase in average temperature, so says the IPCC.
Moore’s research indicates that this is just not happening at this showcase BoM site. It may be happening elsewhere, and this needs to be tested in each case against the range of all data-corrupting
effects, but it is not happening at Giles.
Notes about the graphs. The top plot line shows the average Tmax for each year – that is, the average maximum daytime temperature. The middle plot shows the average Tmin for each year – that is, the
average minimum night time temperature.
The lower plot shows the result of the calculation Tmax-Tmin. In laypersons’ terms it is the result you get when you subtract the average yearly minimum temperature from the average yearly maximum
temperature. If the IPCC hypothesis is valid, then the lower plot line should be falling steadily through the years because, according to the IPCC, more carbon dioxide in the atmosphere should make
nights warmer. Hence, according to the IPCC’s hypothesis, the gap between Tmax and Tmin will become smaller – ie the gap will narrow. But the plot line does not show this.
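The daily Tmax-Tmin calculation described above is easy to reproduce. The sketch below uses synthetic data shaped like a long station file, not the actual Giles record, which is available from the BoM website:

```python
import numpy as np
import pandas as pd

# Hypothetical daily records shaped like a long station file.
dates = pd.date_range("1957-01-01", "2022-12-31", freq="D")
rng = np.random.default_rng(5)
doy = dates.dayofyear.to_numpy()
season = 8 * np.sin(2 * np.pi * (doy - 20) / 365.25)
df = pd.DataFrame({
    "tmax": 32 + season + rng.normal(0, 3, len(dates)),
    "tmin": 17 + season + rng.normal(0, 3, len(dates)),
}, index=dates)

# Spread computed day by day, then averaged by year.
df["spread"] = df["tmax"] - df["tmin"]
annual = df.groupby(df.index.year).mean()

# A narrowing spread would show as a negative slope through the years.
years = annual.index.to_numpy()
slope = np.polyfit(years, annual["spread"].to_numpy(), 1)[0]
print(f"trend in annual mean Tmax-Tmin: {slope:+.4f} °C/year")
```

With no trend built into the synthetic data the fitted slope sits near zero; run against a real station file, the sign and size of this slope are what the IPCC's narrowing prediction is tested on.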
The IPCC’s reasoning for its narrowing prediction is that global warming will be driven more by a general rise in minimum temperatures than by a general rise in maximums. This is not my assertion, nor is it Dr. Moore’s; it is the assertion of the IPCC and can be found in the IPCC’s AR4 Report.
Dr. Moore states, “In the AR4 report the IPCC claims that elevated CO2 levels trap heat, specifically the long-wave radiation escaping to space.
“As a result of this the IPCC states that ‘almost everywhere night time temperatures increase more than day time temperatures, that decreases in the number of frost days are projected over time, and that temperatures over land will be approximately twice the average global temperature rise’,” he says, citing pages 749-750 of the AR4 report.
So where can we go to find evidence that the IPCC assertion of a narrowing spread of Tmax-Tmin is either happening or not happening? Giles is a great start point. Can we use the BoM’s own publicly
available data to either confirm, or disprove, the narrowing prediction? The short answer is – Yes we can.
But, before we all get too excited about the result Dr. Moore has found, we need to recognise the limitation that this is just one site and, to the cautious scientific mind, may still be subject to
some bizarre influence that somehow skews the result away from the IPCC prediction. If anyone can suggest what viable contenders for ‘bizarre influences’ might be at Giles we would welcome them in
the comments section of this post.
The caution validly exercised by the rigorous scientific mind can be validly balanced by the fact that Giles is a premier, permanently staffed and credible site. The station was also set up with
great care, and for very specific scientific purposes, in the days of the Cold War as part of the British nuclear test program in Australia in the 1950s. It was also important in supplying timely and accurate meteorological data for rocket launches from the Woomera Rocket Range in South Australia during the development of the Blue Streak rocket as part of the British/Australian space program. This range extended almost all the way across Australia from the launching site at Woomera to the arid north-west of Western Australia.
In the early years there were several other weather monitoring stations along the track of the range. Such has been the care and precision of the operation of the station that Giles has the
characteristics of a controlled experiment.
Dr. Moore states, “Giles is arguably the best site in the World because of its position and the accuracy and reliability of its records which is a constant recognised problem in many sites. Data is
freely available on the BoM website for this site.”
With regard to the site validly having the nature of a controlled experiment, something about the method of analysis is also notable. Deriving the spread Tmax-Tmin on a daily basis neatly avoids the metadata issues that have plagued the reliability of data from other stations and sometimes skewed results from other supposedly reliable observation sites.
“I would argue that the only change in environmental conditions over the life of this station is the increase in CO2 from 280 to 410 ppm,” he says.
“In effect this is, I suggest, a controlled experiment with the only identifiable variable input being CO2 concentration,” he says.
The conclusion reached by Dr. Moore is that an examination of the historical records for this site by accessing the same data through the BoM website unequivocally shows NO significant reduction in
Tmax-Tmin. It also shows no rise in Tmin. Anyone can research this data on the Bureau of Meteorology website as it is not paywalled. It is truly sound data from a government authority for the
unrestricted attention of citizens and other researchers.
Dr. Moore concludes, “The logical interpretation of this observation is that, notwithstanding any other unidentified temperature influencing factor, the Enhanced Greenhouse Effect due to elevated CO2
had no discernible effect on temperature spread at this site. And, by inference, any other site.”
He further states, “On the basis of the observations I have made, there can be no climate emergency due to rising CO2 levels, whatever the cause of the rise. To claim so is just scaremongering.
“Any serious climate scientist must surely be aware of such basic facts yet, despite following the science for many years, I have never seen any discussion on this specific approach,” he says.
Finally, Dr. Moore poses a few questions and makes some pertinent points:
He asks, “Can anyone explain, given the current state of the science why there is no rise in minimum temperatures (raw) or, more importantly, no reduction in Tmax-Tmin spread, over the last 65 years
of records despite a significant rise in CO2 levels at Giles (280-410ppm) as projected by the IPCC in their AR4 report?” He notes that other published research indicates similar temperature profiles
in the whole of the central Australian region as well as similarly qualified North American and World sites.
Seeking further input, he asks, “Can anyone provide specific data that demonstrates that elevated CO2 levels actually do increase Tmin as predicted by the IPCC?” And further, “Has there been a
reduction in frost days in pristine sites as predicted by the IPCC?”
On a search for more information, he queries, “Can anyone explain why the CSIRO ‘State of the Climate’ statement (2020) says that Australian average temperatures have risen by more than 1 deg C since
1950 when, clearly, there has been no such rise at this pristine site?” With regard to this question, he notes that Giles should surely be the ‘go to’ reference site in the Australian Continent.
Again he tries to untangle the web of conflicting assertions by reputedly credible scientific organisations. He notes that, according to the IPCC rising average temperatures are attributable to rise
in minimum temperatures. For the CSIRO State of the Climate statement to be consistent with this, it would necessitate a rise of around 2 deg C in Tmin. But, at Giles, there was zero rise. He also
notes that, according to the IPCC, temperature rises over land should be double World average temperature rises. But he can see no data to support this.
Dr. Moore’s final conclusion: “Through examination of over 65 years of data at Giles it can be demonstrated that, in the absence of any other identifiable temperature forcing, the influence of the
Enhanced Greenhouse Effect at this site appears to be zero,” he says. “Not even a little bit!”
David Mason-Jones is a freelance journalist of many years’ experience. He publishes the website www.bomwatch.com.au
Dr. Lindsay Moore, BVSc. For approaching 50 years Lindsay Moore has operated a successful veterinary business in a rural setting in the Australian state of Victoria. His veterinary expertise is in the field of large animals and he is involved with sophisticated techniques such as embryo transfer. Over the years he has seen several major instances in veterinary science where something that was once accepted on apparently reasonable grounds, and adopted in the industry, was later proven to be incorrect. He is aware that this phenomenon is not confined to the field of veterinary science but happens in other scientific fields as well. The lesson he has taken from this is that science needs to advance with caution and that knee-jerk assumptions that ‘the science is settled’ can lead to significant mistakes. Having become aware of this problem in science he has become concerned about how science is conducted and how it is used. He has been interested in the global warming issue for around 20 years.
General link to Bureau of Meteorology website is www.bom.gov.au
Welcome to BomWatch.com.au a site dedicated to examining Australia’s Bureau of Meteorology, climate science and the climate of Australia. The site presents a straight-down-the-line understanding of
climate (and sea level) data and objective and dispassionate analysis of claims and counter-claims about trend and change.
BomWatch delves deeply into the way in which data has been collected, the equipment that has been used, the standard of site maintenance and the effect of site changes and moves.
Dr. Bill Johnston is a former senior research scientist with the NSW Department of Natural Resources (abolished in April 2007), which in previous guises included the Soil Conservation Service of NSW, the NSW Water Conservation and Irrigation Commission, the NSW Department of Planning and the Department of Lands. As with other NSW natural resource agencies that conducted research as a core activity, including NSW Agriculture and the National Parks and Wildlife Service, research services were mostly disbanded or dispersed to the university sector from about 2005.
BomWatch.com.au is dedicated to analysing climate statistics to the highest standard of statistical analysis
Daily weather observations undertaken by staff at the Soil Conservation Service’s six research centres at Wagga Wagga, Cowra, Wellington, Scone, Gunnedah and Inverell were reported to the Bureau of
Meteorology. Bill’s main fields of interest have been agronomy, soil science, hydrology (catchment processes) and descriptive climatology and he has maintained a keen interest in the history of
weather stations and climate data. Bill gained a Bachelor of Science in Agriculture from the University of New England in 1971, a Master of Science from Macquarie University in 1985 and a Doctor of Philosophy from the University of Western Sydney in 2002, and he is a member of the Australian Meteorological and Oceanographic Society (AMOS).
Bill receives no grants or financial support or incentives from any source.
BomWatch accesses raw data from archives in Australia so that the most authentic original source-information can be used in our analysis.
How BomWatch operates
BomWatch is not intended to be a blog per se, but rather a repository for analyses and downloadable reports relating to specific datasets or issues, which will be posted irregularly so they are
available in the public domain and can be referenced to the site. Issues of clarification, suggestions or additional insights will be welcome.
The areas of greatest concern are:
• Questions about data quality and data homogenisation (is data fit for purpose?)
• Issues related to metadata (is metadata accurate?)
• Whether stories about datasets are consistent and justified (are previous claims and analyses replicable?)
Some basic principles
Much is said about the so-called scientific method of acquiring knowledge by experimentation, deduction and testing hypotheses using empirical data. According to Wikipedia the scientific method involves careful observation, rigorous scepticism about what is observed … formulating hypotheses … testing and refinement etc. (see https://en.wikipedia.org/wiki/Scientific_method).
The problem for climate scientists is that data were not collected at the outset for measuring trends and changes, but rather to satisfy other needs and interests of the time. For instance,
temperature, rainfall and relative humidity were initially observed to describe and classify local weather. The state of the tide was important for avoiding in-port hazards and risks and for
navigation – ships would leave port on a falling tide for example. Surface air-pressure forecasted wind strength and direction and warned of atmospheric disturbances; while at airports, temperature
and relative humidity critically affected aircraft performance on takeoff and landing.
Commencing in the early 1990s, the ‘experiment’, which aimed to detect trends and changes in the climate, was bolted on to datasets that may not be fit for purpose. Further, many scientists have no first-hand experience of how data were observed or of other nuances that might affect their interpretation. Also, since about 2015, various data arrive every 10 or 30 minutes on spreadsheets, to newsrooms and television feeds, largely without human intervention; there is no backup paper record and no way to certify that those numbers accurately portray what is going on.
For historic datasets, present-day climate scientists had no input into the design of the experiment from which their data are drawn, and in most cases information about the state of the instruments and the conditions that affected observations is obscure.
Finally, climate time-series represent a special class of data for which usual statistical routines may not be valid. For instance, if data are not free of effects such as site and instrument
changes, naïvely determined trend might be spuriously attributed to the climate when in fact it results from inadequate control of the data-generating process: the site may have deteriorated for
example or ‘trend’ may be due to construction of a road or building nearby. It is a significant problem that site-change impacts are confounded with the variable of interest (i.e. there are
potentially two signals, one overlaid on the other).
What is an investigation and what constitutes proof?
The objective approach to investigating a problem is to challenge the straw-man argument that there is NO change, NO link between variables, NO trend; everything is the same. In other words, test the hypothesis that the data consist of random numbers or, as in a court of law, that the person in the dock is unrelated to the crime. The task of an investigator is to even-handedly test that case. Statistically this is called a NULL hypothesis, and the question is evaluated using probability theory, essentially: what is the probability that the NULL hypothesis is true?
In law a person is innocent until proven guilty, and a jury holding a majority view of the available evidence decides ‘proof’. However, as evidence may be incomplete, contaminated or contested, the person is not necessarily totally innocent; he or she is simply not guilty.
In a similar vein, statistical proof is based on the probability that data don’t fit a mathematical construct that would be the case if the NULL hypothesis were true. As a rule-of-thumb, if there is less than (<) a 5% probability (stated as P < 0.05) that a NULL hypothesis is supported, it is rejected in favour of the alternative. Where the NULL is rejected the alternative is referred to as significant. Thus in most cases ‘significant’ refers to a low P level. For example, if the test for zero-slope finds P is less than 0.05, the NULL is rejected at that probability level, and trend is ‘significant’. In contrast, if P > 0.05, trend is not different to zero-trend; there is insufficient evidence that the trend (which measures the association between variables) is due to anything other than chance.
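To make the zero-slope test concrete, here is a minimal sketch (not BomWatch's actual method) of a permutation test of the NULL hypothesis of no trend. The yearly temperature series is hypothetical, generated with a built-in 0.05 °C/year trend:

```python
import random
import statistics

def ols_slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

def p_zero_slope(xs, ys, n_perm=2000, seed=0):
    """Two-sided P for the NULL hypothesis of zero slope: shuffle y
    relative to x and count slopes at least as extreme as observed."""
    rng = random.Random(seed)
    observed = abs(ols_slope(xs, ys))
    ys = list(ys)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(ys)
        if abs(ols_slope(xs, ys)) >= observed:
            extreme += 1
    return (extreme + 1) / (n_perm + 1)

# Hypothetical annual means: a 0.05 degC/year trend plus small noise
noise = random.Random(1)
years = list(range(2000, 2020))
temps = [14.0 + 0.05 * (y - 2000) + noise.gauss(0, 0.1) for y in years]
p = p_zero_slope(years, temps)
print(p < 0.05)  # True here: the NULL is rejected, trend is 'significant'
```

For a series of pure noise (no built-in trend) the same test usually returns P > 0.05, and the NULL of zero trend would not be rejected.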
Combined with an independent investigative approach, BomWatch relies on statistical inference to draw conclusions about data. Thus the concepts briefly outlined above are an important part of the
overall theme.
Using the air photo archives available in Australia, Dr Bill Johnston has uncovered accurate and revealing information about how site changes have been made and how these have affected the
integrity of the data record.
The Best Guide for Optimizing Data Analysis in Excel - KANDA DATA
In addition to understanding the research topic, researchers also need basic data analysis knowledge. Excel is one of the tools often used by researchers to analyze research data. Researchers can use
excel for various data analyses. On this occasion, Kanda Data will write the best guide for optimizing data analysis in excel.
Kanda Data, on this occasion, will write six guides for optimizing data analysis in excel, namely:
1. Excel can be used to perform manual linear regression calculations and test assumptions
Data analysis using statistical software is based on theory and formulas that can also be calculated manually. As a researcher with a sense of “curiosity”, you should not only input data, analyze it with statistical software, and then interpret the results; it is also necessary to understand where the calculation method comes from.
Providing manual calculation skills to students completing their final assignments is excellent preparation. The ability to perform calculations manually needs to be trained from an early stage. Students who learn what it feels like to be a researcher will be well equipped, and students with a research spirit will certainly have added value when they enter the job market.
Calculator users who become fatigued while calculating are liable to make errors in inputting data, so using Excel to help with manual calculation is better than using a calculator. Manual calculations for small datasets can still be handled easily with a calculator, but for complex data the calculations will require a high level of concentration.
Calculator usage is different from excel, which has several advantages, namely: (a) being able to perform calculations using excel formulas that have the same function as a calculator; (b)
calculation operations can be performed on one of the observations, the rest can copy the formula automatically; (c) cross-check can be done if you make an error in data or formula input; (d)
formulas can be saved in excel files and can be used whenever needed; and (e) an analytical calculation template can be made to be used in observing data from other research results.
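As a cross-check for that kind of manual work, the same textbook least-squares arithmetic the article recommends building from Excel formulas can be sketched in Python (the data below are hypothetical):

```python
# Manual simple linear regression, mirroring what Excel formulas such as
# =SUM() would compute step by step for each intermediate quantity.
x = [1, 2, 3, 4, 5]            # e.g. fertiliser dose (hypothetical)
y = [2.1, 4.3, 5.9, 8.2, 9.8]  # e.g. crop yield (hypothetical)

n = len(x)
sum_x, sum_y = sum(x), sum(y)
sum_xy = sum(xi * yi for xi, yi in zip(x, y))
sum_x2 = sum(xi * xi for xi in x)

# Textbook formulas: b = (n*Sxy - Sx*Sy) / (n*Sx2 - Sx^2), a = ybar - b*xbar
b = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)
a = sum_y / n - b * sum_x / n
print(f"y = {a:.3f} + {b:.3f}x")
```

Each intermediate sum here corresponds to one helper column in a spreadsheet, which is exactly the cross-checkable layout the article advocates.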
2. Optimizing excel for descriptive statistical analysis supporting the primary analysis
Descriptive statistical analysis is needed to complete the primary analysis. From descriptive analysis, useful information can be obtained to support the results and discussion in research reports or
even scientific articles published in national and international journals. Excel can be used for the descriptive statistical analysis of research data.
The descriptive statistical analysis facility in excel can display statistical results including the average, median, mode, minimum and maximum values, and the sum, as well as the standard error, standard deviation, sample variance, kurtosis, skewness and range.
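Most of those summary statistics have direct standard-library equivalents in Python, sketched below on hypothetical data (note that skewness and kurtosis, which Excel also reports, are not in the stdlib `statistics` module):

```python
import statistics

data = [4.2, 5.1, 3.8, 6.0, 5.5, 4.9, 5.1]  # hypothetical observations

summary = {
    "n": len(data),
    "sum": sum(data),
    "mean": statistics.mean(data),
    "median": statistics.median(data),
    "mode": statistics.mode(data),
    "min": min(data),
    "max": max(data),
    "range": max(data) - min(data),
    "sample variance": statistics.variance(data),
    "standard deviation": statistics.stdev(data),
    "standard error": statistics.stdev(data) / len(data) ** 0.5,
}
for name, value in summary.items():
    print(f"{name:>20}: {value}")
```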
Excel makes descriptive statistical analysis easy, even with many observations (n). So, descriptive statistical analysis in excel can be used to support linear regression analysis on cross-section and time series data.
3. Excel to present an informative table of linear regression analysis results
Excel can also be optimized to help create tables for the publication of data analysis results. Although tables can be created using Ms Word, tables created using Excel have several advantages.
The advantages of Excel that are not found in Ms Word are: (a) Excel has a pivot table menu facility that can be used to help create informative tables from raw data; (b) Descriptive statistical
operations in the form of sum, average, minimum value and maximum value can be easily created and added to the recap table of data analysis results in Excel; and (c) Tables in Excel can be easily
copied and pasted into Ms Word documents. The menu provided by Excel to help create a table can be seen in the following figure:
4. Excel optimization for interactive and informative graphic presentation in its interface
The results of data analysis can be more informative and interactive if presented in graphs or diagrams. Excel can be used to help create charts with a good interface. Researchers can use it
according to the characteristics of the data and what information will be conveyed to the reader. There are many choices of chart types provided by Excel.
Even if the data has been inputted in Excel, there will be a choice of chart recommendations according to the characteristics of the data that has been inputted, as shown in the following figure:
Furthermore, if you choose the insert table manually, the researcher will be faced with various chart types that can be used. The types of graphs that can be created with excel are graphs/column
diagrams, line charts, pie charts, bar charts, area diagrams, scatter plots, maps, stock diagrams, surface diagrams, radar, treemap, sunburst, histogram, box and whisker, waterfall diagrams, funnel
diagrams, and combo charts.
Researchers can also adjust the selected graph’s appearance based on this choice of diagrams, whether in two or three dimensions. Thus, it shows that Excel can be used to help researchers work in
creating interactive and informative charts in the interface.
5. Optimization of data filters in excel to help input research data
One of the other advantages for data analysis is the availability of the data filter feature in Excel. Many benefits can be obtained by using this data filter, including: (a) it can sort data alphabetically A-Z or vice versa Z-A; (b) it can rank data values from largest to smallest or vice versa; and (c) if the data is sorted on one variable, the other variables will follow row by row.
Data filters can be activated quickly in Excel. The first step is to select the variable names that have been inputted, then click Data. The last step is to activate “Filter” in the “Sort & Filter” group. The active data filter feature can be seen on each variable name, where a dropdown arrow will appear; several filtering options are shown when the arrow is clicked.
Furthermore, the data filter feature can be used to sort the data according to the needs of the analysis. This feature works well even when there are thousands of observations or data samples.
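The row-wise behaviour described in point (c) above — sorting on one variable while the others follow — is what any tabular sort does. A small sketch with hypothetical rows:

```python
# Hypothetical rows of (plot, yield, dose): sorting on one column keeps
# each row intact, just as Excel's Sort & Filter keeps variables together.
rows = [("plot C", 5.9, 3), ("plot A", 2.1, 1), ("plot B", 4.3, 2)]

by_name = sorted(rows, key=lambda r: r[0])                 # A-Z
by_yield = sorted(rows, key=lambda r: r[1], reverse=True)  # largest first

print(by_name[0][0], by_yield[0][0])  # plot A plot C
```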
6. Utilize the excel formulas needed for linear regression analysis and test assumptions
Knowledge of basic excel formulas is important for researchers, so they do not encounter obstacles when analyzing data. Optimization of basic formulas that support data analysis needs to be mastered.
Some important formulas that need to be optimized in Excel are:
a. Formula to find the value of the sum of data: =SUM(number1;number2; …)
b. Formula to find the average value of a number of data: =AVERAGE(number1;number2; …)
c. Formula to find the largest value: =MAX(number1;number2; …)
d. Formula to find the smallest data value: =MIN(number1;number2; …)
e. Formula to count the number of data items: =COUNT(value1;value2;…)
f. Formula to test certain conditions based on inputted data: =IF(logical_test;[value_if_true];[value_if_false])
g. Use the absolute ($) function to lock cells in formulas in excel.
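For readers who also work outside Excel, the formulas above map directly onto Python built-ins; the data here are hypothetical:

```python
# Python analogues of the Excel formulas listed above.
values = [10, 25, 7, 25, 18]

total = sum(values)                                  # =SUM(...)
average = sum(values) / len(values)                  # =AVERAGE(...)
largest, smallest = max(values), min(values)         # =MAX(...), =MIN(...)
count = len(values)                                  # =COUNT(...)
flags = ["high" if v > average else "low" for v in values]  # =IF(...)

# The absolute-reference trick ($): locking one cell while a formula is
# copied down corresponds to reusing a single fixed variable here.
shares = [v / total for v in values]                 # total acts like $B$1

print(total, average, largest, smallest, count, flags)
```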
The basic formulas described above are the ones usually needed when performing data analysis in Excel, especially when doing calculations manually. Therefore, researchers need to master these basic Excel formulas. That’s what I can write for you. Hopefully it is useful for all of you. Wait for the article update next week!
Noise Figure Measurement Accuracy: The Y-Factor Method
This application note reviews Y-factor noise figure measurements and how to improve accuracy by following a three-stage process:
1. Avoid mistakes when making measurements
2. Minimize uncertainties wherever that is possible
3. Quantify the uncertainties that remain
This application note covers the following topics:
• Fundamentals of Y-factor noise figure measurements
• Noise figure mistakes to avoid
• Measurement corrections to improve accuracy
• Calculation of the remaining uncertainties – including software tools
• Other techniques that can reduce uncertainties
• Checklist for improving accuracy
This application note is specific to instruments that use the Y-factor noise figure measurement. Various features of Keysight Technologies products are mentioned as illustrative examples of noise
figure analyzers and noise sources. Other products, however, may be used with the techniques discussed in this document.
Noise figure is a key performance parameter in many RF systems. A low noise figure provides an improved signal/noise ratio for analog receivers and reduces bit error rate in digital receivers. As a
parameter in a communications link budget, a lower receiver noise figure allows smaller antennas or lower transmitter power for the same system performance. In a development laboratory, noise figure
measurements are essential to verify new designs and support existing equipment. In a production environment, low-noise receivers can now be manufactured with minimal need for adjustment. Even so, it
is still necessary to measure noise figures to demonstrate that the product meets specifications.
Why is accuracy important?
Accurate noise figure measurements have significant financial benefits. For many products, a guaranteed low noise figure commands a premium price. This income can only be realized, however, if every
unit manufactured can be shown to meet its specifications.
Every measurement has limits of accuracy. If a premium product has a maximum specified noise figure of 2.0 dB, and the measurement accuracy is ± 0.5 dB, then only units that measure 1.5 dB or lower
are marketable. On the other hand, if the accuracy is improved to ± 0.2 dB, all products measuring up to 1.8 dB could be sold at a premium price.
Customers need accurate noise figure measurements to confirm they are getting the performance they have paid for. Using the same example, an accuracy of ± 0.5 dB for measuring a product that a
manufacturer has specified as ‘2.0 dB maximum’ would require the acceptance of units measuring as high as 2.5 dB. An improved accuracy of ± 0.2 dB sets the acceptance limit at 2.2 dB.
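The guard-banding arithmetic in the two examples above can be captured in a small sketch (the helper names are hypothetical, not from the application note):

```python
def producer_limit(spec_db, uncertainty_db):
    """Highest measured value a manufacturer can accept while still
    guaranteeing the specification is met: spec minus uncertainty."""
    return spec_db - uncertainty_db

def consumer_limit(spec_db, uncertainty_db):
    """Highest measured value a customer must accept for a unit
    specified at spec_db: spec plus uncertainty."""
    return spec_db + uncertainty_db

# 2.0 dB maximum noise figure specification, as in the examples above
print(producer_limit(2.0, 0.5))  # 1.5 dB: only these units are marketable
print(producer_limit(2.0, 0.2))  # 1.8 dB: better accuracy, more sellable units
print(consumer_limit(2.0, 0.5))  # 2.5 dB: customer acceptance limit
print(consumer_limit(2.0, 0.2))  # 2.2 dB
```

Tighter measurement uncertainty narrows both guard bands, which is exactly the financial argument the note is making.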
Speed of measurement is also an issue. High-value products favor accuracy; high-volume products favor speed. Due to the random nature of noise and the statistical aspects of measuring it, there is
always a trade-off between speed and accuracy. To optimize the trade-off, it is necessary to eliminate all avoidable errors and quantify the uncertainties that remain.
Y-Factor Noise Figure Measurement
Learn about the fundamental features of the Y-factor noise figure measurement technique in this application note. Many instruments use the Y-factor noise figure technique, including:
• Keysight X-Series NFA noise figure analyzers
• Keysight X-Series signal analyzers with noise figure measurement application
• Keysight FieldFox handheld microwave analyzers with noise figure measurements
• Other noise figure analyzers and spectrum analyzers with noise figure measurement personality
The equations included in the application note follow the internal calculation route of the Keysight Technologies, Inc. products. The calculation routes of other noise figure instruments that use the
Y-factor noise figure method are inevitably similar. This application note departs from previous explanations of noise figure calculations by making extensive use of the noise temperature concept.
Although noise temperature may be less familiar, it gives a truer picture of how the instruments work—and most importantly, how they apply corrections to improve accuracy.
Frequently Asked Questions
What is noise figure?
The fundamental definition of noise figure F is the ratio:

F = (signal/noise power ratio at the input of the device under test) / (signal/noise power ratio at the output of the device under test)    (equation 2-1)

Noise figure represents the degradation in signal/noise ratio as the signal passes through a device. Since all devices add a finite amount of noise to the signal, F is always greater than 1. Although the quantity F in equation 2-1 has historically been called ‘noise figure’, that name is now more commonly reserved for the quantity NF, expressed in dB:

NF = 10 log10(F)
Keysight Technologies literature follows the contemporary convention that refers to the ratio F as ‘noise factor,’ and uses ‘noise figure’ to refer only to the decibel quantity NF.
What is the Y-factor noise figure?
The Y-factor noise figure is a method of measuring the noise figure of a device by comparing the output power levels produced by a noisy (calibrated) signal source measured with and without the device under test (DUT) connected.
What is the Y-factor noise figure method for calculating noise figure uncertainty?
The Y-factor noise figure method uses a calibrated noise source to provide a stimulus to the DUT input; it also uses a signal analyzer, operating as a calibrated receiver, to measure the DUT’s output
noise. The calibrated noise source is specified with an excess noise ratio (ENR) that characterizes the difference in its noise power between the ‘on’ and ‘off’ states as a function of frequency.
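As an illustration of the core arithmetic (a simplified sketch, not Keysight's internal calculation route), the noise factor follows from the ENR and the measured Y-factor as F = ENR / (Y − 1), with both quantities converted from dB to linear ratios:

```python
import math

T0 = 290.0  # standard reference temperature, kelvin

def db_to_lin(db):
    return 10 ** (db / 10)

def lin_to_db(ratio):
    return 10 * math.log10(ratio)

def noise_factor_from_y(enr_db, y_db):
    """Noise factor from the Y-factor method, F = ENR / (Y - 1).
    Simplifying assumption: the source's 'off' temperature equals T0."""
    return db_to_lin(enr_db) / (db_to_lin(y_db) - 1)

def noise_temperature(f):
    """Effective noise temperature Te = T0 * (F - 1)."""
    return T0 * (f - 1)

# Hypothetical example: 15 dB ENR source, measured Y-factor of 10 dB
f = noise_factor_from_y(15.0, 10.0)
print(f"NF = {lin_to_db(f):.2f} dB, Te = {noise_temperature(f):.0f} K")
```

The dB form NF = 10·log10(F) then follows directly, and Te gives the noise-temperature view that the application note favours.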
What is noise temperature?
A resistor at any temperature above absolute zero will generate thermal noise. Effective noise temperature is the additional temperature of the resistor that would give the same output noise power
density as a noiseless DUT.
I can no longer think of math without the Algebrator. It is so easy to get spoiled you enter a problem and here comes the solution. Recommended!
S.R., Washington
I started with this kind of programs as I am in an online class and there are times when "I have no clue". I am finding your program easier to follow. THANK YOU!
Jim Hendry, CT.
Algebrator really makes algebra easy to use.
Lee Wyatt, TX
It appears you have improved upon an already good program. Again my thanks, and congrats.
George Miller, LA
At CSME, it is our mission to inspire the teaching and learning of science, mathematics, and computer science education. We are dedicated to running a number of research programs that range from the
vast depths of the universe all the way to the relationship between green consumerism and social/environmental justice. While being a center for science and mathematics education, we are committed to
confronting anti-Blackness, racism, and various topics of social justice in our teaching and research. By partnering with a number of local schools and organizations on STEM education projects,
CSME is able to create a successful and inclusive environment of learning within a supportive community. Our collective research funding helps to provide our undergraduate and graduate students
direct financial support to continue their studies at an important time. Our intellectual and physical spaces encourage excellence, collaboration, and compassion both on and off campus.
Eric Hsu
• Eroy-Reveles, A., Hsu, E., Peterfreund, A., Rath, K., Bayliss, F. (2019). History and Evolution of STEM Supplemental Instruction at San Francisco State University, a Large, Urban,
Minority-Serving Institution. In Wilson-Kennedy, Z., Byrd, G., Kennedy, E., Frierson, H. (Eds.), Broadening Participation in STEM (Diversity in Higher Education, Volume 22), Emerald Publishing
Limited, pp.209 - 235.
• Yoon, I., Lyons, J., Horvath, L., Yue, H., Twarek, B., Remold, J., Hsu, E. (2018). SFSU INCLUDES on SF CALL (San Francisco Computing for All Levels and Learners). Proceedings of 2018 CoNECD - The
Collaborative Network for Engineering and Computing Diversity Conference.
• Hauk, S., Speer, N. M., Kung, D., Tsay, J.J., & Hsu, E. (2016). Video cases for college mathematics instructor professional development.
• Hauk, S., Hsu, E., & Speer, N. (2016). What would the research look like? Knowledge for teaching mathematics capstone courses for future secondary teachers. Proceedings of 19th Annual Conference
on Research in Undergraduate Mathematics Education. Pittsburgh, PA.
• Hsu, E., & Bressoud, D. (2015). Placement and Student Performance in Calculus I. In Bressoud, D., Mesa, V., Rasmussen, C. (Eds.), Insights and Recommendations from the MAA National Study of
College Calculus (pp. 59--67). Washington, D.C.: Mathematical Association of America.
• Hsu, E., Mesa, V., & The Calculus Case Collective. (2015). Synthesizing Measures of Institutional Success: CSPCC-Technical Report #1. Washington DC: Mathematical Association of America.
• Raychaudhuri, D. & Hsu, E. (2012). A Longitudinal Study of Mathematics Graduate Teaching Assistants’ Beliefs about the Nature of Mathematics and their Pedagogical Approaches toward Teaching
Mathematics. In (Eds.) S. Brown, S. Larsen, K. Marrongelle, and M. Oehrtman, Proceedings of the 15th Annual Conference on Research in Undergraduate Mathematics Education, 2012, Portland, Oregon.
• Hsu, E., Kysh, J., Ramage, K., & Resek, D. (2012). Changing Teachers’ Conception of Mathematics. NCSM Journal of Mathematics Education Leadership. Spring 2012.
• Hsu, E., Kysh, J., Ramage, K., & Resek, D. (2009). Helping Teachers Un-structure: A Promising Approach. The Montana Math Enthusiast, 6 (3).
• Hsu, E., Iwasaki, K., Kysh, J., Ramage, K., & Resek, D. (2009). Three REAL Lessons About Mentoring. In G. Zimmermann, L. Fulmore, P. Guinee, & E. Murray (Eds.), Empowering Mentors of Experienced
Mathematics Teachers. Reston, VA: National Council of Teachers of Mathematics. Print ISBN: 978-0-87353-836-7
• Hsu, E., Murphy, T. J., & Treisman, U. (2008). Supporting High Achievement In Introductory Mathematics Courses: What We Have Learned From 30 Years of the Emerging Scholars Program. In M. Carlson,
& C. Rasmussen (Eds.), Making the Connection: Research and Teaching in Undergraduate Mathematics. Washington, DC: Mathematical Association of America. Print ISBN: 978-0-88385-183-8.
• Hsu, E., Kysh, J., Ramage, K., & Resek, D. (2007). Seeking Big Ideas in Algebra: The Evolution of a Task. Journal of Mathematics Teacher Education, 10(4--6), 325--332. DOI 10.1007/
• Hsu, E., Kysh, J., and Resek, D. (2007). Using Rich Problems for Differentiated Instruction. New England Mathematics Journal, 39, 6--13.
• Computing Research Association. (2005). Cyberinfrastructure for Education and Learning for the Future. Washington, DC: Computing Research Association.
• Gutmann, T., Hsu, E., Marrongelle, K., Murphy, T., Speer, N., Star, J., et al. (2004). Mathematics Teaching Assistant Preparation And Development Research. In D. E. Mcdougall and J. A. Ross
(Eds.), Proceedings of the Twenty-Sixth Annual Meeting of the North American Chapter of the International Group for the Psychology of Mathematics Education, Joint Meeting with Psychology of
Mathematics Education. Toronto, Ontario.
• Hsu, E., and Contini, V. (2004). Student Use (and Misuse) of Graphs and Rates of Change in Economics. In D. E. Mcdougall and J. A. Ross (Eds.), Proceedings of the Twenty-Sixth Annual Meeting of
the North American Chapter of the International Group for the Psychology of Mathematics Education, Joint Meeting with Psychology of Mathematics Education. Toronto, Ontario.
• Hsu, E. (2004). Re-considering On-line and Live Communities of Practice. In Proceedings of the Society for Information Technology and Teacher Education 15th International Conference.
• Hsu, E., and Moore, M. (2003). Online Teacher Communities: Measuring Engagement, Responsiveness and Refinement. In N. A. Pateman, B. J. Dougherty and J. T. Zilliox (Eds.), Proceedings of the
Twenty-Fifth Annual Meeting of the North American Chapter of the International Group for the Psychology of Mathematics Education, Joint Meeting with Psychology of Mathematics Education. Manoa,
• Carlson, M., Jacobs, S., Coe, T., Larsen, S., and Hsu, E. (2003). “Applying Covariational Reasoning While Modeling Dynamic Events: A Framework and a Study (translation)”. Revista EMA, 8(2),
• Carlson, M., Jacobs, S., Coe, T., Larsen, S. and Hsu, E. (2002) “Applying Covariational Reasoning While Modeling Dynamic Events: A Framework and a Study”. Journal for Research in Mathematics
Education. Vol. 33, No. 5, 352–378.
• Hsu, E. (2002). “Pictures of Calculus Knowledge Networks: Data and Software”. In Mewborn, D.S., Sztajn, P., White, D.Y., Wiegal, H.G., Bryant, R.L. and Nooney, K. (Eds.), Proceedings of the
Twenty-Fourth Annual Meeting of the North American Chapter of the International Group for the Psychology of Mathematics Education (Vols. 1-4). p. 1063–1064. Columbus, OH: ERIC Clearinghouse for
Science, Mathematics, and Environmental Education. SE 066 8888.
• Hsu, E. (2001). “On-line Math Teacher Conversations: Graphical, Statistical and Semantic Analysis”. In R. Speiser and C. A. Maher and C. N. Walter (Eds.), Proceedings of the Twenty-Third Annual
Meeting of the North American Chapter of the International Group for the Psychology of Mathematics Education (pp. 673). Columbus, OH: ERIC Clearinghouse for Science, Mathematics, and
Environmental Education. SE 065 164.
• Hsu, E. (2000) “Developing A Virtual Community Of Teachers: The Effects Of Live Contact And On-Line Conversational Dynamics”, Proceedings of the Twenty Second Annual Meeting of the North American
Chapter of the International Group for the Psychology of Mathematics Education. Fernandez, M. L. (Ed.). (2000). Columbus, OH: ERIC Clearinghouse for Science, Mathematics, and Environmental
Education. SE 064 088. p.702.
• Hsu, E. and Oehrtman, M. (2000) “Mixed Metaphors: Undergraduates Do Calculus Out Loud”, Proceedings of the Twenty Second Annual Meeting of the North American Chapter of the International Group
for the Psychology of Mathematics Education. Fernandez, M. L. (Ed.). (2000). Columbus, OH: ERIC Clearinghouse for Science, Mathematics, and Environmental Education. SE 064 088. p.101.
Larry Horvath
• Krim, J.S., Coté, L.E., Schwartz, R.S., Stone, E.M., Cleeves, J.J., Barry, K.J., Burgess, W., Buxner, S.R., Gerton, J.M., Horvath, L. and Keller, J.M., Lee, SC., and Rebar, B. (2019). Models and
impacts of science research experiences: A review of the literature of CUREs, UREs, and TREs. CBE—Life Sciences Education, 18(4), p.65.
• Yoon, I., Lyons J., Horvath, L., Yue, H., Twarek, B., Remold, J., & Hsu, E., (2018). SFSU Includes on SF CALL (Computing for All Levels and Learners). CoNECD – The Collaborative Network for
Engineering and Computing Diversity Conference.
• Horvath, L. & Brownstein, E. (2016). Next Generation Science Standards and edTPA: Evidence of Science and Engineering Practices. Electronic Journal of Science Education, 20(4).
• Horvath, L. & Marshall, J. (2014, September 7). The Natural Sciences: Understanding the Natural World. Chapter in Marshall, J. & Donahue, D., Art-Centered Learning Across the Curriculum:
Integrating Contemporary Art in the Secondary School Classroom. Teacher’s College Press.
• Grants
• Aligning the Science Teacher Education Pathway (A-STEP), Horvath, L. subaward PI. $107,000 from Korb, M., Aligning the Science Teacher Education Pathway. A Networked Improvement Community,
$3,577,306.00. National Science Foundation.
• Western Regional Noyce Alliance Horvath L, PI; Seashore, K., Hoellwarth, C., Hsu E., Ross, D., and Keller, J. Co-PIs (2018-2022). Robert Noyce Scholarship Program, $3,299,995, National Science Foundation.
• Collaborative Research: A Study of the Impact of Pre-Service Teacher Research Experience on Effectiveness, Persistence, and Retention (2017-2022). Horvath L. PI Noyce Track 4 Grant with Cal Poly
SLO Prime, Fresno State, and Sacramento State. $60,000 subaward. National Science Foundation.
• San Francisco State Robert Noyce Teacher Scholarship Program (2011-2017). Horvath L. PI; Hsu E., M Cool A., co-PIs. Noyce Track 1 $1,200,000. National Science Foundation.
• Western Region Noyce Initiative (2013-2014), Horvath, L. subaward PI for planning and hosting Western Regional Noyce Conference in San Francisco, Fall 2014. $51,000. National Science Foundation.
Tendai Chitewere
• Chitewere, T. 2018. Sustainable Community and Green Lifestyles. Routledge Press.
• Chitewere, T., J.K. Shim, J.C. Barker, and I.H. Yen. 2017. How Neighborhoods Influence Health: Lessons to be learned from the application of political ecology. Health & Place 45. Pg. 117-123.
• Chitewere, T. 2015. Ecovillages: Lessons for Sustainable Community. In: Environmental Magazine 57(2). Pg. 38-39.
• Augsburg, T. and Chitewere, T. 2013. Starting with World Views: A Five-Step Preparatory Approach to Integrative Interdisciplinary Learning. Issues in Interdisciplinary Studies. 31:174-191.
• Chitewere, T. 2012. Between a Rock and a Green Place: Exploring the relationship between green consumerism and social justice. In S.H. Emerman, M. Bjørnerud, S.A. Levy, and J.S. Schneiderman
(eds.), Liberation Science: Putting science to work for environmental justice. Lulu Press.
• Roberts, N.S. and Chitewere, T. 2011. Speaking of Justice: Exploring ethnic minority perspectives of the Golden Gate National Recreation Area. Environmental Practice 13(4). Pg. 1-16.
• Chitewere, T. 2010. Equity in Sustainable Communities: Exploring tools from environmental justice and political ecology. Natural Resources Journal 50(2). Pg. 315-339.
• Chitewere, T. and Taylor, D. E. 2010. Sustainable Living and Community Building in EcoVillage at Ithaca: The challenges of incorporating social justice concerns into the practices of an
ecological cohousing community. Research in Social Problems and Public Policy 18. Pg. 141-176.
• Chitewere, T. 2008. Green Technology and the Design of a Green Lifestyle. Humanities and Technology Review 27. Pg. 87-106.
Jamie Chan
• Chan, J.M., Tanner, K.D. (2006). Understanding the Nature of Science: Science Views From the Seventh Grade Classroom.
Shandy Hauk
• Hauk, S., St. John, K., & Jones, M. (to appear). What does it take for faculty to be ready to change instructional practice? Journal of Geoscience Education.
• Hauk, S., Toney, A. F., Judd, A. B., & Salguero, K. (2020). Activities for enacting equity in mathematics education research. International Journal of Research in Undergraduate Mathematics Education.
• Hauk, S., & Kaser, J. (2020). A search to capture and report on feasibility of implementation. American Journal of Evaluation 41(1), 145-155.
• Jackson, B., Hauk, S., Tsay, J-J., & Ramirez, A. (2020). Professional development for mathematics teacher education faculty: Need and design. The Mathematics Enthusiast 17(2), Article 8.
• Hauk, S. & Speer, N. (2020). Grading and Assessment Module for the Justice, Equity, Diversity, & Inclusion in Inquiry Based Learning (JEDIBL) College Mathematics Workshops. Invited, peer-reviewed
professional learning activity [n>200 participants]. Houston, TX: Academy of Inquiry Based Learning.
• Hauk, S. (2019). Understanding students’ perspectives: Mathematical autobiographies of undergraduates who are future K-8 teachers. In S. Hauk, B. Jackson, & J. J. Tsay (Eds.) Professional
Resources & Inquiry in Mathematics Education (PRIMED) Short-Course. San Francisco, CA: WestEd.
• Hauk, S., & D’Silva, K. (2018). Goals, resources, and orientations for equity in collegiate mathematics education research [Conference Long Paper]. In A. Weinberg, C. Rasmussen, J. Rabin, M. Wawro, and S. Brown (Eds.), Proceedings of the 21st Annual Conference on Research in Undergraduate Mathematics Education (pp. 227-241). San Diego, California.
• Hauk, S., & Matlen, B. (2018). Implementation and impact of a web-based activity and testing system in community college algebra. In A. Weinberg, C. Rasmussen, J. Rabin, M. Wawro, and S. Brown (Eds.), Proceedings of the 21st Annual Conference on Research in Undergraduate Mathematics Education (pp. 908-916). San Diego, California.
• Hauk, S., Rasmussen, C., Engelke Infante, N., Lockwood, E., Zandieh, M., Brown, S., Lai, Y., & Hsu, P. (2018). Research in collegiate mathematics education. In A. Deines, et al. (Eds). Advances
in the mathematical sciences (pp. 245-268). New York: Springer.
• Hauk, S. (2017). Research in collegiate mathematics education arrives at the AWM Symposium. Association for Women in Mathematics (AWM) Newsletter, 47(4), 27-29.
• Hauk, S., Weinberg, A., & Murphy, T. J. (2017). Making RUME for improving mathematics teaching and learning. MAA Focus, 37(6), pp. 12-14.
• Hauk, S., Jackson, B., & Tsay, J-J. (2017). Those who teach the teachers: Knowledge growth in teaching for mathematics teacher educators [Conference Long Paper]. In A. Weinberg, C. Rasmussen, J.
Rabin, M. Wawro, and S. Brown (Eds.), Proceedings of the 20th Conference on Research in Undergraduate Mathematics Education (pp. 428-439).
• Hauk, S. & Mandinach, E. B. (2017). Exploration of the alignment of state data and infrastructure to mathematics and science success indicators. American Educational Research Association Online
Paper Repository. ERIC Number: ED577449.
• Hauk, S., & Matlen, B. (2017). Exploration of the factors that support learning: Web-based activity and testing systems in community college algebra [Conference Long Paper]. In A. Weinberg (Ed.),
Proceedings of the 20th Conference on Research in Undergraduate Mathematics Education (pp. 360-372). Online, peer-reviewed. ERIC Number: ED583986.
• Kaser, J., & Hauk, S. (2017). An Equity Audit: Is It in Your Future? Leadership for Educational Achievement (LEAF) Subscription for Professional Learning 4(1), 1-5.
• Hauk, S., & Toney, A. (2016). Communication, culture, and work in mathematics education in departments of mathematical sciences. In J. Dewar, P. Hsu and H. Pollatsek (Eds.), Mathematics
Education: A Spectrum of Work in Mathematical Sciences Departments (pp. 11-26). New York: Springer.
• Kaser, J., & Hauk, S. (2016). To be or not to be an online instructor? MathAMATYC Educator 7(3), 41–47. ERIC Number: ED567767.
• Hauk, S., Salguero, K., Kaser, J. (2016). How “good” is “good enough”? Exploring fidelity of implementation for a web-based activity and testing system in developmental algebra instruction. In T.
Fukawa-Connelly, N. Infante, M. Wawro, and S. Brown (Eds.), Proceedings of the 19th Conference on Research in Undergraduate Mathematics Education (pp. 210-217). ERIC Number: ED567765.
• Schneider, S. A., et al. (2016). Exploring models of online professional development. In C. Dede, A. Eisenkraft, K. Frumin, and A. Hartley (Eds.). Teacher learning in the digital age: Online
professional development in STEM education (Chapter 12). Cambridge, MA: Harvard Education Press. ERIC Number: ED568767.
• Hauk, S., Toney, A. F., Nair, R., Yestness, N. R., Troudt, M. (2015). Discourse in pedagogical content knowledge [Long Paper]. In T. Fukawa-Connelly (Ed.), Proceedings of the 17th Conference on
Research in Undergraduate Mathematics Education.
• Deshler, J. M., Hauk, S., & Speer, N. M. (2015). Professional development in teaching for mathematics graduate students. Notices of the American Mathematical Society, 62(6), 638-643. www.ams.org/
• Hauk, S., Cremer, S., Carroll, C., D’Silva, K. M., Gale, M., Salguero, K., & Viviani, K. (2015) Connecting concepts in color: Patterns and algebra. Journal of the California Mathematics Project,
7, 15-20.
• Hauk, S., Powers, R. A., & Segalla, A. (2015). A comparison of web-based and paper-and-pencil homework on student performance in college algebra. PRIMUS 25(1), 61-79.
• Hauk, S., Toney, A. F., Jackson, B., Nair, R., & Tsay, J.-J. (2014). Developing a model of pedagogical content knowledge for secondary and post-secondary mathematics instruction. Dialogic
Pedagogy: An International Online Journal, 2, A16–40. Available: dpj.pitt.edu/ojs/index.php/dpj1/article/download/40/50
• Hauk, S., Speer, N. M., Kung, D., Tsay, J.-J. & Hsu, E. (Eds.) (2013). Video cases for college mathematics instructor professional development.
• Powers, R., Hauk, S., & Goss, M. (2013). Identifying change in secondary mathematics teachers' pedagogical content knowledge. In S. Brown (Ed.), Proceedings of the 16th Conference on Research in
Undergraduate Mathematics Education (Denver, CO).
• Toney, A. F., Slaten, K. M., Peters, E. F., & Hauk, S. (2013). Color work to enhance proof-writing in college geometry. Journal of the California Mathematics Project, 6, 9-20.
• Hauk, S., Chamberlin, M. C., Jackson, B., Yestness, N., King, K., & Raish, R. (2012). Interculturally rich mathematics pedagogical content knowledge for teacher leaders (long paper). In S. Brown
(Ed.), Proceedings of the 15th Conference on Research in Undergraduate Mathematics Education (Portland, OR).
• Hauk, S. (2012). Understanding students’ perspectives: Mathematical autobiographies of under-graduates who are not math majors. Journal of the California Mathematics Project, 5, 36-48.
• Tsay, J.-J., Judd, A. B., Hauk, S., & Davis, M. K. (2011). Case study of a college mathematics instructor: Patterns of classroom discourse. Educational Studies in Mathematics, 78, 205-229.
• Davis, M. K., Hauk, S., & Latiolais, P. (2010). Culturally responsive college mathematics. In B. Greer, S. Nelson-Barber, A. Powell, & S. Mukhopadhyay (Eds.), Culturally responsive mathematics
education (pp. 345–372). Mahwah, NJ: Erlbaum.
• Hauk, S., & Segalla, A. (2010). WeBWorK- Part II: Perceptions and success in college algebra. Journal of the Central California Mathematics Project, 3(1), 23-34.
• Segalla, A., & Hauk, S. (2010). High school mathematics homework and WeBWorK. Journal of the Central California Mathematics Project, 2(1), 8-14.
• Hauk, S., Chamberlin, M., Cribari, R., Judd, A. B., Deon, R., Tisi, A., & Kakakhail, H. (2009). Case story: Mathematics teaching assistant. Studies in Graduate and Professional Student
Development, 12, 39-62.
• Hauk, S. & Isom, M. A. (2009). Fostering college students' autonomy in written mathematical justification. Investigations in Mathematics Learning 2(1), 49-70.
• Hauk, S., Judd, A. B., Tsay, J-J., Barzilai, H., & Austin, H. (2008). Pre-service teachers’ understanding of logical inference. Investigations in Mathematics Learning 1(2), 1-34.
• Segalla, A., & Hauk, S. (2008). Using WeBWorK in an online course environment. Journal of Research in Innovative Teaching, 1, 128-144.
• Tsay, J.-J., & Hauk, S. (2006). Multiplication schema for signed number: Case study of three prospective teachers. Mathematical Sciences and Mathematics Education, 1(1), 33-37.
• Farmer, J., Hauk, S., & Neumann, A. M. (2005). Negotiating reform: Implementing Process Standards in culturally responsive professional development. High School Journal, 88, 59-71.
• Hauk, S. (2005). Mathematical autobiography among college learners in the United States. Adults Learning Mathematics International Journal 1(1), 36-56.
• Hauk, S., & Segalla, A. (2005). Student perceptions of the web-based homework program WeBWorK in moderate enrollment college algebra courses. Journal of Computers in Mathematics and Science
Teaching, 24(3), 229-253.
• Selden, A., Selden, J., Hauk, S., & Mason, A. (2000). Why can't calculus students access their knowledge to solve non-routine problems? In E. Dubinsky, A. H. Schoenfeld, & J. Kaput (Eds.) Research in
Collegiate Mathematics Education. IV (pp. 128-153). Providence, RI: American Mathematical Society.
Isabel Quita
• Quita, Isabel. (Fall 2003). What is a Scientist? Perspectives of Teachers of Color. Multicultural Education, vol. II, Number 1.
• Quita, Isabel N. (February 2002). Interpreting A Filipino Elementary Student's Ideas Of Force And Motion: A Case Study. Journal of Interdisciplinary Education, North America Chapter, volume 5.
World Council for Curriculum and Instruction.
• Quita, Maria Isabel. (Summer 2000). The Challenges in Elementary Science Teaching and Learning in the 21st Century. In College of Education: Review, San Francisco State University, vol. 12.
Kim Seashore
• Wernet, Jamie & Lepak, Jerilynn & Seashore, Kimberly & Nix, Sarah & Reinholz, Daniel & Floden, Robert. (2011). Assessing What Counts.
• Seashore, Kimberly. (2021). Growth series of root lattices.
• Seashore, Kimberly & Beck, Matthias & Ardila, Federico. (2021). Using Polytopes to Derive Growth Series for Classical Root Lattices.
Stephanie Sisk-Hilton
• Meier, D., and Sisk-Hilton, S., eds. (2020). Nature education with young children: Integrating inquiry and practice 2nd ed. New York: Routledge.
• Metz, K. E., Cardace, A., Berson, E., Ly, U., Wong, N., Sisk-Hilton, S., Metz, S. E. & Wilson, M. (2019). Primary Grade Children’s Capacity to Understand Microevolution: The Power of Leveraging
Their Fruitful Intuitions and Engagement in Scientific Practices. Journal of the Learning Sciences, 1-60.
• Sisk-Hilton, S., K.E. Metz, & E. Berson (2018). Jumping into natural selection: Using thought experiments and first hand investigation of crickets to construct explanations of natural selection.
Science and Children, 55(6), 29-35.
• Sisk-Hilton, S. and Meier, D. (2017). Narrative inquiry in early childhood and elementary school: Learning to teach, teaching well.
• Sisk-Hilton, S. (2009). Teaching and Learning In Public: Professional Development Through Shared Inquiry. Teachers College Press, New York, NY.
Ilmi Yoon
• Ilmi Yoon, Julio C. Ramirez, Sushil Kumar Plassar*, Ting Yin*, Vipul KaranjKar*, Joseph G. Lee*, Carmen Domingo, "Deep Transfer Learning based Web Interfaces for Biology Image Data
Classification”, To appear in Proceeding of the Intelligent Systems Conference, 3-4 September 2020 in Amsterdam, The Netherlands.
• Beste F. Yuksel, Pooyan Fazli, Umang Mathur*, Vaishali Bisht*, Soo Jung Kim*, Joshua Junhee Lee*, Seung Jung Jin*, Yue-Ting Siu, Joshua A Miele, Ilmi Yoon, “Human-in-the-Loop Machine Learning to
Increase Video Accessibility for Visually Impaired and Blind Users” accepted to present at ACM DIS 2020.
• Beste F. Yuksel, Pooyan Fazli, Umang Mathur*, Vaishali Bisht*, Soo Jung Kim*, Joshua Junhee Lee*, Seung Jung Jin*, Yue-Ting Siu, Joshua A Miele, Ilmi Yoon, “Increasing Video Accessibility for
Visually Impaired Users with Human-in-the-Loop Machine Learning”, ACM CHI Extended Abstracts 2020 In Press
• Ilmi Yoon, Umang Mathur*, Brenna Gibson Tirumalashetty*, Pooyan Fazli, Joshua Miele, "Video Accessibility for the Visually Impaired", In Proceedings of the Workshop on AI for Social Good at the
International Conference on Machine Learning, ICML, Long Beach, CA, USA, 2019.
• Anagha Kulkarni, Ilmi Yoon, Pleuni Pennings, Kaz Okada, Carmen Domingo, “Promoting Diversity in Computing”, In Proceedings of the 23rd Annual ACM Conference on Innovation and Technology in
Computer Science Education, Larnaca, Cyprus, 2018.
Hao Yue
• Dian Shi, Jiahao Ding, Sai Mounika Errapotu, Hao Yue, Wenjun Xu, Xiangwei Zhou, and Miao Pan, “Deep Q-Network Based Route Scheduling for TNC Vehicles with Passengers’ Location Differential
Privacy,” IEEE Internet of Things Journal (IOTJ), vol. 6, no. 5, pp. 7681-7692, October 2019.
• Shaohua Li, Kaiping Xue, David Wei, Hao Yue, Nenghai Yu and Peilin Hong, “SecGrid: A Secure and Efficient SGX-enabled Smart Grid System with Rich Functionalities,” IEEE Transactions on
Information Forensics and Security (TIFS), vol. 15, pp. 1318-1330, September 2019.
• Kaiping Xue, Peixuan He, Xiang Zhang, Qiudong Xia, David Wei, Hao Yue, and Feng Wu, “A Secure, Efficient and Accountable Edge-based Access Control Framework for Information Centric Networks,”
IEEE/ACM Transactions on Networking (TON), vol. 27, no. 3, pp. 1220-1233, June 2019.
• Charles Tuttle*, Savankumar Patel*, and Hao Yue, “Malicious Message Detection on Twitter via Dissemination Paths”, The IEEE International Conference on Computing, Networking and Communications
(ICNC 2020), Big Island, HI, February 17-20, 2020.
• Liang Li, Ronghui Hou, Xinyue Zhang, Hao Yue, Hui Li, and Miao Pan, “Participant Recruitment for Coverage-Aware Mobile Crowdsensing with Location Differential Privacy”, The IEEE Global
Communications Conference (GLOBECOM 2019), Waikoloa, HI, December 9-13, 2019.
Maria Zavala
• Latina/o Youth's Perspectives on Race, Language, and Learning Mathematics. Maria del Rosario Zavala (2014). Journal of Urban Mathematics Education, vol 7 no 1
• Developing Robust Forms of Pre-Service Teachers’ Pedagogical Content Knowledge through Culturally Responsive Mathematics Teaching Analysis. Julia M. Aguirre, Maria del Rosario Zavala & Tiffany
Katanyoutanant (2012), Mathematics Teacher Education and Development, v14, Special Issue on Pedagogical Content Knowledge.
• Making Culturally Responsive Mathematics Teaching Explicit: A Lesson Analysis Tool. Julia M. Aguirre & Maria del Rosario Zavala (2012), Pedagogies, v8 issue2. And read our chapter in Rethinking
Mathematics, 2nd Edition.
Kim Coble
• Wooten, M. M., Coble, K., Puckett, A. W., and Rector, T. 2018, Investigating introductory astronomy students’ perceived impacts from participation in course-based undergraduate research
experiences, Phys. Rev. Phys. Educ. Res. 14, 010151
• Coble, K., Conlon, M. & Bailey, J. M. 2018, Investigating undergraduate students’ ideas about the curvature of the Universe, Phys. Rev. Phys. Educ. Res. 14, 010144
• Conlon, M., Coble, K., Bailey, J. M., & Cominsky, L. R. 2017, Investigating undergraduate students’ ideas about the fate of the Universe, Phys. Rev. Phys. Educ. Res. 13, 020128
• Coble, K., Camarillo, C. T., Trouille, L. E., Bailey, J. M., Cochran, G. L., Nickerson, M. D., & Cominsky, L. R. 2013, Investigating Student Ideas About Cosmology I: Distances and Structure,
Astronomy Education Review, 12, 010102
• Coble, K., Nickerson, M. D., Bailey, J. M., Trouille, L. E., Cochran, G. L., Camarillo, C. T., & Cominsky, L. R. 2013, Investigating Student Ideas About Cosmology II: Composition, Astronomy
Education Review, 12, 010111
• Trouille, L. E., Coble, K. A., Cochran, G. L., Bailey, J. M., Camarillo, C. T., Nickerson, M. D., and Cominsky, L. R. 2013, Investigating Student Ideas About Cosmology III: Big Bang, Expansion,
and Age of the Universe, Astronomy Education Review, 12, 010110
• Bailey, J. M., Coble, K. A., Cochran, G. L., Larrieu, D. M., Sanchez, R., and Cominsky, L. R. 2012, A Multi-Institutional Investigation of Students’ Preinstructional Ideas About Cosmology,
Astronomy Education Review, 11, 010302
• Sabella, M. S., Coble, K., and Bowen, S. P. 2008, Using the resources of the student at the urban, comprehensive university to develop an effective instructional environment, PERC Proceedings,
(AIP, NY)
• A. K. Hodari, B. Cunningham, L. J. Martinez-Miranda, M. Urry, K. Coble, E. Freeland, T. Hodapp, R. Ivie, M. Ong, S. Petty, S. Seestrom, S. Seidel, E. Simmons, M. Thoennessen, and H. White, 2011,
Many Steps Forward, A Few Steps Back: Women In Physics In The U. S., ICWIP 2011 Conference Proceedings
• D. Norman et al. 2009, Research Science and Education: The NSF’s Astronomy and Astrophysics Postdoctoral Fellowship, Astro2010 State of the Profession Position Paper
• Elwood, B., Puckett, A. W., Coble, K., Cortes, S., 2011, Searching For Hazardous Asteroids, BAAS, 218, 224.04
• Coble, K., et al. 2007, Radio Sources Toward Galaxy Clusters at 30 GHz, AJ, 134, 897
• Coble, K., et al. 2003, Observations of Galactic and Extra-galactic Sources From the BOOMERANG and SEST Telescopes, submitted to ApJS
• Crill, B. P., et al. 2003, BOOMERANG: A Balloon-borne Millimeter Wave Telescope and Total Power Receiver for Mapping Anisotropy in the Cosmic Microwave Background, ApJS, 148, 527
• Masi, S., et al. 2002, The BOOMERanG experiment and the curvature of the Universe, Prog. Part. Nucl. Phys. 48, 243, astro-ph/0201137
• de Bernardis, P., et al. 2002, Multiple Peaks in the Angular Power Spectrum of the Cosmic Microwave Background: Significance and Consequences for Cosmology, ApJ, 564, 559
• Netterfield, C. B., et al. 2001, A measurement by BOOMERANG of multiple peaks in the angular power spectrum of the cosmic microwave background, ApJ, 571, 604
• Jaffe, A., et al. 2001, Cosmology from Maxima-1, Boomerang and COBE/DMR CMB Observations, Phys. Rev. Lett., 86, 3475
• Prunet, S., et al. 2000, Noise estimation in CMB time-streams and fast map-making. Application to the BOOMERanG98 data, Proc. of the MPA/ESO/MPA conference: Mining the Sky, Garching, July 31 -
August 4, 2000
• de Bernardis, P., et al. 2000, First results from the BOOMERanG experiment, Proc. CAPP2000 Conference, Verbier, July 17 - 28, 2000
• de Bernardis, P., et al. 2000, Detection of anisotropy in the Cosmic Microwave Background at horizon and sub-horizon scales with the BOOMERanG experiment, Proc. IAU Symposium 201: New
Cosmological Data and the Values of the Fundamental Parameters, Manchester, August 7 - 11, 2000
• Bond, J. R., et al. 2000, The Cosmic Background Radiation circa nu2K, in Proc. Neutrino 2000 (Elsevier), CITA-2000-63
• Bond, J. R., et al. 2000, The Quintessential CMB, Past & Future, in Proc. CAPP-2000 (AIP), CITA-2000-64
• Bond, J. R., et al. 2000, CMB Analysis of Boomerang & Maxima & the Cosmic Parameters {Omega_tot,Omega_b h^2,Omega_cdm h^2,Omega_Lambda,n_s}, in Proc. IAU Symposium 201 (PASP), CITA-2000-65
• Lange, A. E., et al. 2001, First Estimations of Cosmological Parameters From BOOMERANG, Phys. Rev. D, 63, 042001
• de Bernardis, P., et al. 2000, A Flat Universe from High-Resolution Maps of the Cosmic Microwave Background Radiation, Nature, 404, 955
• Peterson, J. B., et al. 2001, Cosmic Microwave Background Anisotropy Data from the Arcminute Cosmology Bolometer Array Receiver, BAAS, 33, 1357
• Romer, A. K., et al. 2001, Imaging the Sunyaev Zel'dovich Effect from the South Pole with the ACBAR instrument on Viper Telescope, BAAS, 33, 1522
• Mukherjee, P., Coble, K., Dragovan, M., Ganga, K., Kovac, J., Ratra, B. and Souradeep, T. 2003, Galactic Foreground Constraints from the Python V Cosmic Microwave Background Anisotropy Data, ApJ,
592, 692
• Coble, K., Dodelson, S., Dragovan, M., Ganga, K., Knox, L., Kovac, J., Ratra, B. and Souradeep, T. 2003, Cosmic Microwave Background Anisotropy Measurement From Python V , ApJ, 584, 585
• Wilson, G. W., et al. 2000, New CMB Power Spectrum Constraints from MSAM I, ApJ, 532, 57
How are photon emissions calculated?
The power output is equal to the number of photons emitted per second multiplied by the energy of each photon. Divide the emitted power by the energy of each photon, given by equation 30-4, to
calculate the rate of photon emission.
What is photon rate?
The rate of photons at a particular wavelength and power is calculated by use of the equation for photon energy, E = hc/λ (where h is Planck’s constant).
How do you calculate the number of photons emitted per minute?
1 Answer
1. Calculate the energy of a photon: E = hc/λ = (6.626×10⁻³⁴ J⋅s × 2.998×10⁸ m⋅s⁻¹) / (670×10⁻⁹ m) = 2.965×10⁻¹⁹ J (3 significant figures + 1 guard digit).
2. Calculate the total energy per second: total energy = 1.0×10⁻³ W × (1 J⋅s⁻¹ / 1 W) = 1.0×10⁻³ J⋅s⁻¹.
3. Calculate the number of photons per millisecond: number of photons = (1.0×10⁻³ J⋅s⁻¹ × 10⁻³ s) / (2.965×10⁻¹⁹ J per photon) ≈ 3.4×10¹² photons per millisecond.
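The three steps above can be sketched in Python; the constants and the 670 nm, 1.0 mW figures come from the worked example:

```python
H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s

def photon_energy(wavelength_m):
    """Energy of one photon, E = h*c/wavelength, in joules."""
    return H * C / wavelength_m

def photons_emitted(power_w, wavelength_m, seconds=1.0):
    """Number of photons a source of the given power emits in `seconds`."""
    return power_w * seconds / photon_energy(wavelength_m)

# Worked example: a 1.0 mW source at 670 nm.
e = photon_energy(670e-9)                        # ~2.965e-19 J per photon
per_ms = photons_emitted(1.0e-3, 670e-9, 1e-3)   # ~3.4e12 photons per millisecond
```

The same two functions reproduce the other worked figures on this page by swapping in the relevant power and wavelength.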
How do you calculate a photon?
Photons travel at the speed of light. Photons are massless, but they have energy E = hf = hc/λ. Here h = 6.626×10⁻³⁴ J⋅s is called Planck’s constant. The photon energy is inversely proportional to the wavelength of the electromagnetic wave.
How many photons are emitted per second?
Divide the power of the wave by this answer. If, for instance, you are calculating all the photons emitted by a 100-watt bulb: 100 / (3.06 x 10^-19) = 3.27 x 10^20. This is the number of photons that
the light carries each second.
How many photons are emitted per second by a 60 watt bulb?
So, in order to emit 60 Joules per second, the lightbulb must emit 1.8 × 10²⁰ photons per second.
How do you calculate the energy of a photon given the frequency?
The energy associated with a single photon is given by E = hν, where E is the energy (SI units of J), h is Planck’s constant (h = 6.626×10⁻³⁴ J⋅s), and ν is the frequency of the radiation (SI units of s⁻¹ or Hertz, Hz).
How many photons per second are emitted by a 100-watt light bulb?
If, for instance, you are calculating all the photons emitted by a 100-watt bulb: 100 / (3.06 x 10^-19) = 3.27 x 10^20. This is the number of photons that the light carries each second.
How many photons are emitted per second by a laser with a power of 1 mW?
3.2×10¹⁵ photons/s
A laser pointer typically has a power rating of 1 mW, or 0.001 Joules per second. This means that every second a laser pointer emits a number of photons equal to (3.2×10¹⁸ photons/J)(0.001 J/s) = 3.2×10¹⁵ photons/s.
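These laser-pointer figures check out numerically. A photon yield of 3.2×10¹⁸ photons per joule corresponds to a photon energy of about 3.1×10⁻¹⁹ J, i.e. a red laser near 635 nm — the wavelength here is my assumption, not stated in the text:

```python
H, C = 6.626e-34, 2.998e8   # Planck's constant (J*s), speed of light (m/s)

wavelength = 635e-9          # assumed red laser-pointer wavelength, m
power = 1.0e-3               # 1 mW, in watts

energy_per_photon = H * C / wavelength        # ~3.1e-19 J
photons_per_joule = 1.0 / energy_per_photon   # ~3.2e18 photons/J
photons_per_second = power * photons_per_joule  # ~3.2e15 photons/s
```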
What is the formula for calculating the energy of a photon quizlet?
How is the energy of a photon related to its frequency? Given the wavelength, λ, of a photon, the energy, E, can be calculated using the equation E = hc/λ, where h is Planck’s constant (h = 6.626×10⁻³⁴ J⋅s) and c is the speed of light (c = 2.998×10⁸ m/s). A laser pulse with wavelength 550 nm contains 4.40 mJ of energy.
How do you calculate photons per second given wavelength and Watts?
Assuming the wave’s speed to be the speed of light in a vacuum, which is 3 x 10^8 meters per second: 6.63 x 10^-34 x 3 x 10^8 = 1.99 x 10^-25. Divide this product by the wavelength to get the energy per photon, then divide the power of the wave by that energy. If, for instance, you are calculating all the photons emitted by a 100-watt bulb (the 3.06 x 10^-19 J photon energy corresponds to a wavelength of about 650 nm): 100 / (3.06 x 10^-19) = 3.27 x 10^20.
How do you calculate the energy emitted by a laser?
In standard units, c = 3×108 m/s and h = 6.63×10-34m2kg/s. We can use these values to calculate the energy of a single photon, E = ( 6.63×10-34m2kg/s)(3×108 m/s)/(6.3×10-7m) = 3.1×10-19J per photon.
What is the formula used to calculate the energy of a photon?
E = hf
If the frequency f of the photon is known, then we use the formula E = hf. This equation was first suggested by Max Planck and hence is referred to as Planck’s equation. Similarly, if the wavelength
of the photon is known then the energy of the photon can be calculated using the formula E=hc/λ.
What is the equation for a photon?
Equation Photon Energy. Photons are transverse waves of energy as a result of particle vibration. The equation to calculate photon energy uses the energy wave equation and the longitudinal energy
difference between two points measured as a distance (r) from the atom’s nucleus. The difference in longitudinal wave energy creates a new transverse wave (photon).
How many photons per second does the source emit?
So, in order to emit 60 Joules per second, the lightbulb must emit 1.8 × 10²⁰ photons per second (that’s 180,000,000,000,000,000,000 photons per second!). In view of this, how many photons are emitted per second by the sun? The total solar output is around 4×10²⁶ W, most of which is in the visible range.
You have $54,773.07 in a brokerage account, and you plan to deposit an additional $6,000 at the end of every future year until your account totals $290,000. You expect to earn 11% annually on the
account. How many years will it take to reach your goal? Round your answer to two decimal places at the end of the calculations.
A. What's the future value of a 3%, 5-year ordinary annuity that pays $700 each year? Round your answer to the nearest cent.
B. If this was an annuity due, what would its future value be? Round your answer to the nearest cent.
Calculating in Excel, it will take 11 years to reach the goal.
In an ordinary annuity, payments are made at the end of each period; in an annuity due, payments are made at the beginning of each period.
Calculated in Excel, the future value of the ordinary annuity is $3,716.40.
Calculated in Excel, the future value of the annuity due is $3,827.89.
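These figures can be cross-checked without Excel. A short Python sketch using the standard future-value formulas (the closed-form solve for n assumes end-of-year deposits, as the problem states):

```python
import math

def years_to_goal(pv, pmt, rate, goal):
    """Solve pv*(1+r)^n + pmt*((1+r)^n - 1)/r = goal for n (ordinary annuity)."""
    # Rearranged: (1+r)^n = (goal*r + pmt) / (pv*r + pmt)
    growth = (goal * rate + pmt) / (pv * rate + pmt)
    return math.log(growth) / math.log(1 + rate)

def fv_ordinary_annuity(pmt, rate, n):
    """Future value of an ordinary annuity (end-of-period payments)."""
    return pmt * ((1 + rate) ** n - 1) / rate

def fv_annuity_due(pmt, rate, n):
    """Future value of an annuity due (beginning-of-period payments)."""
    return fv_ordinary_annuity(pmt, rate, n) * (1 + rate)

n = years_to_goal(54773.07, 6000, 0.11, 290000)   # ~11.00 years
fv_ord = fv_ordinary_annuity(700, 0.03, 5)        # ~3716.40
fv_due = fv_annuity_due(700, 0.03, 5)             # ~3827.89
```

The three results agree with the Excel answers above to the cent.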
Chosen-ciphertext security via correlated products
We initiate the study of one-wayness under correlated products. We are interested in identifying necessary and sufficient conditions for a function f and a distribution on inputs (x_1, …, x_k),
so that the function (f(x_1), …, f(x_k)) is one-way. The main motivation of this study is the construction of public-key encryption schemes that are secure against chosen-ciphertext attacks
(CCA). We show that any collection of injective trapdoor functions that is secure under a very natural correlated product can be used to construct a CCA-secure encryption scheme. The construction is
simple, black-box, and admits a direct proof of security. We provide evidence that security under correlated products is achievable by demonstrating that lossy trapdoor functions (Peikert and Waters,
STOC '08) yield injective trapdoor functions that are secure under the above mentioned correlated product. Although we currently base security under correlated products on existing constructions of
lossy trapdoor functions, we argue that the former notion is potentially weaker as a general assumption. Specifically, there is no fully-black-box construction of lossy trapdoor functions from
trapdoor functions that are secure under correlated products.
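In the notation of the abstract, the k-wise product under study can be written out as follows (a sketch only; the displayed form is my reading of the definition):

```latex
f^{(k)}(x_1, \dots, x_k) \;=\; \bigl( f(x_1), \dots, f(x_k) \bigr),
\qquad (x_1, \dots, x_k) \sim \mathcal{D}.
```

Security under a correlated product asks that f^{(k)} remain one-way even when the distribution D correlates the inputs, rather than sampling them independently at random.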
Original language: English
Title of host publication: Theory of Cryptography - 6th Theory of Cryptography Conference, TCC 2009, Proceedings
Pages: 419-436
Number of pages: 18
State: Published - 2009
Externally published: Yes
Event: 6th Theory of Cryptography Conference, TCC 2009 - San Francisco, CA, United States
Duration: 15 Mar 2009 → 17 Mar 2009
Publication series
Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 5444 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349
Conference: 6th Theory of Cryptography Conference, TCC 2009
Country/Territory: United States
City: San Francisco, CA
Period: 15/03/09 → 17/03/09
Analysis and Partial Differential Equations
James Brennan Professor (Potential Theory and Complex Analysis), Ph.D., Brown University, 1968
Francis Chung Associate Professor (Inverse Problems, Mathematical Physics, Partial Differential Equations, Harmonic Analysis), Ph.D., University of Chicago, 2012
Peter Hislop Professor (Mathematical Physics and Geometric Analysis), Ph.D., University of California-Berkeley, 1984
Peter Perry Professor (Mathematical Physics, Geometric Analysis, Dispersive Nonlinear PDE), Ph.D., Princeton University, 1981
Zhongwei Shen Professor (Partial Differential Equations, Harmonic Analysis, and Mathematical Physics), Ph.D., University of Chicago, 1989
Mihai Tohaneanu Associate Professor (Analysis and Partial Differential Equations), Ph.D., University of California-Berkeley, 2009
Emeritus Faculty
Russell Brown Harmonic Analysis, Inverse Problems, Partial Differential Equations
Richard Carey Functional Analysis
Ray Cox Analysis
Ronald Gariepy Partial Differential Equations
Lawrence Harris Functional Analysis, Infinite-Dimensional Holomorphy, Polynomials
John Lewis Partial Differential Equations
Ted Suffridge Complex Analysis
Sad News
Emeritus Professor James Wells passed away on 14 March 2023. Wells retired from the Department in 2004. His obituary describes some of his many contributions to the University of Kentucky Department
of Mathematics.
Analysis and Partial Differential Equations Seminar
Tuesdays at 11:00 A.M.; Coordinator: Mihai Tohaneanu
Schedules from past semesters: Fall 2023 | Spring 2023 | Fall 2022
Ohio River Analysis Meeting (ORAM)
The Ohio River Analysis Meeting is an annual meeting sponsored by the University of Kentucky and the University of Cincinnati. Each meeting brings leading experts in analysis to the region and also
provides an opportunity for students and recent graduates to present their work. The 13th edition of this conference is scheduled for 16-17 March 2024 at the University of Cincinnati. Information
about the series can be found at list of ORAM conference websites at the University of Cincinnati.
Ph.D. Dissertations Since 1989
• Robert Righi (2024) Dirichlet Problems in Perforated Domains (advisor: Z. Shen)
• Jamison Wallace (2024) Uniform Regularity Estimates for the Stokes System in Perforated Domains (advisor: Z. Shen)
• Shi-Zhuo Looi (2023) Asymptotic behaviour of hyperbolic partial differential equations (advisor: Mihai Tohaneanu)
• Camille Schuetz (2023) A Scattering Result for the Fifth-order KP-II Equation (advisor: Peter Perry)
• Landon Gauthier (2022) Inverse Boundary Value Problems for Polyharmonic Operators with Non-smooth Coefficients (advisor: R. Brown)
• Sam Herschenfeld (2021), Some Proofs Regarding Minami Estimates and Local Eigenvalue Statistics for some Random Schrödinger Operator Models (advisor: P. Hislop)
• Joel Klipfel (2020), The Direct Scattering Map for the Intermediate Long Wave Equation (advisor: P. Perry)
• Ben Brodie (2020), Eigenvalue Statistics and Localization for Random Band Matrices with Fixed Width and Wegner Orbital Model (advisor: P. Hislop)
• Maryam al Ghafli (2019), An Inverse Eigenvalue Problem for the Schrödinger Equation on the Unit Ball of R^3 (advisor: P. Hislop)
• Jinping Zhuge (2019), Boundary Layers in Periodic Homogenization (advisor: Z. Shen)
• George Lytle (2019), Approximations in Reconstructing Discontinuous Conductivities in the Calderón Problem (advisor: P. Perry)
• Stephen Deterding (2018), Bounded Point Derivations on Certain Function Spaces (advisor: J. Brennan)
• B. Chase Russell (2018), Homogenization in Perforated Domains and with Soft Inclusions (advisor: Z. Shen)
• Joseph Lindgren (2017), Orbital Stability of Solitons Solving Non-Linear Schrödinger Equations in an External Potential (advisor: P. Hislop)
• Jiaqi Liu (2017), Global Well-Posedness for the Derivative Nonlinear Schrödinger Equation Through Inverse Scattering (advisor: P. Perry)
• Morgan Schreffler (2017), Approximation of Solutions to the Mixed Dirichlet-Neumann Boundary Value Problem on Lipschitz Domains (advisor: R. Brown)
• Robert Wolf (2017), Compactness of Isoresonant Potentials (advisor: P. Hislop)
• Laura Croyle (2016), Solutions to the L^p Mixed Boundary Value Problem in C^1,1 Domains (Advisor: R. Brown)
• Shu Gu (2016), Homogenization of Stokes Systems with Periodic Coefficients (Advisor: Z. Shen)
• Michael Music (2016), Inverse Scattering for the Zero-Energy Novikov-Veselov Equation (Advisor: P. Perry)
• Yaowei Zhang (2016), The Bourgain Spaces and Recovery of Magnetic and Electric Potentials of Schrödinger Operators (Advisor: R. Brown)
• Aaron Saxton (2014), Decay estimates on trace norms of localized functions of Schrödinger operators (Advisor: P. Hislop)
• Murat Akman (2014), On The Dimension of a Certain Measure Arising From a Quasilinear Elliptic PDE (Advisor: J. Lewis)
• Megan Gier (2014), Eigenvalue Multiplicities of the Hodge Laplacian on Coexact 2-forms for Generic Metrics on 5-Manifolds (Advisor: P. Hislop)
• Tao Huang (2013), Regularity and Uniqueness of Harmonic and Biharmonic Map Heat Flows (Advisor: C. Wang).
• Ryan Walker (2013), On a Paley-Wiener Theorem for the ZS-AKNS Scattering Transform (Advisor: P. Perry).
• Jay Hineman (2012), The Hydrodynamic Flow of Nematic Liquid Crystals in Three Dimensions (Advisor: C. Wang)
• Chris Mattingly (2012), Rational Approximation on Compact Nowhere Dense Sets (Advisor: J. Brennan).
• Jun Geng (2011), Elliptic Boundary Value Problems on Non-smooth Domains (Advisor: Z. Shen).
• Erin Militzer (2011), L^p Polynomial Approximation and Uniform Rational Approximation (Advisor: J. Brennan).
• Justin Taylor (2011), Convergence of Eigenvalues for Elliptic Systems on Domains with Thin Tubes and the Green Function for the Mixed Problem (Advisor: R. Brown).
• Phuoc Ho (2010), Upper Bounds on the Splitting of Eigenvalues (Advisor: P. Hislop).
• Joel Kilty (2009), L^p Boundary Value Problems on Lipschitz Domains (Advisor: Z. Shen).
• Julie Miker (2009), Eigenvalue Inequalities for a Family of Spherically Symmetric Riemannian Manifolds (Advisor: P. Hislop).
• Zhongyi Nie (2009), Estimates for a Class of Multi-linear forms (Advisor: R. Brown).
• Teng Jiang (2008), Absolute Minimizers of L^∞ Functional under the Dirichlet Energy Constraint (Advisor: C. Wang).
• Christopher Frayer (2008), Scattering Theory on the Line with Singular Miura Potentials (Advisor: P. Perry).
• Aekyoung Shin Kim (2007), The L^p Neumann Problem for Laplace Equation in Convex Domains (Advisor: Z. Shen).
• Yuho Shin (2006), Geodesics of a Two-Step Nilpotent Lie Group (Advisor: P. Perry).
• Bjorn Bennewitz (2006), Nonuniqueness in a Free Boundary Problem (Advisor: J. L. Lewis).
• Mike Dobranski (2004), Construction of Exponentially Growing Solutions to First-Order Systems with Non-Local Potentials (Advisor: R. Brown).
• Mary Goodloe (2004), Hadamard Products of Convex Harmonic Mappings (Advisor: T. Suffridge).
• Steve Kovacs (2004), Invertibility Preserving maps of C^*-Algebras (Advisor: L. Harris).
• Stacey Mueller (2004), Harmonic Mappings and Solutions of a Differential Equation Related to de la Vallee Poussin Means (Advisor: T. Suffridge).
• Wataru Ishizuka (2003), The Weak Compactness and Regularity of Weakly Harmonic Maps (Advisors: C. Wang and L. Harris).
• Christopher Morgan (2001), On Univalent Harmonic Mappings (Advisor: T. Suffridge).
• Carl Lutzer (2000), On the extraction of topological and geometric information from the spectrum of the Dirichlet to Neumann operator (Advisor: P. Hislop).
• Jeffery D. Sykes (1999), Regularity of Solutions of the Mixed Boundary Problem for Laplace's Equation on a Lipschitz Graph Domain (Advisor: R. Brown).
• Jerry R. Muir, Jr. (1999), Linear and Holomorphic Idempotents and Retracts in the Open Unit Ball of a Commutative C*-algebra with Identity (Advisor: T. Suffridge).
• Michael D. Galloy (1998), Harmonic Univalent Mappings on the Unit Disk and the Punctured Unit Disk (Advisor: T. Suffridge).
• John T. Thompson (1998), A Study of Harmonic Mappings on Punctured Domains: An Argument Principle and Some Coefficient Results (Advisor: T. Suffridge).
• Wei Hu (1997), The Initial-boundary Value Problem for Higher Order Differential Operators on Lipschitz Cylinders (Advisor: R. Brown).
• John Prather (1997), Geometric Properties of the Hadamard Product (Advisor: T. Suffridge).
• Michael Dorff (1997), The Inner Mapping Radius and Construction of Harmonic, Univalent Mappings of the Unit Disk (Advisor: T. Suffridge).
• Robert Robertson (1996), An inverse boundary value problem in linear elasticity (Advisor: P. Hislop).
• John Tolle (1996), Location of Inhomogeneities in Elastic Media (Advisor: R. Gariepy).
• Ron Vandenhouten (1996), Stability for the Biharmonic and Polyharmonic Obstacle Problems (Advisor: D. R. Adams).
• Kevin Roper (1995), Convexity Properties of Holomorphic Mappings in C (Advisor: T. Suffridge).
• Evelyn Pupplo-Cody (1992), A Structural Formula for a Class of Typically Real Functions and Some Consequences (Advisor: T. Suffridge).
• Barbara Hatfield (1991), Gradient Estimates for Capillary Problems (Advisor: R. Gariepy).
• Hi Jun Choe (1989), Regularity for Minimizers of Certain Singular Functionals (Advisor: R. Gariepy).
Numerical Linear Algebra - M1 - 8EC
Students should have linear algebraic capabilities that surpass the mere ability
to perform linear algebraic computations and that include geometric intuition in
normed spaces and inner product spaces. They should be acquainted with the basic
principles of numerical mathematics and have programming skills that allow them
to work in MatLab.
This requires that, apart from a first-year BSc course in Linear Algebra, students
have followed an advanced course in Linear Algebra, a BSc course in Numerical
Linear Algebra, or even Representation Theory. Moreover, they have successfully
passed a course in Numerical Mathematics that includes the formal definitions of
conditioning of a mathematical problem and backward stability of an algorithm
to solve that problem.
Students know how to use and compute LU-, Cholesky-, and QR-factorizations. They
have worked with plane rotations (Givens) and reflectors in hyperplanes (Householder)
and know how these generate the (special) orthogonal and unitary groups.
Knowledge of other matrix Lie groups is a plus but not strictly necessary.
Students know the Spectral Theorems for selfadjoint, normal and unitary linear
transformations. They know and understand the Schur-, Jordan-, and Singular
Value factorizations and know how to compute them by hand for small matrices.
They know the Power Method, the Rayleigh Quotient Iteration, and the QR-iteration
for the approximation of eigenpairs.
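As a quick self-check of this assumed knowledge, the Power Method with a Rayleigh quotient estimate can be sketched in a few lines (shown here in Python/NumPy rather than MatLab; the test matrix and iteration count are arbitrary illustrations):

```python
import numpy as np

def power_method(A, x0, iters=200):
    """Approximate the dominant eigenpair of A by repeated multiplication."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        y = A @ x                      # apply the operator
        x = y / np.linalg.norm(y)      # renormalize to avoid overflow
    lam = x @ A @ x                    # Rayleigh quotient eigenvalue estimate
    return lam, x

# Symmetric test matrix with known eigenvalues 1, 2, 5
A = np.diag([1.0, 2.0, 5.0])
lam, x = power_method(A, np.array([1.0, 1.0, 1.0]))
```

The iterates converge to the eigenvector of the eigenvalue of largest modulus at a rate governed by the ratio of the two largest eigenvalues, here (2/5) per step.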
These prerequisites and assumed prior knowledge can for example be obtained from:
[1] L.N. Trefethen and D. Bau (1997).
Numerical Linear Algebra, SIAM Society for Industrial and Applied Mathematics.
Lectures 1-31.
[2] A. Quarteroni, R. Sacco and F. Saleri (2006).
Numerical Mathematics. Springer Verlag, 2nd edition.
Chapters 1-5.
The first two lectures will be spent on reviewing this material. Note that reviewing
is not the same as explaining in detail. If you have not seen the material before it
may be hard to absorb everything in just these two weeks.
Long before the course starts, a yes/no quiz will be placed on the elo-website of
the course with 30 easy questions about the prerequisites. If they do not seem easy,
please reconsider taking this course, or work hard to get to the required level.
Aim of the course
This course is a first introduction into the main aspects of iterative methods
to approximate the solutions of finite- but high-dimensional linear equations,
eigenvalue-, and singular value problems. Many of these methods are based on
the clever reduction of the problem to an approximating problem of much smaller
dimensions. The smaller problem yields an approximate solution of the original
problem but simultaneously provides information on how to set up the next reduced
problem whose corresponding approximation is better than the previous one. This
leads to a sequence of smaller problems that need to be solved in order to get
increasingly better approximations of the solution of the original problem.
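The reduce-solve-expand cycle described above is the essence of Krylov subspace methods. As a hedged illustration (again in Python/NumPy rather than MatLab), the Arnoldi process builds such a small approximating problem: an orthonormal basis Q of a Krylov subspace and a small Hessenberg matrix H satisfying A Q[:, :k] = Q H, whose eigenvalues approximate extreme eigenvalues of A:

```python
import numpy as np

def arnoldi(A, b, k):
    """Build Q (n x (k+1), orthonormal columns) and Hessenberg H ((k+1) x k)
    with A @ Q[:, :k] == Q @ H, via modified Gram-Schmidt."""
    n = len(b)
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ Q[:, j]                    # expand the Krylov subspace
        for i in range(j + 1):             # orthogonalize against previous vectors
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50)); A = A + A.T   # symmetric test matrix
Q, H = arnoldi(A, rng.standard_normal(50), k=10)
```

The 50-dimensional eigenvalue problem is thus reduced to a 10-dimensional one; restarting Arnoldi with improved start vectors yields the sequence of small problems described above.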
The aim is to teach students how to approximate solutions of large scale linear
algebra problems by cleverly designed small scale linear algebra problems, how
to analyse the approximation properties mathematically, and how to implement the
corresponding methods in MatLab. Students are taught how to perform experiments
in MatLab and how to discuss their outcomes.
The focus will be on mathematical theorems and proofs. Instead of covering a
large number of algorithms, we study a smaller number of central algorithms in
greater detail, from defining mathematical principles via algorithms to their
efficient and stable implementation.
This course is part of Master Programmes in Mathematics and can be of added
value in the other MasterMath courses Parallel Algorithms, Systems and Control,
and Numerical Bifurcation Analysis of Large-scale systems. It also supplements
Numerical Methods for PDEs (stationary or time-dependent).
Jan Brandts, Korteweg-de Vries Institute for Mathematics, UvA
The Calculate Density tool calculates the density of features in a neighborhood around those features. It can be calculated for both point and line features.
Possible uses include analyzing density of housing or occurrences of crime for community planning purposes or exploring how roads or utility lines influence wildlife habitat. The population field can
be used to weight some features more heavily than others or allow one point to represent several observations. For example, one address may represent a condominium with six units, or some crimes may
be weighted more heavily than others in determining overall crime levels. For line features, a divided highway may have more impact than a narrow dirt road.
How kernel density is calculated
Kernel density is calculated differently for different features.
Point features
Calculate Density calculates the density of point features around each output raster cell.
Conceptually, a smoothly curved surface is fitted over each point. The surface value is highest at the location of the point and diminishes with increasing distance from the point, reaching zero at
the Search radius distance from the point. Only a circular neighborhood is possible. The volume under the surface equals the Population field value for the point, or 1 if NONE is specified. The
density at each output raster cell is calculated by adding the values of all the kernel surfaces where they overlay the raster cell center. The kernel function is based on the quartic kernel function
described in Silverman (1986, p. 76, equation 4.5).
If a population field setting other than NONE is used, each item's value determines the number of times to count the point. For example, a value of 3 will cause the point to be counted as three
points. The values can be integer or floating point.
By default, a unit is selected based on the linear unit of the projection definition of the input point feature data or as otherwise specified in the Output Coordinate System environment setting.
If an output Area units factor is selected, the calculated density for the cell is multiplied by the appropriate factor before it is written to the output raster. For example, if the input units are
meters, the output area units will default to Square kilometers. Comparing a unit scale factor of meters to kilometers results in the values differing by a multiplier of 1,000,000 (1,000 meters × 1,000 meters).
Line features
Kernel Density can also calculate the density of linear features in the neighborhood of each output raster cell.
Conceptually, a smoothly curved surface is fitted over each line. Its value is greatest on the line and diminishes as you move away from the line, reaching zero at the specified Search radius
distance from the line. The surface is defined so the volume under the surface equals the product of line length and the Population field value. The density at each output raster cell is calculated
by adding the values of all the kernel surfaces where they overlay the raster cell center. The use of the kernel function for lines is adapted from the quartic kernel function for point densities as
described in Silverman (1986, p. 76, equation 4.5).
A line segment and the kernel surface fitted over it are shown.
The illustration above shows a line segment and the kernel surface fitted over it. The contribution of the line segment to density is equal to the value of the kernel surface at the raster cell center.
By default, a unit is selected based on the linear unit of the projection definition of the input polyline feature data or as otherwise specified in the Output Coordinate System environment setting.
When an output Area units factor is specified, it converts the units of both length and area. For example, if the input units are meters, the output area units will default to Square kilometers and
the resulting line density units will convert to kilometers per square kilometer. The end result, comparing a unit scale factor of meters to kilometers, will be the density values being different by
a multiplier of 1,000.
You can control the density units for both point and line features by manually selecting the appropriate factor. To set the density to meters per square meter (instead of the default kilometers per
square kilometer), set the area units to Square meters. Similarly, to have the density units of your output in miles per square mile, set the area units to Square miles.
If a population field other than NONE is used, the length of the line is considered to be its actual length multiplied by the value of the population field for that line.
Formulas for calculating kernel density
The following formulas define how the kernel density for points is calculated and how the default search radius is determined within the kernel density formula.
Predicting the density for points
The predicted density at a new (x,y) location is determined by the following formula:
• i = 1,…,n are the input points. Only include points in the sum if they are within the radius distance of the (x,y) location.
• pop[i] is the population field value of point i, which is an optional parameter.
• dist[i] is the distance between point i and the (x,y) location.
The calculated density is then multiplied by the number of points or the sum of the population field if one was provided. This correction makes the spatial integral equal to the number of points (or
the sum of the population field) rather than always being equal to 1. This implementation uses a Quartic kernel (Silverman, 1986). The formula will need to be calculated for every location where you want to
estimate the density. Since a raster is being created, the calculations are applied to the center of every cell in the output raster.
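The per-cell sum described above can be sketched as follows. This is a simplified illustration, not Esri's implementation; it assumes the standard quartic kernel (3/π)(1 − (d/ρ)²)², which integrates to 1 over the search circle, weighted by the optional population value:

```python
import math

def quartic_density(points, x, y, radius, pops=None):
    """Kernel density at (x, y): sum of quartic kernels over points within radius."""
    if pops is None:
        pops = [1.0] * len(points)       # population NONE: each point counts once
    total = 0.0
    for (px, py), pop in zip(points, pops):
        d = math.hypot(x - px, y - py)
        if d < radius:                   # points beyond the radius contribute zero
            u = 1.0 - (d / radius) ** 2
            total += pop * (3.0 / math.pi) * u * u
    return total / radius ** 2           # scale so each kernel integrates to pop

# A single unweighted point contributes 3/pi at its own location (radius 1)
d0 = quartic_density([(0.0, 0.0)], 0.0, 0.0, radius=1.0)
```

In the tool, this evaluation is repeated at the center of every output raster cell.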
Default search radius (bandwidth)
The algorithm used to determine the default search radius, also known as the bandwidth, does the following:
1. Calculates the mean center of the input points. If a Population field was provided, this, and all the following calculations, will be weighted by the values in that field.
2. Calculates the distance from the (weighted) mean center for all points.
3. Calculates the (weighted) median of these distances, D[m].
4. Calculates the (weighted) Standard Distance, SD.
5. Applies the following formula to calculate the bandwidth.
• D[m] is the (weighted) median distance from (weighted) mean center.
• n is the number of points if no population field is used, or if a population field is supplied, n is the sum of the population field values.
• SD is the standard distance.
Note that the min part of the equation means that whichever of the two options, either SD or the term based on the median distance D[m], yields the smaller value will be used in the calculation.
There are two methods for calculating the standard distance, unweighted and weighted.
Unweighted distance
• x [i ], y [i ] and z [i ] are the coordinates for feature i
• {x̄, ȳ, z̄} represents the mean center for the features
• n is equal to the total number of features.
Weighted distance
• w[i] is the weight at feature i
• {x [w], y [w], z [w]} represents the weighted mean center.
This methodology for choosing the search radius is based on Silverman's rule-of-thumb bandwidth estimation formula, adapted for two dimensions. This approach to calculating a default
radius generally avoids the ring-around-the-points phenomenon that often occurs with sparse datasets, and it is resistant to spatial outliers (a few points that are far away from the rest of the points).
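The bandwidth formula itself is rendered as an image in the original page; Esri's documentation gives it as SearchRadius = 0.9 · min(SD, √(1/ln 2) · D[m]) · n^(−0.2). A sketch of the unweighted case, assuming that form:

```python
import math

def default_search_radius(points):
    """Silverman-style rule-of-thumb bandwidth for 2D point data (unweighted)."""
    n = len(points)
    # 1. Mean center of the input points
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    # 2.-3. Median distance D[m] from the mean center
    dists = sorted(math.hypot(p[0] - mx, p[1] - my) for p in points)
    mid = n // 2
    dm = dists[mid] if n % 2 else 0.5 * (dists[mid - 1] + dists[mid])
    # 4. Standard distance SD
    sd = math.sqrt(sum((p[0] - mx) ** 2 + (p[1] - my) ** 2 for p in points) / n)
    # 5. Bandwidth: whichever of SD and sqrt(1/ln 2) * D[m] is smaller is used
    return 0.9 * min(sd, math.sqrt(1.0 / math.log(2.0)) * dm) * n ** -0.2

r = default_search_radius([(0, 0), (1, 0), (0, 1), (1, 1)])
```

The weighted variant replaces each sum and the median with their population-weighted counterparts, as described in the steps above.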
How barrier affects the density calculation
A barrier alters the influence of a feature while calculating kernel density for a cell in the output raster. A barrier can be a polyline or a polygon feature layer. It can affect the calculation of
density in two ways, by either increasing the distance between a feature and the cell where density is being calculated or excluding a feature from the calculation.
Without a barrier, the distance between a feature and a cell is the shortest one possible, that being a straight line between two points. With an open barrier, usually represented by a polyline, the
path between a feature and a cell is influenced by the barrier. In this case, the distance between the feature and the cell is extended due to a detour around the barrier, as shown in the
illustration below. As a result, the influence of the feature is reduced while estimating the density at the cell. The path around the barrier is created by connecting a series of straight lines to
go around the barrier from the input point feature to the cell. It is still the shortest distance around the barrier but longer than the distance would be without the barrier. With a closed barrier,
usually represented by a polygon completely encompassing a few features, the density calculation at a cell on one side of the barrier completely excludes the features on the other side of the barrier.
A conceptual figure for the distance calculation between a cell and an input point feature is shown. Kernel density without a barrier is on the left; Kernel density with a barrier is on the right.
The kernel density operation with a barrier can provide the more realistic and accurate results in some situations compared to the kernel density without a barrier operation. For example, when
exploring the density of the distribution of an amphibian species, the presence of a cliff or road may affect their movement. The cliff or road can be used as a barrier to get a better density
estimation. Similarly, the result of a density analysis of the crime rate in a city may vary if a river that passes through the city is considered as a barrier.
The illustration below shows the kernel density output of late-night traffic accidents in Los Angeles (data available from the Los Angeles County GIS Data Portal). The density estimation without a
barrier is on the left (1) and with a barrier on both sides of the roads is on the right (2). The tool provides a much better estimation of density using the barrier, where the distance is measured
along with the road network, than using the shortest distance between the accident locations.
Kernel density estimation is shown without a barrier (1) and with a barrier on both sides of the roads (2).
Silverman, B. W. Density Estimation for Statistics and Data Analysis. New York: Chapman and Hall, 1986.
A man gave 0.15 of his savings to his son, 0.25 to his daughter and 0.6 of the rest to his wife. If he still had ₹960 left, what amount did he save?
a) ₹5200
b) ₹5000
c) ₹4000
d) ₹3800
correct answer is: c) ₹4000
The son got 0.15 of the savings and the daughter got 0.25 of the savings.
So together, the son and daughter got (0.15 + 0.25) = 0.40 = 2/5 of the savings.
Let the man's savings be ₹x.
Then the son and daughter together got ₹(2x/5).
∴ Rest of the savings = ₹(x − 2x/5) = ₹(3x/5).
The wife got 0.6 = 3/5 of the rest of the savings.
∴ The wife got ₹(3x/5 × 3/5) = ₹(9x/25).
So the remaining savings = ₹(3x/5 − 9x/25) = ₹(6x/25).
According to the question,
6x/25 = 960
→ x = 960 × 25/6
→ x = ₹4000
Ans: The man saved ₹4000 in total.
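The arithmetic can be checked directly in a few lines of Python: after the son's 0.15 and the daughter's 0.25 shares, 0.60 of the savings remains; the wife takes 0.6 of that, leaving 0.24 of the original amount, which must equal ₹960:

```python
# Fraction remaining after each distribution
rest_after_children = 1 - 0.15 - 0.25                 # 0.60 of the savings
remaining_fraction = rest_after_children * (1 - 0.6)  # wife takes 0.6 of the rest -> 0.24
savings = 960 / remaining_fraction                    # ₹960 is 0.24 of the savings
```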
The same problem can also be phrased in other ways:
1. Imagine a man saved some money. He gave 15% to his son, 25% to his daughter, and 60% of what remained to his wife. If he still had ₹960 left, can you find out how much money he initially saved?
2. Suppose a man had some savings. He shared 0.15 with his son, 0.25 with his daughter, and then 0.6 of what was left with his wife. If he ended up with ₹960, can you calculate how much he saved?
3. Imagine a man who decided to share his savings with his family. He distributed 0.15 to his son, 0.25 to his daughter, and then 0.6 of the remaining amount to his wife. If he had ₹960 left
afterward, can you calculate how much he originally saved?
Investigation of AC Electrical Properties of MXene-PCL Nanocomposites for Application in Small and Medium Power Generation
Department of Electrical Devices and High Voltage Technology, Lublin University of Technology, 38A, Nadbystrzycka Str., 20-618 Lublin, Poland
Department of Nanoelectronic and Surface Modification, Sumy State University, 2, R-Korsakova Str., 40007 Sumy, Ukraine
Biomedical Reseach Center, Medical Institute of Sumy State University, 2, R-Korsakova Str., 40007 Sumy, Ukraine
Institute of Atomic Physics and Spectroscopy, University of Latvia, 3, Jelgavas Str., LV-1004 Riga, Latvia
Materials Research Center, 3, Krzhizhanovskogo Str., 03142 Kyiv, Ukraine
Y-Carbon Ltd., 18, Vandy Vasilevskoy Str., 04116 Kyiv, Ukraine
Authors to whom correspondence should be addressed.
Submission received: 8 October 2021 / Revised: 21 October 2021 / Accepted: 25 October 2021 / Published: 1 November 2021
The paper examined Ti[3]C[2]T[x] MXene (T—OH, Cl or F), which is prepared by etching a layered ternary carbide Ti[3]AlC[2] (312 MAX-phase) precursor and deposited on a polycaprolactone (PCL)
electrospun membrane (MXene-PCL nanocomposite). X-ray Diffraction analysis (XRD) and Scanning Electron Microscopy (SEM) indicate that the obtained material is pure Ti[3]C[2] MXene. SEM of the
PCL-MXene composite demonstrates a random Ti[3]C[2] distribution over the nanoporous membrane. Results of capacitance, inductance, and phase shift angle studies of the MXene-PCL nanocomposite are
presented. It was found that the frequency dependence of the capacitance exhibited a clear sharp minima in the frequency range of 50 Hz to over 10^4 Hz. The frequency dependence of the inductance
shows sharp maxima, the position of which exactly coincides with the position of the minima for the capacitance, which indicates the occurrence of parallel resonances. Current conduction occurs by
electron tunneling between nanoparticles. In the frequency range from about 10^4 Hz to about 10^5 Hz, there is a broad minimum on the inductance relationship. The position of this minimum coincides
exactly with the position of the maximum of the phase shift angle—its amplitude is close to 90°. The real value of the inductance of the nanocomposite layer was determined to be about 1 H. It was
found that the average value of the distance over which the electron tunnels was determined with some approximation to be about 5.7 nm and the expected value of the relaxation time to be τ[M] ≈ 3 ×
10^−5 s.
1. Introduction
Mxenotronics is a currently growing discipline [
], within the framework of which the application of MXenes in electronics, electrical devices, and photovoltaics is being carried out. To extend the field of application, new structural features and
properties of MXenes need to be considered and reviewed. For example, supercapacitors and batteries are only starting to employ MXene-based components as substitutes for Li-ion accumulators [
So far, MXenes in photovoltaics have been applied as hole/electron transport layers and electrodes. As solar cell elements, they demonstrate much better performance in energy conversion, reduced trap states, and better charge transfer [
]. Yu. Gogotsi et al. demonstrated the possible application of electrospun MXene/Carbon Nanofibers as supercapacitor electrodes for energy storage and opened a wide range of applications [
The flexible electronics concept was introduced several decades ago and conductive polymers, organic semiconductors, and amorphous silicon have since found many applications in different areas [
]. Despite the progress in this area, new challenges arise due to the increasing application of implantable systems with high flexibility and biocompatibility [
]. Cardiac patches, electrodes, brain and muscle stimulators, and neural guide conduits require the application of conductive biocompatible polymer membrane with satisfactory electronic properties [
]. Conductive polymers and membranes with conductive materials, including graphene and carbon nanotubes, are widely used in biomedical device development [
]. Over the next decade, MXenes are expected to take flexible electronics to a whole new level.
MXenes are a special type of 2D material that consists of thin exfoliated sheets of transition metal carbide or nitride [
]. Typically, to obtain an MXene, a simple method is used: top-down selective etching of a MAX phase in hydrofluoric acid to remove the A element (Al, Si or Ga). Based on the initial M materials, MXenes
are divided into single transition metal MXenes (e.g., Ti[2]C, V[2]C, Ti[3]C[2]) and double transition metal MXenes (e.g., Mo[2]TiC[2], Mo[2]Ti[2]C[3]). The final stage of MXene development is intercalation and/or delamination, in which the MXene can improve its initial characteristics and achieve even more versatile properties. This is ensured
through contact with surface functional groups (H, F, and O) in the obtained solution [
From the practical side, undoubtedly, this material has many crossovers with graphene, including properties [
]. However, in a number of applications, it surpasses its eminent competitor, both in electrical characteristics and manufacturability. Ti[3]C[2]T[x], as one of the most popular MXenes, exhibits a high conductivity of 4600 ± 1100 S cm^−1 for each individual flake and a field-effect electron mobility of 2.6 ± 0.7 cm^2 V^−1 s^−1. The electrical resistance of layered Ti[3]C[2]T[x] is only one order of magnitude higher than that of individual flakes, which grants exceptional electron transport between layers in comparison to the majority of other 2D materials [
An illustrative case of high technological effectiveness could be the application of MXenes in flexible antennas in the 2.4 GHz band (Wi-Fi and Bluetooth bands) for wearable electronics. They require
high flexibility to withstand repeated bending during operation. An MXene solution based on Ti[3]C[2]T[x], in this case, is not only mechanically stable but also emits radio signals 50 times better than graphene analogues and 300 times better than antennas with a radiating structure made of Ag. However,
the manufacturing of “MXene nanoantenna” is several times easier, and as a bonus, the material is water-dispersible, which is very important for contact with the environment [
Some research works have demonstrated the application of MXene for deposition on flexible membranes for development of a triboelectric nanogenerator [
]. They reported on a poly(vinylidene fluoride-trifluoroethylene) (PVDF-TrFE)/MXene nanocomposite material with superior dielectric constant and high surface charge density. Application of an
electrospun membrane loaded with MXene prevents the active material from delaminating from the substrate during folding or bending.
The aim of this study was to measure the AC properties (phase shift angle, capacitance and inductance) of a MXene-PCL nanocomposite, analyse the results obtained, and determine the AC conduction
mechanism of the MXene-PCL nanocomposite based on them.
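To illustrate the measured quantities (the paper reports an effective inductance of about 1 H and a parallel resonance where the capacitance minima coincide with the inductance maxima), the layer can be modeled, very roughly, as an ideal parallel RLC circuit. The capacitance and resistance values below are assumed for illustration only, not measured ones:

```python
import cmath, math

def parallel_rlc_impedance(f, R, L, C):
    """Complex impedance of R, L and C in parallel at frequency f (Hz)."""
    w = 2 * math.pi * f
    y = 1 / R + 1 / (1j * w * L) + 1j * w * C   # admittances add in parallel
    return 1 / y

L_layer = 1.0        # H, inductance value reported for the nanocomposite layer
C_assumed = 2.5e-11  # F, hypothetical capacitance chosen for illustration
R_assumed = 1e6      # ohm, hypothetical parallel resistance

# Parallel resonance frequency: inductive and capacitive admittances cancel
f0 = 1 / (2 * math.pi * math.sqrt(L_layer * C_assumed))

# Phase shift angle of the impedance at resonance (degrees): near zero,
# since at f0 the parallel combination looks purely resistive
phase_at_f0 = math.degrees(
    cmath.phase(parallel_rlc_impedance(f0, R_assumed, L_layer, C_assumed)))
```

With these assumed values, f0 falls in the 10^4 to 10^5 Hz band where the paper locates the broad inductance minimum and the phase-angle maximum, which is the kind of consistency check such a lumped-element model allows.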
2. Materials and Methods
2.1. MXene Synthesis and Characterization
Ti[3]C[2]T[x] MXene (T—OH, Cl or F) was prepared by etching a layered ternary carbide Ti[3]AlC[2] (312 MAX-phase) precursor with a mixture of hydrochloric acid (HCl) and lithium fluoride (LiF) by the MILD method [
]. The etching solution was prepared as follows: 200 mL of 12M HCl (37%) was added to 50 mL of DI-water to yield 250 mL of 9M HCl; then 16 g of LiF was added under stirring. The mixture was placed in
a plastic container (volume 500 mL). 10 g of the Ti[3]AlC[2] powder with a mean particle size of less than 40 μm was gradually added to the etching solution. The reaction mixture was held at 25 °C under constant stirring for 24 h. The aluminum layer in Ti[3]AlC[2] was removed by hydrofluoric acid formed in situ via the reaction between HCl and LiF, leaving Ti[3]C[2]T[x] flakes weakly bonded through Van der Waals interaction. After etching, the obtained MXene slurry was rinsed with DI-water via repetitive centrifugation (10 min each cycle at 3500 rpm) to remove
excess acid. After each cycle the acidic supernatant was decanted, followed by the addition of a fresh portion of DI-water, redispersion, and another centrifuging cycle. Rinsing was performed until
the pH value of the supernatant reached 6. The obtained wet slurry containing MXene was subject to a delamination process in order to separate MXene Ti[3]C[2]T[x] flakes into a water-based colloidal solution.
The delamination was assisted by intercalation of Li^+ ions between Ti[3]C[2]T[x] flakes following separation into the colloidal solution [
]. The solution for the intercalation-assisted delamination was prepared as follows: 2 g of lithium chloride (LiCl) was added to 40 mL of DI-water in a plastic container (volume 50 mL). Two gram of
the etched MXene slurry was added to the prepared solution. The process was performed at 35 °C for 24 h under constant stirring. After intercalation in the LiCl solution, the MXene slurry was rinsed
via repetitive cycles of centrifuging (10 min each cycle at 3500 rpm), decanting the supernatant, and redispersion in freshly added DI-water until the supernatant turns from transparent to black in
color, signaling MXene flakes’ separation into colloidal solution. At this stage, the MXene supernatant after centrifuging is collected and stored. Rinsing is performed until the supernatant after
centrifugation becomes transparent again. The collected supernatant containing MXene is centrifuged at 6000 rpm for 1 h to obtain concentrated MXene sediment.
The prepared MXene was characterized by X-ray diffraction (XRD) and scanning electron microscopy (SEM). Powder XRD patterns were obtained using a Rigaku Ultima-IV diffractometer (Rigaku Corporation, Tokyo, Japan). For the XRD investigation, the MXene slurry was deposited on square glass slides with a side of 24 mm as a thin non-transparent film and left to dry in vacuum. The SEM investigation was performed using a Tescan Mira 3 LMU scanning electron microscope (Tescan Orsay Holding, a.s., Brno, Czech Republic). For the SEM investigation, drops of diluted MXene colloidal solution in water were placed on pieces of silicon wafer and left to dry in vacuum.
For crystal structure/phase studies, a TEM-125K transmission electron microscope (Selmi, Ukraine) was used at an accelerating voltage of 125 kV. To prepare test samples, the MXene solution was applied by the drop-drying method onto NaCl crystals covered with a carbon film; the NaCl was then dissolved in water and the film was caught on a Cu microscope grid. Selected-area electron diffraction (SAED) patterns were calibrated using an Al standard.
2.2. PCL Electrospun Membrane Synthesis
Polycaprolactone (PCL), Mn = 45,000 g/mol, was obtained from Sigma Aldrich (St. Louis, MO, USA). Chloroform (purity ≥ 99%), ethanol (purity 95.4–96.8%), and acetic acid (purity ≥ 99%) were obtained
from Penta Chemicals (Prague, Czech Republic). Polymer solution was prepared as described in [
]. Electrospun fiber mats were produced by the conventional electrospinning method with the following parameters: 25 kV applied voltage, 180 mm distance between the syringe and the collector, and a feed rate of 12 mL/h.
2.3. MXene Deposition on PCL Membranes
The MXene solution was prepared as follows: 0.02 g of thawed concentrated MXene slurry, containing about 15% MXene by mass, was dispersed in 4 mL of DI-water under sonication for 1 min in an ultrasonic bath (50 W, 40 kHz). This solution was used to soak the PCL scaffolds. PCL is known to be a hydrophobic polymer; therefore, the PCL scaffolds (approximately 5 mm × 14 mm) were first treated in 1 M sodium hydroxide (NaOH) solution for four hours at 30 °C to improve the hydrophilicity of the PCL surface and then washed in DI-water for 24 h. The treated PCL scaffolds were put in small glass containers (20 mL) with 4 mL of the MXene solution. Each container was filled with argon to prevent oxidation of the MXenes, sonicated for 5 min in the ultrasonic bath (50 W, 40 kHz), and left for 3 h to let the MXene soak into the PCL scaffold. Then, the PCL scaffolds were removed from the MXene solution, immersed in DI-water for 1 s to remove excess solution, and dried on filter paper. The MXene-PCL nanocomposite scaffolds were subjected to the same coating procedure a second time.
The structure of the as-spun and MXene-deposited membranes was assessed using a scanning electron microscope (SEO-SEM Inspect S50-B, FEI, Brno, Czech Republic; accelerating voltage—15 kV) paired with
an energy-dispersive X-ray spectrometer (AZ-tecOne with X-MaxN20, Oxford Instruments plc, Abingdon, UK).
3. Experimental
Alternating current measurements of MXene-PCL nanocomposites were carried out using a test stand developed and constructed at the Department of Electrical Devices and High Voltage Technology, Lublin
University of Technology (Lublin, Poland). A view of the test stand is shown in
Figure 1
.
The stand includes a CS 204AE-FMX-1AL helium cryostat (Advanced Research Systems, Inc., Macungie, PA, USA) (3–7), which allows temperatures in the range from 15 K to 450 K to be measured with an accuracy of 0.002 K. The entire measurement takes place under vacuum (~0.2 atm), which is achieved using a vacuum pump (4). The tested nanocomposite sample (11) is placed in the cryostat head (3) and cooled in a closed circuit by means of a helium compressor (5). Temperature detection and control are carried out by a system consisting of a silicon sensor (8), a temperature controller (7), and a connected heater mounted in the cryostat head. Electrical parameters were measured every 1 K in the range 15 K–20 K, every 2 K in the range 20 K–40 K, every 3 K in the range 40 K–151 K, and every 7 K in the range 151 K–305 K. For the AC measurements, a 3532 LCR HiTESTER impedance meter (Hioki, Japan) was used (1). The impedance meter can measure four of the following 14 electrical parameters at a time: Z—impedance, Y—admittance, φ—phase shift angle, tan δ—loss factor, Q—Q factor, C[S]—static capacitance in series equivalent circuit, C[P]—static capacitance in parallel equivalent circuit, L[S]—inductance in series equivalent circuit, L[P]—inductance in parallel equivalent circuit, R[S]—effective resistance in series equivalent circuit, G—conductance, R[P]—effective resistance in parallel equivalent circuit, X—reactance, B—susceptance. The amplitude of the voltage applied to the test sample was U = 0.4 V. The impedance meter and the temperature controller are connected to a computer (11), where the measurement results are saved as xls files.
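As an aside, the temperature schedule above can be reconstructed programmatically. A minimal Python sketch follows; treating the range boundaries as shared points (each boundary measured once) is an assumption, chosen because it reproduces the 70 temperatures quoted below for the 20 K–305 K sweeps:

```python
import numpy as np

# Measurement temperatures: every 1 K in 15-20 K, 2 K in 20-40 K,
# 3 K in 40-151 K, and 7 K in 151-305 K (boundaries shared between ranges).
def temperature_grid():
    temps = set()
    for start, stop, step in [(15, 20, 1), (20, 40, 2), (40, 151, 3), (151, 305, 7)]:
        temps.update(range(start, stop, step))
    temps.add(305)  # include the final endpoint
    return np.array(sorted(temps))

grid = temperature_grid()
print(len(grid[(grid >= 20) & (grid <= 305)]))  # 70
```

Under this endpoint convention, exactly 70 distinct temperatures fall within the 20 K–305 K range used for the electrical measurements.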
A control program written in C++ was developed to control the impedance meter and to record the electrical and temperature parameters. The program allows any four electrical parameters to be selected for simultaneous measurement and allows specific values of the measurement voltage and frequency range to be entered.
In this study, the AC properties of the MXene-PCL nanocomposite samples were investigated. At both ends of the tested nanocomposite, a thin layer (~10 µm) of silver paste was applied to eliminate the contact resistance between the sample and the contacts (
Figure 2
).
As shown in
Figure 2
, the AC current applied to the ends of the nanocomposite layer flows between the two contacts. This means that both the real component and the imaginary components of the current, which consist of capacitive and inductive currents, flow between the same contacts; in other words, these three current components flow in parallel in the nanocomposite layer. Therefore, a parallel equivalent scheme of the impedance meter was chosen to measure the alternating current parameters of the nanocomposite. Measurements were performed in the temperature range of 20 K to 305 K (70 temperatures) and at frequencies from 50 Hz to 1 MHz
with a step of 50 points per decade (about 240 frequency values). The following parameters were measured: the phase shift angle φ, the inductance L[P], and the capacitance C[P] in the parallel equivalent scheme.
As is known [
], there are two components of current in AC parallel circuits consisting of RLC elements. The first one, the real (or resistive) component, is in phase with the applied sinusoidal voltage. Its value is determined by the formula:
$I_R = \frac{U}{R_P},$
where I[R]—real component of the current, R[P]—parallel circuit resistance, U—applied sinusoidal voltage.
The second component of the current, called the imaginary component, is determined by the value of the susceptance B:
$B = \omega C_P - \frac{1}{\omega L_P},$
where ω = 2πf—circular frequency, C[P]—parallel circuit capacitance, L[P]—parallel circuit inductance.
Using the value of the susceptance B, the value of the imaginary component of the current I[I] is calculated:
$I_I = U B = U \left( \omega C_P - \frac{1}{\omega L_P} \right),$
where I[I]—imaginary component of the current.
The vector of the imaginary component is perpendicular to the real current vector; the modulus of the angle between these vectors is 90°. The sign of this angle depends on which component of the susceptance (capacitive or inductive) from Formula (2) is higher: when the capacitive component is higher, the sign of the angle is negative, and when the inductive component is higher, the sign of the angle is positive. One of the basic alternating current parameters of the parallel RLC circuit is the phase shift angle between the vector of the real current component and the resultant current vector (
Figure 3
). The phase shift angle value is calculated from the formula:
$\varphi = -\arctan \left[ R_P \left( \omega C_P - \frac{1}{\omega L_P} \right) \right],$
where φ—phase shift angle.
Figure 3
shows a phasor diagram of the real and imaginary components of the AC sinusoidal current for a parallel RLC circuit in the case where the capacitive component is larger than the inductive component.
In parallel RLC circuits, at the resonant circular frequency ω[r] = 2πf[r], a parallel resonance [
] occurs. From Formula (2) for the susceptance, it follows that at the resonant circular frequency ω[r] the moduli of the capacitive and inductive components of the susceptance are equal and their difference is zero.
The value of the resonant circular frequency for a parallel circuit is determined by the formula:
$\omega_r = \frac{1}{\sqrt{L_P C_P}},$
where ω[r]—resonant circular frequency.
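As a numerical illustration of Formula (5), the short Python sketch below computes the resonant frequency. The element values L[P] and C[P] are assumptions chosen only to give a resonance in the kilohertz range, comparable to the resonances discussed later for the nanocomposite:

```python
import math

# Resonant circular frequency of a parallel RLC circuit, Formula (5):
# w_r = 1/sqrt(L_P * C_P); the ordinary frequency is f_r = w_r / (2*pi).
def resonant_frequency(L_p, C_p):
    omega_r = 1.0 / math.sqrt(L_p * C_p)
    return omega_r, omega_r / (2.0 * math.pi)

# Hypothetical element values: L_P = 1 H, C_P = 100 pF.
omega_r, f_r = resonant_frequency(1.0, 100e-12)
print(f"omega_r = {omega_r:.3e} rad/s, f_r = {f_r:.3e} Hz")
```

With these assumed values the resonance falls near 1.6 × 10^4 Hz, i.e., in the same decade as the broad features reported below.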
Let us now examine how the experimental frequency dependences of the capacitance and inductance of the MXene-PCL nanocomposites are measured with the 3532 LCR HiTESTER impedance meter. The method of measurement and the formulas for calculating the parameters are given in the user manual [
]. According to the user manual, after applying a sinusoidal voltage U to the tested circuit, in the first stage the meter determines the phase shift angle φ between the vectors of voltage U and current I (
Figure 3
) and the circular frequency ω = 2πf. In the second stage, based on three values (Y, φ, and ω), the other parameters of the circuit under test are calculated using the formulas given in the manual. For the purposes of this work, the following formulas are needed:
$C_{PM} = \frac{B}{\omega} = \frac{Y \sin \varphi}{\omega},$
where C[PM]—the capacitance value measured by the impedance meter.
We will now determine the relationship between the actual value of the capacitance of the parallel circuit and the value measured by the impedance meter. By substituting into Equation (8) the value of the susceptance B from Equation (2), we obtain:
$C_{PM} = \frac{B}{\omega} = \frac{Y \sin \varphi}{\omega} = C_P - \frac{1}{\omega^2 L_P},$
where C[P]—actual value of the capacitance in the tested parallel circuit, L[P]—actual value of the inductance in the tested parallel circuit.
Formula (9) shows that the measured capacitance is smaller than the actual one. The result of the capacitance measurement coincides with its actual value only if there is no inductance in the circuit
under test. The value of the measured capacitance in the case of φ > 0° should not be taken into account.
By substituting the value of the resonant circular frequency (5) into the formula for the measured value of the capacitance (9), we obtain:
$C_{PM}(\omega_r) = C_P - \frac{1}{\omega_r^2 L_P} = C_P - \frac{C_P L_P}{L_P} = 0.$
Formula (10) shows that at the resonant circular frequency, the measured capacitance of the circuit should be zero. When the circular frequency ω approaches the resonant value from the side of lower values, the measured capacitance becomes smaller and smaller, and at the resonant circular frequency ω[r] its value is, theoretically, zero. A further increase in the circular frequency increases the measured capacitance again. This means that there is a clear minimum in the frequency dependence of the measured capacitance. It is one of the criteria that makes it possible to observe the parallel resonance and determine the value of the resonant circular frequency ω[r].
According to the user manual, the meter performs the calculation of the inductance based on the formula:
$L_{PM} = \frac{1}{\omega B},$
where L[PM]—the inductance value measured by the impedance meter.
By substituting into Formula (11) the value of the susceptance B given by Formula (2), we obtain:
$L_{PM} = \frac{1}{\omega B} = \frac{1}{\omega \left( \omega C_P - \frac{1}{\omega L_P} \right)} = \frac{1}{\omega^2 C_P - \frac{1}{L_P}}.$
Formula (12) shows that the inductance of a parallel circuit is measured correctly only in the absence of capacitance.
By substituting the value of the resonant circular frequency (5) into Formula (12) for the measured inductance, we obtain:
$L_{PM}(\omega_r) = \frac{1}{\omega_r^2 C_P - \frac{1}{L_P}} = \frac{1}{\frac{1}{L_P C_P} C_P - \frac{1}{L_P}} = \infty.$
As the frequency approaches its resonance value, the measured inductance begins to increase, reaching its maximum value at the resonance frequency. A further frequency increase causes the measured
value to decrease.
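Formulas (9), (10), (12), and (13) can be checked numerically. The sketch below uses the same hypothetical element values as before and shows that the apparent capacitance C[PM] passes through zero at the resonant frequency, while the apparent inductance L[PM] diverges there:

```python
import numpy as np

# Apparent (meter-measured) values for a parallel RLC circuit,
# Formulas (9) and (12). Element values are hypothetical.
L_P, C_P = 1.0, 100e-12                      # H, F
f = np.logspace(2, 6, 4001)                  # 100 Hz .. 1 MHz
w = 2.0 * np.pi * f
C_PM = C_P - 1.0 / (w**2 * L_P)              # crosses zero at resonance
L_PM = 1.0 / (w**2 * C_P - 1.0 / L_P)        # diverges at resonance

f_r = 1.0 / (2.0 * np.pi * np.sqrt(L_P * C_P))
i_min = np.argmin(np.abs(C_PM))              # deepest |C_PM| point
```

On a finite frequency grid, the minimum of |C[PM]| lands on the grid point nearest to f[r], which mirrors the remark later in the text that a 50-points-per-decade sweep cannot hit the resonant frequency exactly.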
4. Results and Discussion
Figure 4
a–c represent the results of the XRD and SEM analysis of the prepared MXene. The XRD pattern in
Figure 4
a indicates that the obtained material is pure Ti[3]C[2] MXene. SEM demonstrates that the MXene flakes have a typical shape, with sizes from 25 to 500 nm.
Figure 5
a,b represent dark-field images of the pure Ti[3]C[2] MXene samples. The samples appear to exhibit good exfoliation. In some regions, the MXenes are almost transparent to the electron beam because their thickness is close to several atomic distances [
]. Further analysis of SAED patterns from the flakes revealed a Ti[3]C[2] hexagonal lattice of high crystallinity [
]. The titanium distribution in the crystal lattice ensures good electrical conductivity. Depending on the concentration and thickness (periodicity of layers), the MXene crystal structure varies from single-crystal to polycrystalline-like. The lattice parameters are increased relative to the tabulated Ti[3]C[2] structure (hexagonal P6[3]/mmc symmetry): a = 3.183 Å (tabulated 3.071 Å), c = 15.68 Å (tabulated 15.131 Å). Test samples were then exposed to air for two weeks to analyse the oxidation behaviour. As a result of the analysis, the crystallinity of the samples dropped (
Figure 5
c). Some flakes exhibit a transition to titanium dioxide, but only local decomposition to oxygen compounds was observed. Moreover, under the thermal effect of the electron beam, the structure of the MXene layers changed instantly: white areas turned black, which means that the oxidation of the specimens was only partial and potentially reversible by thermal annealing.
After MXene was coated on the PCL membrane, a photomicrograph and an EDS scan were taken (
Figure 6
). The distribution of the fibers appears random, with an average diameter of 1.41 ± 0.33 μm. The structure is similar to pristine PCL, with unified MXene flakes along the fibers. MXene nanosheets occupy most of the space, which is favourable for electron transport. The chemical composition derived from EDS is consistent with articles by other authors [
]. The most intensive signal (~74%) is from the C-Kα line, since both PCL and Ti[3]C[2] contain carbon. The O, F, and Cl signals suggest that the MXene binds with functional termination groups. No Al was observed; thus, it was completely removed during the precursor exfoliation. Samples of the MXene-PCL nanocomposites were chosen for the analysis in order to obtain clear and original results.
Figure 7
shows the frequency dependence of the capacitances measured in the parallel equivalent scheme. The figure shows six waveforms in the temperature range of 20 K–305 K, selected from the 70 waveforms obtained during the measurements. In order to precisely determine the positions of the minima, the measurements were performed at 50 points per decade.
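The sweep grid itself is easy to reproduce. In the sketch below, the endpoint handling is an assumption, since the instrument's exact point placement is not specified in the text:

```python
import numpy as np

# 50 logarithmically spaced points per decade from 50 Hz to 1 MHz.
decades = np.log10(1e6 / 50.0)                    # ~4.3 decades
n_points = int(round(decades * 50)) + 1           # include both endpoints
freqs = np.logspace(np.log10(50.0), 6.0, n_points)
print(len(freqs), freqs[0], freqs[-1])
```

This convention yields 216 points; the "about 240 frequency values" quoted earlier suggests the meter places its points slightly differently, e.g., on fixed per-decade rasters.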
The capacitance values measured in the parallel equivalent circuit diagram slowly decrease with increasing frequency; they are in the range of 1.7 × 10 pF to about 4 × 10 pF. Such low capacitance values occur due to the shape of the sample together with the contacts applied to it (
Figure 2
). As can be seen from the figure, the area of the measured sample is equal to the cross-sectional area of the nanocomposite layer. The thickness of the dielectric is equal to the distance between
the contacts. This results in the capacitance of the sample being very low. From the frequency dependence of the measured capacitances, shown in
Figure 7
, it can be seen that the MXene-PCL nanocomposite exhibits a series of minima against a background of a slow decrease of capacitance with increasing frequency. Some of them are very clear: the minima at frequencies of around 100 Hz, around 200 Hz, around 1100 Hz, and around 2200 Hz. In the frequency range above 2200 Hz, further sharp minima are observed; however, determining the frequencies at which they occur is relatively difficult due to their very close proximity. Notably, these minima practically disappear at room temperature. A broad clear maximum is observed in the frequency range from about 10^4 Hz to about 10^5 Hz. The position of the maximum, depending on the temperature, occurs at frequencies from about 1.3 × 10^4 Hz to about 3 × 10^4 Hz. In the frequency range above 10^5
Hz, oscillations of large amplitudes occur that completely interfere with the capacitance measurements. Oscillations of this type were not observed by us for other types of nanocomposites [
]. The oscillations are probably related to the unique structure of the MXene-PCL nanocomposites. Explanation of their causes requires additional research, far beyond the scope of this article.
Accordingly, this paper focuses on the analysis of the behaviour of the minima observed in the frequency range up to 10^5 Hz. Therefore, in
Figure 7
Figure 8
Figure 9
, the frequency range is limited to 10^5 Hz.
As can be seen from Equation (10), the capacitance values at the minima should be zero. This holds for a parallel RLC circuit whose resistance is zero. In the case of non-zero resistance, the depth of the minimum is smaller, which is also observed in
Figure 7
. A second factor reducing the depth of the minimum is that the measurements were made at 50 points per decade, which makes it difficult to hit the resonant frequency exactly. As a result of missing such hits, the measured capacitance at a given minimum does not reach zero. The minimum at 2200 Hz is closest to a resonant frequency, and its depth is more than two orders of magnitude. In conventional parallel RLC circuits consisting of discrete elements, there is only one minimum, because the values of the discrete elements are constant. In nanocomposites, however, the capacitance and inductance values are functions of the frequency, morphology, and structure of the nanomaterial [
]. This allows a greater number of frequencies to occur in the nanocomposite at which parallel resonance is observed.
We will now analyse the effect of temperature on the minima occurring at frequencies of around 100 Hz, around 200 Hz, around 1100 Hz, and around 2200 Hz. It can be seen from
Figure 7
that the depth of the minimum at about 100 Hz is practically independent of temperature. A similar situation is characteristic of the minimum at around 200 Hz; an increase in temperature causes a slight shift of its position towards lower frequencies. The temperature increase practically does not change the position of the minimum at around 1100 Hz, but it clearly reduces its depth; at 291 K, this minimum almost disappears. The next minimum, at around 2200 Hz, shifts slightly towards higher frequencies as the temperature increases. The depth of this minimum reaches more than two orders of magnitude and decreases rapidly with increasing temperature. The depth of the minima located in the frequency region (10^3–10^4) Hz also decreases rapidly with increasing temperature, and they disappear at room temperature. This means that there are at least two types of tunneling between nanoparticles in the nanocomposite, which become apparent in the form of the capacitance minima. This is evidenced by the different ways in which the depth of the minima changes under the influence of temperature. For the minima of the first group (at 100 Hz and 200 Hz), the depth is practically independent of temperature. For the second group (1100 Hz, 2200 Hz, and the minima in the frequency region (10^3–10^4) Hz), the depth of the minima decreases very rapidly with increasing temperature; at room temperature, these minima practically disappear. The two different types of tunneling can be related to the different morphologies and structures of the nanoparticles between which tunneling takes place.
The frequency dependence of the MXene-PCL nanocomposite inductance measured in the parallel equivalent scheme is shown in
Figure 8
. Only 6 waveforms, selected from the 70 obtained during the tests at temperatures from 20 K to 305 K, are shown in the figure. As can be seen from
Figure 8
there are a number of maxima in the frequency dependence of L[PM](f). Their positions exactly match the positions of the minima in the frequency dependence of the measured capacitances, shown in
Figure 7
. This means that the frequencies at which the inductance maxima occur are the frequencies at which parallel resonance occurs. The value of the inductance measured at the maxima for a circuit containing no resistance should be infinite—Formula (13). The presence of a resistance causes the measured inductance at a maximum to be lower. A second factor lowering the value at a maximum is that the measurements were made with a step of 50 points per decade. As a result of the simultaneous action of these two factors, the inductance at the maximum does not reach infinity. The maximum at 2200 Hz is closest to a resonant frequency; its amplitude is about two orders of magnitude. Wide clear minima are observed in the L[PM](f) relation at frequencies from about 1.3 × 10^4 Hz to about 3 × 10^4 Hz, depending on the temperature. The positions of the minima of the measured inductances (
Figure 8
) exactly coincide with the positions of the maxima on the frequency dependence of the measured capacitances (
Figure 7
). In the frequency range above 2200 Hz to about 10,000 Hz, further sharp maxima are observed.
Figure 9
shows 6 of the 70 frequency dependences of the MXene-PCL phase shift angle φ, obtained at temperatures from 20 K to 305 K. In order to precisely determine the frequencies at which the maximum occurs, the measurements were made at 50 points per decade.
The figure shows that up to a frequency of about 10^3 Hz, the values of the phase shift angle are close to 0°. With a further frequency increase, practically up to about 10^4 Hz, the values of the phase shift angle are weakly negative. It follows that in this frequency region, the capacitive component of the conductivity is slightly larger than the inductive component. Beyond a frequency of 10^4 Hz, the values of the phase shift angle become positive. As can be seen from
Figure 9
, in the frequency region from about 10^4 Hz to about 10^5 Hz, the positive phase shift angle values increase and reach a maximum, the value of which ranges from about 80° to about 85°, depending on the temperature. A further increase in frequency causes a decrease in the value of the phase shift angle, which remains positive. This means that in this frequency range, the inductive component of the conductivity of the MXene-PCL nanocomposite is many times greater than the capacitive component. This phase shift angle behaviour occurs in a range of nanocomposites containing conductive-phase nanoparticles in dielectric matrices [
]—both capacitive and inductive components were observed in them.
It should be noted that in conventional RLC circuits, inductance occurs, as a rule, in the form of a coil wound from a thin conductor. In the studied nanocomposite layers, there were no windings (
Figure 2
). The occurrence in them of a phase shift characteristic of coils is related to a conduction mechanism based on the phenomenon of electron tunneling between neighbouring nanoparticles [
]. The paper [
] presents an impedance model, and its experimental verification, for nanocomposites in which conduction takes place by electron tunneling between nanoparticles. The model assumes that there are nanometer-sized potential wells in the material in which electrons are located (
Figure 10
). The distances between the wells are also nanometric. This allows the electrons to tunnel between neighbouring potential wells, with a probability defined by the following formula [
$P_T = P_0(T) \exp\left( -\beta \alpha r - \frac{\Delta W}{kT} \right),$
where r—distance over which the electron tunnels, α—value close to the inverse of the localization radius of the tunneling electron (the so-called Bohr radius), β—numerical coefficient close to 2 [
], ΔW—activation energy of electron tunneling, k—Boltzmann's constant, P[0]—numerical factor.
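To make the roles of the two exponents in Formula (14) concrete, the following sketch evaluates P[T] for illustrative parameter values; r, α, β, ΔW, and P[0] below are assumptions, not values fitted to the MXene-PCL data:

```python
import numpy as np

K_B_EV = 8.617e-5  # Boltzmann constant, eV/K

# Tunneling probability, Formula (14): the first exponent depends on the
# tunneling distance r, the second on temperature via the activation energy dW.
def tunneling_probability(T, r_nm, dW_eV, alpha_per_nm=1.0, beta=2.0, P0=1.0):
    return P0 * np.exp(-beta * alpha_per_nm * r_nm - dW_eV / (K_B_EV * T))

T = np.array([40.0, 150.0, 300.0])
P = tunneling_probability(T, r_nm=5.7, dW_eV=0.01)  # r ~ 5.7 nm, see Section 4
```

The probability grows monotonically with temperature, which is consistent with the thermally activated picture used later to discuss the temperature dependence of the resonance minima.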
The electric field forcing the current flow is weak and does not change the probability of electrons tunneling from one neutral potential well to another. The field only leads to an asymmetry of the jumps, related to the Debye factor [
], where e—charge of the electron, r—distance over which the electron tunnels, E—electric field strength.
The value of the weak electric field can be defined as:
$E = E_0 \sin \omega t,$
where E[0]—amplitude of the electric field strength.
Under the influence of this field, a current of density j[1] flows along the tunneling path between adjacent potential wells (
Figure 10
):
$j_1 = \sigma E = \sigma E_0 \sin \omega t,$
where σ—conductivity.
The electron, after tunneling into the second well, remains there for a relaxation time τ. The value of the relaxation time is a function of the temperature and the distance over which the electron tunnels [
]. After the relaxation time, two variants of tunneling are possible. In the first one, the electron tunnels with probability p to the next (third) well in the direction determined by the forcing electric field (
Figure 10
). This results in a second component of the current density:
$j_2 = \sigma E_0 p \sin \omega (t - \tau).$
In the second variant, after the relaxation time τ, the electron tunnels from the second well back to the first one (
Figure 10
) with probability (1 − p). This results in the appearance of a third component of the current density:
$j_3 = -\sigma E_0 (1 - p) \sin \omega (t - \tau).$
Equations (17)–(19) can be used for the temperature region T < 500 K, when values p(T)τ << 1 (see Formula (14)).
This means that the resultant current density, due to electron tunneling, has both a real and an imaginary component.
We will now extract the real and imaginary components of the tunneling current density from Equations (17)–(19). The current density j[1] is in the same phase as the forcing electric field and therefore contains only a real component. The j[2] and j[3] components are in equal phase, hence:
$j_2 + j_3 = -\sigma E_0 (1 - 2p) \sin \omega (t - \tau).$
From Formulas (17) and (20), it follows that the real component of the tunneling current density is:
$j_R = \sigma E_0 \left[ 1 - (1 - 2p) \cos \omega\tau \right]$
and the imaginary component of the current density due to tunneling is:
$j_{IT} = \sigma E_0 (1 - 2p) \sin(-\omega\tau).$
The phase shift angle θ introduced by tunneling between the real (21) and imaginary (22) components of the current density is:
$\theta = -\omega\tau.$
By substituting this value of θ into Equations (21) and (22), we obtain:
$j_R = \sigma E_0 \left[ 1 - (1 - 2p) \cos \theta \right],$
$j_{IT} = \sigma E_0 (1 - 2p) \sin \theta.$
A material of the same composition as the nanocomposite, but without tunneling, has a dielectric permittivity ε[r] > 1. This means that, according to Maxwell's second equation, a capacitive current component not related to electron tunneling will flow through the material. The density of this current component is described by the following formula [
]:
$j_C = \omega \varepsilon_r \varepsilon_0 E_0 \sin\left( \omega t - \frac{\pi}{2} \right),$
where ε[r]—relative dielectric permittivity, ε[0]—dielectric permittivity of vacuum.
The total density of the imaginary component of the current, taking into account Formulas (22) and (26), is:
$j_I = \sigma E_0 (1 - 2p) \sin \theta - \varepsilon_r \varepsilon_0 \omega E_0.$
The phase shift angle between the real and imaginary components of the current density is:
$\varphi(\omega) = \arctan \frac{j_I(\omega)}{j_R(\omega)} = \arctan \frac{\sigma (1 - 2p) \sin \theta - \frac{\varepsilon_r \varepsilon_0 \theta}{\tau}}{\sigma \left[ 1 - (1 - 2p) \cos \theta \right]}.$
From Equation (28), it follows that the value of the phase shift angle is a function of the conductivity σ, the dielectric permittivity ε[r], the relaxation time τ, and the frequency ω. In the low frequency region, where:
$-\theta = \omega\tau \to 0, \quad \sin \theta \cong -\omega\tau, \quad \cos \omega\tau \cong 1,$
Formula (28) transforms to the form:
$\varphi(\omega) = -\arctan \frac{\omega\tau \left[ \sigma (1 - 2p) - \frac{\varepsilon_r \varepsilon_0}{\tau} \right]}{2 \sigma p}.$
Equation (30) shows that for low frequency values (ωτ << 2σp), the phase shift angle φ is negative and close to zero.
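A minimal numerical sketch of Formula (28), with θ = −ωτ from Formula (23), reproduces the qualitative regimes discussed below; the parameter values σ, τ, p, and ε[r] are illustrative assumptions, not values fitted to the measurements:

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

# Phase shift angle of the tunneling model, Formula (28), with theta = -w*tau.
def phase_angle_deg(w, sigma, tau, p, eps_r):
    theta = -w * tau
    num = sigma * (1.0 - 2.0 * p) * np.sin(theta) - eps_r * EPS0 * theta / tau
    den = sigma * (1.0 - (1.0 - 2.0 * p) * np.cos(theta))
    return np.degrees(np.arctan2(num, den))

w = 2.0 * np.pi * np.logspace(1, 6, 2000)  # 10 Hz .. 1 MHz
phi = phase_angle_deg(w, sigma=1e-5, tau=1e-5, p=0.1, eps_r=3.0)
```

For these assumed values, φ is small and negative at low frequency, dips to a negative minimum, crosses zero (the parallel-resonance condition), and then rises to a large positive maximum, mirroring the intermediate-conductivity behaviour described next.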
In the paper [
], computer simulations were performed based on Equation (28). They show that, with a further increase of frequency, the following cases can occur depending on the value of the conductivity σ:
For low conductivity values, the phase shift angle φ in the low frequency range is approximately equal to 0°, and then a decrease of the phase shift angle value to about −90°, with stabilization at this level, is observed. This situation occurs, according to Equation (28), in the high frequency region when:
An indication diagram for this case is shown in
Figure 11
For average conductivity values, the phase shift angle φ waveforms show values close to zero in the low frequency region. An increase of frequency causes an increase of negative values until a minimum is reached. After crossing zero, positive values of the phase shift angle occur, passing through a maximum and then decreasing. The zero crossing at the frequency ω[r] corresponds to the phenomenon of parallel resonance. From Equations (22) and (28), it follows that φ = 0° occurs when:
$-\sigma (1 - 2p) \sin \theta = \varepsilon_r \varepsilon_0 \omega_r.$
An indication diagram for the case of parallel resonance is shown in
Figure 12
For high values of conductivity, when σ >> ε[r]ε[0]ω, and medium values of frequency, positive values of the phase shift angle occur. After the maximum, the value of which is φ ≈ 90°, is reached, a decrease in the phase shift angle value takes place. An indication graph for this case is shown in
Figure 13
Figure 12
shows that the angle θ between the current components at the resonance frequency is slightly more than −π. From the value of the frequency at the inductance maximum of about 8 × 10^3 Hz and using Formula (23), it is possible to determine the values of the relaxation times τ for the individual sharp maxima observed in
Figure 8
. For the maximum at frequency f ≈ 100 Hz, τ ≈ 5 × 10^−3 s; for f ≈ 200 Hz, τ ≈ 2.5 × 10^−3 s; for f ≈ 1100 Hz, τ ≈ 4.5 × 10^−4 s; and for f ≈ 2200 Hz, τ ≈ 2.3 × 10^−4 s.
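Assuming, as the quoted values suggest, that θ ≈ −π at these resonances, Formula (23) gives τ ≈ π/ω[r] = 1/(2f[r]); a quick check reproduces the mantissas listed above:

```python
# Relaxation times from the resonance frequencies, tau ~ 1/(2*f_r),
# assuming theta ~ -pi at resonance (an inference from the quoted values).
resonances_hz = [100.0, 200.0, 1100.0, 2200.0]
taus_s = [1.0 / (2.0 * f) for f in resonances_hz]
for f_r, tau in zip(resonances_hz, taus_s):
    print(f"f_r = {f_r:6.0f} Hz -> tau = {tau:.2e} s")
```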
As can be seen from
Figure 9
, in the frequency range f < 10^4 Hz, the values of the phase shift angle are close to zero. This means that resistive conduction is dominant in this range. The parallel resonances observed in this range (
Figure 7
Figure 8
) prove that, apart from resistive conduction, capacitance and inductance are simultaneously present. If the nanocomposite layer had resistive conductivity and only one of the imaginary components, inductive or capacitive, the resonance of the currents would not occur.
This means that at least two types of nanoparticles in the MXene-PCL nanocomposite are involved in tunneling in this frequency range. This follows from the differences in the behaviour of the amplitudes and frequencies of the individual minima induced by the temperature change. The two types of tunneling occurring in this frequency range belong to the case of average conductivity values, described in (b)—Formulas (28) and (32). An indication diagram is shown in
Figure 12
. In the region from about 10^4 Hz to about 10^5 Hz, there is a broad maximum of the phase shift angle, with values 80° ≤ φ ≤ ~85° (
Figure 9
). The position of the maximum is at frequencies from about 1.3 × 10^4 Hz to about 3 × 10^4 Hz, depending on the temperature. This situation is described in (c)—Equation (28). The appearance of a wide maximum at frequencies from about 10^4 Hz to about 10^5 Hz means that it is associated with dominant conduction of the inductive type, as in case (c). This shows that a third type of tunneling occurs in the MXene-PCL nanocomposite. From the value of the frequency at the maximum of the phase shift angle, the expected value of the relaxation time was determined; at 40 K, it is τ ≈ 8 × 10^−6 s. The occurrence of at least three types of tunneling in the MXene-PCL nanocomposite, with different relaxation times, is probably related to differences in the morphology and structure of the nanoparticles between which tunneling takes place and in the distances over which the electrons tunnel.
From calculations based on the amplitude and frequency of the position of the wide maximum (
Figure 8
), the actual value of the inductance of the MXene-PCL nanocomposite layer was determined to be L[P](4 × 10^4 Hz) ≈ 1 H. It is important to consider what causes the nanocomposite layer to have such a high inductance. For this, we use the formula for the inductance of a conventional coil without a ferromagnetic core, given in [
]:
$L_P(4 \times 10^4\ \mathrm{Hz}) = \mu_0 n^2 v,$
where μ[0]—magnetic permeability of vacuum, n—number of turns per unit length, v—volume of the coil.
By transforming Equation (33), we obtain the “distance between neighbouring coil windings” of the nanocomposite:
$\Delta l = \frac{1}{n} = \sqrt{\frac{\mu_0 v}{L_P(4 \times 10^4\ \mathrm{Hz})}}.$
Using this formula, obviously with a high degree of approximation, it is possible to estimate the geometric dimensions of a single “coil” formed by the nanocomposite. By substituting into Equation (34) the value L[P](4 × 10^4 Hz) ≈ 1 H, the value of μ[0], and the geometrical dimensions of the nanocomposite sample from
Figure 2
, we obtain the “distance between neighbouring coil windings” of the nanocomposite, which is about ∆
≈ 5.7 nm. As noted above, conduction in the nanocomposite occurs by electron tunneling between neighbouring nanoparticles. After each jump, the electron remains for a relaxation time
in the nanoparticle on which it has tunneled and thus causes a phase shift between the real and imaginary components of the current density due to tunneling (see Equations (18)–(23)). There is some
analogy here with an induction coil, where each coil affects the phase shift of the current. This may mean that the “distance between neighbouring coil windings” from the nanocomposite obtained from
Equation (34) is, to a large approximation, the expected value of the distance over which the electron tunnels.
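As a rough numerical cross-check of Equation (34) — a sketch only: μ[0] and L[P](4 × 10^4 Hz) ≈ 1 H come from the text, but the sample volume v below is a hypothetical round value, not the measured dimensions from Figure 2:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # magnetic permeability of vacuum, H/m

def winding_distance(L_henry, volume_m3):
    # Equation (34): Delta_l = 1/n = sqrt(mu_0 * v / L_P)
    return math.sqrt(MU_0 * volume_m3 / L_henry)

L_P = 1.0       # H, inductance of the layer at ~4e4 Hz (from the text)
v = 2.6e-11     # m^3 (~0.026 mm^3) -- hypothetical layer volume

print(f"Delta_l = {winding_distance(L_P, v) * 1e9:.1f} nm")
```

With this illustrative volume the formula returns a spacing of a few nanometres, the same order as the ≈ 5.7 nm reported above.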
5. Conclusions
Flaked MXene nanosheets retained a single-crystal Ti[3]C[2] structure without intermetallic phases. Preliminary analyses showed their relatively fast oxidation rate, so their deposition onto PCL scaffolds was conducted in an inert atmosphere (Ar). EDS chemical analysis of the coated PCL membranes confirmed the absence of Al and revealed a uniform distribution of MXenes linked to termination groups.
The studies of capacitance, inductance, and phase shift angle of the MXene-PCL nanocomposite were performed with an impedance meter at 50 points per decade in the measurement frequency range from 50
Hz to 1 MHz at temperatures from 20 K to 305 K.
On the frequency dependence of the capacitance, there are clear sharp minima at about 100 Hz, about 200 Hz, about 1100 Hz, and about 2200 Hz. There is a wide maximum in the area from about 10^4 Hz to
about 10^5 Hz. The position of the maximum occurs at frequencies from about 1.3 × 10^4 Hz to about 3 × 10^4 Hz, depending on the temperature.
Measurements of the nanocomposite inductance have shown that there are clear sharp maxima on its frequency dependence, the position of which exactly agrees with the position of minima for the
frequency dependence of capacitance. This indicates the presence of parallel resonances in the nanocomposite.
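The coincidence of capacitance minima with inductance maxima is the classic signature of parallel LC resonance, for which the resonant frequency of an ideal circuit is f0 = 1/(2π√(LC)). A minimal sketch — the L and C values below are hypothetical, chosen only to land near the ~100 Hz minimum mentioned above, not fitted to the measured data:

```python
import math

def parallel_resonance_hz(L_henry, C_farad):
    # Ideal LC resonant frequency: f0 = 1 / (2*pi*sqrt(L*C))
    return 1.0 / (2.0 * math.pi * math.sqrt(L_henry * C_farad))

L = 1.0      # H  -- hypothetical
C = 2.5e-6   # F  -- hypothetical

print(f"f0 = {parallel_resonance_hz(L, C):.0f} Hz")
```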
In the frequency range from about 10^4 Hz to about 10^5 Hz, there is a wide minimum in the inductance dependence, the position of which exactly agrees with the position of the maximum of the phase
shift angle. Based on the value in this minimum, the actual value of the inductance of the tested nanocomposite layer (about 1 H) was calculated. The expected value of the distance over which the
electron tunnels (about 5.7 nm) was also determined with some approximation.
It was found that in the frequency region of up to about 10^4 Hz, the values of the phase shift angle are close to zero. In the higher frequency region, there is a wide maximum of up to 85° on the φ(
f) relationship. From the frequencies in the phase shift angle maximum, the expectation value of the relaxation time was determined. At 40 K, it is τ[M] ≈ 8 × 10^−5 s.
At least three types of tunneling between nanoparticles were found to occur in the MXene-PCL nanocomposite. This is due to differences in the behavior of the amplitudes and frequencies of individual
parallel resonances induced by temperature changes. This is probably due to differences in the morphology and structure of the nanoparticles between which tunneling occurs and the distances through
which electrons tunnel.
Author Contributions
Conceptualization, T.N.K., A.D.P., M.P. and O.G.; methodology, T.N.K., A.D.P., M.P., O.G. and V.Z.; software, P.R. and P.O.; validation, T.N.K. and M.P.; formal analysis, T.N.K. and P.G.;
investigation, T.N.K., P.G., K.K, P.R., P.O., V.B. (Vladimir Burranich), K.D., V.Z., V.B. (Vitalii Balitskyi), V.S. and I.B.; resources, P.G. and K.K.; data curation, P.G., K.K., P.R., P.O., K.D.,
V.B. (Vitalii Balitskyi), V.S. and I.B.; writing—original draft preparation, T.N.K., P.G., V.B. (Vladimir Burranich) and I.B.; writing—review and editing, T.N.K., A.D.P., M.P. and O.G.;
visualization, T.N.K., P.G. and V.B. (Vitalii Balitskyi); supervision, T.N.K. and A.D.P.; funding acquisition, T.N.K., K.K., P.R. and P.O. All authors have read and agreed to the published version of
the manuscript.
MXene synthesis and electrospinning membrane preparation were supported by EU Horizon 2020 MSCA RISE grants: 777810, 778157 and by the National Research Foundation of Ukraine (grant 2020.02/0223). The
research was supported by the subsidy of the Ministry of Education and Science (Poland) for the Lublin University of Technology as funds allocated for scientific activities in the scientific
discipline of automation, electronics and electrical engineering—grants: FD-20/EE-2/702, FD-20/EE-2/703, FD-20/EE-2/705 and FD-20/EE-2/707.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
Figure 1.
Test stand for determining electrical parameters [
]: 1 and 2—HIOKI 3532 LCR HiTESTER impedance meters, 3—helium cryostat head, 4—vacuum pump, 5—compressor of helium cryostat, 6—vacuum gauge, 7—LakeShore 335 temperature controller, 8—silicon
temperature sensor, 9—test contacts, 10—test samples, 11—computer.
Figure 2. Photo (a) and view (b) of the nanocomposite sample: 1—MX nanocomposite layer, 2—dielectric substrate, 3—silver paste contacts.
Figure 3. Phase diagram of the AC sinusoidal current’s real and imaginary components for a parallel RLC circuit in case of I[C] > I[L]: U—applied voltage vector, I[I]—real component of imaginary
current, I[C]—capacitive component of imaginary current, I[L]—inductive component of imaginary current, I—resultant current, φ—phase shift angle, I[U] = I[C]−I[L]—parallel circuit current, R[P]
—resistance, C[P]—capacitance, L[P]—inductance.
Figure 5. TEM images of the structure of Ti[3]C[2] MXenes obtained by the method of dripping onto a substrate and their electron diffraction patterns: high-concentration drips (a), low concentration
flakes (b), oxidation test (c).
Figure 6. EDS spectrum of the MXene-PCL nanocomposite with element Wt.% distribution (insertion (A)). Insertion (B)—optical (1) and (2) and SEM images (3) and (4) of PCL-MXene composite scaffolds
after first and second coating iteration, respectively.
Figure 7. Frequency dependence of capacitance C[PM] of the MXene-PCL nanocomposite for six selected temperatures measured in a parallel scheme.
Figure 8. Frequency dependence of inductance L[PM] of the MXene-PCL nanocomposite for six selected temperatures measured in a parallel substitution scheme.
Figure 9. Frequency dependence of phase shift angle φ of the MXene-PCL nanocomposite for six selected temperatures measured in a parallel substitution scheme.
Figure 10. Potential wells and possible directions of electron tunneling: E—electric field, j[1]—first component of the current density, j[2]—second component of the current density, j[3]—third component of the current density.
Figure 11. Indication diagram of current density for the case of dominant capacitive type conduction: j[1]—current density determined by Formula (17), j[2] + j[3]—current density determined by
Formula (20), θ = –ωτ—angle between the vectors j[1] and (j[2] + j[3]), φ—phase shift angle determined by Formula (28).
Figure 12. Indication diagram of current density for the case of parallel resonance: j[1]—current density determined by Formula (17), j[2] + j[3]—current density determined by Formula (20), θ = –ωτ
—angle between the vectors j[1] and (j[2] + j[3]), φ = 0°—phase shift angle determined by Formula (32).
Figure 13. Indication diagram of current density for the case of dominant inductive type conduction: j[1]—current density determined by Formula (17), j[2] + j[3]—current density determined by Formula
(20), θ = –ωτ—angle between the vectors j[1] and (j[2] + j[3]), φ—phase shift angle determined by Formula (32).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:
Share and Cite
MDPI and ACS Style
Kołtunowicz, T.N.; Gałaszkiewicz, P.; Kierczyński, K.; Rogalski, P.; Okal, P.; Pogrebnjak, A.D.; Buranich, V.; Pogorielov, M.; Diedkova, K.; Zahorodna, V.; et al. Investigation of AC Electrical
Properties of MXene-PCL Nanocomposites for Application in Small and Medium Power Generation. Energies 2021, 14, 7123. https://doi.org/10.3390/en14217123
AMA Style
Kołtunowicz TN, Gałaszkiewicz P, Kierczyński K, Rogalski P, Okal P, Pogrebnjak AD, Buranich V, Pogorielov M, Diedkova K, Zahorodna V, et al. Investigation of AC Electrical Properties of MXene-PCL
Nanocomposites for Application in Small and Medium Power Generation. Energies. 2021; 14(21):7123. https://doi.org/10.3390/en14217123
Chicago/Turabian Style
Kołtunowicz, Tomasz N., Piotr Gałaszkiewicz, Konrad Kierczyński, Przemysław Rogalski, Paweł Okal, Alexander D. Pogrebnjak, Vladimir Buranich, Maksym Pogorielov, Kateryna Diedkova, Veronika Zahorodna,
and et al. 2021. "Investigation of AC Electrical Properties of MXene-PCL Nanocomposites for Application in Small and Medium Power Generation" Energies 14, no. 21: 7123. https://doi.org/10.3390/
learning statistics with jamovi
This textbook covers the contents of an introductory statistics class, as typically taught to undergraduate psychology, health or social science students. The book covers how to get started in jamovi
as well as giving an introduction to data manipulation. From a statistical perspective, the book discusses descriptive statistics and graphing first, followed by chapters on probability theory,
sampling and estimation, and null hypothesis testing. After introducing the theory, the book covers the analysis of contingency tables, correlation, t-tests, regression, ANOVA and factor analysis.
Bayesian statistics are touched on at the end of the book.
Citation: Navarro DJ and Foxcroft DR (2022). learning statistics with jamovi: a tutorial for psychology students and other beginners. (Version 0.75). DOI: 10.24384/hgc3-7p15
Extreme Value Theorem, Global vs Local Extrema, and Critical Points - Knowunity
Extreme Value Theorem, Global vs Local Extrema, and Critical Points: AP Calculus Study Guide
Howdy, mathematicians! 🎓 Get ready to unravel the mysteries of extreme values in calculus, where we rotate loops of functions and find out where they hit their highest highs and lowest lows. Got your
graphing calculator? Great! Let's dive in! 📈
🎢 Extreme Value Theorem (EVT)
Let's kickstart with the Extreme Value Theorem, a fancy name for a very cool idea. Imagine your function ( f ) is like a super-smooth roller coaster with no breaks or weird gaps (a.k.a. it's
continuous) on the interval [a, b]. The EVT guarantees that your roller coaster must have both a highest peak (maximum) and a lowest dip (minimum) somewhere along that ride. And yes, it applies even
if our max and min values show up at the endpoints, like a thrill ride that starts or ends at the edge of your track. 🎢
Picture it like this: If you can smoothly trace your finger from point ( a ) to point ( b ) without lifting it off the paper, voila! EVT is in action.
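A brute-force illustration of the EVT (a sketch: the grid search only approximates the true extrema, and the sample function here is my own, not one from the guide):

```python
def f(x):
    # any continuous function on a closed interval will do
    return (x - 1) ** 2 * (x + 2)

a, b = -3.0, 2.0
n = 100_001
xs = [a + (b - a) * i / (n - 1) for i in range(n)]
ys = [f(x) for x in xs]

# EVT: f continuous on [a, b] guarantees both a max and a min exist.
# Here the max (4, at x = -1 and at the endpoint x = 2) and the
# min (-16, at the endpoint x = -3) are both attained.
print("max ~", max(ys), "min ~", min(ys))
```

Note that the minimum lands at an endpoint, exactly the "thrill ride that starts or ends at the edge of your track" case above.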
🌐 Global vs. Local Extrema
Next stop: distinguishing between global and local extrema. Imagine you're hiking up and down mountainous terrains (yes, math can be an adventure too! 🏔️).
Global Extrema are like discovering the Everest (absolute maximum) or the Marianas Trench (absolute minimum) of your function over its entire domain. They are the highest or lowest points anywhere on
the function.
Local Extrema, on the other hand, are like stumbling upon small hills and valleys during your hike. These are peaks (local maxima) and dips (local minima) that stand out in specific sections or
neighborhoods but might not be the ultimate high or low over the entire function.
For example, imagine your function is like a bowl of mashed potatoes. The potatoes may have little peaks and valleys (local extrema), but the gravy might sit at the absolute highest point (global
maximum). 🍲
🎯 Critical Points
Now, onto critical points, the stepping stones of discovering our extrema. A critical point of a function ( f ) is where the derivative ( f' ) either:
1. Equals zero (( f'(c) = 0 )) or,
2. Does not exist.
Think of critical points as traffic lights on your mathematical journey. At these spots, your rate of change either pauses, changes direction, or gets a bit undefined (that traffic jam feeling 🚦).
Not all critical points are extrema, but all extrema that are within the interval have to be at critical points.
Practice Problems
Now let's get some hands-on experience to cement our understanding:
Problem 1: Identifying Critical Points from a Graph
Given the graph of ( f'(x) ), identify the critical points on the interval ( (0, 7) ).
(Here, imagine a graph showing where the derivative ( f'(x) ) crosses the x-axis or is undefined.)
Recall that at a critical point, the derivative equals zero or does not exist. So, look for those zero crossings or undefined spots. Suppose the graph shows ( f'(x) = 0 ) at ( x = 2 ) and ( x = 5 ).
Boom! Those are your critical points: ( x = 2 ) and ( x = 5 ).
Problem 2: Identifying Extrema from a Graph
Given the function ( f(x) = x^4 - 4x^3 + 4x^2 ), identify if all critical points qualify as extrema, and find the absolute maximum and minimum on the closed interval ( [-1, 2.5] ).
First, find the critical points by setting the derivative ( f'(x) = 4x^3 - 12x^2 + 8x ) to zero. Solving ( 4x(x^2 - 3x + 2) = 0 ), you get ( x = 0, 1, 2 ).
Next, check each for maxima and minima:
• At ( (0, 0) ) and ( (2, 0) ), you have local minima since surrounding values are higher.
• At ( (1, 1) ), you have a local maximum since surrounding values are lower.
Finally, check the endpoints:
• Point ( (-1, 9) ) is the absolute maximum because it’s higher than all other points in the interval.
• Point ( (2.5, 1.5625) ) is a local maximum since it’s higher than its immediate neighbors.
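A quick script can confirm the values in Problem 2 (nothing here is assumed beyond the formulas given above):

```python
def f(x):
    # f(x) = x^4 - 4x^3 + 4x^2
    return x ** 4 - 4 * x ** 3 + 4 * x ** 2

def fprime(x):
    # f'(x) = 4x^3 - 12x^2 + 8x = 4x(x - 1)(x - 2)
    return 4 * x ** 3 - 12 * x ** 2 + 8 * x

critical_points = [0, 1, 2]
endpoints = [-1, 2.5]

# f' vanishes at every critical point
assert all(fprime(c) == 0 for c in critical_points)

# Candidate values: the absolute extrema on [-1, 2.5] are among these
candidates = {x: f(x) for x in critical_points + endpoints}
print(candidates)  # {0: 0, 1: 1, 2: 0, -1: 9, 2.5: 1.5625}
```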
Problem 3: Applying Extreme Value Theorem
Suppose ( f(x) ) is defined over the interval ( (3, 9) ). Is the function guaranteed to have a maximum and minimum value in this interval?
For the EVT to guarantee extrema, the function must be continuous on the closed interval ([3, 9]). If the problem doesn't specify that ( f(x) ) is continuous or provide an equation to check, you
can't be certain there are guaranteed extrema in ( (3, 9) ).
Bravo, math adventurers! 🎉 You've explored the thrilling world of extrema, unraveling global vs local highs and lows, and mastering the art of finding critical points. Keep these principles close as
you tackle AP Calculus questions because they’re your ticket to solving those extrema-related mysteries.
May your calculations be accurate, and your maxima always within sight. Now, onward to more mathematical conquests! 🚀
The Stacks project
Lemma 26.21.1. The diagonal morphism of a morphism between affines is closed.
Proof. The diagonal morphism associated to the morphism $\mathop{\mathrm{Spec}}(S) \to \mathop{\mathrm{Spec}}(R)$ is the morphism on spectra corresponding to the ring map $S \otimes _ R S \to S$, $a \otimes b \mapsto ab$. This map is clearly surjective, so $S \cong (S \otimes _ R S)/J$ for some ideal $J \subset S \otimes _ R S$. Hence $\Delta $ is a closed immersion according to Example 26.8.1. $\square$
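As a side note (this elaboration is not part of the Stacks project text), surjectivity of the multiplication map is immediate because $s \mapsto s \otimes 1$ is a section of it:

```latex
m \colon S \otimes_R S \to S, \qquad a \otimes b \mapsto ab,
\qquad m(s \otimes 1) = s \cdot 1 = s \quad \text{for all } s \in S.
```

Hence $S \cong (S \otimes_R S)/J$ with $J = \ker(m)$.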
An object c of type Sphere_circle is an oriented great circle on the surface of a unit sphere.
Such circles correspond to the intersection of an oriented plane (that contains the origin) and the surface of \( S_2\). The orientation of the great circle is that of a counterclockwise walk along
the circle as seen from the positive halfspace of the oriented plane.
Sphere_circle ()
creates some great circle.
Sphere_circle (const Sphere_point &p, const Sphere_point &q)
If \( p\) and \( q\) are not opposite of each other, creates the unique great circle on \( S_2\) which contains p and q.
Sphere_circle (const Plane_3 &h)
creates the circle corresponding to the plane h.
Sphere_circle (const RT &x, const RT &y, const RT &z)
creates the circle orthogonal to the vector \( (x,y,z)\).
Sphere_circle (Sphere_circle c, const Sphere_point &p)
creates a great circle orthogonal to \( c\) that contains \( p\).
template<typename Traits >
CGAL::Nef_polyhedron_S2< Traits >::Sphere_circle::Sphere_circle (const Sphere_point &p, const Sphere_point &q)
If \( p\) and \( q\) are not opposite of each other, then we create the unique great circle on \( S_2\) which contains p and q.
This circle is oriented such that a walk along c meets \( p\) just before the shorter segment between \( p\) and \( q\). If \( p\) and \( q\) are opposite of each other then we create any great
circle that contains \( p\) and \( q\).
We Want Your Ideas » Chandoo.org - Learn Excel, Power BI & Charting Online
We Want Your Ideas
Posts by Hui
Chandoo.org Wants You
Over the past 9 years Chandoo has written about 2,200 posts on all things Excel and Hui has contributed another 140 posts mostly targeted at the application of Excel techniques to real life
But is this really what you want to see us write about?
So in this post we’re opening the floor to you, with a single simple question:
What would you like to see discussed in future posts in 2017 at Chandoo.org ?
Your ideas can be as specific or general as you like:
One Rule only: The Idea must involve the Functionality, Use or Application of Excel !
We cannot guarantee that your idea will result in a Post, But if you don’t ask, you won’t receive
We will do our best to schedule posts where most requested and suitable skills and time is available by authors.
So, What would you like to see discussed in future posts at Chandoo.org ?
Let us know what you’d like to see in future posts in the comments below:
Written by Hui...
85 Responses to “We Want Your Ideas”
1. Excel connection to SAP Data.
□ Are you able to some of the statistical features available? In particular a multiple regression analysis.
The regression function returns a large amount of information and a bit on how to use that information and how to interpret it could be very helpful, particularly anyone in studying or
researching economics.
Also, how to identify errors after the regression has been run.
□ This would be extremely helpful!
2. some ideas
1) how to read data from various websites into excel
2) present environmental and sustainability key performance indicators in an interesting way
3) create a league table in excel
4) create a form which validates input from other data on the web (eg check a certificate number is valid from a site like http://info.fsc.org/certificate.php)
□ Please contact me (mgolcer@yahoo.com). I prepared a "League Table". You input the results of games. Output comes as classification of the league. If you change the week number than you get
the classification at the end of this week. You can ask a question to the program such as "what would be the classification of the league if the league was played in any N number of teams
instead of all teams". You choose the teams which combine N teams. There are also a lot more abilities of the program.
3. Power Query / Power Pivot / DAX and the use of Excel across Office 365 suite
4. I think it would be good to see post regarding Excel and how to connect to other applications.
Some examples:
Other Microsoft Office Applications
Adobe Acrobat
5. I want to learn more about the Cloud Applications.
6. Sequential numbers and arrays..
I'm interested in formulae generating an array of numbers which can be used in array formulae . However, some formulae unexpectedly don't yield an array of values. I tried named formulae, INDEX
and offset constructs without satisfying results. The final aim is to load an array in vba with Evaluate(formula).
At your request, I ll send a workbook. Thanks.
□ Hi ,
Why don't you post this as a question in the forum , where you can upload your sample workbook ?
☆ @Narayan
This isn't a "How do you solve my xxx problem" post?
This post is about ideas and themes that people want to learn about in future posts in 2017 and beyond
As always if people have a "I need help with ..." post they are welcome to visit the Chandoo.org Forums and start a New Thread and attach a file if required.
○ Hi Hui ,
A lot of posts on this blog have arisen out of questions posed in the forum.
Whether Jan wants a post on his ideas or an answer to his problem depends on him.
Whether you make a blog post out of his question , where ever it is posed , is up to you.
I am merely giving him a suggestion. It is for him to decide what he wishes to do.
■ Hai, if so it would be interesting to know which suggestions were taken into account and when there will be a post (rough order). Thanks.
■ @Jan
I will be doing a summary of responses, probably late this week but more likely next week
The summary will contains links to suitable solutions where known
Other solutions will require finding a suitable author or web site where we can learn or refer people to.
The issue is that the normal authors here don't have for example access to a SAP system to use in developing a solutions etc
7. I would like to see CHANDOO in 2017 with the below areas :
Excel usage with society & daily life .
8. I second the database programming requests using the full range of Excel techniques. I love tables but would like to access data from relational databases within my spreadsheets.
9. Daily Showcase of some user-submitted models that use various functionalities. I'd love to see how other people use their knowledge of Excel to build dynamic dashboards.
□ I love this Idea. I think this can become a medium to showcase skills and creativity. With potential freelancing opportunity (an incentive to participate).
However, I feel daily update would be a little overwhelming. if Chandoo can manage the quality and showcase only the best of class that would be amazing.
☆ Yeah even if its a weekly submission format, that would be amazing. We can then see how individual Excel applications can be combined to make some amazing dashboards. 🙂
□ This seems a great idea, particularly for dashboards, as there can various innovative ways to create dashboards for addressing common reporting/analysis requirements.
10. Discuss the interoperability of Excel with Google Sheets. The ability to share Excel worksheets with others over the internet and allow collaborative development and use of the workbook is very
important. Google seems to have come up with a good solution by allowing me to start with Excel, then go to the Google Sheets world, work with others and then, if needed/desired, upload the
Google stuff back into the Excel world.
Thank you for allow us to contribute to your world and for making ours so much better.
11. Relationships. Not the interpersonal kind, but between tables. I frequently have to pull small datasets from census-like datasets and link it to other small data from state and federal data. I
could probably do this in MS Access, but 1) I don't know how, 2) I have to create dashboards from pivot tables/charts that can be emailed to users who also don't use Access, and 3) the tables are
small enough that Excel seems like the right choice.
12. Analyzing text cells, frequency on words, phrases, sentiment...
13. Would love to see varying degrees of tips. Like Excel 101, 201, 301.
□ I love this idea. I think there's a lot of 101 material that would go well on Chandoo, and the ability to filter posts by the level of proficiency that the reader is interested in would be
really helpful. I'd love to refer my beginner friends to Chandoo, but it seems like a lot of what's here is 200-level content.
14. How about a series of Excel uses aimed at senior citizens. This could include medical history records, financial/stock records, and the like. I am at a retirement community and there are many
other residents with computers that have Excel. Something that encourages better use of Excel would be welcome.
To be fair there must also be Excel applications that would start youngsters on the road to awesomeness rather than playing mindless games.
15. Excel and Access interaction
Regular expressions and excel
Manipulating text with excel
16. Getting Started step by step curriculum
17. I would like to see more topics about using powerpivot and the interaction of excel with Microsoft Power BI and examples of the use of DAX functions
18. Any chance of seeing macro integration between excel and access?
19. Dear Mr. Chandoo, My concern is to use Excel in determining point spreads for American Football that give a better than 60% chance of winning. Also could I use the program "League Table" to do
the same thing? Thanks.
20. I would like to see more of use cases of how excel has helped simplify capturing of records and reporting based on the data captured. Users could also share their solutions for a particular use
case and then Chandoo can advice a recommended approach for the same.
21. Dear Mr. Chandoo, can we have a function that can convert numbers into letters and also be able to handle approximation in fraction.
□ As far as I know, Excel has no built-in function for this. I have a short macro-free program that converts numbers to words in English and in Turkish. The biggest number it can convert is 999,999,999,999.99. If you contact me, I will share the program.
22. I would love to see more about the amazing "new" tooling which we have since Office 2010 like PowerBI, Powerpivot, powerview, DAX etc.
23. I would like to see interfaces and layouts and navigation suggestions towards to end user of my excel spreadsheets. Say you have 10 sheets but you want a modern looking Index page and layouts
□ Oh, yeah - I'd second and third and fourth this suggestion!!
25. Just to reiterate what's already been said above: PowerPivot, Dax, and Power BI. I think with the emergence of dashboard tools I think it's important to start focusing on this.
To start with ...
What's the most efficient way to organise data in Excel to read into Power BI to allow drill down?
Ways to get around the row limitations (maybe via CSV)?
26. Dear Chandoo,
I would like to see more about Excel-Powerpoint and Word relationship samples and some reviews about useful third party excel add-ins .
May be you can suggest or review some hardware for office rats like me 🙂 (ergonomic monitor , mouse, etc...).
Best Regards
27. Dear Chandoo,
Also i would like to see some articles about Excel-SAP relationship and formatting the raw extracted excel data which is extracted from SAP.
Best Regards
□ I second this! Create an integrated SAP-Excel dashboard which is automatically updated. Work in SAP, visualize in Excel Dashboard.
28. Addins! Please show how to create SECURE add-ins that can be distributed and/or sold such that the source code is protected and is not visible or hackable...
29. Real life (clear & somewhat simple) VBA examples and explanations.
30. I would like to see more HR related themes such as forecasting turnover depending on variables such as age, sex, position, etc
31. How to create Kanban board in excel with drag n drop cards.
32. Hi Chandoo..
love you webpage, your approach as the man to enriching others..great job you do..thank you..
as the sales manager I would like to see what and how can be analysied / visualised in Excel from sales data.
33. I would be interested in seeing a comparative review and the limitations of other excel alternative softwares such as LibreOffice Calc, WPS Office Sheets, Gnumeric, Spread32.
For instance I am impressed with spread32's concatenate function's ability to operate on a range and not just individual cell selection.
I am not sure if it's possible to convert to a table in LibreOffice Calc.
34. When we type an inbuilt function in any cell, Excel pops up on the screen a help about that function.
My request is to teach us how to make a similar feature for a UDF.
35. I would like to convert and edit PDF table to excel
36. How to create a SIGN IN which needs a username and password to enter my USERFORM
37. 1) How to handle recursion/recursive events in VBA?
2) Treeview/Hierarchical data structures in relation with the first question
38. Hi Hui,
Good day...
Chandoo.org have covered all most all the topics in detail, and we have learned all the things here.
But since you have opened the floor, I would like to avail this opportunity to learn more about AGGREGATE function, specially with 2 or more conditions (sum or max), I have seen a lot of example
at the forum, specially using AGGREGATE function to avoid Ctrl+Shift+Enter.
39. VBA created forms - how to dynamically add check boxes/radio buttons, how to adjust their size based on wording length, how to add OK/Cancel buttons programmatically.
40. I think for Small Businesses
Using Excel
Data entry Form
Storing Data using Excel
Searching data using Excel
41. How to build a fully functional Media Player embedded in an Excel file.
42. Protecting worksheets and vba workbooks effectively.
I mean, anyone with a hex editor can get in; but a guide to making it as hard as possible.
43. Excel web app would be great since everything is "Online" now. Embedding Excel as you can see on (my newly created website) may offer a quick way to share our workbook easily.
44. Publishing spreadsheets that are (at least mostly) automated for daily/monthly updates:
- connections to other data sources
- Power BI/Query
- methods of publishing/distribution
Statistical analysis for people with only basic knowledge of statistics. Ex. clustering to find customers who are "similar" in their purchasing patterns. Ex2. analyzing data that doesn't follow a Normal distribution.
Methods of gathering user feedback via XL: Ex. sending a list of customers for review, asking for updates on specific fields (like price level or market segment) based on included data.
45. How to open PDF file data into excel.
46. I would like to see the best options to present Excel created charts and tables in MS PowerPoint without linking to a file, when presentation is emailed out to a group.
47. More Power Pivot and Power Query.
Fewer videos. I read my RSS feed, but don't have the time to watch videos. Text is also more searchable from my mail client.
48. Can there be an Excel quiz or a workbook? I am a beginner and have to understand where to apply all this Excel knowledge! There must be ways (a quiz or fun Q&A kind of forum) that could sharpen
the Excel basics, plus standard everyday business templates that I might need. When I need one, it would be easier for me to recall and use it.
E.g.: my dad is having trouble understanding the startup costs for his new business; I could set up a template for him, or have him take your quiz to be confident.
49. I would like to see full discussion on the use of both (Indirect) and (Address) formulas
50. I'd like tutorials on template creation in Excel. I really would like more information on how to create useful Excel templates.
Thanks Chandoo.
51. How about an Excel for Moron Managers study?? You know the kind where we have to create some type of report in Excel for our managers who have all the technical skill of a brick and constantly
need us to hold their hands to the mouse to get them to click... :^)
Ok - maybe just a bit joking.
□ I create tutorials on template creation for PowerPoint but am at loss w.r.t how to do Excel ones and the formulas that I see in some of the calendar templates. So yeah, I'd like a few
template creation tutorials for Excel. 🙂 And yeah templates would also help the uninitiated Excel user just copy and paste into already formatted nifty re-usable templates.
52. I think we should be able to work without the INDIRECT function, because I don't want to have that number of files open for it.
53. I would love to see a discussion on "Excel and Sharepoint"
54. Hi, I'm interested in learning more about table-driven approaches for generating names, UI interfaces and more. Hui used one for generating names in the pendulum miracle.
Also:VBA interaction between Excel and Access.
55. Hello,
I would like to see VBA codes and Powerpivote intersection.
56. Hi, I want more and more posts and templates on budgeting and expense, income and savings tracking. Detailed, complex and comprehensive reports with charts, and analytical reports on daily,
monthly, annual and multi-year data, generated with different Excel tools and formulas.
57. Please share useful dashboards for Information Technology industries like Invoice Dashboard, Executive Dashboards which helps Top Management to take good decisions.
58. Hello Chandoo..
Thank you so very much for checking what people wanted.
These days robotics tools are killing VBA skills, I believe... which is one step ahead of plain automation.
So we need to prove that Excel VBA, the favourite of millions, is not less than any robotics tool like Blue Prism, Automation Anywhere or UiPath.
59. It would be better if I had written this before Christmas as this was a great time waster for me.
Use Excel to solve "Aristotle's Number Problem" -see http://www.britishmuseumshoponline.org/invt/cmcn439400 -each 'row', whether 3, 4 or 5 tiles, must add up to the same number.
60. Will like to see how to efficiently use:
1. Power Query
2. Link Excel to Access and Vice-versa for importing and exporting charts and tables
3. Use solver add-in for regression and other analysis.
4. use excel sheets in browsers and how to utilize SharePoint for excel
lastly: how to read PDF data directly in excel - using VBA
61. Dear Chandoo,
Please note that I have been working with an Excel file that contains the In/Out times of our teammates, who claim overtime each calendar month.
My Excel file is like this:
ROW 1 Days of Month
ROW 2 Date of Month
Cell 1 [Time IN], Cell 2 [Time OUT]; there are no breaks in our factory, and anything after eight hours is treated as overtime, as is standard across the board.
I would appreciate it if you could help me with an Excel formula to calculate each day's overtime, excluding the staff's eight hours of regular duty, with Friday counted as a full day of overtime.
Kindly help me at your earliest convenience.
Awaiting your expertise...
Best Regards / Ikram Siddiqui
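Not the requested Excel formula, but as a sketch of the rules as described (the "HH:MM" input format and my reading of the Friday rule are assumptions), the calculation might look like this in Python; the spreadsheet equivalent of the non-Friday branch would be something like =MAX(0,(OUT-IN)*24-8):

```python
from datetime import datetime

STANDARD_HOURS = 8  # from the request: no breaks, eight-hour regular day

def daily_overtime(time_in: str, time_out: str, weekday: str) -> float:
    """Hours of overtime for one day: anything worked beyond eight
    hours counts, and on Friday the whole day counts as overtime.
    Times are 'HH:MM' strings on the same calendar day (an assumed format)."""
    t_in = datetime.strptime(time_in, "%H:%M")
    t_out = datetime.strptime(time_out, "%H:%M")
    worked = (t_out - t_in).total_seconds() / 3600
    if weekday == "Friday":
        return worked                       # entire day is overtime
    return max(0.0, worked - STANDARD_HOURS)

print(daily_overtime("08:00", "18:30", "Monday"))  # 2.5
print(daily_overtime("08:00", "12:00", "Friday"))  # 4.0
```

A real sheet would also need a rule for shifts crossing midnight, which this sketch does not handle.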
62. Daylight savings time conversions: I have been given a list of records of transactions. Name, Address, City, State, Country, Postal Code, and sales transactions columns.
One of those sales transaction columns is a SaleDate column formatted in ISO 8601 format.
Example: 2016-01-12T05:40:06Z
Goal: the date needs to be formatted as MM/DD/YYYY HH:MM AM/PM **AND** adjusted to reflect the proper timezone **AND** also adjusted for either daylight savings time observation or standard time
- for each address as well as converted to local time.
So if the address is in New York City then the adjusted ISO time for the example above would show:
2016-01-12 05:40:06 UTC
and since the date is the 12th day of January, it is in "standard time" so the final, local, format of the date will become:
2016-01-12 00:40:06 EST (UTC-5)
If the date was 2016-04-19T00:28:57Z then the final date would be 2016-04-18 20:28:57 EDT (UTC-4) reflecting the local date AND daylight savings time being in effect (EDT) for that NYC address.
I can do this manually but I have over 10,000 records. I have tried to create a VBA function but I am lost. How do I / where do I find information about local timezone adjustment and daylight
savings time adjustments for a particular address anywhere in the world? The data has a variety of addresses from around the globe. The task became overwhelming so I have begun to do it manually.
Would love to see any ideas on this sort of data conversion...
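A sketch of the time-zone part of this request, assuming the IANA zone name for each address is already known (the address-to-zone lookup itself needs a geocoding service and is not shown here) and assuming Python 3.9+ with the zoneinfo database available:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+; needs the tzdata package on Windows

def to_local(iso_utc: str, tz_name: str) -> str:
    """Convert an ISO 8601 UTC timestamp ending in 'Z' to local time,
    letting the zoneinfo database pick the standard or daylight-saving
    offset that applies on that date."""
    utc_dt = datetime.fromisoformat(iso_utc.replace("Z", "+00:00"))
    local_dt = utc_dt.astimezone(ZoneInfo(tz_name))
    return local_dt.strftime("%Y-%m-%d %H:%M:%S %Z (UTC%z)")

# January date in New York -> EST (UTC-5)
print(to_local("2016-01-12T05:40:06Z", "America/New_York"))
# April date in New York -> EDT (UTC-4), daylight saving in effect
print(to_local("2016-04-19T00:28:57Z", "America/New_York"))
```

The key point is that the DST decision never has to be made by hand: it is derived from the date and the zone's own rules, for any zone in the IANA database.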
63. Dear Chandoo
You are doing a great job by helping others to add new skills and tricks in Excel, but now you should also focus on Google Sheets. Google Sheets, being an online platform, adds a new dimension for
the user, especially for sharing / collaborating with others. One can even create a small online database for a small business without spending a penny.
64. hello,
I would love it if you could find a VBA solution for the error that arises from overlapping pivot tables. There must be some code to handle automatic spacing to, for example, 1-2 rows.
Thank you.
65. Is it possible to record time changes on a website?
What I mean is: in our office we have an intranet website that we use to log in and log out. I want to record the changes on the website so that I can track the times that I'm logged in and logged
out, as that is the basis for how we get paid. Once I gather the time data, I will be able to create a table to add up the time that I'm logged in.
I appreciate your response, Excel masters. Thanks.
Frama-C-discuss mailing list archives
This page gathers the archives of the old Frama-C-discuss archives, that was hosted by Inria's gforge before its demise at the end of 2020. To search for mails newer than September 2020, please visit
the page of the new mailing list on Renater.
[Frama-c-discuss] Problem with ACSL annotations
Le mar. 18 déc. 2012 12:28:20 CET,
intissar mzalouat <intissar_mzalouat at yahoo.fr> a écrit :
> /*@ requires \valid_range(queue,0,14);
>     ensures \result==0 ==> (\exists int i; 0<=i<15 && ((\forall int j;
> 0<=j<15 && queue[i] > queue[j]) && ((queue[i]>=5) ||
> (queue[i]<=10)))); */
> int find_array_max(int queue[15]){
> I had some problems to write postconditions in ACSL for this function.
> Could you, please help me?
It would have helped if you had stated explicitly what your problems
were, but my guess is that
\forall int j; 0<=j<15 && queue[i]>queue[j]
should be replaced with
\forall int j; 0<=j<15 ==> queue[i]>queue[j]
The former says that for any int j, both 0<=j<15 and queue[i]>queue[j]
hold, which is trivially falsified by taking j == -1. The latter says
that for any int j between 0 and 15, we have queue[i]>queue[j].
More generally, the informal spec "for any x in P, Q holds" is
written \forall x; P ==> Q, while "there exists x in P such that Q
holds" is indeed written as what you've done: \exists x; P && Q.
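For readers more comfortable outside ACSL, the same distinction can be illustrated in ordinary code; a small Python sketch (toy data, hypothetical variable names) evaluates both readings at the single out-of-range witness j = -1:

```python
queue = list(range(15))   # any 15-element array; the contents don't matter here
i = 14                    # some index into queue

# Conjunction reading: "for every int j, BOTH 0 <= j < 15 AND queue[i] > queue[j]".
# Python can't quantify over all ints, but one out-of-range witness (j = -1)
# already falsifies the universally quantified conjunction:
j = -1
conjunction_holds_at_j = (0 <= j < 15) and (queue[i] > queue[j])
print(conjunction_holds_at_j)   # False: 0 <= -1 is already false

# Implication reading: "for every int j, IF 0 <= j < 15 THEN queue[i] > queue[j]".
# Out-of-range values of j satisfy the implication vacuously:
implication_holds_at_j = (not (0 <= j < 15)) or (queue[i] > queue[j])
print(implication_holds_at_j)   # True
```

This is exactly why the universal quantifier pairs with `==>` while the existential pairs with `&&`.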
Best regards,
That's all for today, until next time.
R and Data Mining - Social Network Analysis
> # load termDocMatrix
> load("data/termDocMatrix.rdata")
> # inspect part of the matrix
> termDocMatrix[5:10,1:20]
This post presents an example of social network analysis with R using package igraph.
The data to analyze is Twitter text data of @RDataMining used in the example of Text Mining, and it can be downloaded as file "termDocMatrix.rdata" at the Data webpage. Putting it in a general
scenario of social networks, the terms can be taken as people and the tweets as groups on LinkedIn, and the term-document matrix can then be taken as the group membership of people. We will build a
network of terms based on their co-occurrence in the same tweets, which is similar to a network of people based on their group memberships.
At first, a term-document matrix, termDocMatrix, is loaded into R. After that, it is transformed into a term-term adjacency matrix, based on which a graph is built. Then we plot the graph to show the
relationship between frequent terms, and also make the graph more readable by setting colors, font sizes and transparency of vertices and edges.
Note that the above termDocMatrix is a standard matrix, instead of a term-document matrix under the framework of text mining. To try the code with your own term-document matrix built with the tm
package, you need to run the code below before going to the next step.
> termDocMatrix <- as.matrix(termDocMatrix)
Transform Data into an Adjacency Matrix
> # change it to a Boolean matrix
> termDocMatrix[termDocMatrix>=1] <- 1
> # transform into a term-term adjacency matrix
> termMatrix <- termDocMatrix %*% t(termDocMatrix)
> # inspect terms numbered 5 to 10
> termMatrix[5:10,5:10]
Now we have built a term-term adjacency matrix, where the rows and columns represent terms, and every entry is the number of co-occurrences of two terms. Next we can build a graph with
graph.adjacency() from package igraph.
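The term-term multiplication above can be sanity-checked outside R; a small pure-Python sketch (with a made-up 3x4 Boolean matrix, not the RDataMining data) computes the same termDocMatrix %*% t(termDocMatrix) product by hand:

```python
# Toy Boolean term-document matrix: 3 terms x 4 tweets.
# termdoc[t][d] == 1 iff term t occurs in document d.
termdoc = [
    [1, 1, 0, 0],  # term 0
    [1, 0, 1, 0],  # term 1
    [0, 1, 1, 1],  # term 2
]

n_terms = len(termdoc)
n_docs = len(termdoc[0])

# term_matrix[i][k] = number of documents containing both term i and term k;
# for a 0/1 matrix this is exactly the product M %*% t(M).
term_matrix = [
    [sum(termdoc[i][d] * termdoc[k][d] for d in range(n_docs))
     for k in range(n_terms)]
    for i in range(n_terms)
]

print(term_matrix)  # [[2, 1, 1], [1, 2, 1], [1, 1, 3]]
```

The diagonal holds each term's document frequency (the loops that simplify() later removes); the symmetric off-diagonal entries are the co-occurrence counts that become the edge weights.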
> library(igraph)
> # build a graph from the above matrix
> g <- graph.adjacency(termMatrix, weighted=T, mode = "undirected")
> # remove loops
> g <- simplify(g)
> # set labels and degrees of vertices
> V(g)$label <- V(g)$name
> V(g)$degree <- degree(g)
> # set seed to make the layout reproducible
> set.seed(3952)
> layout1 <- layout.fruchterman.reingold(g)
> plot(g, layout=layout1)
A different layout can be generated with the first line of code below. The second line produces an interactive plot, which allows us to manually rearrange the layout. Details about other layout
options can be obtained by running ?igraph::layout in R.
> plot(g, layout=layout.kamada.kawai)
> tkplot(g, layout=layout.kamada.kawai)
Next, we will set the label size of vertices based on their degrees, to make important terms stand out. Similarly, we also set the width and transparency of edges based on their weights. This is
useful in applications where graphs are crowded with many vertices and edges. In the code below, the vertices and edges are accessed with V() and E(). Function rgb(red, green, blue, alpha) defines a
color, with an alpha transparency. We plot the graph in the same layout as the above figure.
> V(g)$label.cex <- 2.2 * V(g)$degree / max(V(g)$degree)+ .2
> V(g)$label.color <- rgb(0, 0, .2, .8)
> V(g)$frame.color <- NA
> egam <- (log(E(g)$weight)+.4) / max(log(E(g)$weight)+.4)
> E(g)$color <- rgb(.5, .5, 0, egam)
> E(g)$width <- egam
> # plot the graph in layout1
> plot(g, layout=layout1)
Tag Results
Items tagged with "r.math" (1)
Workflows (1)
GRASS-GIS orchestration using pyWPS (2)
Generic workflow that runs r.watershed, with auxiliary services r.math and geotiff2png. Watershed accumulation is calculated from the DEM using r.watershed; the accumulation result is then filtered using r.math with the equation output=(if(a>10,a,null())).
Created: 2011-04-18 | Last updated: 2011-04-25
Credits: Jorgejesus
WP 34s registers and integer modes - Printable Version
WP 34s registers and integer modes - signals - 04-02-2015 04:32 PM
I'm not sure I'm understanding a behavior on the WP 34s. Integers stored in registers (either local or global) are no longer integers if recalled after DECM. Stack values are OK. Here's a little test
program I wrote to demonstrate:
LBL 'TST'
BASE 10
STO 00
RCL 00
Y: 3
X: 3e-398
The content of R00 is 3e-398 as well. If I shift back to BASE10 the value in X becomes 0 (as you would expect) but the R00 register goes back to 3.
I'm either not understanding something critical about integer modes, or I've run into a bug. I don't have my WP 34s yet, so I guess it could be a bug in the emulator, but the Windows and iOS
emulators show the same behavior.
I'm figuring I just don't understand something about integer representation. Can anyone explain this to me?
RE: WP 34s registers and integer modes - Marcus von Cube - 04-02-2015 05:40 PM
Not a bug but a feature! Mode switching shouldn't affect stored values because this would essentially destroy their contents. Think of storing PI in R00, switching to integer mode and back to decimal
mode. If we'd automatically convert the contents of R00 to integer and back the result would be 3, losing all the digits after the decimal point. The stack is a different case: here we decided to do
the conversion.
You can access values in registers stored in the "wrong" format with specialized RCL commands: iRCL treats the register contents as an integer and converts it to decimal if needed. Likewise sRCL
treats the register contents as if it was stored in single precision decimal mode. If called from integer mode, the value is dynamically converted. Now guess what dRCL is meant for...
RE: WP 34s registers and integer modes - signals - 04-02-2015 05:58 PM
(04-02-2015 05:40 PM)Marcus von Cube Wrote: You can access values in registers stored in the "wrong" format with specialized RCL commands: iRCL treats the register contents as an integer and
converts it to decimal if needed. Likewise sRCL treats the register contents as if it was stored in single precision decimal mode. If called from integer mode, the value is dynamically converted.
Now guess what dRCL is meant for...
Thanks again. Now that I know what to look for, I found this info on page 159 in the manual. As I said in an earlier thread, it's going to take some time, and some dumb questions, to fully understand
this thing. There's just so much material in the manual.
The fact that the stack is handled differently than the registers on mode switches seems a bit surprising, but makes sense.
RE: WP 34s registers and integer modes - walter b - 04-08-2015 06:38 AM
(04-02-2015 05:58 PM)signals Wrote: Now that I know what to look for, I found this info on page 159 in the manual. As I said in an earlier thread, it's going to take some time, and some dumb
questions, to fully understand this thing. There's just so much material in the manual.
The fact that the stack is handled differently than the registers on mode switches seems a bit surprising, but makes sense.
You find even a bit more information on pp. 273ff of the current manual (available here, reflecting v3.3 build 3778).
RE: WP 34s registers and integer modes - Gerald H - 04-08-2015 06:47 AM
(04-08-2015 06:38 AM)walter b Wrote:
(04-02-2015 05:58 PM)signals Wrote: Now that I know what to look for, I found this info on page 159 in the manual. As I said in an earlier thread, it's going to take some time, and some dumb
questions, to fully understand this thing. There's just so much material in the manual.
The fact that the stack is handled differently than the registers on mode switches seems a bit surprising, but makes sense.
You find even a bit more information on pp. 273ff of the current manual (available here, reflecting v3.3 build 3778).
Be wary ordering from createspace - I ordered & received manual version 3.2 instead of 3.3 &, after initially ignoring my complaints, they refused to replace the book with the version I had ordered.
The manual itself is excellent.
RE: WP 34s registers and integer modes - signals - 04-10-2015 02:16 AM
(04-08-2015 06:38 AM)walter b Wrote: You find even a bit more information on pp. 273ff of the current manual (available here, reflecting v3.3 build 3778).
So, I have been wondering... I don't know anything about createspace. If I order a hard copy of the manual do any of the proceeds go to the WP34s team? I don't really need a hard copy, but would be
willing to order one if it supports the work on the WP34s and its potential succesor.
RE: WP 34s registers and integer modes - walter b - 04-11-2015 02:43 PM
(04-10-2015 02:16 AM)signals Wrote:
(04-08-2015 06:38 AM)walter b Wrote: You find even a bit more information on pp. 273ff of the current manual (available here, reflecting v3.3 build 3778).
So, I have been wondering... I don't know anything about createspace. If I order a hard copy of the manual do any of the proceeds go to the WP34s team? I don't really need a hard copy, but would
be willing to order one if it supports the work on the WP34s and its potential succesor.
Please see the new thread about the pdf manual - if you don't need a hardcopy. Also sales of the printed books support "the work on the WP34s and its potential successor" - there's a 355 p. draft
manual for the 43S already on the HDD of the author of the printed texts.
RE: WP 34s registers and integer modes - signals - 04-11-2015 11:46 PM
(04-11-2015 02:43 PM)walter b Wrote: Please see the new thread about the pdf manual - if you don't need a hardcopy. Also sales of the printed books support "the work on the WP34s and its
potential successor" - there's a 355 p. draft manual for the 43S already on the HDD of the author of the printed texts.
Thanks for the info. Just wanted to make sure that the author would see part of the money I gave Createspace.
I said I didn't need a hardcopy, but that doesn't mean I don't want one. So I just placed an order.
Session 14 - Calculating Gross Margin %
In this session, we’ll use MDX to calculate the Gross Margin %. First, we’ll take an in depth look into the differences between SQL and MDX, then we’ll look at the different areas of the application
where we can apply MDX. And lastly, we’ll use a Member Formula to define the Gross Margin %.
[Gross Margin] / [Gross Sales After Returns]
In this session, we’ll use MDX to calculate the Gross Margin %.
First, we’ll take an in depth look into the differences between SQL and MDX, and the particular use-cases for each, then we’ll look at the different areas of the application where we can apply MDX.
And lastly, we’ll use a Member Formula to define the Gross Margin %.
In the Application, tables exist in a SQL Database, for example the Dimension Tables or Fact Tables. Cubes are the aggregations of table data in an OLAP Database.
When we want to manipulate table data, we use SQL. When we want to manipulate data in the Cube we use MDX.
Some calculations, like our Gross Margin %, require aggregated data to obtain the results. These calculations can’t be easily represented with SQL.
For example, in this sample Form, we’ve manually calculated Gross Margin and Gross Sales After Returns with SQL, and used those values to determine the Gross Margin %. While each individual
calculation is correct, the aggregation of these values is not.
In the second Form, we’ve defined the same calculation using MDX. Notice how as we change the filter selections, the dynamic aggregation is always calculated correctly. To get this value for all
possible aggregations in the Form, we need to use MDX.
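The "ratio of sums versus sum of ratios" point is the crux of this session; a small Python sketch with two made-up fact rows shows why aggregating row-level percentages gives the wrong answer:

```python
# Toy fact rows: (gross_margin, gross_sales_after_returns)
rows = [(20, 100), (30, 50)]

# Wrong: average the row-level percentages, as a per-row SQL column would.
row_pcts = [gm / gs for gm, gs in rows]          # [0.20, 0.60]
avg_of_pcts = sum(row_pcts) / len(row_pcts)      # (0.20 + 0.60) / 2 = 0.40

# Right: the ratio of the aggregated totals, which is what the
# [Gross Margin] / [Gross Sales After Returns] member formula
# evaluates at every level of aggregation.
total_gm = sum(gm for gm, _ in rows)             # 50
total_gs = sum(gs for _, gs in rows)             # 150
pct_of_totals = total_gm / total_gs              # 50 / 150 = 0.333...

print(avg_of_pcts, pct_of_totals)                # the two disagree
```

Because the cube re-evaluates the member formula after summing the operands for whatever filter context is active, the dynamic aggregation stays correct as selections change.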
There are three areas in the Application that we can apply MDX. We can write MDX directly into the definition of a Dimension Member as a Member Formula, we can add MDX scripts to a Model, or we can
edit the Native MDX tab within a Form.
It’s important to note that this is also the order in which the MDX will be executed in the Application. This means that MDX scripts on the Model can override the Member Formulas, and the Native MDX
tab can override both.
As Gross Margin % is always defined as the ratio of Gross Margin over Gross Sales After Returns, let’s add the MDX calculation to the Dimension Member.
In the Account Dimension, let’s turn on Member Formulas by checking the box here. Then we can locate Gross Margin % in the Account Hierarchy, and select the gear icon to define a formula. Here, we
can define it simply as Gross Margin / Gross Sales After Returns.
Then format it as a percent. Let’s Deploy the Application. In the P&L Report, we can confirm that it’s working. Now all the calculations in our Application are complete.
In this session, we examined the differences between SQL and MDX, and the particular use-cases for each, then we looked at the areas of the application where we can apply MDX. And lastly, we used a
Member Formula to define the Gross Margin %.
This concludes Part IV of the series. In the final part, we’ll go over some more administrative options and examine users, security, and workflow settings.
3.84 Meters to Feet
The 3.84 m to ft conversion result above is displayed in three different forms: as a decimal (which may be rounded), in scientific notation (scientific form, standard index form, or standard form in the
United Kingdom), and as a fraction (exact result). Every display form has its own advantages, and in different situations a particular form is more convenient than another. For example, the use of
scientific notation is recommended when working with big numbers, due to easier reading and comprehension. The use of fractions is recommended when more precision is needed.
If we want to calculate how many Feet are 3.84 Meters we have to multiply 3.84 by 1250 and divide the product by 381. So for 3.84 we have: (3.84 × 1250) ÷ 381 = 4800 ÷ 381 = 12.59842519685 Feet
So finally 3.84 m = 12.59842519685 ft
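The exact-fraction arithmetic described above can be reproduced in a few lines; this Python sketch keeps the 1250/381 factor exact with the standard-library Fraction type:

```python
from fractions import Fraction

METERS_TO_FEET = Fraction(1250, 381)  # exact: 1 metre = 1250/381 feet

def meters_to_feet(m: str) -> Fraction:
    """Exact conversion. Pass the metre value as a decimal string so
    Fraction parses it exactly (3.84 -> 96/25); call float() on the
    result when a rounded decimal is wanted."""
    return Fraction(m) * METERS_TO_FEET

result = meters_to_feet("3.84")
print(result)         # 1600/127  (the reduced form of 4800/381)
print(float(result))  # 12.59842519685...
```

Passing the value as a string matters: Fraction(3.84) would instead capture the binary float's exact (and unhelpful) fraction.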
lightgbm.plot_tree(booster, ax=None, tree_index=0, figsize=None, dpi=None, show_info=None, precision=3, orientation='horizontal', example_case=None, **kwargs)[source]
Plot specified tree.
Each node in the graph represents a node in the tree.
Non-leaf nodes have labels like Column_10 <= 875.9, which means “this node splits on the feature named “Column_10”, with threshold 875.9”.
Leaf nodes have labels like leaf 2: 0.422, which means “this node is a leaf node, and the predicted value for records that fall into this node is 0.422”. The number (2) is an internal unique
identifier and doesn’t have any special meaning.
It is preferable to use create_tree_digraph() because of its lossless quality and returned objects can be also rendered and displayed directly inside a Jupyter notebook.
☆ booster (Booster or LGBMModel) – Booster or LGBMModel instance to be plotted.
☆ ax (matplotlib.axes.Axes or None, optional (default=None)) – Target axes instance. If None, new figure and axes will be created.
☆ tree_index (int, optional (default=0)) – The index of a target tree to plot.
☆ figsize (tuple of 2 elements or None, optional (default=None)) – Figure size.
☆ dpi (int or None, optional (default=None)) – Resolution of the figure.
☆ show_info (list of str, or None, optional (default=None)) –
What information should be shown in nodes.
■ 'split_gain' : gain from adding this split to the model
■ 'internal_value' : raw predicted value that would be produced by this node if it was a leaf node
■ 'internal_count' : number of records from the training data that fall into this non-leaf node
■ 'internal_weight' : total weight of all nodes that fall into this non-leaf node
■ 'leaf_count' : number of records from the training data that fall into this leaf node
■ 'leaf_weight' : total weight (sum of Hessian) of all observations that fall into this leaf node
■ 'data_percentage' : percentage of training data that fall into this node
☆ precision (int or None, optional (default=3)) – Used to restrict the display of floating point values to a certain precision.
☆ orientation (str, optional (default='horizontal')) – Orientation of the tree. Can be ‘horizontal’ or ‘vertical’.
☆ example_case (numpy 2-D array, pandas DataFrame or None, optional (default=None)) –
Single row with the same structure as the training data. If not None, the plot will highlight the path that sample takes through the tree.
☆ **kwargs – Other parameters passed to Digraph constructor. Check https://graphviz.readthedocs.io/en/stable/api.html#digraph for the full list of supported parameters.
ax – The plot with a single tree.
Return type: matplotlib.axes.Axes
On This Day in Math - September 18
Lisez Euler, lisez Euler, c'est notre maître à tous.
(Read Euler, read Euler, he is our master in everything.)
—~Pierre-Simon Laplace
The 261st Day of the Year
261 = 15^2 + 6^2. It is also 45^2 - 42^2 and 131^2 - 130^2
261 is the number of possible unfolded tesseract patterns. (Charles Howard Hinton coined the term tesseract for the 4-dimensional "cube"; he was also the inventor of the baseball pitching gun.) (See Baseball and the Fourth Dimension.)
If you draw diagonals in a 16-sided polygon, it is possible to dissect it into 7 quadrilaterals. There are 261 unique ways to make this dissection.
261 is the only three-digit number n for which 2^n - n is prime. *Prime Curios
261 is divisible by 9, the sum of its digits, so it is a Joy-Giver (Harshad) number.
261 Fearless is a non-profit organization started by Kathrine Switzer, who in 1967 wore bib number 261 when she became the first woman to run the Boston Marathon as a numbered entrant. *Wikipedia
This should have been the Day of the Year in 2021, since the base-five representation of 261 is 2021.
A bracelet with 261 Blue beads and 3 Red beads can be ordered in 261 different ways.
261 = 4^4 + 4^1 + 4^0
My conjecture: There is no square that is made up of five Pythagorean triangles with a side shorter than 261, as shown *HT to @simon_gregg
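Several of the numeric claims above are easy to machine-check; the short Python sketch below verifies the square identities, the Harshad property, and the base-five and powers-of-four representations (the 2^n - n primality claim is left out, since it needs a real primality test):

```python
n = 261

# Sum and differences of squares
assert 15**2 + 6**2 == n
assert 45**2 - 42**2 == n
assert 131**2 - 130**2 == n

# Joy-Giver (Harshad): divisible by the sum of its digits
digit_sum = sum(int(d) for d in str(n))
assert digit_sum == 9 and n % digit_sum == 0

def to_base(x: int, b: int) -> str:
    """Positive integer x written in base b."""
    digits = ""
    while x:
        digits = str(x % b) + digits
        x //= b
    return digits

assert to_base(n, 5) == "2021"       # base-five representation
assert 4**4 + 4**1 + 4**0 == n       # 261 as a sum of powers of 4

print("all checks passed")
```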
1820 André Marie AMPÈRE describes electromagnetic effect. On 11 September 1820 he heard of H. C. Ørsted's discovery that a magnetic needle is acted on by a voltaic current. Only a week later, on 18
September, Ampère presented a paper to the Academy containing a much more complete exposition of that and kindred phenomena. On the same day, Ampère also demonstrated before the Academy that parallel
wires carrying currents attract or repel each other, depending on whether currents are in the same (attraction) or in opposite directions (repulsion). This laid the foundation of electrodynamics.
Two of Ampère’s experimental set-ups to demonstrate electromagnetism, engraving, in Annales de chimie et de la physique, vol. 15, 1820 (Linda Hall Library)
In 1830, B&O locomotive Tom Thumb, the first locomotive built in America, lost a 14-km race with a horse due to a boiler leak. *TIS
Peter Cooper, an American inventor and industrialist, first came on the engineering scene in 1830, when he assembled from spare parts a locomotive that he called Tom Thumb, and on behalf of which he
challenged a horse-drawn rail car to a race, which took place on Aug. 28, 1830 . Tom Thumb lost the race due to mechanical problems, but he won the marathon, since the locomotive’s superiority was
evident, and the Baltimore and Ohio Railroad decided to put their money on steam locomotives. Their rapid expansion enabled them to buy iron rails from Cooper's iron works in Maryland, which was the
whole idea behind Tom Thumb in the first place. In the early 20th century, the Bureau of Public Roads (now the Federal Highway Administration) commissioned a painting of the race by their staff
artist Carl Rakeman (as well as a hundred other events in highway and rail history). The original Tom Thumb does not survive, but in 1927 the Baltimore and Ohio Railroad commissioned a replica as
best they could (there are no images of the original) and it is a popular attraction in their Museum in Baltimore .
1846 Le Verrier transmits his most famous achievement, his prediction of the existence of the then unknown planet Neptune, using only mathematics and astronomical observations of the known planet
Uranus. Encouraged by physicist Arago Director of the Paris Observatory, Le Verrier was intensely engaged for months in complex calculations to explain small but systematic discrepancies between
Uranus's observed orbit and the one predicted from the laws of gravity of Newton. At the same time, but unknown to Le Verrier, similar calculations were made by John Couch Adams in England. Le
Verrier announced his final predicted position for Uranus's unseen perturbing planet publicly to the French Academy on 31 August 1846, two days before Adams's final solution, which turned out to be
12° off the mark, was privately mailed to the Royal Greenwich Observatory. Le Verrier transmitted his own prediction by 18 September letter to Johann Galle of the Berlin Observatory. The letter
arrived five days later, and the planet was found with the Berlin Fraunhofer refractor that same evening, 23 September 1846, by Galle and Heinrich d'Arrest within 1° of the predicted location near
the boundary between Capricorn and Aquarius.*Wik
Excerpt from the Hora XXI sky chart of the Berlin Science Academy completed by Carl Bremiker. The predicted location (square) and the observed location (circle) were noted in pencil, allegedly by
Galle, but at some time after the discovery.
1948 Alan Turing writes to Jack Good and mentions, "Chess machine designed by Champ and myself..". *Turing Archive
1752 Adrien-Marie Legendre (18 Sep 1752; 10 Jan 1833) French mathematician who contributed to number theory, celestial mechanics and elliptic functions. In 1794, he was put in charge of the French
government's department that was standardizing French weights and measures. In 1813, he took over as head of the Bureau des Longitudes upon the death of Lagrange, its former chief. It was in a paper
on celestial mechanics concerning the motion of planets (1784) that he first introduced the Legendre Polynomials. He provided outstanding work on elliptic functions (1786), wrote a classic treatise on the theory of numbers (1798), and also worked on the method of least squares. *TIS
1819 Jean Bernard Léon Foucault (18 Sep 1819; 11 Feb 1868) French physicist whose Foucault Pendulum experimentally proved that the Earth rotates on its axis (6 Jan 1851). Using a long pendulum with a
heavy bob, he showed its plane rotated at a rate related to Earth's angular velocity and the latitude of the site. He studied medicine and physics and became an assistant at the Paris Observatory
(1855). He invented an accurate test of a lens for chromatic and spherical aberrations. Working with Fizeau, and also independently, he made accurate measurements of the absolute velocity of light.
In 1850, Foucault showed that light travels slower in water than in air. He also built a gyroscope (1852), the Foucault's prism (1857) and made improvements for mirrors of reflecting telescopes. *TIS
(a brief biography of Foucault is here)
1839 John Aitken (18 Sep 1839; 14 Nov 1919) Scottish physicist and meteorologist who, through a series of experiments and observations in which he used apparatus of his own design, elucidated the
crucial role that microscopic particles, now called Aitken nuclei, play in the condensation of atmospheric water vapour in clouds and fogs. Ill health prevented Aitken from holding any official
position; he worked instead in the laboratory in his home in Falkirk. Much of his work was published in the journals of the Royal Society of Edinburgh, of which he was a member.*TIS
1854 Sir Richard Tetley Glazebrook (18 Sep 1854; 15 Dec 1935) English physicist who was the first director of the UK National Physical Laboratory, from 1 Jan 1900 until his retirement in Sep 1919. At
first, the laboratory's income depended on much routine, commercial testing, but Glazebrook championed fundamental, industrially oriented research. With support from individual donors, buildings were
added for electrical work, metrology, and engineering. Data useful to the shipbuilding industry was collected in pioneering experimental work on models of ships made possible by a tank funded by
Alfred Yarrow (1908). From 1909, the laboratory began work benefitting the embryonic aeronautics industry, at the request of the secretary of state for war. The lab contributed substantially to military needs during WW I. *TIS
1863 William Henry Metzler (18 Sept 1863, 18 April 1943) was a Canadian mathematician who graduated from Toronto University and taught at Syracuse University and Albany Teachers Training College,
both in New York State. He published papers on the theory of matrices and determinants, several of them in the Proceedings of the EMS. *SAU
1907 Edwin Mattison McMillan (Sep 18, 1907 - September 7, 1991) McMillan was an American physicist and Nobel laureate credited with being the first-ever to produce a transuranium element, neptunium.
For this, he shared the Nobel Prize in Chemistry with Glenn Seaborg in 1951.
McMillan and his colleagues discovered the elements neptunium (Np) and plutonium (Pu), the two elements following uranium (U) in the periodic table. Their names were inspired by the position of the
planets in the solar system - Neptune is beyond Uranus and Pluto (before being declassified as a planet) is beyond Neptune. *http://www.rsc.org/
Three Nobel Prize winners (left to right): Edwin McMillan, Emilio Segre, Glenn Seaborg, photograph, 1959 (nara.getarchive.net) *Linda Hall Library
1909 Johannes Haantjes (Itens, 18 September 1909 – Leiden, 8 February 1956) was a Dutch mathematician who mainly concerned himself with differential geometry. His last scientific position was that of professor at Leiden University, with teaching responsibility for geometry.
From the 1940s onwards, Haantjes was mainly concerned with metric geometry, in particular with dimension theory. Within that geometry, a curvature was named partly after him: the Haantjes-Finsler curvature. It is one of the many extensions of the ordinary curvature concept, with applications to wavelets, among others.
It should not go unmentioned that Haantjes has a total of 53 publications to his name, including several articles by the group of mathematicians (including A. Nijenhuis and G. Laman) who were concerned with the so-called "geometric object", and that in 1954 he wrote a book that was used for many years in teacher training: Inleiding in de differentiaalmeetkunde. *Wik
1926 James Cooley (September 18, 1926 – June 29, 2016), co-creator of the fast Fourier transform, was born. Working with John Tukey, Cooley in 1965 worked out a vast improvement to a common
mathematical algorithm called the Fourier transform. Although the algorithm had been useful in computing, its complexity required too much time. While working at IBM, Cooley built on Tukey's ideas
for a swifter version. *CHM
Cooley received a B.A. degree in 1949 from Manhattan College, Bronx, NY, an M.A. degree in 1951 from Columbia University, New York, NY, and a Ph.D. degree in 1961 in applied mathematics from Columbia
University. He was a programmer on John von Neumann's computer at the Institute for Advanced Study, Princeton, NJ, from 1953 to 1956, where he notably programmed the Blackman–Tukey transformation.
He worked on quantum mechanical computations at the Courant Institute, New York University, from 1956 to 1962, when he joined the Research Staff at the IBM Watson Research Center, Yorktown Heights,
NY. Upon retirement from IBM in 1991, he joined the Department of Electrical Engineering, University of Rhode Island, Kingston, where he served on the faculty of the computer engineering program.
1783 Leonhard Euler dies (15 Apr 1707, 18 Sep 1783) . After having discussed the topics of the day, the Montgolfiers, and the discovery of Uranus, “He [Euler] ceased to calculate and to
live," according to the oft-quoted words of de Condorcet. *VFR Swiss mathematician and physicist, one of the founders of pure mathematics. He not only made decisive and formative contributions to the
subjects of geometry, calculus, mechanics, and number theory but also developed methods for solving problems in observational astronomy and demonstrated useful applications of mathematics in
technology. At age 28, he blinded one eye by staring at the sun while working to invent a new way of measuring time. *TIS (Students who have not, should read Dunham's "Euler, The Master of us All")
1891 William Ferrel (born 29 Jan 1817, 18 Sep 1891) American meteorologist who was an important contributor to the understanding of oceanic and atmospheric circulation. He was able to show the
interrelation of the various forces upon the Earth's surface, such as gravity, rotation and friction. Ferrel was first to mathematically demonstrate the influence of the Earth's rotation on the
presence of high and low pressure belts encircling the Earth, and on the deflection of air and water currents. The latter was a derivative of the effect theorized by Gustave de Coriolis in 1835, and
became known as Ferrel's law. Ferrel also considered the effect that the gravitational pull of the Sun and Moon might have on the Earth's rotation and concluded (without proof, but correctly) that
the Earth's axis wobbles a bit. *TIS
1896 Armand Hippolyte Fizeau (23 Sep 1819, 18 Sep 1896) French physicist who was the first to measure the speed of light successfully without using astronomical calculations (1849). Fizeau sent a narrow
beam of light between gear teeth on the edge of a rotating wheel. The beam then traveled to a mirror 8 km (5 mi) away and returned to the wheel where, if the spin were fast enough, a tooth would block
the light. Knowing the rotational speed of the wheel and the mirror's distance, Fizeau directly measured the speed of light. He also found that light travels faster in air than in
water, which confirmed the wave theory of light, and that the motion of a star affects the position of the lines in its spectrum. With Jean Foucault, he proved the wave nature of the Sun's heat rays
by showing their interference (1847). *TIS
"Fizeau" is one of the 72 names inscribed on the frieze below the first platform of the Eiffel Tower, all of whom were French scientists, mathematicians, engineers, or industrialists from the hundred
years before the tower's public opening for the 1889 World's Fair. Of the 72, Fizeau is the only one who was still alive when the tower was opened. *Wik
1913 Samuel Roberts FRS (15 December 1827, Horncastle, Lincolnshire – 18 September 1913, London) was a British mathematician.
Roberts studied at Queen Elizabeth's Grammar School, Horncastle. He matriculated in 1845 at the University of London, where he earned in 1847 his bachelor's degree in mathematics and in 1849 his
master's degree in mathematics and physics, as first in his class. Next he studied law and became a solicitor in 1853. After a few years of law practice he abandoned his law career and returned to
mathematics, although he never had an academic position. He had his first mathematical paper published in 1848. In 1865 he was an important participant in the founding of the London Mathematical
Society (LMS). From 1866 to 1892 he acted as legal counsel for LMS, from 1872 to 1880 he was the organization's treasurer, and from 1880 to 1882 its president. In 1896 he received the De Morgan Medal
of the LMS. In 1878 he was elected FRS.
Roberts published papers in several fields of mathematics, including geometry, interpolation theory, and Diophantine equations.
Roberts and Pafnuty Chebyshev are jointly credited with the Roberts-Chebyshev theorem related to four-bar linkages. *Wik
1967 Sir John Douglas Cockcroft (27 May 1897, 18 Sep 1967) British physicist, who shared (with Ernest T.S. Walton of Ireland) the 1951 Nobel Prize for Physics for pioneering the use of particle
accelerators to study the atomic nucleus. Together, in 1929, they built an accelerator, the Cockcroft-Walton generator, that generated large numbers of particles at lower energies - the first
atom-smasher. In 1932, they used it to disintegrate lithium atoms by bombarding them with protons, the first artificial nuclear reaction not utilizing radioactive substances. They conducted further
research on the splitting of other atoms and established the importance of accelerators as a tool for nuclear research. Their accelerator design became one of the most useful in the world's
laboratories. *TIS
1977 Paul Isaak Bernays (17 Oct 1888, 18 Sep 1977) Swiss mathematician and logician who is known for his attempts to develop a unified theory of mathematics. Bernays, influenced by Hilbert's
thinking, believed that the whole structure of mathematics could be unified as a single coherent entity. In order to start this process it was necessary to devise a set of axioms on which such a
complete theory could be based. He therefore attempted to put set theory on an axiomatic basis to avoid the paradoxes. Between 1937 and 1954 Bernays wrote a whole series of articles in the Journal of
Symbolic Logic which attempted to achieve this goal. In 1958 Bernays published Axiomatic Set Theory in which he combined together his work on the axiomatisation of set theory. *TIS
2002 Siobhán Vernon (née O'Shea) was the first Irish-born woman to get a PhD in pure mathematics in Ireland, in 1964.
Siobhán Vernon worked as a demonstrator in the Department of Mathematics at University College, Cork while she completed her M.Sc. and was then appointed Senior Demonstrator. Encouraged by Dr Patrick
Brendan Kennedy, Siobhán began to publish research in 1956 and was appointed to the full-time post of Assistant in 1957. Continuing her research career, she spent a year as a visiting lecturer in
Royal Holloway College, University of London, in 1962-63.
Returning to University College, Cork, she submitted her published papers for the award of PhD, which was awarded in 1964 by the National University of Ireland. She was appointed lecturer in 1965.
Following her marriage to geologist Peter Vernon, Siobhán reduced her teaching to half-time, as they raised their four children. She later returned to full time teaching, retiring in 1988.
In 1995 she was honoured with a Catherine McAuley award as a distinguished past pupil by the Convent of Mercy in Macroom. *Wik
Credits :
*CHM=Computer History Museum
*FFF=Kane, Famous First Facts
*NSEC= NASA Solar Eclipse Calendar
*RMAT= The Renaissance Mathematicus, Thony Christie
*SAU=St Andrews Univ. Math History
*TIA = Today in Astronomy
*TIS= Today in Science History
*VFR = V Frederick Rickey, USMA
*Wik = Wikipedia
*WM = Women of Mathematics, Grinstein & Campbell
Question
What is breakeven cash inflow of the project?
The maximum level of cash inflow needed for the project to be acceptable.
The minimum level of cash inflow needed for the project to be acceptable.
The maximum level of cash outflow needed for the project to be acceptable.
The minimum level of cash outflow needed for the project to be acceptable.
Detailed explanation-1: -Breakeven cash inflow refers to the minimum level of cash inflow necessary for a project to be acceptable — the inflow at which NPV = $0 (equivalently, the IRR just equals the cost of capital). Any lower inflow makes NPV negative and the project unacceptable.
Detailed explanation-2: -Break Even Point is a versatile metric for understanding when your business will become profitable and at what point you have enough revenue to cover all of your expenses.
Break Even Point is essentially the minimum revenue or volume of sales needed to cover all operating expenses.
Detailed explanation-3: -In the denominator we have 1 plus the interest rate, 0.08, raised to the 4th power; in the numerator we have the future value, 1250. Evaluating to 2 decimal places gives 1250 / (1.08)^4 = 918.79.
Detailed explanation-4: -Cash inflows include operating profits and cash shielded by tax savings and depreciation. Cash outflows include the principal and interest and possible tax repayments
associated with the project.
Detailed explanation-5: -The calculation is operating income before depreciation, minus taxes, adjusted for changes in working capital. Operating Cash Flow (OCF) = Operating Income (revenue −
cost of sales) + Depreciation − Taxes ± Change in Working Capital.
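The discounting arithmetic in explanations 1 and 3 can be sketched in Python. This is an illustrative sketch — the function names and the equal-annual-inflow (annuity) assumption behind the breakeven helper are mine, not the quiz's:

```python
def present_value(future_value, rate, periods):
    """Discount a single future cash flow back to today."""
    return future_value / (1 + rate) ** periods

def breakeven_cash_inflow(initial_investment, rate, periods):
    """Minimum equal annual inflow that makes NPV exactly zero."""
    # Present value of a 1-per-period annuity: (1 - (1 + r)**-n) / r
    annuity_factor = (1 - (1 + rate) ** -periods) / rate
    return initial_investment / annuity_factor

# Explanation-3's numbers: 1250 received in 4 years, discounted at 8%
print(round(present_value(1250, 0.08, 4), 2))  # 918.79
```

At the breakeven inflow, the present value of all inflows exactly equals the initial outlay, so NPV = $0.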
Python Variables and Assignment
A variable is a name, which represents a value. To understand it, open a Python terminal and run below command.
id = 10
You will not see any result on screen. Then what happens here? This is an assignment statement, where id is a variable and 10 is the value assigned to the variable.
A single equal (=) is an assignment operator. Where left side of the operator is the variable and right side operator is a value.
Once a value is assigned to the variable, we can use variable name anywhere in a python program instead of using value.
Variable Names in Python
Similar to other programming languages, Python also has rules for defining variable names:
• A variable name must start with a letter or an underscore.
• A variable name cannot start with a number.
• A variable name can be of any length; Python itself imposes no limit (PEP 8 only recommends keeping lines under 79 characters).
• A variable name contains only alphanumeric characters and underscores (A-Z, a-z, 0-9, and _).
• Variable names are case-sensitive (id, Id and ID are different variables in Python).
Variable Assignment Examples
Below are some valid Python variable assignments:
_id = 10
name = "rahul"
name = 'rahul'
address_1 = "123/3 my address"
You can also assign the same value to multiple variables in a single statement.
a = b = c = "green"
You can also assign multiple values to multiple variables in a single statement.
a, b, c = "green", "yellow", "blue"
Local vs Global Variable
A local variable is defined inside a function block and is accessible within that function only. Once the function finishes executing, the variable is destroyed.
A global variable is defined at the top level of a Python program, outside any function block. It is accessible to the entire program, including functions, and is destroyed only once the script execution is completed.
Below is a sample program to show the difference between local and global variables. Here “a” is a global variable and “b” is a local variable of the function myfun().
a = "Rahul"        # global variable

def myfun():
    b = "TecAdmin" # local variable
    print(b)       # "b" is accessible here, inside the function

print(b)           # NameError: 'b' is not defined outside the function
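One follow-on point worth knowing: assigning to a name inside a function always creates a local variable, so to rebind a global from inside a function you must declare it with the global keyword. A small sketch (the variable names are illustrative):

```python
counter = 0  # global variable

def increment():
    global counter   # without this, "counter = ..." would create a new local
    counter = counter + 1

increment()
increment()
print(counter)  # 2
```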
TR24-150 | 2nd October 2024 21:51
Characterizing and Testing Principal Minor Equivalence of Matrices
Two matrices are said to be principal minor equivalent if they have equal
corresponding principal minors of all orders. We give a characterization of
principal minor equivalence and a deterministic polynomial time algorithm to
check if two given matrices are principal minor equivalent. Earlier such
results were known for certain special cases like symmetric matrices,
skew-symmetric matrices with {0, 1, -1}-entries, and matrices with no cuts
(i.e., for any non-trivial partition of the indices, the top right block or the
bottom left block must have rank more than 1).
As an immediate application, we get an algorithm to check if the
determinantal point processes corresponding to two given kernel matrices (not
necessarily symmetric) are the same. As another application, we give a
deterministic polynomial-time test to check equality of two multivariate
polynomials, each computed by a symbolic determinant with a rank 1 constraint
on coefficient matrices.
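For intuition only — this is not the paper's polynomial-time algorithm — principal minor equivalence can be checked by brute force on small matrices, enumerating every principal submatrix. An exponential-time sketch with illustrative function names:

```python
from itertools import combinations

def det(m):
    """Determinant by Laplace expansion; fine for the small matrices used here."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def principal_minor_equivalent(a, b):
    """Check that a and b have equal principal minors of every order."""
    n = len(a)
    return all(
        det([[a[i][j] for j in idx] for i in idx])
        == det([[b[i][j] for j in idx] for i in idx])
        for k in range(1, n + 1)
        for idx in combinations(range(n), k)
    )

# A matrix and its transpose always have equal principal minors
a = [[1, 2, 0], [3, 4, 5], [0, 1, 6]]
at = [list(row) for row in zip(*a)]
print(principal_minor_equivalent(a, at))  # True
```

The transpose check works because each principal submatrix of the transpose is the transpose of the corresponding submatrix, and determinants are transpose-invariant — a handy sanity test for the code.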
AcouVApp Lden - Lden Calculation
Lden definition
What is the Lden (Day-Evening-Night Level) in Acoustic Calculations?
The Lden indicator (Level day-evening-night) is a critical measurement used in environmental noise assessments. It represents the weighted average noise level over a 24-hour period, accounting for
varying noise sensitivities during different times of the day.
Formula to Calculate Lden:
\[ Lden = 10 \log_{10} \left( \frac{1}{24} \left( 12 \times 10^{\frac{LAeq(6h-18h)}{10}} + 4 \times 10^{\frac{LAeq(18h-22h)+5}{10}} + 8 \times 10^{\frac{LAeq(22h-6h)+10}{10}} \right) \right) \]
• LAeq (dBA): The equivalent continuous sound pressure level over a specific period (T).
Understanding the Lden Components:
As shown in the formula, noise levels are adjusted to reflect higher sensitivity to noise in the evening (by adding 5dB) and at night (by adding 10dB). This adjustment helps to better represent the
impact of noise pollution on people during these periods.
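The formula translates directly into a few lines of Python (the function and parameter names are my own, not AcouVApp's):

```python
import math

def lden(laeq_day, laeq_evening, laeq_night):
    """Day-evening-night level from the three period LAeq values (dBA).

    Day: 06h-18h (12 h), evening: 18h-22h (4 h, +5 dB penalty),
    night: 22h-06h (8 h, +10 dB penalty).
    """
    total = (
        12 * 10 ** (laeq_day / 10)
        + 4 * 10 ** ((laeq_evening + 5) / 10)
        + 8 * 10 ** ((laeq_night + 10) / 10)
    )
    return 10 * math.log10(total / 24)

# With 60 dBA in every period, the penalties push Lden above 60:
print(round(lden(60, 60, 60), 1))  # 66.4
```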
For more detailed information, visit our pages on exterior noise calculation and explore our acoustic product database.
Georg Ohm and Ohm's Law
March 15, 2013

Tomorrow, March 16, is the anniversary of the birth of Georg Simon Ohm, discoverer of what's now called Ohm's law. Ohm, who was born on March 16, 1789, was a German high school teacher when he performed the experiments demonstrating that the electrical voltage E across a conductor increases in proportion to the applied current I. The proportionality constant is the resistance R, now measured in ohms; viz,

E = I R

Ohm may have inherited his facility for experiment from his father, who was a locksmith. Ohm's family valued education, and his younger brother, Martin Ohm, became a mathematician. Ohm studied mathematics on his own while he worked as a private tutor in Neuchâtel, Switzerland. His private study paid off. Quitting Switzerland, Ohm entered the University of Erlangen, and he received a Ph.D. on October 25, 1811, just a few months after his arrival.
Ohm's law illustrated graphically as Georg Ohm equals Alessandro Volta over André-Marie Ampère.
(Source images, Ohm, Volta, and Ampère, from Wikimedia Commons.)
Ohm served as a lecturer in mathematics at the university for a time at very little pay. Short on cash, he re-purposed himself as a high school mathematics teacher in Bavaria, and he taught briefly at two public schools. Ohm knew this was not the life for which he had hoped, so he spent time writing a geometry textbook, the manuscript of which he sent to King Frederick Wilhelm III of Prussia. The king was suitably impressed, and this landed Ohm a better teaching position at the Jesuit Gymnasium (another high school) in Cologne in the Fall of 1817. At this school, Ohm taught physics as well as mathematics, and the school had an excellent physics laboratory for the time. It was here that Ohm did his electrical experiments, which were summarized in the 1827 paper, "The Galvanic Circuit Investigated Mathematically" (Die galvanische Kette, mathematisch bearbeitet). Publication of what's now called "Ohm's Law" probably assisted Ohm in gaining a position at the Polytechnic School of Nuremberg in 1833, and his becoming a professor of experimental physics at the University of Munich in 1852.

The resistor is one of the three (perhaps four) fundamental electrical components, the others being the capacitor and the inductor, with the memristor as the possible fourth. The figure below shows the electrical symbol for a resistor, and how to calculate the equivalent resistance when resistors are connected in series and in parallel.
Ohm's law illustrated using conventional electronic component symbols (left), and the equivalent resistance of series (middle) and parallel (right) combinations of resistors. (Illustration by the
author using Inkscape.)
The series and parallel resistance formulas are easy to understand when you relate them to the fundamental physics of resistance. The resistance R of a uniform rod of a material with resistivity ρ, cross-sectional area A and length L, as shown in the figure, is just

R = ρ (L/A)
Resistance formula for a conductor with resistivity, ρ.
(Illustration by the author using Inkscape.)
You can see that when you put resistors in series, you add the virtual lengths of their internal conductors, so the series formula is obvious. In the parallel case, you're adding the cross-sectional areas, so the math is a little different. You're adding the conductances, and the conductance is the reciprocal of the resistance. All this is textbook stuff, which is exciting to just a few students. There are combinations of resistors which are much more interesting, such as the infinite lattice of resistors shown in the figure. Whenever the idea of infinity enters a problem, it becomes more interesting and often more complex.
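Before moving to the lattice, the ordinary rules can be sketched in a few lines of Python — series resistances add directly, while parallel resistances add as conductances. The helper names are mine, not from the article:

```python
def series(*resistances):
    """Equivalent resistance of resistors in series: the lengths add."""
    return sum(resistances)

def parallel(*resistances):
    """Equivalent resistance in parallel: the conductances (1/R) add."""
    return 1 / sum(1 / r for r in resistances)

def rod_resistance(rho, length, area):
    """R = rho * (L / A) for a uniform conductor."""
    return rho * length / area

print(series(100, 100))    # 200
print(parallel(100, 100))  # 50.0
```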
An infinite lattice of resistors in two dimensions. Measurements between some nodes will give an experimental value of pi (π). (Illustration by the author using Inkscape.)
A very nice analysis of this infinite array of resistors was published fifteen years ago,[1] and there have been quite a few papers on this lattice array, similar lattice arrays, and cubic arrays.
[1-8] The following table shows the calculated resistance values, in units of the identical resistance values in the array, between the origin (0,0) and some nearby lattice nodes.[1] The origin, of
course, can be anywhere, since this is an infinite array.
┃ (i,j) │ R │ │ (i,j) │ R ┃
┃ 0,0 │ 0 │ │ 2,2 │ (8/3π) ┃
┃ 0,1 │ 1/2 │ │ 3,3 │ (46/15π) ┃
┃ 1,0 │ 1/2 │ │ 4,4 │ (352/105π) ┃
┃ 1,1 │ 2/π │ │ 5,5 │ (1126/315π) ┃
The interesting thing here is that some nodes will give rational numbers, and other nodes will give values containing π, an irrational, transcendental number.
Enhancing Student Learning in Math Classes—Estela Gavosto (2008)
With support from NIH, Professor Gavosto develops and implements enhanced math classes in order to assist students with weak mathematical backgrounds, and she discovers that students in these classes
excelled in this class structure above and beyond students in the traditional math courses, suggesting that all students might benefit from this type of instruction.
—Estela Gavosto (2008)
Portfolio Overview
As part of the Initiative for Maximizing Student Development Project, funded by the National Institutes of Health, enhanced math curricula were developed for the KU Department of Mathematics. The
initiative was designed to address the under-representation of minority groups in biomedical careers, which require advanced understanding of mathematics.
Initial attempts to increase the success of students with weak mathematical backgrounds were made through the offering of individual tutoring services in the most basic introductory courses:
Intermediate Algebra (Math 002), College Algebra (Math 101), and Calculus I (Math 115). However, students were unlikely to take advantage of the tutoring, and thus the success of this attempt was
limited. Therefore, a second attempt was made via the implementation of enhanced math classes, to assist students who had weaker mathematical backgrounds by including an additional class period in
the course structure and implementing a Treisman Seminar-type approach.
In the fall of 2002, Professor Gavosto designed and implemented the first enhanced math classes at KU. When redesigning the three courses of Intermediate Algebra, College Algebra, and Calculus I, it
was observed that students seemed to be more engaged with course material if the class met every day, as opposed to two or three class meetings per week. Thus, the design of the enhanced math classes
integrated two programs: the inclusion of an additional class period, where students took the same calculus course but with one extra class period to compensate for a weak background in calculus, and
the inclusion of the Treisman Seminar method, where students were encouraged to collaborate on challenging problems in an environment of high expectations.
In terms of overall performance on a common final exam, students in the enhanced calculus sections performed at a level that was at least one letter grade higher than their peers in the regular
course. This pattern parallels what other institutions are finding when they implement an enhanced math class format. Thus, it appears that this type of enhanced program is significantly increasing
students’ level of mathematical understanding.
To assess the types of skills that students attained in the enhanced and traditional sections of the course, the lecture material, group work exercises, homework assignments, and final exam questions
were analyzed according to Bloom’s taxonomy. While students in the enhanced sections showed overall increases across all areas of understanding, it was observed that all students were relatively weak
in the area of application, based on Bloom. It was found that this weakness in application skills corresponded to a relatively low amount of time spent in-class on application-related material, as
well as out of class via homework and group work assignments. To bolster application skills, the course design was then modified to increase its emphasis on application-based understanding, which in
turn increased students’ levels of application skills.
Overall, the enhanced math sections have been very successful. The extra dedication and resources allocated to these sections produced many positive outcomes, such as improved performance, more students completing their math courses, and higher evaluations of the course instructors. The combination of appropriate learning environments, dedicated teachers, and committed students brought about these successes.
While the enhanced math sections have been successful, there are still some challenges for the future. First, there are questions as to why students do so well in these classes and how students could
be prepared to do equally well without the extra support in subsequent classes. Second, there is a financial challenge, such that the cost of the enhanced sections does not currently allow the
extension of this course design to all students. However, as it becomes clear what can be reproduced from the enhanced math sections into other math classes, the implications are that all students
may benefit. So far, it is apparent that all students would benefit from instructors setting high expectations for all students, providing regular feedback and actively following up with students who
under-perform, modeling good study habits, and regularly reminding students of available resources.
To view another IMSD-funded project, see the Department of Molecular Biosciences portfolio.
The development of the enhanced math classes at the University of Kansas began in 2001 as a part of the Initiative for Maximizing Student Development (IMSD) Project, which was funded by the National
Institutes of Health (PI: Jim Orr, KU Department of Molecular Biosciences). The overall goal of the IMSD was to reverse a worrisome, decades-long trend by increasing the number of under-represented minorities who enter careers in biomedical research. Recent surveys indicate that the percentage of students in underrepresented groups who take advanced high school math courses, which prepare them to study biomedical disciplines in college and beyond, is much lower than for other groups. Improving the representation of minorities in biomedical careers therefore requires addressing students’ preparation in introductory math courses. (See recent national report.) In an attempt to increase the number of underrepresented
students who succeed in completing math requirements at KU, Professor Gavosto worked with the Offices for Diversity in Science Training to develop enhanced math curricula in the mathematics
department. Other efforts to enhance performance in biology and chemistry introductory classes are also supported by IMSD.
Students progress toward the math requirement through Intermediate Algebra (Math 002), College Algebra (Math 101), and Calculus I (Math 115). The KU math requirement consists of completing
a course beyond college algebra, for example Calculus I, Math 115. Math 115 is a coordinated course, taught mainly in small classes of 30-35 students with common gateway, midterm, and final exams.
Students who have a math ACT of 26 or equivalent, or those who have taken College Algebra, may enroll in the course. If students do not meet the prerequisites for Calculus I, they are required to
take one course, College Algebra (Math ACT 22-25), or two courses, Intermediate Algebra (Math ACT below 22) first and then Math 101. The College Algebra and Intermediate Algebra courses have smaller
average class sizes than Calculus I, with 20-24 students per section.
Several types of assistance are provided to students enrolled in any of these three courses. Calculus I has a consulting room offering free tutoring, with tutors available for more than 35 hours per
week. Likewise, students in Intermediate or College Algebra take part in the Kansas Algebra Program, which has a help room offering free tutoring for more than 60 hours per week, skill tests, videos
of classes, un-timed exams and re-take exams in a testing room. Although the program is successful, many students who could benefit from the additional support provided from the Kansas Algebra
Program do not take advantage of the help available.
In the first attempt to develop a program out of the IMSD project, individualized tutoring was offered to underrepresented students in the hope that they would take advantage of the additional assistance. However, individualized tutoring did not work very well: students still did not fully take advantage of the service, and it was difficult to identify the students who needed extra assistance.
For a second attempt, Professor Gavosto drew on observations that students seemed to be more engaged with course material if the class met every day, as opposed to meeting two or three days a week.
Thus, the revision to the enhanced math sections included the addition of two extra class periods to the course structure, such that these courses would meet every day for a class period of 50
minutes. These additional class periods mirrored the approach of programs at other universities, where students take the same calculus course but with extra class periods to compensate for a weak
background in calculus. Moreover, the additional time periods would be used in the spirit of a Treisman seminar. The Treisman seminars are based on Uri Treisman’s work with African-American students
at Berkeley in the 1980s. His approach was to replace the remedial work that these students were participating in with honors-level work; this level of work encouraged students to collaborate on
challenging problems in an environment of high expectations. His method has been adopted in many programs with great success. (Check here for more information on the Treisman method.) During these
additional class meetings, students would be able to strengthen their background through “enhanced” work, not “remedial” work. To see the difference between these class meetings, please see our
breakdown of how the enhanced sections’ daily work compared to the regular sections’ daily work. Furthermore, the class size was restricted to 20 students, compared to the normal class size of 20-35
students in each section. This was a particularly significant decrease in class size for the Calculus I course.
Enrollment in the enhanced math classes was open to first-year students of any ethnicity who:
• Had a relatively weak background or apprehension about mathematics, and
• Expressed a desire to improve their math skills
What did the extra time provide to students enrolled in the enhanced sections?
• A more comfortable and enjoyable learning environment
• Increased opportunities to interact with the teacher and other students in the class, through:
□ Individualized help and mentoring
□ Group work and peer learning
• A deeper grasp of the course content, through:
□ More detailed explanations of concepts and related background
□ More complicated problems, multiple approaches and solutions to problem solving
□ Extended use of the technology and detailed training on the use of the graphing calculator
• Study skills, time management, and college and career planning advice
For the enhanced sections, instructors were carefully selected. The instructors, who were primarily GTAs, were chosen based on teaching experience both in the classroom and in the type of
environments where they had previously taught. Instructors who had previous experiences with students from a diverse background were preferred. We also looked for instructors with experience at KU
who had knowledge of our program and the university in general.
For more information on our approach, see our discussion in CTE’s Teaching Handbook.
Several instructors have provided comments on the implementation of the enhanced math sections:
Amy Kim (taught Calculus I during the Spring 2005 and Fall 2005 semesters):
“I have taught both the regular and enhanced sections. In the enhanced sections, I always try to help students develop good organization skills and study habits. I expect students to keep notes and
homework well organized. In addition, during the first week of classes I have students map out a typical week of classes, work and extracurricular activities to help them find times to study
mathematics, as well as their other subjects. Students feel like they have more control over time if they manage it appropriately, and they don’t feel so overwhelmed in college.
“Cooperative learning and group work is a big part in the enhanced math sections. Students become more engaged with the topics if the atmosphere in the classroom varies. This way, when students are
home completing assignments, they may be able to reflect back on the group discussions and answer their own questions. In addition to group work, the graphing calculator is an awesome tool to enhance
the learning process. Once students have a firm grasp on the material in a particular section, it is always great to allow them to explore these problems graphically and numerically.
“One motto I follow for the class is ‘Practice makes perfect.’ If students can work up to the challenging problems in the textbook, then they have confidence and can tackle any problem they are given
with the tools they have created for themselves.”
Benjamin Pera (taught Intermediate Algebra during Fall 2006):
“The enhanced sections offered students more time for in-class practice of homework, more time for questions, substantially smaller class sizes with more one-on-one help, and more time for examples
than a regular section. In addition, the overall pace through the material was at a slower rate than the regular sections, which allowed students to absorb more information. The extra time also
allowed me to have more group study time, where students worked on homework and projects in groups and pairs. This helped them build confidence in themselves, in each other, and helped them develop
their teamwork skills.”
There are several indicators that suggest that these enhanced sections were beneficial: students’ performance across the course, performance on the final exams (including samples of students’ work), drop rates, course evaluations, and instructor comments. Each of these measures is outlined below.
1. Students’ performance: Course grades
In comparisons of overall performance between students in the enhanced sections to students in the traditional sections of the courses, students in the enhanced sections performed at a level that was
at least one letter grade higher than their peers in the traditional course. This is a similar pattern to what other institutions are finding when they implement an enhanced math class format. For
example, UT Austin’s Emerging Scholar Program in Calculus found that students in the enhanced sections of calculus were on average obtaining a B+ in their first semester of calculus, which is a 0.5
grade point average better than their peers in traditional classes. Furthermore, in their second semester of calculus, the emerging scholar students are obtaining an A average, which is 0.75 grade
points better than the average in the regular second semester calculus classes. Similarly, Wisconsin’s Emerging Scholars Calculus program found that in students’ first two semesters of calculus, the
students in the enhanced classes are performing half a grade point higher than students in the regular sections, and this is when pre-college math abilities are statistically controlled for. Thus, it
appears that this type of enhanced program is significantly increasing students’ level of mathematical understanding. Students also report taking pride in the work they did. For some students, this
is the first math course for which they have ever earned an A. Often the grade that they earn in the enhanced math class is the highest grade that they have ever earned in a math course.
2. Students’ performance: Final exam results
Students in Math 115 (both the enhanced and traditional sections) took the same final exam over the course material. This provided an equal base of comparison across the two types of classes. First,
an examination of the overall exam performances between the enhanced Math 115 course and two traditional Math 115 classes was conducted. While the traditional classes scored an average of 68% (median
= 67%) and 74% (median = 75%) on the final exit exam, the enhanced Math 115 class scored an average grade of 83% (median = 85%), indicating increased understanding in the enhanced class.
A follow-up analysis addressed whether there was a particular type of question on which the enhanced section excelled. To answer this, each question on the Fall 2007 final exam was categorized in
terms of the type of skills and understanding that it required, using Bloom’s taxonomy (pdf). This table indicates the raw number and overall percentage of questions from each chapter that were
included on the final exam. As can be observed in this chart, the exam mostly emphasized knowledge and comprehension questions, with the fewest questions asking students to engage in application.
After the classification of the exam questions, the percent of each type of question answered correctly was then examined for each class section. See this bar graph for more information. As evidenced
by this data, the enhanced Math 115 section outperformed the traditional Math 115 classes on each type of question. However, all students, including those in the enhanced class section, performed
more poorly on the application-directed questions than on the knowledge/comprehension- or the analysis-directed questions.
In an attempt to determine why lower levels of performance were observed when students were asked to apply the course material, the type of preparation and practice that students in the
enhanced-classes were asked to engage in across the semester was assessed. In terms of class work, the amount of time spent in each class on knowledge/comprehension, application, and analysis
material was examined, the results of which can be seen in this break-down of the daily material, in terms of Bloom’s taxonomy, covered in the enhanced-sections. We've also provided a condensed table
of the amount of time spent lecturing on material related to the three Bloom’s taxonomy categories. Since the time spent in each chapter varies, overall percentage averages were calculated as well:
43% of the total lectures covered topics and examples in the knowledge and comprehension categories, 22% were focused on application problems and examples, and 35% of all lectures discussed and
examined analysis questions and examples.
The percent of homework and group work problems completed by students in each taxonomic category was examined, as well. Since the time spent and number of problems assigned per chapter varied,
overall percentage averages were also calculated: 52% of the problems assigned to the students covered topics in the knowledge and comprehension categories, 18% were focused on application problems
and 30% of all the assignments were analysis questions. This bar graph illustrates the relative percentages of each category of material, comparing the lecture topics to the problems assigned. Based
on the amount of time spent in the analysis category inside and outside of the classroom, it was not surprising that the students did well in this category on the final exam. The greater time devoted to analysis relative to application problems may explain the gap between the two categories.
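The weighted-average arithmetic described above can be illustrated with a short sketch. The per-chapter minutes below are hypothetical placeholders, not the actual course data; the point is only how unequal chapter lengths are folded into overall category percentages.

```python
# Hypothetical per-chapter lecture minutes, split across the three Bloom's
# taxonomy categories used in the analysis. These numbers are illustrative
# placeholders, not the course's actual data.
chapters = [
    {"knowledge": 90, "application": 40, "analysis": 70},
    {"knowledge": 60, "application": 35, "analysis": 55},
    {"knowledge": 110, "application": 55, "analysis": 85},
]

# Sum minutes per category across all chapters, then divide by the grand
# total so that longer chapters carry proportionally more weight.
totals = {"knowledge": 0, "application": 0, "analysis": 0}
for chapter in chapters:
    for category, minutes in chapter.items():
        totals[category] += minutes

grand_total = sum(totals.values())
percentages = {
    category: round(100 * minutes / grand_total, 1)
    for category, minutes in totals.items()
}
print(percentages)  # e.g. {'knowledge': 43.3, 'application': 21.7, 'analysis': 35.0}
```

Averaging the per-chapter percentages directly would instead give every chapter equal weight, which is why the totals are pooled first.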
Two examples of analysis questions from the final are included here:
Analysis Example 1: Once the student obtained the volume function, it appears that they graphically found the correct solution.
Analysis Example 2: This student had no trouble analyzing the problem, creating a diagram, and properly labeling the variables.
Overall, the students seemed to struggle with the knowledge and comprehension questions, as well. They appeared to have a general understanding of the underlying ideas of calculus, but sometimes they
lost track of the details in their computation. For example, this student’s response to a comprehension question does not show use of the correct rule of differentiation. This student does not
initially apply the properties of logarithms, which would have simplified the question being asked.
In an attempt to address students’ relative weakness in the area of application according to Bloom’s taxonomy, the focus of the course itself was shifted, with more time spent in and out of class on
application-related understanding. As can be seen in these bar graphs (pdf), more time was spent both in-class and out-of-class on the working of application problems in the Spring 2008 semester than
in Fall 2007. There was also a slight increase in the number of application-type questions asked on the Spring 2008 Final Exam relative to the Fall 2007 version.
To examine whether an increased emphasis on application questions affected student understanding, the final exams for the Spring 2008 courses were analyzed. Although students in the enhanced math
sections answered more questions correctly than students in the traditional math sections overall, what is most interesting is the increase in application-type question performance. While students in
the Fall 2007 enhanced sections exhibited a U-shaped function, with the lowest levels of understanding demonstrated on application-based questions, the Spring 2008 enhanced math section students
exhibit a much more linear and steady understanding across the three components. Therefore, it appears that when increased focus is applied to an aspect of student understanding, learning in that area can be bolstered.
3. Pre- and post-test assessments of mathematics accessibility and utility
Students in the enhanced and traditional Math 101 and Math 115 classes were asked, at the beginning and at the end of the semester, how closely the following statements described them:
1. Mathematics is enjoyable and stimulating to me.
2. Communicating with other students helps me have a better attitude towards mathematics.
3. The skills I learn in this class will help in other classes for my major.
The results were as follows:
Section      Q1 Beg   Q1 End   Q2 Beg   Q2 End   Q3 Beg   Q3 End
Math 101e    1.93     2.40     3.23     3.50     3.38     3.50
Math 101     2.65     2.55     3.16     3.72     2.89     2.90
Math 115e    2.71     2.90     3.64     3.80     4.07     4.30
Math 115     3.27     3.50     3.39     3.68     3.91     3.53
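A small script using the averages reported above makes the beginning-to-end changes easier to scan (positive values mean the rating rose over the semester):

```python
# Survey averages (beginning, end of semester) for the three statements,
# taken from the figures reported above. "e" marks the enhanced sections.
survey = {
    "Math 101e": {"Q1": (1.93, 2.40), "Q2": (3.23, 3.50), "Q3": (3.38, 3.50)},
    "Math 101":  {"Q1": (2.65, 2.55), "Q2": (3.16, 3.72), "Q3": (2.89, 2.90)},
    "Math 115e": {"Q1": (2.71, 2.90), "Q2": (3.64, 3.80), "Q3": (4.07, 4.30)},
    "Math 115":  {"Q1": (3.27, 3.50), "Q2": (3.39, 3.68), "Q3": (3.91, 3.53)},
}

# Change from the beginning to the end of the semester for each item.
changes = {
    section: {q: round(end - beg, 2) for q, (beg, end) in items.items()}
    for section, items in survey.items()
}

for section, deltas in changes.items():
    print(section, deltas)
```

The enhanced sections improve on every item, while the traditional Math 115 section's Q3 rating falls, matching the interpretation given in the text.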
These results indicate that initially students in the enhanced sections of each class considered math to be less enjoyable and stimulating than did their peers in the traditional sections. However,
at the end of the semester, students in the enhanced 101 class ranked math as enjoyable as did students in the traditional class. Across the board, there does not appear to be much difference in how helpful
students viewed communication with their peers. Finally, in terms of the long-term application of the skills that they learned in their math class, there were modest improvements for students in both
enhanced classes across the semester, whereas the evaluation of utility by the traditional Math 101 students did not change, and the traditional Math 115 students actually viewed their mathematical
training as being less helpful for future classes as the semester wore on. Therefore, there seem to be some positive trends in the students in the enhanced classes in particular, especially in the
areas of how enjoyable they find the subject and how useful they think that their current knowledge base will be for future courses. As one of the goals of this initiative was to increase interest in
pursuing careers in which mathematical skills will be required, the observed increases in student engagement with math suggests that they may be more likely to continue in a math-related field as
compared to their peers who view their math knowledge as having less future utility.
4. Drop rates
Another indication of student success with the enhanced math classes is the decrease in the number of students who dropped the courses, compared to the traditional courses. In particular, the drop
rate for those enrolled in Intermediate Algebra was noticeably lower than the overall drop rate for that course, and the drop rate in Math 115 has been significantly lower in the past three years,
since the enhanced math classes have been implemented.
5. Course evaluations
In terms of course evaluations, we also see that students are positively responding to the enhanced math classes. In particular, the average C&I score for item #8 (“Overall, (s)he is an effective
teacher”) for the instructors teaching the enhanced courses since the 2002 Fall Semester are (out of 5 points):
• Intermediate Algebra: 4.73
• College Algebra: 4.74
• Calculus I: 4.94
These numbers were significantly higher than the average response for the instructors in the regular sections during the same period.
6. Instructor observations on student performance
Finally, the instructors have also noticed significant improvement in their students. For example, Amy Kim reports,
“The enhanced math program has been very successful for students. In the Fall 2005 semester, no one received less than a B on the Math 115 midterm and the enhanced section earned the highest midterm
and final grades across all sections. In addition, all of the enhanced Math 115 students passed the gateway exam on the second day of the testing period.”
Overall, the enhanced math sections have been very successful. The extra dedication and resources allocated to these sections produced many positive outcomes, such as improved performance, more students completing their math courses, and higher evaluations of the course instructors. The combination of appropriate learning environments, dedicated teachers, and committed students brought about these successes.
The instructors of the enhanced math classes have also expressed that their courses were successful. For example, Brian Lindaman says, “Teaching the enhanced section was really fulfilling—the
students were motivated, [and] having class every day allowed me more flexibility in deciding how much content to present in any one day. The extra time also allowed me to help students review more
for tests.” Another instructor, Benjamin Pera, stated, “One of the students said that [they] finally knew what it takes to do well in a math class. I think that is the biggest payoff for the students
in the enhanced class—the keys to succeeding in math, and academia for that matter. I think it enables them to really envision themselves as strong students, because they know what kind of standards
they need to be willing to set for themselves in order to get there.”
Therefore, it appears that the Treisman approach is beneficial. In particular, it may be most beneficial to students when they are making a transition in understanding and expectations. These
difficult transitions of knowledge can occur when students shift from K-12 to a university, when students transfer from two-year to four-year universities, or when students are shifting from courses
that focus on computational skills to theoretical understandings. (For a discussion of examining student transitions in understanding, see Baxter-Magolda, 2004 (pdf)).
The IMSD grant has been renewed by NIH twice. In the proposal reviews, there have been favorable reports and evaluations of the curriculum enhancements supported by the initiative. In addition, an
external review of the IMSD program conducted by experts concluded that the preliminary data indicate that the enhanced mathematics courses are increasing student performance and recommended that the
College of Liberal Arts and Sciences should increase the amount of institutional funds committed to supplemental instruction in biology, chemistry, and mathematics courses.
While these enhanced math sections have been successful, there are still some challenges for the future. First, it would be beneficial to better understand why the students do so well in these
classes and to investigate how students could be prepared to do equally well without the extra support. Professor Gavosto is currently working with a graduate student on the development of a survey
that will help determine how the experience in these sections changes students’ study habits and attitudes towards mathematics courses, which may provide some insight into the types of changes that
are being produced in these students.
Second, there is a financial challenge: the cost of the enhanced sections does not currently allow extending these courses to all students. However, the aspects of learning that
can be reproduced in other classes are becoming apparent, so that all students may benefit. So far, a few key points learned from the enhanced sections that could be used for training of GTAs in all
the regular sections are:
• Finding ways to acknowledge all students enrolled, not just the ones who always come to class, and setting high expectations for all students.
• Giving regular feedback and actively following up with students who under-perform.
• Actively encouraging classroom attendance and participation.
• Describing and modeling good study habits in mathematics.
• Periodically reminding students of available resources and approaching deadlines, especially during the fall semester for first year students.
The work in the enhanced classes could not have been done without the dedication of many devoted teachers who expected more from their students. Erin Carmody collected and analyzed the data of the pre- and post-test assessments of mathematics accessibility and utility. Amy Kim collected and analyzed the data using Bloom’s taxonomy in her classes. Many thanks to everyone who assisted with this project.
On a third order iterative method for solving polynomial operator equations
We present a semilocal convergence result for a Newton-type method applied to a polynomial operator equation of degree 2.
The method consists, in fact, in evaluating the Jacobian only at every two steps, and it has r-convergence order at least 3. We apply the method to approximate the eigenpairs of matrices.
We perform some numerical examples on test matrices and compare the method with the Chebyshev method. The norming function we proposed in a previous paper yields better convergence of the iterates than the classical norming function for both methods.
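The key idea of reusing a single derivative (Jacobian) evaluation across two successive Newton-type sub-steps can be sketched in the scalar case. The example below applies it to f(x) = x^2 - 2; it is a simplified illustration of the frozen-derivative idea, not the authors' exact scheme for eigenpair problems.

```python
# Sketch of a Newton-type iteration that evaluates the derivative only once
# per pair of sub-steps (the scalar analogue of evaluating the Jacobian at
# every two steps). Illustrated on f(x) = x^2 - 2, whose positive root is
# sqrt(2). This is a simplified stand-in, not the paper's eigenpair scheme.

def f(x):
    return x * x - 2.0

def f_prime(x):
    return 2.0 * x

def two_step_newton(x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        d = f_prime(x)          # derivative evaluated once...
        y = x - f(x) / d        # ...used for the first sub-step
        x_next = y - f(y) / d   # ...and reused for the second sub-step
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

root = two_step_newton(1.0)
print(root)  # approximately sqrt(2) = 1.41421356...
```

Each composite step costs one derivative and two residual evaluations; it is this kind of reuse that raises the convergence order above Newton's quadratic rate at a reduced linear-algebra cost.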
Emil Cătinaş
(Tiberiu Popoviciu Institute of Numerical Analysis)
Ion Păvăloiu
(Tiberiu Popoviciu Institute of Numerical Analysis)
E. Cătinaş, I. Păvăloiu, On a third order iterative method for solving polynomial operator equations, Rev. Anal. Numér. Théor. Approx., 31 (2002) no. 1, pp. 21-28. https://doi.org/10.33993/
Algebra 1 ph school answers
I am very much relieved to see my son doing well in Algebra. He was always making mistakes, as his concepts of arithmetic were not at all clear. Then I decided to buy Algebra Professor. I was
amazed to see the change in him. Now he enjoys his mathematics and the mistakes are considerably reduced.
Anne Mitowski, TX
If it wasn’t for Algebra Professor, I never would have been confident enough in myself to take the SATs, let alone perform so well in the mathematical section (especially in algebra). I have the chance to go to college, something no one in my family has ever done. After the support and love of both my mother and father, I think we’d all agree that I owe the rest of my success as a student to your software. It really is remarkable!
G.O., Kansas
Thank you for the responses. You actually make learning Algebra sort of fun.
C.B., Iowa
Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among them?
Search phrases used on 2010-10-09:
• logarithmic equations calculator
• implicit differentiation calculator
• GRE Calculating COmbinations
• prentice hall advanced algebra, an algebra 2 course, answers
• 11th grade math games
• formula number to fraction
• algebra simplifying radical fractions
• simplify radical expression calculator
• parabolas for kids
• LCM finder
• south carolina algebra 1 eoc practice tests
• TI 83 plus emulator
• difference quotient examples
• simplify exponent calculator
• how to do pre college algebra
• KS3 Year 7 Algebra Worksheet
• test questions for grade schoolers
• ecology math book pdf free download
• formula root mean square
• simultaneous equations questions and answers
• math trivias
• absolute value integers connect the dots
• linear interpolation for Pythagoras calculate distance of vector
• mathematica on formulation of linear programming problem
• graph linear inequalities worksheet
• Paul A Foerster
• Americans Workbook Answer Key - McDougal Littell
• alebraic trivia facts
• Convert base 6 fraction to decimal
• hard math equations
• solving for multivariables ti89
• subtracting square roots
• kumon answers
• decimal 6.7 to a mixed number
• Sixth Grade Math worksheets-Free Printable
• Converting mixed numbers to fractions free worksheets
• dividing polynomials answer
• graph the domain variable exponent
• trigonometric identity solver
• vertex ti-89
• square root variable calculator
• online math calculator problem solver
• exponent multiplication worksheets free
• TI-83 calculator tricks for solving college algebra problems
• solving of nonlinear differential equations
• gcse math games
• algebra revision activities (grade9)
• LCM of polynomial calculator
• binomial TI 83
• factoring trinomials calculator equations
• square root simplifier
• Topics to revise for GCSE Reading Arabic paper
• free SOL review sheets for teachers
• solving square root inequalities
• ti-83 plus calculator program chemistry equation
• math ks2 algebra
• rational expressions gas mileage
• multiplying square roots with same radicals
• algabra
• algebra refresh
• cool ti-89 programs economics
• calculus + algebraic substitution
• how to calculate LCM
• differential equations interpreting word problems
• logarithm + book + free
• free prentice hall pre algebra answer key
• clep exam college algebra
• simple aptitude questions
• ti 89 circuit creator
• intermediate 2 maths questions
• secrets to college algebra
• how to factor and solve a quadratic equation
• Algebra Cheat Sheet
• McDougal Littell Biology Georgia worksheets
• aptitude questions & answers with calculations
• answer key to advance algebra homework
• online sats papers which you can do online
• powerpoint equation
• ti-89 pdf converter
• lcm solver
• Prentice Hall Algebra 2
• answer key conceptual physics tenth edition
• Teacher's Edition Texas Algebra 1 Chapter 10 Section 2
• How to solve double variable algebra equations
• Chapter 6 Resource Book Algebra 1 answer keys
• Rational Expressions and their Graphs
• a revision sheet or maths ks2
• radical expression
• learning math for third thru six grade free online print out
• slope worksheets
• basic business statistics concepts and applications ppt thomson
Python Program to Find Square Root Under Modulo k (When k is in Form of 4*i + 3)
Given a number N and a prime number k, the task is to find the square root of N under modulo k (when k is of the form 4*i + 3, where i is an integer).
Given prime number k=5
Given Number = 4
The Square root of the given number{ 4 } under modulo k= 2
Given prime number k=3
Given Number = 5
No, we cannot find the square root for a given number
Program to Find Square Root Under Modulo k (When k is in Form of 4*i + 3) in Python
Below are the ways to find the square root of the given number under modulo k (When k is in form of 4*i + 3) in python:
Method #1: Using For Loop (Static Input)
• Give the prime number k as static input and store it in a variable.
• Give the number as static input and store it in another variable.
• Pass the given number and k as the arguments to the sqrt_undermodk function.
• Create a function to say sqrt_undermodk which takes the given number and k as the arguments and returns the square root of the given number under modulo k (When k is in form of 4*i + 3).
• Calculate the value of the given number modulus k and store it in the same variable given number.
• Loop from 2 to the given k value using the for loop.
• Multiply the iterator value with itself and store it in another variable.
• Check if the above result modulus k is equal to the given number using the if conditional statement.
• If it is true then print the iterator value.
• Return and exit the for loop.
• Print “No, we cannot find the square root for a given number”.
• The Exit of the Program.
Below is the implementation:
# Create a function sqrt_undermodk which takes the given number and k as
# arguments and prints the square root of the given number under modulo k
# (when k is of the form 4*i + 3).
def sqrt_undermodk(gvn_numb, k):
    # Calculate the value of the given number modulus k and store it back in
    # the same variable.
    gvn_numb = gvn_numb % k
    # Loop from 2 to the given k value using a for loop.
    for itror in range(2, k):
        # Multiply the iterator value with itself and store it in another variable.
        mul = itror * itror
        # Check if the above result modulus k is equal to the given number.
        if mul % k == gvn_numb:
            # If it is true, then print the iterator value.
            print("The Square root of the given number{", gvn_numb,
                  "} under modulo k=", itror)
            # Return to exit the for loop and the function.
            return
    # If no square root was found, report it.
    print("No, we cannot find the square root for a given number")

# Give the prime number k as static input and store it in a variable.
k = 5
# Give the number as static input and store it in another variable.
gvn_numb = 4
# Pass the given number and k as the arguments to the sqrt_undermodk function.
sqrt_undermodk(gvn_numb, k)
The Square root of the given number{ 4 } under modulo k= 2
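The linear search above works for any prime k, but the "4*i + 3" condition in the title actually permits a closed-form answer: for a prime k with k % 4 == 3, a square root of a quadratic residue n is n^((k+1)/4) mod k. The sketch below is an addition for illustration, not part of the original program; note that the article's first example, k = 5, is not of the 4*i + 3 form (5 % 4 == 1), so the sketch uses k = 7 instead, and the function name and Euler-criterion check are assumptions of this sketch.

```python
def sqrt_mod_direct(n, k):
    # Precondition: k is prime and k % 4 == 3 (the "4*i + 3" form).
    assert k % 4 == 3, "direct formula only valid for primes k = 4*i + 3"
    n = n % k
    # Euler's criterion: a nonzero n is a quadratic residue mod k
    # exactly when n^((k-1)/2) == 1 (mod k).
    if n != 0 and pow(n, (k - 1) // 2, k) != 1:
        return None  # no square root exists
    # Then x = n^((k+1)/4) mod k satisfies x*x == n (mod k).
    return pow(n, (k + 1) // 4, k)

print(sqrt_mod_direct(4, 7))  # 2, since 2*2 = 4 (mod 7)
print(sqrt_mod_direct(5, 7))  # None: 5 is not a quadratic residue mod 7
```

Unlike the O(k) loop, this runs in O(log k) time via three-argument pow(), which matters when k is large.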
Method #2: Using For loop (User Input)
• Give the prime number k as user input using the int(input()) function and store it in a variable.
• Give the number as user input using the int(input()) function and store it in another variable.
• Pass the given number and k as the arguments to the sqrt_undermodk function.
• Create a function to say sqrt_undermodk which takes the given number and k as the arguments and returns the square root of the given number under modulo k (When k is in form of 4*i + 3).
• Calculate the value of the given number modulus k and store it in the same variable given number.
• Loop from 2 to the given k value using the for loop.
• Multiply the iterator value with itself and store it in another variable.
• Check if the above result modulus k is equal to the given number using the if conditional statement.
• If it is true then print the iterator value.
• Return and exit the for loop.
• Print “No, we cannot find the square root for a given number”.
• The Exit of the Program.
Below is the implementation:
# Create a function sqrt_undermodk which takes the given number and k as
# arguments and prints the square root of the given number under modulo k
# (when k is of the form 4*i + 3).
def sqrt_undermodk(gvn_numb, k):
    # Calculate the value of the given number modulus k and store it back in
    # the same variable.
    gvn_numb = gvn_numb % k
    # Loop from 2 to the given k value using a for loop.
    for itror in range(2, k):
        # Multiply the iterator value with itself and store it in another variable.
        mul = itror * itror
        # Check if the above result modulus k is equal to the given number.
        if mul % k == gvn_numb:
            # If it is true, then print the iterator value.
            print("The Square root of the given number{", gvn_numb,
                  "} under modulo k=", itror)
            # Return to exit the for loop and the function.
            return
    # If no square root was found, report it.
    print("No, we cannot find the square root for a given number")

# Give the prime number k as user input using the int(input()) function and
# store it in a variable.
k = int(input("Enter some random number = "))
# Give the number as user input using the int(input()) function and
# store it in another variable.
gvn_numb = int(input("Enter some random number = "))
# Pass the given number and k as the arguments to the sqrt_undermodk function.
sqrt_undermodk(gvn_numb, k)
Enter some random number = 7
Enter some random number = 2
The Square root of the given number{ 2 } under modulo k= 3
The Yield Frontier: Profiting from G-Sec & Sensex Yield Dynamics
The Yield Frontier: Profiting from G-Sec & Sensex Yield Dynamics
Who doesn't want to time the market? Everyone looks out for the perfect time to shift from debt to equities and vice versa. In this post, we delve into the tactical allocation between equities and
debt, as well as the role of interest rate cycles. Do we see a pattern here? Let's find out.
Krishna Appala
"At all times, in all markets, in all parts of the world, the tiniest change in interest rates changes the value of every financial asset." - Warren Buffett
A multitude of investors have read this quote, but only a small percentage of them might have truly understood its meaning. For those investing in equities long-term, it's as important to grasp the
ups and downs of interest rate cycles as it is to understand the intricacies of financial statements.
Regardless of a company's outstanding growth prospects, the credibility of its promoters, strong financials, or the exceptional quality of its products, one crucial factor that should guide your return expectations is the trajectory of interest rates.
The trend of interest rates will shed light on the direction of money flow, which in turn provides insights into:
• Whether money is becoming more expensive or cheap,
• If liquidity is entering or exiting the market,
• Whether the cost of capital is increasing or decreasing,
• The aggressiveness or caution of private companies regarding capital expenditures, and
• The potential re-rating or de-rating of a stock's PE
I think you got the gist. With this context in mind, let's delve into the correlation between interest rates and equities in the Indian market. By doing so, let's look at an approach to generate
alpha by tactically transitioning between these two asset classes.
Problem Statement
1. What is the correlation between long-term interest rates and equity returns in the Indian financial market?
2. How can investors leverage this correlation to generate alpha and enhance their portfolio performance?
• TIRS (Timing Interest Rate Spread), as we call it, is a strategy that involves two asset classes: Equity (represented by NiftyBees) and Debt (represented by Gilt Mutual Funds). At any given time,
the entire investment is allocated to either Equity or Debt.
• We have to identify the optimal Entry & Exit criteria (yield spread) based on historical data.
• Once we derive the optimal spreads, test the strategy with different starting dates.
• Initially, for every trade, the portfolio is invested in Gilt Mutual Funds.
• If the entry criteria is met, the investment is shifted entirely to Equity by purchasing NiftyBees.
• If the exit criteria is met, the investment is moved out of Equity and reallocated entirely to Debt by purchasing SBI Magnum Gilt Fund.
• Rebalancing between NiftyBees and Gilts can occur whenever buy or sell signals are triggered. The strategy accounts for a T+2-day settlement period for both stock and bond transactions.
Here's a depiction of our approach. Using historical data, we establish the upper and lower boundaries of the Spread (Gsec - Sensex Earnings Yield). If the spread reaches the upper limit, we move to
Debt; conversely, if the spread touches the lower limit, we shift to Nifty.
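The switching rule described above can be sketched in a few lines. This is an illustrative reconstruction, not the author's actual code: the function names are made up, and the 1.6%/3% thresholds are the entry and exit spreads the article derives from its historical backtest.

```python
ENTRY_SPREAD = 1.6   # at or below this spread, move fully into equity
EXIT_SPREAD = 3.0    # at or above this spread, move fully into debt

def earnings_yield(pe_ratio):
    # Earnings yield is the inverse of the PE ratio, expressed in percent.
    return 100.0 / pe_ratio

def tirs_signal(gsec_yield, sensex_pe, current_holding):
    # Spread in percentage points: 10-year G-Sec yield minus Sensex EY.
    spread = gsec_yield - earnings_yield(sensex_pe)
    if current_holding == "debt" and spread <= ENTRY_SPREAD:
        return "equity"   # equities cheap relative to bonds: buy NiftyBees
    if current_holding == "equity" and spread >= EXIT_SPREAD:
        return "debt"     # bonds attractive / equities expensive: buy gilts
    return current_holding  # otherwise, hold the existing position

print(tirs_signal(6.5, 20, "debt"))    # spread 1.5 <= 1.6 -> "equity"
print(tirs_signal(7.6, 22, "equity"))  # spread ~3.05 >= 3.0 -> "debt"
print(tirs_signal(7.2, 22, "debt"))    # spread ~2.65: hold "debt"
```

In practice the signal would be evaluated on each trading day's closing data, with the T+2 settlement lag the article assumes applied before the holding actually changes.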
For this study, we have made these assumptions and taken into consideration the following instruments:
1. 10-Year Gsec: This risk-free instrument tracks India's long-term interest rate cycle.
2. SBI Magnum Gilt Fund: We have chosen SBI Magnum Gilt Fund (Growth), which invests in Gilt instruments and is one of the longest-running funds in this category as an investment vehicle for Debt.
3. Earnings Yield: Earnings yield is the inverse of the PE ratio. In other words, the earnings yield helps investors understand how much they earned per share in percentage terms. For example, if a
company has an earnings yield of 10%, it means that the investor has earned Rs. 10 for every Rs. 100 worth of shares owned. This provides an apple-to-apple comparison between Gilt Yields and
Equity Yields. In our analysis, we used the Sensex Earnings Yield instead of the Nifty EY, as the Nifty started reporting consolidated EPS only in 2021, whereas the Sensex has been reporting
consolidated EPS since 2004, making the Sensex PE (and its Sensex EY) more reliable for historical analysis in consolidated terms.
4. NiftyBees: We considered NiftyBees, a widely traded instrument whose dividends are reinvested, as an investment vehicle for Equity.
5. We took the last 15 years of data (since 2008) for backtesting purposes
6. All the reported returns are on a pre-tax basis
Analyzing the Spread
To understand the relationship between G-Sec and Nifty 50 Earnings Yield, we calculated the spread between G-Sec and Sensex Earnings Yield (G-Sec minus Sensex Earnings Yield). The resulting spread
shows the relative attractiveness of G-Secs and equities at any given point in time.
To further analyze this spread, we did a backtest using various trade scenarios, incorporating Entry (buying Equity) and Exit (selling Equity & buying Debt) for different spreads at intervals of 0.1%.
For instance, the maximum spread observed in the past 15 years was 4%, while the minimum was -3.7%. Consequently, we tested multiple combinations, such as entering into equities at -3.7% and exiting at -3.6%, entering at -3.7% and exiting at -3.5%, entering at 1% and exiting at 4%, and entering at 2% and exiting at 3.5%, etc. There were over 2,700 such combinations since 2008.
Let's plot each of these trades and see if we have a trend here.
How to interpret this chart?
• This scatter plot illustrates various trade combinations of Entry and Exit, with a spread difference of 0.1% for each combination
• X-axis: Entry spread
• Y-axis: Exit spread
• The distribution of CAGR is divided into 10 deciles, with light green representing the lowest decile and blue signifying the highest decile. Red indicates the spread combinations that
underperformed Nifty.
Similarly, let's have a look at all the 90th percentile trades since 2008 and analyze their Entry & Exit spreads.
As you can observe, the historical data indicates that the highest CAGR (within the top 90th percentile) was attained with an entry spread of 1.6% and an exit spread of 3%.
Another way to check this data is to hold either the entry or exit variable constant and see if the CAGR is at its best. This is what it looks like:
If we keep the entry constant at 1.6% (which is the median), we can notice that the best CAGR returns for different starting points occur when the exit is roughly around 3%.
On the flip side, if we hold the exit constant at 3%, it turns out that the best CAGR returns happen when the entry is around 1.6%.
By examining the one-year forward returns of Nifty for these ranges, we can further validate the optimal spread, as shown in the chart below. The below chart indicates that the most favorable returns
are now concentrated within the 1.5% to 2% range, once outliers are disregarded. It is important to note that as the spread widens, the one-year forward returns diminish and may even turn negative.
Now that we've identified & verified the ideal entry and exit spreads, let's put them to the test, beginning in 2008.
You might be wondering: why revisit these spreads when they already represent the median of our best-performing trades? Well, it's not just about evaluating the CAGR, but about examining the entire process. We know the final outcome is promising, but let's also explore the journey to see how smooth it has been.
The strategy generated a higher CAGR with lower drawdown & volatility with respect to Nifty.
One key metric to assess the robustness of any strategy is by considering rolling returns. On a 1-year basis, the strategy outperformed Nifty about 73% of the time. This outperformance further
increased to nearly 91% when viewed on a 3-year rolling return basis.
It gets better.
The drawdowns are not only lower at the overall trade level, but they also show the same trend on a year-wise basis.
Examining the calendar year-wise data of this strategy reveals periods of significant outperformance (36%, 32%, 35%) occurring occasionally, which have contributed to pushing the alpha in our favor.
In other years, the strategy has largely followed the benchmark Nifty.
Do start dates matter? - No
Now: what if you just got lucky and invested at the perfect starting date? To remove the starting point dependency, we executed the strategy with various starting dates ranging from 2008 to 2013,
providing at least 10 years or more to observe the results.
Few observations from the chart:
• The strategy consistently outperformed NiftyBees. However, the outperformance varied depending on your starting point.
• By investing the same amount at the beginning of every month and holding the investments until now, the SIP investor managed to outpace NiftyBees, generating an alpha of 5-6%
• Thanks to its debt component, the strategy's volatility was lower than that of NiftyBees.
• The strategy also experienced a lower drawdown compared to NiftyBees.
Limitations of TIRS strategy
• This is a passive strategy, where each trade continued for around 1.5 to 2 years before the next entry/exit trigger.
• Sometimes, it takes years before the strategy shows any outperformance over the Nifty. For example, if we consider the start year as 2009 and 2012, it took around 3 years for the Strategy NAV to
outperform Nifty NAV.
• The backtested results are from the interest rate regime (G-Secs) that ranged between as high as 9.4% and as low as 5.8%. If interest rates move beyond this range, then we will be in alien territory.
What’s the bottom line?
• A lower spread suggests that equities are cheap (high earnings yield implies low PE) or that debt yields have fallen, making them less attractive. In both cases, it makes sense to shift to equities.
• A higher spread suggests that debt yields are rising, making them more attractive, or that equities have become expensive (low earnings yield = high PE). Thus, it would be wise to move to debt.
• Historically, equities have become more attractive when the spread reaches around the level of 1.6% and become overvalued when it reaches approximately 3%
• Drawdowns cannot be completely avoided. The strategy's drawdowns are still high at -29%, -22%, etc. However, they are better compared to Nifty during the same time.
• Returns come in bulk. If we miss those years, the performance may be muted. For example, as shown in the table above, the bulk of the outperformance came in 3 of the last 15 years, generating
more than 30% alpha each year.
Further Reading:
Credit where it's due:
I first encountered the central concept of the relationship between G-Sec and Earnings Yield spread through my friend and former colleague, Navid Virani. Navid, who manages Bastion Research, can be
found on both Twitter and LinkedIn.
A study by Md. Mahmudul Alam & Md. Gazi Salah Uddin in the paper Relationship between Interest Rate and Stock Price discusses stock price dependencies on interest rates across developing countries.
A study by David Blitz talks about Expected Stock Returns When Interest Rates Are Low and equity risk premiums across countries.
Disclaimer: This post is for information only and should not be considered a recommendation to buy or sell stocks. The analysis is based on data from reliable sources, but do point out any
discrepancies you find.
An investigation into the possibilities of sex and age determination of Eurasian woodcock (Scolopax rusticola L.) based on biometric parameters, using conditional inference trees and minimal important differences
14 December 2023
Attila Bende, Richárd László, Sándor Faragó, István Fekete
Morphometric characteristics of Eurasian woodcock collected during spring hunting (March) in Hungary between 2010 and 2014 were investigated to evaluate the accuracy of methods for determining the
sex of live birds. We analysed the size dimorphism of biometric traits by sex, age, and sex and age, with sex determination (n = 13,226) performed by destructive methods and age determination based
on wing examination (n = 8,905). Using the minimal important differences (MID) method, we demonstrated that, during spring migration, adult females have significantly greater mass and bill length
than juvenile females and adult males, as well as a significant difference in body length compared to juvenile females. No biologically relevant differences were demonstrated between the sexes or age
classes for other morphometric parameters. Conditional inference trees were applied to test whether body size parameters could be used to separate the age and sex of individuals. Based on posterior
probabilities (55.4%), we suggest that biometric parameters no longer provide a sufficiently reliable method to separate age classes during the spring migration. Separation of sexes showed the best
results for adult birds, with bill length (85.4%) and body mass (85.2%) proving the best predictors. The inclusion of additional morphometric variables (tarsus, tail, body and wing length) in the
model did not increase the reliability of sex segregation, confirming the results obtained using MID, i.e. that there is no statistically verifiable biologically relevant difference between adult
male and female birds for these parameters. A methodological innovation in this study was using MIDs for comparisons to determine biological thresholds for differences, the procedure helping to
exclude Type I errors and determine biological significance.
As there are only slight differences between the sexes in Eurasian woodcock (Scolopax rusticola), it can be difficult to separate the sexes based on appearance traits such as plumage colouration and
markings or leg colour alone (Clausager 1973, Cramp & Simmons 1983, Ferrand & Gossman 2009). Nevertheless, several attempts have been made to separate the sexes based on biometric and/or
morphological parameters. Clausager (1973) was the first to point out the possibility of using the quotient of central tail feathers and bill length for separating the sexes. Subsequently, several
studies (MacCabe & Brackbill 1973, Artmann & Schroeder 1976, Rockford & Wilson 1982) attempted to determine sex based on the size of individual body parts (e.g. bill, tail, wing measurement or body
weight), though none of these allowed the sexes to be distinguished with sufficient reliability. According to Glutz von Blotzheim et al. (1977), a woodcock with a bill longer than 77 mm and a tarsus
longer than 38 mm was most likely to be a female; however, no information was given on the reliability of the method. One of the most widely known and cited formulas for the separation of woodcock
sexes based on morphological characteristics is that developed by Stronach et al. (1974), based on the formula (I = [0.2952X] – [0.1566Y]), where X is the length of the bill (in mm), and Y is the
length of the tail (in mm). In this case, if the value of I is > 8.364, then the bird is a female (75% correct), and if the value of I is < 8.364, then the bird is a male (72% correct). The
probability of error was 28% if birds that were not yet adults were included in the analysis. Birds of < 12 months of age may be excluded when examining the tips and proximal edges of outer primaries
(ragged outline in first years; smooth on older birds, at least until April) and the terminal lighter bar on primary coverts (broader and browner on young birds). However, when all birds that had not
yet undergone full moulting were excluded, only 2-5% accuracy was achieved (Shorten 1975). Considering the above criteria, it can be concluded that the applicability of the method is severely
limited. To address this problem, the present paper aims to provide a morphological basis for sex determination by employing contemporary, biologically pertinent and statistically advanced methods to
a large sample.
Ferrand & Gossmann (2009) obtained even worse results in a similar study. Their results showed that males, on average, have shorter bills and longer tails than females. However, the authors pointed
out that there was so much overlap between the data that it was impossible to determine the sex for most birds reliably. Based on their data, a bill length > 80 mm represented a female, and a tail
length > 88 mm represented a male. Further, adult birds with a tail/bill ratio ≥ 1.20 were males and females if the ratio was ≤ 1.10, while juvenile birds with a tail/bill ratio ≥ 1.20 were males and
females if the ratio was ≤ 1.00. As the overlap was high, the method was only 45% accurate for adults and 25% for juveniles (Ferrand & Gossmann 2009).
Detailed statistical studies based on differences in morphometric data for other charadriiform species, such as linear models or discriminant and principal component analysis, have also not provided
definite results (Remisiewicz & Wennerberg 2006, Schroeder et al. 2008). According to Hoodless (1994), the difference in body weight between sexes during the laying phase of the nesting period could
be suitable for sexing some woodcock; however, the method has not proved sufficiently reliable, even during this narrow time interval (Aradis et al. 2015). Furthermore, Aradis et al. (2015) reported
that discriminant function analysis applied to a set of woodcock morphometric traits failed to achieve 80% confidence in the case of juveniles and 79.6% and 77.1% for adult females and males, respectively.
Between 1983 and 1999, Faragó et al. (2000) conducted a study that drew conclusions from 1,008 birds collected during the spring hunt in Hungary. However, some biometric parameters examined were only
available in sample sizes below 100. Hence, their results cannot be considered representative due to the low number of annual observations. The year 2009 marked a turning point in woodcock research
when spring woodcock hunting in Hungary was put at risk due to the enforcement of the EU Birds Directive (79/409 EEC). As a condition for an exception from the Directive, the Hungarian Hunters
National Association launched the Hungarian woodcock monitoring programme in 2009. In 2010, the Institute of Game Management and Vertebrate Zoology of the University of West Hungary joined the
monitoring with a biometric testing module. As more than 5,000 data providers contributed data collected according to a standard protocol between 2010 and 2014, this ‘new’ national woodcock
monitoring programme provided an unprecedented opportunity for a time-series analysis of woodcock migration based on a large sample size (n = 13,471). In a statistical analysis of this large
biometric dataset, we seek to answer whether age and sex determination based on biometric traits is possible in live woodcock and, if so, how reliably these parameters indicate age and sex.
Material and Methods
Since the spring of 2010, woodcock bag monitoring, coordinated by the Hungarian Hunters National Association, has formed the basis for a nationwide, large-sample, age- and sex-differentiated study of
woodcock biometrics. Biometric data were collected in March each year between 2010 and 2014 from all 19 counties of Hungary, the monitoring program targeting up to 5,600 bagged woodcock per year (for
the number of birds collected per county, see Table 1). For each sample, the person responsible recorded the place where the bird was bagged (municipality and recording code), the exact time of
sampling (month, day, hour and minute), and the sex of the bird. For age determination purposes, each hunter was required to send in at least 25%, and from 2011, 40%, of wings from the woodcocks he
had killed, stretched and prepared. Age determination was carried out according to the widely used methodology for woodcock, based on the state and degree of moulting and the characteristic features
(moulted or unmoulted) of each feather group (Glutz von Blotzheim et al. 1977, Cramp & Simmons 1983, Ferrand & Gossmann 2009). The birds were separated into ‘juvenile’ and ‘adult’ age groups, with no
further detailed classification applied (Bende 2021). The recording of biometric parameters (body weight, bill length, body length, wing length, tail length and tarsus length) and the choice of
instruments used for measurement were in accordance with conventional ornithological methods. Body weight (1 g accuracy) was measured using a balance scale or a letter scale, while length
measurements were obtained using a standard ruler (tail length), tape measure (wing length) or calliper (bill and tarsus length) (Faragó et al. 2000). All data were sent to the Institute of Wildlife
Biology and Management of the University of Sopron on standard sampling forms, together with the wing samples.
Table 1.
Summary statistics for six biometric body measurements for female and male woodcock during the spring migration in Hungary (mean ± SD; min-max = range). All measurements are given in mm, except
weight in g. ‘n’ = valid cases, the entire dataset containing 13,471 observations.
Statistical analysis
Statistical analyses were conducted in RStudio version 4.3.1 (2020), built on the R platform version 4.2.3 (R Development Core Team 2022). We first computed descriptive statistics, i.e. minimum, maximum, median, range, standard deviation (SD) and number of valid cases (n), and excluded from further analysis any observations lying more than 4 SD above or below the mean. The cleaned dataset contained 13,471 individuals, ordered in rows, with each row representing one individual observation.
We provide a rigorous justification for employing parametric statistical measures such as mean and SD. The central limit theorem posits that, given a sufficiently large sample size, the sampling
distribution of the mean for any independent, identically distributed random variable will be approximately normally distributed, irrespective of the original distribution of the variable (Efron &
Tibshirani 1993, Lumley et al. 2002, Hoekstra et al. 2014). Hence, for large samples (e.g. n ≥ 10,000), the mean and SD can serve as robust parameters for statistical inference without the necessity of assuming a specific (e.g. normal or t-) distribution. Additionally, large samples tend to mitigate the influence of extreme values, which further justifies the use of the SD (Rousseeuw & Croux 1993, Wilcox 2012, Kwak & Kim 2017); we also excluded extreme values using the ±4 SD criterion mentioned earlier. Any objection to the use of the SD that rests on assuming a normal or t-distribution therefore has little empirical force at the sample sizes involved here.
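The ±4 SD exclusion rule described above can be sketched in a few lines. This is an illustrative Python version (the study itself was analysed in R); the data are invented.

```python
import numpy as np

def drop_extreme(x, k=4.0):
    """Remove values lying more than k sample standard deviations
    from the mean (the +/-4 SD exclusion criterion)."""
    z = (x - x.mean()) / x.std(ddof=1)
    return x[np.abs(z) <= k]

# Invented data: 1,000 plausible body weights (g) plus one absurd entry
rng = np.random.default_rng(0)
weights = np.append(rng.normal(300, 10, size=1000), 9999.0)
clean = drop_extreme(weights)
print(weights.size, "->", clean.size)  # 1001 -> 1000: the 9999 g entry is dropped
```

Note that in very small samples a single gross outlier can inflate the SD enough to mask itself, so a z-score cut of this kind is only safe at sample sizes like those used in this study.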
Given the high number of observations and the fact that the sampling region was counterbalanced, our dataset is statistically representative of the bird population in Hungary. Specifically, each of
the 19 counties within Hungary is represented in our sample in a manner commensurate with its proportion in Hungary's overall migratory bird population. To ensure the statistical representativeness
of the findings, the current investigation utilised the dataset furnished by Szemethy et al. (2014). Their research posited an estimated migratory population range of 1.48 to 6.89 million woodcock
transiting through Hungary during the spring season, which aligns temporally with the period under scrutiny in the present study. For rigour, the upper population estimate of 6.9 million woodcock was
adopted as the stringent criterion for population size. Using this parameter, the minimum requisite sample size was computed using the frequentist framework (e.g. Rosner 2015). Employing a highly
rigorous confidence level of 98%, an estimated population proportion of 0.5 (the value that maximises the required sample size) and an exceptionally stringent margin of error of 1.1% (i.e. the true population parameter is anticipated to lie within ±1.1% of the observed value), the minimum sample size required for statistical representativeness was 11,199 individuals, a threshold our dataset of 13,471 observations comfortably exceeds.
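The calculation above is the standard frequentist sample-size formula for a proportion, with a finite-population correction. A stdlib-only Python sketch follows; the published figure of 11,199 depends on how the z-score was rounded, so this reproduces it only approximately.

```python
from math import ceil
from statistics import NormalDist

def min_sample_size(N, confidence=0.98, p=0.5, margin=0.011):
    """Minimum sample size to estimate a proportion p in a population
    of size N at the given confidence level and margin of error."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-sided z-score
    n0 = z * z * p * (1 - p) / margin ** 2              # infinite-population size
    return ceil(n0 / (1 + (n0 - 1) / N))                # finite-population correction

n = min_sample_size(N=6_900_000)
print(n)  # close to the paper's 11,199 (exact value depends on z rounding)
```

With p = 0.5 the numerator is maximised, which is why that value gives the most conservative (largest) required sample.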
The six biometric variables examined were treated as numeric variables, while age, sex, sampling year (2010-2014), sampling month (first or second half of March) and county were treated as factor
variables. All measurements of numeric variables are given in mm, except weight, which is in g. When investigating interactions between time (year and month in our study) and other variables,
treating time as a factor may provide more precise estimates of these effects (Kutner et al. 2005). Second, when months or years represent distinct periods showing cyclic or seasonal trends, as in
our study, treating them as factors may capture these differences effectively (Box et al. 2015). In other words, when the relationship between, for instance, ‘year’ and the dependent variable (i.e.
the biometric parameters) is non-linear, it becomes imperative to treat ‘year’ as a categorical factor. This approach can be critical in capturing effects such as biological changes with abrupt
impacts not captured by a linear term. When ‘year’ is treated as a continuous variable, the assumption is made that the gap between each year has an identical impact on the dependent variable.
However, this assumption might be flawed; hence, treating ‘year’ and ‘month’ as factor variables mitigates this issue.
Treating the variable ‘year’ as a factor variable with five levels (representing the five consecutive sampling years, 2010-2014) can be further justified as advantageous. For instance, post hoc tests can be conducted to compare the different years with each other, offering valuable insights into which years are statistically different from each other (Hsu 1996). Finally, decision-tree models handle categorical variables naturally, which can improve predictive power (Breiman 2001).
Given the large sample size of more than 10,000 individuals in the present study and the relatively high number of biometric predictors, traditional statistical procedures could lead to Type I
errors, aligning with research highlighting the issue of ‘p-hacking’ or inflation of Type I error rates in large samples (Ioannidis 2005, Benjamini et al. 2006, Button et al. 2013). Hence, large
samples may detect statistically significant but trivial effects, especially when multiple predictors are involved, thereby increasing the risk of false positives (Maxwell et al. 2008).
For pairwise comparisons (six biometric parameters grouped by sex, age and sexes by age groups), we considered that, given the large sample size, the likelihood of Type I error was very high (
Sullivan & Feinn 2012, Lin et al. 2013), causing even a biologically irrelevant and negligible difference to become statistically significant. As such, post hoc tests would lead to Type I errors, as
mentioned earlier. There are multiple solutions to combat this issue, such as bootstrapping or measurement of the Bayes factor; however, to avoid this issue, we computed estimates of minimal
important difference (MID) (e.g. see Jaeschke et al. 1989, Norman et al. 2004).
We opted for this method because we aimed to determine threshold values for each biometric parameter, which the previously mentioned statistical procedures do not perform. Our second motivation for
using MIDs was that, over past decades, there has been a shift from statistical significance to practical significance, or practical relevance, in the interpretation of study results (e.g. Terwee et
al. 2011). Specifically, we employed the SD criterion (Crosby et al. 2003, Engel et al. 2018, Revicki et al. 2008). In the present study, MID is a measure for the smallest difference in a biological
parameter that is biologically relevant, significant, meaningful, or considered biologically important. In this way, we can detect results that are the product of Type I errors and, crucially,
unravel biologically meaningful/significant differences. MID can be conceived as a cut-off point or threshold value for a biologically significant difference, over and above mere statistical significance. While Copay et al. (2007) and Norman et al. (2004) both suggest an SD criterion of 0.5, Farivar et al. (2004) and Eton et al. (2004) suggest 0.3 SD. To keep the MID as low as possible, we adopted the most liberal criterion in the literature, 0.2 SD (e.g. Samsa et al. 1999, Mouelhi et al. 2020), equivalent to a small effect, allowing us to detect even minimally biologically relevant differences. To our knowledge, MIDs have not previously been employed in ornithological research, having mainly been used in human medicine and in studies outside the natural sciences (e.g. Fekete et al. 2018).
We first computed the pooled SD from the two independent groups (e.g. female and male, or adult and juvenile) for every comparison, in line with the recommendations on MID computation of Watt et al. (2021). Next, we compared the MIDs to the estimated differences (δ) derived from Tukey HSD post hoc tests (Tukey multiple comparisons of means). For the pooled SD we employed the formula of Cohen (1988), σ_pooled = √((σ₁² + σ₂²)/2), where σ₁ and σ₂ are the SDs of the two independent groups, e.g. male and female. The estimated difference δ was taken from the Tukey HSD post hoc test (the estimated mean difference), which is considered more reliable than Tukey HSD P-values, the latter being subject to P-inflation (Type I error). For this study, P-values < 0.05 were considered statistically significant. The MID was computed as MID = 0.2 × σ_pooled (see Watt et al. 2021 for the pooled SD). In summary, P-values may reflect Type I errors; hence, MIDs should be taken as the benchmark for interpreting the meaningfulness (i.e. biological relevance) of the estimated differences.
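In code, the MID criterion reduces to two lines of arithmetic. The Python sketch below uses invented group SDs, chosen so the resulting threshold matches the adult bill-length comparison reported later in the paper (MID = 0.77, δ = 0.95); the real analysis was done in R.

```python
import math

def pooled_sd(sd1, sd2):
    """Cohen's (1988) pooled SD for two independent groups."""
    return math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)

def mid(sd1, sd2, criterion=0.2):
    """Minimal important difference: criterion * pooled SD."""
    return criterion * pooled_sd(sd1, sd2)

# Hypothetical bill-length SDs (mm) for adult females and males
sd_female, sd_male = 4.0, 3.7
delta = 0.95                      # Tukey HSD estimated mean difference (mm)
threshold = mid(sd_female, sd_male)
print(round(threshold, 2), delta > threshold)  # 0.77 True: biologically relevant
```

A difference is flagged as biologically relevant only when δ exceeds the MID, regardless of how small the Tukey HSD P-value is.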
Subsequently, we evaluated whether sex and age could be ascertained through the biometric parameters present in the dataset. Theoretically, multivariate methods could prove informative in such cases,
and indeed, logistic regression on multiple traits has proved helpful in predicting sex in previous studies on Charadriiformes, with Hallgrimsson et al. (2008), for example, successfully applying
general linear models (GLM) on purple sandpipers (Calidris maritima), and Katrínardóttir et al. (2013) on Eurasian whimbrels (Numenius phaeopus). However, Hallgrimsson et al. (2008) only used a
sample of 222 adult birds, and Katrínardóttir et al. (2013) used an even smaller sample of 50 whimbrels. In contrast, large datasets like ours, with many predictors, could lead to models that are too
complex, capturing noise rather than the underlying data structure, which is a form of overfitting (Babyak 2004, Harrell 2015). Such overfit models, in turn, lack ‘generalisability’ and could result
in misleading conclusions (Harrell et al. 1996).
A secondary issue concerning the use of logistic regression with a set of six predictors arises from model saturation, as outlined by Hosmer et al. (2013, section 9.2). According to these authors, a
saturated model incorporates all conceivable main effects and interactive terms among the independent variables. Hosmer et al. (2013) further asserted that such saturated models are inherently
unsuitable for hypothesis testing due to their inherent capacity to fit the data perfectly.
To avoid these problems with logistic regression, we employed conditional inference tree models (henceforth ctree; Hothorn et al. 2006), a feature selection decision-tree approach (Hothorn et al.
2006 or Levshina 2020). This statistical technique models the distribution of an outcome variable using a set of independent variables (predictors), which, in our case, are the biometric parameters.
Ctrees can explain the outcome variable via the combination of these predictors. Ctrees have already been employed in Hungarian ornithological research (e.g. Vili et al. 2013, Kováts & Harnos 2015);
however, they have not been applied yet on datasets with large sample sizes, a methodological novelty of our study given the high likelihood of Type I errors in such large samples (Sullivan & Feinn
2012, Lin et al. 2013). In large samples, ctrees yield more accurate and consistent estimates of predictor importance than logistic regression, converging towards true population parameters (Bühlmann
& Yu 2003, Couronné et al. 2018).
As missing values in the outcome variable are not allowed in ctrees, we only analysed those observations with a valid value. This served as our single exclusion criterion in the ctree analysis. We
also opted for this non-parametric statistical framework because, unlike traditional procedures such as ANOVA, it can predict the outcome variable via a hierarchy of numerous independent variables and yields cut-off values (also known as splits or threshold values) on the significant predictors. For example, a cut-off of 184 g on body weight indicates that the sample can be partitioned into two subsamples at that weight. Independent variables not appearing in the ctree do not improve
the model's accuracy in the presence of the rest of the significant independent variables. Most importantly, this statistical approach can be used without additional cross-validation (Hothorn et al.
2015). Given this latter condition, the large sample size and the statistical representativeness of our dataset, ctree models also serve as predictive models for the bird population in Hungary,
highlighting a further novelty of our study. A further advantage of ctrees is that they can explain and/or predict the outcome variable without overfitting the model (Hothorn et al. 2006, 2015).
Here, we built ctrees using the standard options, but increased the minimum criterion from 0.95 to 0.99 to avoid overfitting (Levshina 2020, p. 623), then applied a Bonferroni correction to reduce Type I error.
In the tree representation, the classification of observations starts at the topmost node, also called node 1, which shows the strongest association with the outcome variable. The nodes at the bottom
of the ctree are termed terminal nodes and display the predictions based on the model, also called posterior class probabilities or conditioned frequencies (Hothorn et al. 2015). The total number of
observations on the ‘routes’ is represented by ‘n’ at the bottom of every node of the ctree. To conduct the ctree analysis, we used the ‘party’ R-package with the ctree function (Hothorn et al. 2006
), confining the ctree analysis to the adult sub-population, as the juvenile sub-population had yet to attain their terminal biometric parameters.
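The ctrees here were fitted with R's party package; Python has no direct equivalent, but the core idea, choosing a single cut-off on a numeric predictor that best separates the outcome classes (like the 184 g example above), can be sketched crudely on synthetic data. All numbers below are invented, and party::ctree assesses splits with permutation tests rather than the naive statistic used here.

```python
import numpy as np

def best_split(x, y):
    """Search one numeric predictor for the cut-off that maximises the
    difference in class-1 proportion between the two resulting groups
    (a crude stand-in for ctree's permutation-test statistic)."""
    best_cut, best_stat = None, 0.0
    for cut in np.unique(x)[:-1]:          # candidate thresholds
        left, right = y[x <= cut], y[x > cut]
        stat = abs(left.mean() - right.mean())
        if stat > best_stat:
            best_cut, best_stat = cut, stat
    return best_cut, best_stat

# Synthetic sample: adults (1) on average heavier than juveniles (0)
rng = np.random.default_rng(1)
weight = np.concatenate([rng.normal(280, 15, 500),    # juveniles
                         rng.normal(310, 15, 500)])   # adults
age = np.concatenate([np.zeros(500), np.ones(500)])
cut, stat = best_split(weight, age)
print(f"cut-off ~ {cut:.0f} g")  # lands near the crossover of the two distributions
```

In the real analysis, the significance of each split is assessed with a Bonferroni-corrected permutation test and splitting recurses until no predictor remains significant, which is what produces the node structure described above.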
Table 2.
Results of two-way ANOVA on six biometric parameters examined separately (main effects of sex and age and their interaction as independent variables in each model, with the biometric parameter as the
dependent measure). *P < 0.05; **P < 0.01; ***P < 0.001; n.s. = not significant; ‘n’ = valid cases, the entire dataset containing 13,471 observations; ‘df’ = degrees of freedom for the main effects.
Table 3.
Comparative analysis of male and female body measurements during the spring migration of woodcock in Hungary. δ refers to the estimated absolute difference between the means computed by Tukey HSD
post hoc comparisons. MID is computed by taking 0.2 of the pooled SD, relying on the 0.2 * SD criterion (e.g. Mouelhi et al. 2020). MIDs are rounded to two decimal places. P-values are derived from
Tukey HSD post hoc tests.
Comparative analysis of body size
Sex determination was undertaken on 13,226 specimens, of which 10,995 were male and 2,231 female; age was determined for 8,905 individuals (Table 1). When analysed by sex and age, two-way ANOVA for all biometric parameters except tarsus length indicated significant differences between mean values for each age group (Table 2). Given the large number of samples, deviations were not accepted unconditionally; instead, MID was used to undertake a differential analysis of the biometric parameters. Tukey HSD results showed significant differences between male and female woodcock for body weight, wing length and bill length; however, the δ values for these parameters were less than the MID values, suggesting that, while the results were statistically significant, they were not biologically relevant (Table 3).
Table 4.
Comparative analysis of adult female and adult male body sizes during the spring migration of woodcock in Hungary. All measurements are given in mm, except weight in g. Differences in measurements
between sexes by age group tested using Tukey HSD post hoc tests. δ indicates estimated differences from Tukey HSD post hoc tests.
Table 5.
Comparative analysis of juvenile female and juvenile male body sizes during the spring migration of woodcock in Hungary. All measurements are given in mm, except weight in g. Differences in
measurements between sexes by age group tested using Tukey HSD post hoc tests. δ indicates estimated differences from Tukey HSD post hoc tests.
Table 6.
Comparative analysis of adult female and juvenile male body sizes during the spring migration of woodcock in Hungary. All measurements are given in mm, except weight in g. Differences in measurements
between sexes by age group tested using Tukey HSD post hoc tests. δ indicates estimated differences from Tukey HSD post hoc tests.
For the adult age group, while we again recorded a significant difference in body weight and bill length between males and females (Table 4; P < 0.001 and δ > MID), significant differences in other
body size parameters between adult males and females did not reach the biologically relevant threshold (i.e. δ < MID). While no significant differences were observed for any biometric parameter
between juvenile males and females (Table 5), we recorded significant differences in body weight, body length, and bill length between adult and juvenile females, which proved to be biologically
relevant (Table 6).
Comparative analysis by sex, and by sex and age, showed that the differences observed could only be confirmed at the age-group level, and only for three biometric variables. Even small differences between biometric parameters of juvenile and adult birds, typically below the level of biologically relevant significance, were enough to mask differences between the sexes, so that only age-class differences were detected. In adult females, average body weight and bill length always differed significantly, as did body length when compared with juvenile females.
Predicting age in the entire sample, using a conditional inference tree based on six potential biometric explanatory variables
While investigating the extent to which small morphological differences allowed separation of sexes and age classes, six biometric predictors were entered into the ctree model, i.e. body weight, body
length, wing length, tail length, bill length and tarsus length, with age at two levels, ‘adult’ and ‘juvenile’, serving as the binary dependent variable. Since ctrees do not allow missing values on the
outcome measure (i.e. age in the present analysis), we removed those cases where age was missing. After removing such cases, 8,905 observations remained in the ctree analysis (Fig. 1).
The predictions, i.e. the posterior probabilities of being ‘adult’ or ‘juvenile’, from the ctree analysis indicated body weight as the only statistically significant predictor (node 1; Fig. 1), this variable displaying the strongest association with age (P < 0.001, Bonferroni-corrected) in the presence of the other biometric variables. In other words, adding more variables to the model from the set of
variables entered did not improve the predictive accuracy of the ctree model. The cut-off for body weight in predicting age yielded a value of 292 g (Body weight ≤ 292 g; criterion = 1, statistic =
82.916), i.e. birds weighing > 292 g (6,958 observations in our sample) were statistically more likely to be adults than juveniles, with a posterior probability of 55.4% (see Fig. 1). Node 2, which
contained 1,947 observations of both adult and juvenile birds, indicated a 44.6% posterior probability of being an adult (Fig. 1). Crucially, the body weight of these 1,947 observations was ≤ 292 g.
The other ‘branch’ (node 3, Fig. 1) can be interpreted by the same logic. Our results thus demonstrate that, of all the biometric parameters for age determination included in the analysis, only body weight showed a small but significant difference between the juvenile and adult age groups; nevertheless, this variation lacked the empirical weight to serve as a reliable discriminator between the two (Fig. 1).
Predicting sex in adult woodcock using a conditional inference tree based on six potential biometric explanatory variables
Of the 13,471 total observations, 73 were removed as sex evaluations were missing, giving 13,398 observations for the ctree analysis. A comparative analysis by sex and sex plus age indicated that
differences could only be confirmed at the age group level and only for three biometric variables. For adult females, body weight and bill length always showed a significant difference between
averages, and a significant difference was also recorded for body length when compared to juvenile females. The adult sub-sample in the ctree analysis contained 4,712 observations, of which 3,944
were male and 768 female. Ctree analysis was used to explain and predict the distribution of sexes as the outcome variable using the same set of biometric variables as in the age analysis, i.e. body
weight, body length, wing length, tail length, bill length and tarsus length, but with sex serving as the binary dependent variable.
As preliminary studies have shown a significant difference between some biometric parameters of juvenile and adult birds, we decided to investigate the possibility of sex separation in adult birds
only as small morphometric differences between age classes could bias morphometric differences between the sexes. Once again, the ctree analysis revealed that only body weight separated females from
males (Fig. 2; P < 0.001, Bonferroni-corrected, criterion = 1, statistic = 44.901), with no other statistically significant biometric predictors of sex in the model. Adding factor variables for month
(two levels) and year of sampling had no effect on the model outcomes.
Ctree node 2 comprised 4,098 observations and indicated an 85.2% posterior probability of a bird being a male, while node 3 comprised 614 observations and indicated a 73.6% posterior probability of
being a male (Fig. 2). The posterior probabilities for being a female were 14.8% if body weight was ≤ 343 g (node 2) and 26.4% if body weight was > 343 g (node 3; Fig. 2).
Predicting sex in the adult sample using a conditional inference tree with bill length and tail length as potential biometric explanatory variables
While we had 4,712 observations in the ctree analysis (adult subsample), there was a class imbalance on the distribution of the sexes, with 3,944 males and 768 females. As in the previous analysis,
we aimed to explain and predict sex as the outcome variable, but this time only two potential biometric predictors were fed into the model, i.e.
tail length and bill length, with sex (two levels, ‘female’ and ‘male’) serving as the binary dependent variable. The ctree analysis revealed a significant difference between female and male bill
length only (P < 0.001, Bonferroni-corrected, criterion = 1, statistic = 41.796; Fig. 3); tail length was not a significant predictor of sex and had no effect on the model's accuracy (Fig. 3). Node 2 of the ctree analysis comprised 3,972 observations, indicating an 85.4% posterior probability of a bird being male, while node 3 comprised 740 observations and
indicated a 74.5% posterior probability of being a male (Fig. 3). The posterior probability of being a female was 14.6% if bill length was ≤ 76 mm (node 2) and 25.5% if bill length was > 76 mm (node
3; Fig. 3).
Aradis et al. (2015), who compared a small number of woodcocks (n = 259) during the overwintering period to explore the extent of variation between sexes and age classes, found that while several
morphometric traits differed noticeably between sexes (wing, bill, tarsus length) and age classes (wing), no significant differences were observed between sexes, ages or their interaction (orthogonal
contrasts). Using the same morphometric traits, we examined 13,226 samples from the March hunting bag in Hungary and found that only age-differentiated analyses demonstrated biologically significant
differences. The results of post hoc tests showed that adult female body weight and bill length were significantly higher than those for both juvenile females and male age groups. In previous
Hungarian investigations (1996-1999), significant mass differences could not be consistently confirmed for smaller samples of between 78 and 364 birds (Faragó et al. 2000). In 1999, however, Faragó
et al. (2000) recorded a significant difference in body length in favour of females older than one year compared to younger females (P < 0.01). For younger birds, the same authors found no
significant differences in morphometric characteristics between the sexes (Faragó et al. 2000). In the study of Aradis et al. (2015), no significant differences in body weight were observed between
sexes or age groups in wintering areas in Italy. Nevertheless, other studies suggest that differences in weight between the sexes may be due to the start of egg growth (e.g. Hoodless 1994). Our study
detected only an initial stage of follicle production during destructive sex determination, indicating that egg formation had not yet begun; the observed weight differences between the sexes were therefore unaffected. For bill length, we obtained the same results as Aradis et al. (2015).
Application of the MID confirmed biologically relevant morphometric differences in body weight, bill length and body length, the same parameters identified as predictors in the ctree analysis: those with the highest variance, selected according to the decision rules they define and the groups into which they segregate the sample. The two approaches were consistent even though post hoc tests examine differences in group means and the statistical significance of those differences (i.e. group-level comparisons), whereas ctrees can also capture nonlinear relationships between variables that post hoc tests, with their linear assumptions, fail to indicate.
In the case of woodcock, there is not enough sexual dimorphism to separate the sexes through visual inspection (Cramp & Simmons 1983); hence morphometric parameters have typically been used to
separate the sexes in this species (Stronach et al. 1974, Rochford & Wilson 1982, Hoodless 1994). The first results on the identification of sexes based on morphometric differences were published by
Stronach et al. (1974), who, based on an equation, reported 75% reliability for female identification and 72% for male identification. Using a linear model with empirical multipliers calculated from
bill and tail length from our data, we were able to determine sex with relatively low confidence, the model reliability for adult birds being 59.0% (n = 4,702) and that for juveniles 58.4% (n =
4,121). Glutz von Blotzheim et al. (1977), using a simpler approach for biometric sex identification, stated that if a woodcock's beak was > 77 mm long and the tarsus > 38 mm long, the specimen was
typically female. However, our results show no statistically verifiable difference in tarsus length between adult males and females (P = 0.83; δ = 0.22 < MID = 0.71). Ferrand & Gossmann (2009) found
that male bills were, on average, shorter (male bill length > 80 mm) than those of females but that the rectrices were longer (male tail length > 88 mm). Based on the results of our large-sample
investigation, we found that adult females had significantly longer bill lengths than adult males (P < 0.001; δ = 0.95 > MID = 0.77), but no significant difference in bill length between juvenile
birds (P = 0.58; δ = 0.29 < MID = 0.76). In addition, attempts were made to separate individual sexes based on the ratio of morphometric parameters (tail length/bill length ≤ 1.20 = female); however,
even when restricted to the adult age group, reliability for sex determination based on morphometric parameters was no better than 45%. In comparison, the model developed and applied by Aradis et al.
(2015) was applicable with a confidence level of 77.1% for adult male birds and 79.6% for females. Our validated ctree model and MID results produced the same conclusion, i.e. no morphometric
variable or combination of variables could predict age with high confidence. Instead, body weight was the best predictor in the total sample of known age (n = 8,905), with a separation point at 292
g. For birds > 292 g (n = 6,958), the model predicted age with 55.4% confidence, falling to 44.6% for birds weighing ≤ 292 g (n = 1,947).
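The tail/bill ratio rule mentioned above is trivial to apply in practice. A Python sketch follows; the function name and example measurements are invented, and, as noted, the paper found the rule no better than about 45% reliable even when restricted to adults.

```python
def sex_by_ratio(tail_mm, bill_mm, threshold=1.20):
    """Published rule of thumb: tail length / bill length <= 1.20
    suggests a female; above the threshold suggests a male."""
    return "female" if tail_mm / bill_mm <= threshold else "male"

print(sex_by_ratio(tail_mm=85, bill_mm=78))  # 85/78 ~ 1.09 -> female
print(sex_by_ratio(tail_mm=90, bill_mm=72))  # 90/72 = 1.25 -> male
```

Such single-ratio rules are attractive in the field, but the overlap in woodcock morphometrics documented here explains their poor reliability.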
To separate the sexes, the ctree analysis was performed on a dataset restricted to adult birds (n = 4,712) while also taking account of MID results. In this study, several biometric parameters could
indicate sex with high confidence, with bill length found to be the strongest predictor, the sexes separating at a cut-off value of 76 mm. Our results further indicated that if the bill length was ≤
76 mm (n = 3,972), the model had an 85.4% probability of correctly predicting sex, and if the bill length was > 76 mm (n = 740), the model had 74.5% reliability. In addition to bill length, body
weight proved a strong predictor, separating the sample with a cut-off value at 343 g. In our sample, if body weight was ≤ 343 g (n = 4,098), the model predicted sex with 85.2% confidence, while confidence was 73.6% for birds > 343 g (n = 614). However, while body weight was a significant predictor, its contribution to enhancing the model's predictive power was not substantial.
Even with a large number of samples and the best morphometric predictor variables, and despite the novelty of the statistical procedures used in this ornithological application, we could not achieve more than 85% confidence in sex or age estimation. On the other hand, our results confirmed that there is statistically verifiable and biologically relevant morphometric variation
in woodcock. However, the extent of this variation is not sufficient to separate the sexes with adequate reliability. In ornithological work (e.g. ringing, telemetry transmitters), knowledge of the
bird's sex is highly desirable; however, morphometric characteristics do not allow us to determine this with sufficient reliability using any of the methods presently available. This finding suggests
that using semi-invasive techniques may still be relevant in ornithology, e.g. DNA analysis of blood and feather samples, as these allow sex segregation with 100% confidence (Bende et al. 2023).
We consider the present study important due to its methodological novelty in using MIDs, which helped us determine thresholds/cut-off values of biological significance for estimated differences
beyond mere statistical significance; this method has been underemployed in ornithology to date. Furthermore, MIDs allowed us to rule out Type I errors during the analysis. Future ornithological
research should incorporate MIDs to determine meaningful differences in large samples.
The evaluation of woodcock biometrics was made possible through the monitoring program coordinated by the Hungarian Hunters National Association. Special thanks go to the hunters who participated in
providing data, particularly those who, in addition to collecting bagging data, contributed to Hungarian Woodcock Bag Monitoring by submitting wing samples for age determination. This project was
supported under project ÚNKP-23-4-IISOE-138 of the New National Excellence Program of the Ministry of Culture and Innovation, under the framework of the National Research, Development and Innovation Fund.
This is an open access article under the terms of the Creative Commons Attribution Licence (CC BY 4.0), which permits use, distribution and reproduction in any medium provided the original work is
properly cited.
Author Contributions
S. Faragó planned and organised the national-scale research and acquired funding. A. Bende and R. László compiled the database and carried out age determination based on the wing samples. I. Fekete
conceptualised and executed the statistical analysis. A. Bende and I. Fekete wrote the paper. All authors approved the final version of the manuscript.
References

Aradis A., Landucci G., Tagliavia M. & Bultrini M. 2015: Sex determination of Eurasian woodcock Scolopax rusticola: a molecular and morphological approach.
Artmann J.W. & Schroeder L.D. 1976: A technique for sexing woodcock by wing measurement. J. Wildl. Manage.
Babyak M.A. 2004: What you see may not be what you get: a brief, nontechnical introduction to overfitting in regression-type models. Psychosom. Med.
Bende A. 2021: Spring migration dynamics, age and sex ratio, and breeding biology of the woodcock (Scolopax rusticola L.) in Hungary. PhD thesis, University of Sopron, Sopron, Hungary. (in Hungarian with English abstract)
Bende A., Pálinkás-Bodzsár N., Boa L. & László R. 2023: Sex determination of Eurasian woodcock (Scolopax rusticola L.) by genetic and imaging diagnostic methods. Biodiversity & Environment.
Benjamini Y., Krieger A.M. & Yekutieli D. 2006: Controlling the false discovery rate in large-scale multiple testing. J. R. Stat. Soc. B: Stat. Methodol.
Box G.E.P., Jenkins G.M., Reinsel G.C. & Ljung G.M. 2015: Time series analysis: forecasting and control, 5th ed. John Wiley & Sons, Hoboken, New Jersey, USA.
Breiman L. 2001: Random forests. Mach. Learn.
Button K.S., Ioannidis J.P., Mokrysz C. et al. 2013: Power failure: why small sample size undermines the reliability of neuroscience. Nat. Rev. Neurosci.
Bühlmann P. & Yu B. 2003: Boosting with the L2 loss: regression and classification. J. Am. Stat. Assoc.
Clausager I. 1973: Age and sex determination of the woodcock, Scolopax rusticola. Dan. Rev. Game Biol.
Cohen J. 1988: Statistical power analysis for the behavioral sciences, 2nd ed. Lawrence Erlbaum Associates, Hillsdale, New Jersey, USA.
Copay A.G., Subach B.R., Glassman S.D. et al. 2007: Understanding the minimum clinically important difference: a review of concepts and methods. Spine J. 7.
Couronné R., Probst P. & Boulesteix A.L. 2018: Random forest versus logistic regression: a large-scale benchmark experiment. BMC Bioinformatics.
Cramp S. & Simmons K.E.L. 1983: Handbook of the birds of Europe, the Middle East and North Africa: the birds of the Western Palearctic. Waders to gulls, vol. 3. Oxford University Press, Oxford, UK.
Crosby R.D., Kolotkin L.R. & Williams G.R. 2003: Defining clinically meaningful change in health-related quality of life. J. Clin. Epidemiol.
Efron B. & Tibshirani R.J. 1993: An introduction to the bootstrap. Chapman & Hall, New York, USA.
Engel L.D., Beaton E. & Touma Z. 2018: Minimal clinically important difference: a review of outcome measure score interpretation. Rheum. Dis. Clin. N. Am.
Eton D.T., Cella D., Yost K.J. et al. 2004: A combination of distribution- and anchor-based approaches determined minimally important differences (MIDs) for four endpoints in a breast cancer scale. J. Clin. Epidemiol.
Faragó S., László R. & Sándor Gy. 2000: Body dimensions, sex and age relationships of the woodcock (Scolopax rusticola) in Hungary between 1990-1999. Hungarian Waterfowl Publications. (in Hungarian with English abstract)
Farivar S.S., Liu H. & Hays R.D. 2004: Half standard deviation estimate of the minimally important difference in HRQOL scores? Expert Rev. Pharmacoeconomics Outcomes Res.
Fekete I., Schulz P. & Ruigendijk E. 2018: Exhaustivity in single bare wh-questions: a differential-analysis of exhaustivity.
Ferrand Y. & Gossmann F. 2009: Ageing and sexing series 5: ageing and sexing the Eurasian woodcock Scolopax rusticola. Wader Study Group Bull. 116.
Glutz von Blotzheim U.N., Bauer K.M. & Bezzel R. 1977: Handbuch der Vögel Mitteleuropas, vol. 7. AULA Verlag, Wiesbaden, Germany.
Hallgrimsson G.T., Palsson S. & Summers R.W. 2008: Bill length: a reliable method for sexing purple sandpipers. J. Field Ornithol.
Harrell F.E., Jr. 2015: Regression modeling strategies: with applications to linear models, logistic and ordinal regression, and survival analysis, 2nd ed. New York, USA.
Harrell F.E., Jr., Lee K.L. & Mark D.B. 1996: Multivariable prognostic models: issues in developing models, evaluating assumptions and adequacy, and measuring and reducing errors. Stat. Med.
Hoekstra R., Morey R.D., Rouder J.N. & Wagenmakers E.J. 2014: Robust misinterpretation of confidence intervals. Psychon. Bull. Rev.
Hoodless A.N. 1994: Aspects of the ecology of the European woodcock Scolopax rusticola. PhD thesis, Durham University, UK.
Hosmer D.W., Jr., Lemeshow S. & Sturdivant R.X. 2013: Applied logistic regression, 4th ed. John Wiley & Sons, Hoboken, New Jersey, USA.
Hothorn T., Hornik K. & Zeileis A. 2006: Unbiased recursive partitioning: a conditional inference framework. J. Comput. Graph. Stat.
Hsu J.C. 1996: Multiple comparisons: theory and methods. Chapman & Hall, London, UK.
Ioannidis J.P. 2005: Why most published research findings are false. PLOS Med. 2.
Jaeschke R., Singer J. & Guyatt G.H. 1989: Measurement of health status: ascertaining the minimal clinically important difference. Control. Clin. Trials.
Katrínardóttir B., Pálsson S., Gunnarsson T.G. & Sigurjónsdóttir H. 2013: Sexing Icelandic whimbrels Numenius phaeopus islandicus with DNA and biometrics. Ringing Migr. 28.
Kováts D. & Harnos A. 2015: Morphological classification of conspecific birds from closely situated breeding areas – a case study of the common nightingale. Ornis Hung. 23.
Kutner M.H., Nachtsheim C.J., Neter J. & Li W. 2005: Applied linear statistical models, 5th ed. McGraw Hill/Irwin, New York, USA.
Kwak S.K. & Kim H. 2017: Statistical data analysis using SAS: intermediate statistical methods. SAS Institute, Cary, USA.
Levshina N. 2020: Conditional inference trees and random forests. In: Paquot M. & Gries S.T. (eds.), A practical handbook of corpus linguistics. Cham, Germany.
Lin M., Lucas H.C. & Shmueli G. 2013: Too big to fail: large samples and the p-value problem. Inf. Syst. Res.
Lumley T., Diehr P., Emerson S. & Chen L. 2002: The importance of the normality assumption in large public health data sets. Annu. Rev. Public Health.
MacCabe R.A. & Brackbill M. 1973: Problems in determining sex and age of European woodcock. Proceedings of the 10th International Congress of Game Biology. Office National de la Chasse, Paris.
Maxwell S.E., Kelley K. & Rausch J.R. 2008: Sample size planning for statistical power and accuracy in parameter estimation. Annu. Rev. Psychol.
Mouelhi Y., Jouve E., Castelli C. & Gentile S. 2020: How is the minimal clinically important difference established in health-related quality of life instruments? Review of anchors and methods. Health Qual. Life Outcomes.
Norman G.R., Sloan J.A. & Wyrwich K.W. 2004: The truly remarkable universality of half a standard deviation: confirmation through another look. Expert Rev. Pharmacoeconomics Outcomes Res.
R Development Core Team 2022: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria.
Rochford J.M. & Wilson H.J. 1982: Value of biometric data in the determination of age and sex in the woodcock (Scolopax rusticola). United States Fish and Wildlife Service, Research Report 14, Pennsylvania, USA.
Remisiewicz M. & Wennerberg L. 2006: Differential migration strategies of the wood sandpiper (Tringa glareola): genetic analyses reveal sex differences in morphology and spring migration phenology. Ornis Fenn. 83.
Revicki D.A., Hays R.D., Cella D. & Sloan J. 2008: Recommended methods for determining responsiveness and minimally important differences for patient-reported outcomes. J. Clin. Epidemiol.
Rosner B. 2015: Fundamentals of biostatistics, 8th ed. Cengage Learning, Boston, USA.
Rousseeuw P.J. & Croux C. 1993: Alternatives to the median absolute deviation. J. Am. Stat. Assoc.
Samsa G., Edelman D., Rothman M.L. et al. 1999: Determining clinically important differences in health status measures: a general approach with illustration to the Health Utilities Index Mark II.
Schroeder J., Lourenço P.M., van der Velde M. et al. 2008: Sexual dimorphism in plumage and size in black-tailed godwits Limosa limosa limosa.
Shorten M. 1975: Woodcock research group (IWRB). Wader Study Group Bull. 15.
Stronach B., Harrington D. & Wilhsnes N. 1974: An analysis of Irish woodcock data. Proceedings of the 5th American Woodcock Workshop. University of Georgia, Athens, USA.
Sullivan G.M. & Feinn R. 2012: Using effect size – or why the P value is not enough. J. Grad. Med. Educ.
Szemethy L., Schally G., Bleier N. et al. 2014: Results of Hungarian woodcock monitoring. Rev. Agric. Rural Dev.
Terwee C.B., Terluin B., Knol D.L. & de Vet H.C. 2011: Combining clinical relevance and statistical significance for evaluating quality of life changes in the individual patient. J. Clin. Epidemiol.; author reply 1467–1468.
Vili N., Nemesházi E., Kovács S. et al. 2013: Factors affecting DNA quality in feathers used for noninvasive sampling. J. Ornithol.
Watt J.A., Veronik A.A., Tricco A.C. et al. 2021: Using a distribution-based approach and systematic review methods to derive minimum clinically important differences. BMC Med. Res. Methodol.
Wilcox R.R. 2012: Introduction to robust estimation and hypothesis testing, 3rd ed. Academic Press, Amsterdam/Boston, USA.
Attila Bende, Richárd László, Sándor Faragó, and István Fekete "An investigation into the possibilities of sex and age determination of Eurasian woodcock (Scolopax rusticola L.) based on biometric
parameters, using conditional inference trees and minimal important differences," Journal of Vertebrate Biology 73(23068), 23068.1-15, (14 December 2023). https://doi.org/10.25225/jvb.23068
Received: 4 August 2023; Accepted: 19 October 2023; Published: 14 December 2023
Keywords: biologically relevant significance
OUTLIERS DISCOVERY FROM SMART METERS DATA USING A STATISTICAL BASED DATA MINING APPROACH
by nexgentech | Oct 25, 2017 | ieee project
The paper presents a statistical approach used for detection of outliers from load curves recorded at the electric substations of distribution networks. The load curves provided by smart meters were processed and their main indicators were calculated. After outlier elimination, the remaining data led to the discovery of accurate patterns that characterized the load curve characteristics very well through these indicators. The proposed approach was tested using a real database with 60 substations from a rural area. With the help of these patterns, the operation and planning of electric distribution systems can be made more efficient.
The analysis of data represents the starting point for many applications, whether in the design or operation phase, for online control or complex processes. Nowadays, the need to extend the capabilities of human analysis for handling the overwhelming quantity of data that we are able to collect has become increasingly pressing. Since computers have enabled the storage of large amounts of data, it is only natural to resort to computational techniques to help us discover meaningful patterns and structures in massive volumes of data. The load curve plays a fundamental role in the operation and planning of power systems. Unfortunately, due to various random factors, load curves always contain abnormal, deviating, unrepresentative, noisy, strange, anomalous and missing data. It is fundamental for power system operators to detect and repair anomalous or abnormal data before using the load curves in the planning and modelling process. For the extraction of useful load curve information (load factor, loss duration, loss factor, fill factor, etc.) from large databases, clustering or statistical techniques can be used.
The paper proposes a two-stage approach for the detection of outliers. In the first stage, a data mining technique was used to extract the load curves' main indicators, computed from information provided by smart meters. In the second stage, outliers were detected among the computed indicators using statistical processing. The proposed approach was tested using a real database with 60 substations serving a rural area. The results highlight the ability of the proposed approach to be used efficiently by distribution operators in decision making. Thus, in the operation and planning of distribution systems, the characteristic information obtained from smart meter load curves with the help of data mining techniques can be highly useful.
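The two-stage idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the indicator set is reduced to one example (load factor = mean load / peak load), a simple z-score rule stands in for the paper's statistical processing, and the threshold and sample values are invented.

```python
import statistics

def load_curve_indicators(load):
    """Stage 1: reduce a load curve (a list of kW samples) to
    summary indicators; load factor is one of those the paper uses."""
    peak = max(load)
    return {"peak": peak, "load_factor": statistics.mean(load) / peak}

def flag_outliers(values, z_thresh=3.0):
    """Stage 2: flag substations whose indicator deviates more than
    z_thresh standard deviations from the population mean."""
    mu, sd = statistics.mean(values), statistics.stdev(values)
    return [i for i, v in enumerate(values) if abs(v - mu) > z_thresh * sd]

# hypothetical load factors for a handful of substations;
# index 5 is deliberately anomalous
lf = [0.62, 0.64, 0.61, 0.63, 0.60, 0.95, 0.62]
outliers = flag_outliers(lf, z_thresh=2.0)  # -> [5]
```

Eliminating the flagged substations before pattern discovery is what lets the remaining indicators characterize typical rural load behaviour accurately.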
The paper presents a comprehensive method that uses statistically based data mining for load curve characterization by detecting outliers in information provided by smart meters in real distribution networks. A database with 60 rural substations was tested. The results demonstrate, through outlier elimination, the ability of the proposed approach to be used efficiently by distribution operators for the accurate discovery of patterns in the load curve characteristics, with the help of which the operation and planning of power systems can be carried out. The load curve characteristic information extracted by data mining techniques from a large smart meter database is useful both for distribution operators and for users. From the analysis of the results, it can be observed that some characteristics have a more important influence on the detection of outliers using the statistical approach, i.e. maximum load duration (ML), load factor (T), loss duration (LD), loss factor (LS), irregularity factor (I) and fill factor (K). Thus, knowing only a few indicators of the load curves could simplify the work of distribution operators in planning the development of distribution substations.
Kinematics Review
One-Dimensional Kinematics Review
Part A: Multiple TRUE/FALSE
1. Which of the following statements about vectors and scalars are TRUE? List all that apply.
1. A vector is a large quantity and a scalar is a small quantity.
2. A scalar quantity has a magnitude and a vector quantity does not.
3. A vector quantity is described with a direction and a scalar is not.
4. Scalar quantities are path dependent quantities and vector quantities are not.
5. A scalar quantity depends only upon the initial and final values of the quantity; this is not the case for vector quantities.
6. The quantity 20 m/s, north is a speed and as such is a scalar quantity.
7. The quantity 9.8 m/s/s is an acceleration value and as such is a vector quantity.
Answer: CD
a. FALSE - This would never be the case. Vectors simply are direction-conscious, path-independent quantities which depend solely upon the initial and final state of an object. Vectors are always
expressed fully by use of a magnitude and a direction.
b. FALSE - Both scalar and vector quantities have a magnitude or value expressed with a given unit; additionally, a vector quantity requires a direction in order to fully express the quantity.
c. TRUE - Vectors are fully described by magnitude AND direction; scalars are not described with a direction.
d. TRUE - Scalars such as distance would depend upon the path taken from initial to final location. If you run around the track one complete time, your distance will be different than if you take a
step forward and a step backwards. The path MATTERS; distance (like all scalars) depends upon it. On the other hand, the displacement (a vector quantity) is the same for both paths.
e. FALSE - Vectors are the types of quantities which depend only upon initial and final state of the object. For instance, the vector quantity displacement depends only upon the starting and final positions.
f. FALSE - This is certainly not a speed quantity; though the unit is appropriate for speed, the statement of the direction is inconsistent with speed as a scalar quantity.
g. FALSE (a rather picky FALSE) - If a direction was included, then this would be an acceleration value. The unit is characteristic of acceleration but the lack of direction is inconsistent with
acceleration being a vector quantity.
2. Which of the following statements about distance and/or displacement are TRUE? List all that apply.
1. Distance is a vector quantity and displacement is a scalar quantity.
2. A person makes a round-trip journey, finishing where she started. The displacement for the trip is 0 and the distance is some nonzero value.
3. A person starts at position A and finishes at position B. The distance for the trip is the length of the segment measured from A to B.
4. If a person walks in a straight line and never changes direction, then the distance and the displacement will have exactly the same magnitude.
5. The phrase "20 mi, northwest" likely describes the distance for a motion.
6. The phrase "20 m, west" likely describes the displacement for a motion.
7. The diagram below depicts the path of a person walking to and fro from position A to B to C to D. The distance for this motion is 100 yds.
8. For the same diagram below, the displacement is 50 yds.
Answer: BDF
a. FALSE - Distance is the scalar and displacement is the vector. Know this one!
b. TRUE - Displacement is the change in position of an object. An object which finishes where it started is not displaced; it is at the same place as it started and as such has a zero displacement.
On the other hand, the distance is the amount of ground which is covered. And if it was truly a journey, then there is definitely a distance.
c. FALSE - This would only be the case if the person walks along a beeline path from A to B. But if the person makes a turn and veers left, then right and then ..., then the person has a distance which is greater than the length of the path from A to B. Distance refers to the amount of ground which is covered.
d. TRUE - If a person never changes direction and maintains the same heading away from the initial position, then every step contributes to a change in position in the same original direction. A 1 m
step will increase the displacement (read as out of place-ness) by 1 meter and contribute one more meter to the total distance which is walked.
e. FALSE - Distance is a scalar and is ignorant of direction. The "northwest" on this quantity would lead one to believe that this is a displacement (a vector quantity) rather than a distance.
f. TRUE - The unit is an appropriate displacement unit (length units) and the direction is stated. Since there is both magnitude and direction expressed, one would believe that this is likely a
g. FALSE - The distance from A to B is 35 yds; from B to C is 20 yds; and from C to D is 35 yds. The total distance moved is 90 yds.
h. FALSE (a rather picky FALSE) - Technically, this is not a displacement since displacement is a vector and fully described by both magnitude and direction. The real expression of displacement is 50
yds, left (or west or -)
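The path-dependence drawn out in parts b, d, g and h can be checked numerically. The sign convention below (negative = west/left) and the function name are assumptions made for the example; the leg lengths are the 35-20-35 yd walk from the diagram.

```python
def distance_and_displacement(steps):
    """steps: signed leg displacements (positive = one direction,
    negative = the reverse). Distance is path-dependent;
    displacement depends only on start and end."""
    distance = sum(abs(s) for s in steps)  # ground covered
    displacement = sum(steps)              # net change in position
    return distance, displacement

# A -> B -> C -> D: 35 yd west, 20 yd back east, 35 yd west again
dist, disp = distance_and_displacement([-35, +20, -35])
# dist == 90 (yd of ground covered); disp == -50 (50 yd, west)
```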
3. Which of the following statements about velocity and/or speed are TRUE? List all that apply.
1. Velocity is a vector quantity and speed is a scalar quantity.
2. Both speed and velocity refer to how fast an object is moving.
3. Person X moves from location A to location B in 5 seconds. Person Y moves between the same two locations in 10 seconds. Person Y is moving with twice the speed as person X.
4. The velocity of an object refers to the rate at which the object's position changes.
5. For any given motion, it is possible that an object could move very fast yet have an abnormally small velocity.
6. The phrase "30 mi/hr, west" likely refers to a scalar quantity.
7. The average velocity of an object on a round-trip journey would be 0.
8. The direction of the velocity vector is dependent upon two factors: the direction the object is moving and whether the object is speeding up or slowing down.
9. The diagram below depicts the path of a person walking to and fro from position A to B to C to D. The entire motion takes 8 minutes. The average speed for this motion is approximately 11.3 yds/min.
10. For the same diagram below, the average velocity for this motion is 0 yds/min.
Answer: ADEGI
a. TRUE - Yes! Speed is a scalar and velocity is the vector. Know this one!
b. FALSE - Speed refers to how fast an object is moving; but velocity refers to the rate at which one's motion puts an object away from its original position. A person can move very fast (and thus
have a large speed); but if every other step leads in opposite directions, then that person would not have a large velocity.
c. FALSE - Person Y has one-half the speed of Person X. If person Y requires twice the time to do the same distance, then person Y is moving half as fast.
d. TRUE - Yes! That is exactly the definition of velocity - the rate at which position changes.
e. TRUE - An Indy Race car driver is a good example of this. Such a driver is obviously moving very fast but by the end of the race the average velocity is essentially 0 m/s.
f. FALSE - The presence of the direction "west" in this expression rules it out as a speed expression. Speed is a scalar quantity and direction is not a part of it.
g. TRUE - For a round trip journey, there is no ultimate change in position. As such, the average velocity is 0 m divided by the trip time; regardless of the time, the average velocity will be 0 m/s.
h. FALSE - The direction of the velocity vector depends only upon the direction that the object is moving. A westward moving object has a westward velocity.
i. TRUE - As discussed in #2g, the distance traveled is 90 yards. When divided by time (8 minutes), the average speed is 11.25 yds/min.
j. FALSE - The average velocity would be 0 yds/min only if the person returns to the initial starting position. In this case, the average velocity is 50 yds/8 min, west (6.25 yds/min, west).
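The numbers in parts i and j follow directly from the two definitions; the sign convention (negative = west) is assumed for illustration:

```python
def average_speed(distance, time):
    """Scalar: ground covered per unit time."""
    return distance / time

def average_velocity(displacement, time):
    """Vector (1-D, signed): net change in position per unit time."""
    return displacement / time

# from the diagram: 90 yd walked, net 50 yd west, in 8 minutes
speed = average_speed(90, 8)          # 11.25 yd/min
velocity = average_velocity(-50, 8)   # -6.25 yd/min, i.e. 6.25 yd/min west
```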
4. Which of the following statements about acceleration are TRUE? List all that apply.
1. Acceleration is a vector quantity.
2. Accelerating objects MUST be changing their speed.
3. Accelerating objects MUST be changing their velocity.
4. Acceleration units include the following; m/s^2, mi/hr/sec, cm/s^2, km/hr/m.
5. The direction of the acceleration vector is dependent upon two factors: the direction the object is moving and whether the object is speeding up or slowing down.
6. An object which is slowing down has an acceleration.
7. An object which is moving at constant speed in a circle has an acceleration.
8. Acceleration is the rate at which the velocity changes.
9. An object that is accelerating is moving fast.
10. An object that is accelerating will eventually (if given enough time) be moving fast.
11. An object that is moving rightward has a rightward acceleration.
12. An object that is moving rightward and speeding up has a rightward acceleration.
13. An object that is moving upwards and slowing down has an upwards acceleration.
Answer: ACEFGHL (and maybe J)
a. TRUE - Yes it is. Acceleration is direction-conscious.
b. FALSE - Accelerating objects could be changing their speed; but it is also possible that an accelerating object is only changing its direction while maintaining a constant speed. The race car
drivers at Indy might fit into this category (at least for certain periods of the race).
c. TRUE - Accelerating object MUST be changing their velocity -either the magnitude or the direction of the velocity.
d. FALSE - The first three sets of units are acceleration units - they include a velocity unit divided by a time unit. The last set of units is a velocity unit divided by a length unit. This is
definitely NOT an acceleration.
e. TRUE - This is the case and something important to remember. Consider its application in the last three parts of this question.
f. TRUE - Accelerating objects are either slowing down, speeding up or changing directions.
g. TRUE - To move in a circle is to change one's direction. As such, there is a change in the velocity (not magnitude, but the direction part); this constitutes an acceleration.
h. TRUE - This is the very definition of acceleration. Know this one - its the beginning point of all our thoughts about acceleration.
i. FALSE - Accelerating objects are not necessarily moving fast; they are merely changing how fast they are moving (or the direction they are moving).
j. FALSE - If the accelerating object is slowing down, then it will eventually stop and never reach a fast speed. And if that doesn't convince you, then consider an object that accelerates by moving in a circle at constant speed forever; it will accelerate the entire time but never be going any faster than at the beginning.
k. FALSE - If an object is moving rightward and slowing down, then it would have a leftward acceleration.
l. TRUE - If an object is speeding up, then the direction of the acceleration vector is in the direction which the object is moving.
m. FALSE - If an object is slowing down, then the acceleration vector is directed opposite the direction of the motion; in this case the acceleration is directed downwards.
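For one-dimensional motion, the sign rule used in parts e, k, l and m reduces to a one-liner; the encoding (+1/-1 for direction) is an assumption made for the example:

```python
def acceleration_direction(velocity_sign, speeding_up):
    """1-D sign rule: the acceleration points with the motion when
    speeding up and against the motion when slowing down."""
    return velocity_sign if speeding_up else -velocity_sign

# rightward (+) and speeding up -> rightward (+) acceleration (part l)
assert acceleration_direction(+1, True) == +1
# upward (+) and slowing down -> downward (-) acceleration (part m)
assert acceleration_direction(+1, False) == -1
```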
5. Which of the following statements about position-time graphs are TRUE? List all that apply.
1. Position-time graphs cannot be used to represent the motion of objects with accelerated motion.
2. The slope on a position-time graph is representative of the acceleration of the object.
3. A straight, diagonal line on a position-time graph is representative of an object with a constant velocity.
4. If an object is at rest, then the position-time graph will be a horizontal line located on the time-axis.
5. Accelerated objects are represented on position-time graphs by curved lines.
6. An object with a positive velocity will be represented on a position-time graph by a line with a positive slope.
7. An object with a negative velocity will be represented on a position-time graph by a line with a negative slope.
8. An object with a positive acceleration will be represented on a position-time graph by a line which curves upwards.
9. An object with a negative acceleration will be represented on a position-time graph by a line which curves downwards.
Answer: CEFG
a. FALSE - Position-time graphs represent accelerated motion by curved lines.
b. FALSE - The slope of a position-time graph is the velocity of the object. Some things in this unit are critical things to remember and internalize; this is one of them.
c. TRUE - A straight diagonal line is a line of constant slope. And if the slope is constant, then so is the velocity.
d. FALSE - Not necessarily true. If the object is at rest, then the line on a p-t graph will indeed be horizontal. However, it will not necessarily be located upon the time axis.
e. TRUE - Accelerating objects (if the acceleration is attributable to a speed change) are represented by lines with changing slope - i.e., curved lines.
f. TRUE - Since slope on a p-t graph represents the velocity, a positive slope will represent a positive velocity.
g. TRUE - Since slope on a p-t graph represents the velocity, a negative slope will represent a negative velocity.
h. FALSE - (This is confusing wording here since we might not all agree on what "curving up" means.) A line that slopes upward and has a curve (perhaps you call that "curving up" as I do) has a positive velocity (due to its positive slope). If the curve is "concave down" (you might say leveling off to a horizontal as time progresses) then the object is slowing down and the acceleration is negative.
i. FALSE - (Once more, there is confusing wording here since we might not all agree on what "curving downwards" means.) A line that slopes downwards and has a curve (perhaps you call that "curving
downwards " as I do) has a negative velocity (due to its negative slope). If the curve is "concave up" (you might say leveling off to a horizontal as time progresses) then the object is slowing down
and the acceleration is positive.
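Since slope on a position-time graph is velocity, a finite-difference check makes parts c, f and g concrete. The sample positions and the function name are made up for illustration:

```python
def velocities_from_positions(positions, dt):
    """Finite-difference slopes of a position-time record: the slope
    of the p-t graph over each interval is the velocity there."""
    return [(p2 - p1) / dt for p1, p2 in zip(positions, positions[1:])]

# positions (m) sampled every 1 s: constant slope -> constant velocity
v = velocities_from_positions([0, 2, 4, 6, 8], dt=1.0)
# every entry is 2.0 m/s: a straight diagonal line on the p-t graph
```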
6. Which of the following statements about velocity-time graphs are TRUE? List all that apply.
a. The slope on a velocity-time graph is representative of the acceleration of the object.
b. The area on a velocity-time graph is representative of the change in position of the object.
c. An accelerated object's motion will be represented by a curved line on a velocity-time graph.
d. Objects with positive acceleration will be represented by upwardly-curved lines on a velocity-time graph.
e. If an object is at rest, then the velocity-time graph will be a line with zero slope.
f. A line with zero slope on a velocity-time graph will be representative of an object which is at rest.
g. A line with a negative slope on a velocity-time graph is representative of an object with negative velocity.
h. If an object changes its direction, then the line on the velocity-time graph will have a changing slope.
i. An object which is slowing down is represented by a line on a velocity-time graph which is moving in the downward direction.
Answer: ABE (and almost D)
a. TRUE - Now this is important! It is the beginning point of much of our discussion of velocity-time graphs. The slope equals the acceleration.
b. TRUE - This is equally important. The area is the displacement.
c. FALSE - An object which has an acceleration will be represented by a line that has a slope. It may or may not curve, but it must have a slope other than zero.
d. FALSE - An object with positive acceleration will have a positive or upward slope on a v-t graph. It does not have to be a curved line. A curved line indicates an object that is accelerating at a changing rate of acceleration.
e. TRUE - An object that is at rest has a 0 velocity and maintains that zero velocity. The permanence of its velocity (not the fact that it is zero) gives the object a zero acceleration, and as such,
the line on a v-t graph would have a slope of 0 (i.e., be horizontal).
f. FALSE - A line with zero slope is representative of an object with an acceleration of 0. It could be at rest or it could be moving at a constant velocity.
g. FALSE - A negative slope indicates a negative acceleration. The object could be moving in the positive direction and slowing down (a negative acceleration).
h. FALSE - An object which changes its direction will be represented by a line on a v-t graph that crosses over the time-axis from the + velocity region into the - velocity region.
i. FALSE - An object which is slowing down has a velocity which is approaching 0 m/s. And as such, on a v-t graph, the line must be approaching the v=0 m/s axis.
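Answers (a) and (b) above are the workhorses of v-t graph analysis: slope gives acceleration, area gives displacement. A quick numerical check (my own sketch, not from the original page):

```python
# Constant acceleration: v(t) = v0 + a*t. The slope of the v-t line should
# recover a, and the area under it (trapezoid rule) should recover the
# displacement v0*t + 0.5*a*t^2.
v0, a = 2.0, 3.0
dt = 0.001
ts = [i * dt for i in range(5001)]          # 0 .. 5 s
vs = [v0 + a * t for t in ts]

slope = (vs[-1] - vs[0]) / (ts[-1] - ts[0])                      # acceleration
area = sum(0.5 * (vs[i] + vs[i + 1]) * dt for i in range(5000))  # displacement

print(round(slope, 3), round(area, 3))   # 3.0 47.5
```

The recovered slope matches a = 3 m/s², and the area matches the analytic displacement 2(5) + 0.5(3)(5²) = 47.5 m.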
7. Which of the following statements about free fall and the acceleration of gravity are TRUE? List all that apply.
a. An object that is free-falling is acted upon by the force of gravity alone.
b. A falling skydiver which has reached terminal velocity is considered to be in a state of free fall.
c. A ball is thrown upwards and is rising towards its peak. As it rises upwards, it is NOT considered to be in a state of free fall.
d. An object in free fall experiences an acceleration which is independent of the mass of the object.
e. A ball is thrown upwards, rises to its peak and eventually falls back to the original height. As the ball rises, its acceleration is upwards; as it falls, its acceleration is downwards.
f. A ball is thrown upwards, rises to its peak and eventually falls back to the original height. The speed at which it is launched equals the speed at which it lands. (Assume negligible air resistance.)
g. A very massive object will free fall at the same rate of acceleration as a less massive object.
h. The value of g on Earth is approximately 9.8 m/s^2.
i. The symbol g stands for the force of gravity.
Answer: ADFGH
a. TRUE - Yes! This is the definition of free fall.
b. FALSE - Skydivers which are falling at terminal velocity are acted upon by large amounts of air resistance. They are experiencing more forces than the force of gravity. As such, they are NOT in a state of free fall.
c. FALSE - Any object - whether rising, falling or moving horizontally and vertically simultaneously - can be in a state of free fall if the only force acting upon it is the force of gravity. Such
objects are known as projectiles and often begin their motion while rising upwards.
d. TRUE - The unique feature of free-falling objects is that the mass of the object does not affect the trajectory characteristics. The acceleration, velocity, displacement, etc. are independent of the mass of the object.
e. FALSE - The acceleration of all free-falling objects is directed downwards. A rising object slows down due to the downward gravity force. An upward-moving object which is slowing down is said to
have a downwards acceleration.
f. TRUE - If the object is truly in free-fall, then the speed of the object will be the same at all heights - whether it's on the upward portion of its trajectory or the downwards portion of its
trajectory. For more information, see the Projectiles page at The Physics Classroom.
g. TRUE - The acceleration of free-falling objects (referred to as the acceleration of gravity) is independent of mass. On Earth, the value is 9.8 m/s/s (the direction is down). All objects - very
massive and less massive - experience this acceleration value.
h. TRUE - Yes! Know this one!
i. FALSE - Nope. A careful physics teacher will never call g the force of gravity. g is known as the acceleration of gravity. It might be best to call it the acceleration caused by gravity. When it
comes to the force of gravity, we have yet another symbol for that - F[grav]. But that's a topic to be discussed in a later unit.
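The free-fall claims in (e) and (f) are easy to confirm with the constant-acceleration equations. A small sketch (my own, with an arbitrary launch speed):

```python
# Ball thrown straight up: v(t) = v0 - g*t, with g = 9.8 m/s^2 acting
# downward during the entire flight (even while the ball rises).
g = 9.8
v0 = 14.7                    # launch speed in m/s (arbitrary choice)

t_flight = 2 * v0 / g        # time to return to the launch height
v_land = v0 - g * t_flight   # velocity on return: same speed, opposite sign

print(round(v_land, 6))      # -14.7
```

The landing velocity has the same magnitude as the launch velocity but the opposite sign, and the acceleration used was -g throughout, consistent with (e) being FALSE and (f) being TRUE.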
STACK Documentation
Authoring quick start 6: multipart questions
1 - First question | 2 - Question variables | 3 - Feedback | 4 - Randomisation | 5 - Question tests | 6 - Multipart questions | 7 - Simplification | 8 - Quizzes
This part of the authoring quick start guide deals with authoring multipart questions.
Consider the following examples:
Example 1
Find the equation of the line tangent to \(x^3-2x^2+x\) at the point \(x=2\).
1. Differentiate \(x^3-2x^2+x\) with respect to \(x\).
2. Evaluate your derivative at \(x=2\).
3. Hence, find the equation of the tangent line. \(y=...\)
Since all three parts refer to one polynomial, if randomly generated questions are being used, then each of these parts needs to reference a single randomly generated equation. Hence parts 1.-3.
really form one item. Notice here that part 1. is independent of the others. Part 2. requires both the first and second inputs. Part 3. could easily be marked independently, or take into account
parts 1 & 2. Notice also that the teacher may choose to award "follow on" marking.
Example 2
Consider the following question, asked to relatively young school students.
Expand \((x+1)(x+2)\).
In the context it is to be used, it is appropriate to provide students with the opportunity to "fill in the blanks" in the following equation:
\((x+1)(x+2) = [?]\,x^2 + [?]\,x + [?]\).
We argue this is really "one question" with "three inputs". Furthermore, it is likely that the teacher will want the student to complete all boxes before any feedback is assigned, even if separate
feedback is generated for each input (i.e. coefficient). This feedback should all be grouped in one place on the screen. Furthermore, in order to identify the possible causes of algebraic mistakes,
an automatic marking procedure will require all coefficients simultaneously. It is not satisfactory to have three totally independent marking procedures.
These two examples illustrate two extreme positions.
1. All inputs within a single multipart item can be assessed independently.
2. All inputs within a single multipart item must be completed before the item can be scored.
Devising multipart questions which satisfy these two extreme positions would be relatively straightforward. However, it is more common to have multipart questions which are between these extremes, as
in the case of our first example.
Authoring a multipart question
Start a new STACK question, and give the question a name, e.g. "Tangent lines". This question will have three parts. Start by copying the question variables and question text as follows. Notice that
we have not included any randomisation, but we have used variable names at the outset to facilitate this at a later stage.
Question variables:
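The question variables block did not survive in this copy of the page. A plausible reconstruction (my assumption, inferred from the un-randomised model answers given later; STACK question variables use Maxima syntax) is:

```maxima
exp:x^3-2*x^2+x;
pt:2;
ta1:diff(exp,x);                  /* 3*x^2-4*x+1 */
ta2:subst(x=pt,ta1);              /* 5 */
ta3:ta2*(x-pt)+subst(x=pt,exp);   /* 5*x-8 */
```

Defining `ta2` and `ta3` in terms of `exp` and `pt` is what makes randomisation easy to add later.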
Question text
Copy the following text into the editor.
Find the equation of the line tangent to {@exp@} at the point \(x={@pt@}\).
1. Differentiate {@exp@} with respect to \(x\). [[input:ans1]] [[validation:ans1]] [[feedback:prt1]]
2. Evaluate your derivative at \(x={@pt@}\). [[input:ans2]] [[validation:ans2]] [[feedback:prt2]]
3. Hence, find the equation of the tangent line. \(y=\)[[input:ans3]] [[validation:ans3]] [[feedback:prt3]]
Fill in the answer for ans1 (which exists by default) and remove the feedback tag from the "specific feedback" section. We choose to embed feedback within parts of this question, so that relevant
feedback is shown directly underneath the relevant part. Notice there is one potential response tree for each "part".
Update the form by saving your changes, and then ensure the Model Answers are filled in as ta1, ta2 and ta3.
STACK creates three potential response trees by detecting the feedback tags automatically. Next we need to edit potential response trees. These will establish the properties of the student's answers.
Stage 1: a working potential response tree
The first stage is to include the simplest potential response trees. These will simply ensure that answers are "correct". In each potential response tree, make sure to test that \(\text{ans}_i\) is
algebraically equivalent to \(\text{ta}_i\), for \(i=1,2,3\). At this stage we have a working question. Save it and preview the question. For reference, the correct answers are
ta1 = 3*x^2-4*x+1
ta2 = 5
ta3 = 5*x-8
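These reference answers are easy to verify outside STACK. A minimal Python check (my own sketch, not part of the question itself):

```python
# Verify the model answers for f(x) = x^3 - 2x^2 + x at x = 2.
def f(x):
    return x**3 - 2*x**2 + x

def df(x):
    return 3*x**2 - 4*x + 1   # ta1, differentiated by hand

pt = 2
slope = df(pt)                  # ta2 = 5
intercept = f(pt) - slope * pt  # tangent line y = 5x - 8
print(slope, intercept)         # 5 -8
```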
Stage 2: follow-through marking
Next we will implement simple follow-through marking.
Look carefully at part 2. This does not ask for the "correct answer", only that the student has evaluated the expression in part 1 correctly at the right point. So the first task is to establish this
property by evaluating the answer given in the first part, and comparing it with the second part. Update node 1 of prt2 to the following:
Answer test: AlgEquiv
SAns: ans2
TAns: subst(x=pt,ans1)
Next, add a single node (to prt2) with the following:
Answer test: AlgEquiv
SAns: ans1
TAns: ta1
We now link the true branch of node 1 to node 2 (of prt2). This gives us three outcomes.
Node 1: did they evaluate their expression in part 1 correctly? If "yes", then go to node 2, else if "no", then exit with no marks.
Node 2: did they get part 1 correct? If "yes" then this is the ideal situation, full marks. If "no" then choose marks to suit your taste in this situation, and add some feedback, such as the following:
You have correctly evaluated your answer to part 1 at the given point, but your answer to part 1 is wrong. Please try both parts again.
Next step
You should now be able to make a multipart question in STACK. If you have been following this quick-start guide, you should already know some steps you can take to improve this question. For example,
you could add more specific feedback, randomise your question and add question tests.
The next part of the authoring quick start guide looks at turning simplification off.
Creative Commons Attribution-ShareAlike 4.0 International License.
Modeling Risk: Applying Monte Carlo Simulation, Real Options Analysis, Forecasting, and Optimization Techniques (Wiley Finance)
Modeling Risk
Founded in 1807, John Wiley & Sons is the oldest independent publishing company in the United States. With offices in North America, Europe, Australia, and Asia, Wiley is globally committed to
developing and marketing print and electronic products and services for our customers’ professional and personal knowledge and understanding. The Wiley Finance series contains books written
specifically for finance and investment professionals as well as sophisticated individual investors and their financial advisors. Book topics range from portfolio management to e-commerce, risk
management, financial engineering, valuation, and financial instrument analysis, as well as much more. For a list of available titles, visit our Web site at www.WileyFinance.com.
Modeling Risk Applying Monte Carlo Simulation, Real Options Analysis, Forecasting, and Optimization Techniques
John Wiley & Sons, Inc.
Copyright © 2006 by Johnathan Mun. All rights reserved. Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada. No part of this publication may be reproduced,
stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the
1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center,
Inc., 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400, fax 978-646-8600, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions
Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, 201-748-6011, fax 201-748-6008, or online at http://www.wiley.com/go/permissions. Limit of Liability/Disclaimer of Warranty:
While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this
book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales
materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be
liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages. For general information on our other products and
services, or technical support, contact our Customer Care Department within the United States at 800-762-2974, outside the United States at 317-572-3993 or fax 317-572-4002. Designations used by
companies to distinguish their products are often claimed as trademarks. In all instances where John Wiley & Sons, Inc., is aware of a claim, the product names appear in initial capital or all
capital letters. Readers, however, should contact the appropriate companies for more complete information regarding trademarks and registration. Wiley also publishes its books in a variety of
electronic formats. Some content that appears in print may not be available in electronic books. For more information about Wiley products, visit our Web site at www.wiley.com. Library of Congress
Cataloging-in-Publication Data: Mun, Johnathan. Modeling risk : applying Monte Carlo simulation, real options analysis, forecasting, and optimization techniques / Johnathan Mun p. cm.—(Wiley finance
series) Includes index. ISBN-13 978-0-471-78900-0 (cloth/cd-rom) ISBN-10 0-471-78900-3 (cloth/cd-rom) 1. Risk assessment. 2. Risk assessment—Mathematical models. 3. Risk management. 4.
Finance—Decision making. I. Title. II. Series. HD61.M7942 2006 658.15—dc22 Printed in the United States of America. 10 9 8 7 6 5 4 3 2 1
To my wife Penny, the love of my life. In a world where risk and uncertainty abound, you are the only constant in my life.
Delight yourself in the Lord and he will give you the desires of your heart. Psalms 37:4
We live in an environment fraught with risk and operate our businesses in a risky world, as higher rewards only come with risks. Ignoring the element of risk when corporate strategy is being framed
and when tactical projects are being implemented would be unimaginable. In addressing the issue of risk, Modeling Risk provides a novel view of evaluating business decisions, projects, and strategies
by taking into consideration a unified strategic portfolio analytical process. This book provides a qualitative and quantitative description of risk, as well as introductions to the methods used in
identifying, quantifying, applying, predicting, valuing, hedging, diversifying, and managing risk through rigorous examples of the methods’ applicability in the decision-making process. Pragmatic
applications are emphasized in order to demystify the many elements inherent in risk analysis. A black box will remain a black box if no one can understand the concepts despite its power and
applicability. It is only when the black box becomes transparent so that analysts can understand, apply, and convince others of its results, value-add, and applicability, that the approach will
receive widespread influence. The demystification of risk analysis is achieved by presenting step-by-step applications and multiple business cases, as well as discussing real-life applications. This
book is targeted at both the uninitiated professional and those well versed in risk analysis—there is something for everyone. It is also appropriate for use at the second-year M.B.A. level or as an
introductory Ph.D. textbook. A CD-ROM comes with the book, including a trial version of the Risk Simulator and Real Options Super Lattice Solver software and associated Excel models.
JOHNATHAN MUN San Francisco, California
[email protected]
May 2006
The author is greatly indebted to Robert Fourt, Professor Morton Glantz, Dr. Charles Hardy, Steve Hoye, Professor Bill Rodney, Larry Pixley, Dr. Tom Housel, Lt. Commander Cesar Rios, Ken Cobleigh, Pat
Haggerty, Larry Blair, Andy Roff, and Tony Jurado for their business case contributions. In addition, a special word of thanks goes to Bill Falloon, senior editor at John Wiley & Sons, for his
support and encouragement.
J. M.
About the Author
Dr. Johnathan C. Mun is the founder and CEO of Real Options Valuation, Inc., a consulting, training, and software development firm specializing in real options, employee stock options, financial
valuation, simulation, forecasting, optimization, and risk analysis located in northern California. He is the creator of the Real Options Super Lattice Solver software, Risk Simulator software, and
Employee Stock Options Valuation software at the firm, as well as the risk-analysis training DVD, and he also holds public seminars on risk analysis and Certified Risk Analyst (CRA) programs. The
Real Options Super Lattice Solver software showcased in this book supersedes the previous Real Options Analysis Toolkit software, which he also developed. He has also authored numerous other books
including Real Options Analysis: Tools and Techniques, first and second editions (Wiley 2003 and 2005), Real Options Analysis Course: Business Cases (Wiley 2003), Applied Risk Analysis: Moving Beyond
Uncertainty (Wiley 2003), Valuing Employee Stock Options (Wiley 2004), and others. His books and software are being used around the world at top universities (including the Bern Institute in Germany,
Chung-Ang University in South Korea, Georgetown University, ITESM in Mexico, Massachusetts Institute of Technology, New York University, Stockholm University in Sweden, University of the Andes in
Chile, University of Chile, University of Pennsylvania Wharton School, University of York in the United Kingdom, and Edinburgh University in Scotland, among others). Dr. Mun is also currently a
finance and economics professor and has taught courses in financial management, investments, real options, economics, and statistics at the undergraduate and the graduate M.B.A. levels. He is
teaching and has taught at universities all over the world, from the U.S. Naval Postgraduate School (Monterey, California) and University of Applied Sciences (Switzerland and Germany) as full
professor, to Golden Gate University (California) and St. Mary’s College (California), and has chaired many graduate research thesis committees. He also teaches risk analysis, real options analysis,
and risk for managers public courses where participants can obtain the Certified Risk Analyst (CRA) designation on completion of the week-long program. He was formerly the vice president of analytics
at Decisioneering, Inc., where he headed up the development of real options
and financial analytics software products, analytical consulting, training, and technical support, and where he was the creator of the Real Options Analysis Toolkit software, the older predecessor of
the Real Options Super Lattice Software discussed in this book. Prior to joining Decisioneering, he was a consulting manager and financial economist in the Valuation Services and Global Financial
Services practice of KPMG Consulting and a manager with the Economic Consulting Services practice at KPMG LLP. He has extensive experience in econometric modeling, financial analysis, real options,
economic analysis, and statistics. During his tenure at Real Options Valuation, Inc., Decisioneering, and at KPMG Consulting, he had consulted on many real options, risk analysis, financial
forecasting, project management, and financial valuation projects for multinational firms (current and former clients include 3M, Airbus, Boeing, BP, Chevron Texaco, Financial Accounting Standards
Board, Fujitsu, GE, Microsoft, Motorola, U.S. Department of Defense, U.S. Navy, Veritas, and many others). His experience prior to joining KPMG included being department head of financial planning
and analysis at Viking Inc. of FedEx, performing financial forecasting, economic analysis, and market research. Prior to that, he had also performed some financial planning and freelance financial
consulting work. Dr. Mun received a Ph.D. in finance and economics from Lehigh University, where his research and academic interests were in the areas of investment finance, econometric modeling,
financial options, corporate finance, and microeconomic theory. He also has an M.B.A. in business administration, an M.S. in management science, and a B.S. in biology and physics. He is Certified in
Financial Risk Management (FRM), Certified in Financial Consulting (CFC), and Certified in Risk Analysis (CRA). He is a member of the American Mensa, Phi Beta Kappa Honor Society, and Golden Key
Honor Society as well as several other professional organizations, including the Eastern and Southern Finance Associations, American Economic Association, and Global Association of Risk
Professionals. Finally, he has written many academic articles published in the Journal of the Advances in Quantitative Accounting and Finance, the Global Finance Journal, the International Financial
Review, the Journal of Financial Analysis, the Journal of Applied Financial Economics, the Journal of International Financial Markets, Institutions and Money, the Financial Engineering News, and the
Journal of the Society of Petroleum Engineers.
Contents

PART ONE: Risk Identification
Chapter 1. Moving Beyond Uncertainty
  A Brief History of Risk: What Exactly Is Risk?
  Uncertainty versus Risk
  Why Is Risk Important in Making Decisions?
  Dealing with Risk the Old-Fashioned Way
  The Look and Feel of Risk and Uncertainty
  Integrated Risk Analysis Framework
  Questions
PART TWO: Risk Evaluation
Chapter 2. From Risk to Riches
  Taming the Beast
  The Basics of Risk
  The Nature of Risk and Return
  The Statistics of Risk
  The Measurements of Risk
  Appendix: Computing Risk
  Questions
Chapter 3. A Guide to Model-Building Etiquette
  Document the Model
  Separate Inputs, Calculations, and Results
  Protect the Models
  Make the Model User-Friendly: Data Validation and Alerts
  Track the Model
  Automate the Model with VBA
  Model Aesthetics and Conditional Formatting
  Appendix: A Primer on VBA Modeling and Writing Macros
  Exercises
PART THREE: Risk Quantification
Chapter 4. On the Shores of Monaco
  What Is Monte Carlo Simulation?
  Why Are Simulations Important?
  Comparing Simulation with Traditional Analyses
  Using Risk Simulator and Excel to Perform Simulations
  Questions
Chapter 5. Test Driving Risk Simulator
  Getting Started with Risk Simulator
  Running a Monte Carlo Simulation
  Using Forecast Charts and Confidence Intervals
  Correlations and Precision Control
  Appendix: Understanding Probability Distributions
  Questions
Chapter 6. Pandora’s Toolbox
  Tornado and Sensitivity Tools in Simulation
  Sensitivity Analysis
  Distributional Fitting: Single Variable and Multiple Variables
  Bootstrap Simulation
  Hypothesis Testing
  Data Extraction, Saving Simulation Results, and Generating Reports
  Custom Macros
  Appendix: Goodness-of-Fit Tests
  Questions
PART FOUR: Industry Applications
Chapter 7. Extended Business Cases I: Pharmaceutical and Biotech Negotiations, Oil and Gas Exploration, Financial Planning with Simulation, Hospital Risk Management, and Risk-Based Executive Compensation Valuation
  Case Study: Pharmaceutical and Biotech Deal Structuring
  Case Study: Oil and Gas Exploration and Production
  Case Study: Financial Planning with Simulation
  Case Study: Hospital Risk Management
  Case Study: Risk-Based Executive Compensation Valuation
PART FIVE: Risk Prediction
Chapter 8. Tomorrow’s Forecast Today
  Different Types of Forecasting Techniques
  Running the Forecasting Tool in Risk Simulator
  Time-Series Analysis
  Multivariate Regression
  Stochastic Forecasting
  Nonlinear Extrapolation
  Box–Jenkins ARIMA Advanced Time-Series
  Questions
Chapter 9. Using the Past to Predict the Future
  Time-Series Forecasting Methodology
  No Trend and No Seasonality
  With Trend but No Seasonality
  No Trend but with Seasonality
  With Seasonality and with Trend
  Regression Analysis
  The Pitfalls of Forecasting: Outliers, Nonlinearity, Multicollinearity, Heteroskedasticity, Autocorrelation, and Structural Breaks
  Other Technical Issues in Regression Analysis
  Appendix A: Forecast Intervals
  Appendix B: Ordinary Least Squares
  Appendix C: Detecting and Fixing Heteroskedasticity
  Appendix D: Detecting and Fixing Multicollinearity
  Appendix E: Detecting and Fixing Autocorrelation
  Questions
  Exercise
PART SIX: Risk Diversification
Chapter 10. The Search for the Optimal Decision
  What Is an Optimization Model?
  The Traveling Financial Planner
  The Lingo of Optimization
  Solving Optimization Graphically and Using Excel’s Solver
  Questions
Chapter 11. Optimization Under Uncertainty
  Optimization Procedures
  Continuous Optimization
  Discrete Integer Optimization
  Appendix: Computing Annualized Returns and Risk for Portfolio Optimization
  Question
  Exercise
PART SEVEN: Risk Mitigation
Chapter 12. What Is So Real About Real Options, and Why Are They Optional?
  What Are Real Options?
  The Real Options Solution in a Nutshell
  Issues to Consider
  Implementing Real Options Analysis
  Industry Leaders Embracing Real Options
  What the Experts Are Saying
  Criticisms, Caveats, and Misunderstandings in Real Options
  Questions
Chapter 13. The Black Box Made Transparent: Real Options Super Lattice Solver Software
  Introduction to the Real Options Super Lattice Solver Software
  Single Asset Super Lattice Solver
  Multiple Super Lattice Solver
  Multinomial Lattice Solver
  SLS Excel Solution
  SLS Functions
  Lattice Maker
PART EIGHT: More Industry Applications
Chapter 14. Extended Business Cases II: Real Estate, Banking, Military Strategy, Automotive Aftermarkets, Global Earth Observation Systems, and Employee Stock Options
  Case Study: Understanding Risk and Optimal Timing in a Real Estate Development Using Real Options Analysis
  Case Study: Using Stochastic Optimization and Valuation Models to Evaluate the Credit Risk of Corporate Restructuring
  Case Study: Real Options and KVA in Military Strategy at the United States Navy
  Case Study: Manufacturing and Sales in the Automotive Aftermarket
  Case Study: The Boeing Company’s Strategic Analysis of the Global Earth Observation System of Systems
  Case Study: Valuing Employee Stock Options Under the 2004 FAS 123R
PART NINE: Risk Management
Chapter 15. The Warning Signs
  The Problem of Negligent Entrustment
  Management’s Due Diligence
  Sins of an Analyst
  Reading the Warning Signs in Monte Carlo Simulation
  Reading the Warning Signs in Time-Series Forecasting and Regression
  Reading the Warning Signs in Real Options Analysis
  Reading the Warning Signs in Optimization Under Uncertainty
  Questions

Chapter 16. Changing a Corporate Culture
  How to Get Risk Analysis Accepted in an Organization
  Change-Management Issues and Paradigm Shifts
  Making Tomorrow’s Forecast Today
Tables You Really Need
  Standard Normal Distribution (partial area)
  Standard Normal Distribution (full area)
  Student’s t-Distribution (one tail and two tails)
  Durbin–Watson Critical Values (alpha 0.05)
  Normal Random Numbers
  Random Numbers (multiple digits)
  Uniform Random Numbers
  Chi-Square Critical Values
  F-Distribution Critical Statistics
  Real Options Analysis Values
Answers to End of Chapter Questions and Exercises
About the CD-ROM
This book is divided into nine parts, starting from a discussion of what risk is and how it is quantified, to how risk can be predicted, diversified, taken advantage of, hedged, and, finally, managed.
The first part deals with risk identification where the different aspects of business risks are identified, including a brief historical view of how risk was evaluated in the past. The second part
deals with risk evaluation explaining why disastrous ramifications may result if risk is not considered in business decisions. Part Three pertains to risk quantification and details how risk can be
captured quantitatively through step-by-step applications of Monte Carlo simulation. Part Four deals with industry applications and examples of how risk analysis is applied in practical day-to-day
issues in the oil and gas, pharmaceutical, financial planning, hospital risk management, and executive compensation problems. Part Five pertains to risk prediction where the uncertain and risky
future is predicted using analytical time-series methods. Part Six deals with how risk diversification works when multiple projects exist in a portfolio. Part Seven’s risk mitigation discussion deals
with how a firm or management can take advantage of risk and uncertainty by implementing and maintaining flexibility in projects. Part Eight provides a second installment of business cases where risk
analysis is applied in the banking, real estate, military strategy, automotive parts aftermarket, and global earth observation systems. Part Nine provides a capstone discussion of applying risk
management in companies, including how to obtain senior management’s buy-in and implementing a change of perspective in corporate culture as it applies to risk analysis. This book is an update of
Applied Risk Analysis (Wiley, 2004) to include coverage of the author’s own Risk Simulator software and Real Options Super Lattice Solver software. Following is a synopsis of the material covered in
each chapter of the book.
PART ONE—RISK IDENTIFICATION Chapter 1—Moving Beyond Uncertainty To the people who lived centuries ago, risk was simply the inevitability of chance occurrence beyond the realm of human control. We
have been
struggling with risk our entire existence, but, through trial and error and through the evolution of human knowledge and thought, have devised ways to describe and quantify risk. Risk assessment
should be an important part of the decision-making process; otherwise bad decisions may be made. Chapter 1 explores the different facets of risk within the realms of applied business risk analysis,
providing an intuitive feel of what risk is.
PART TWO—RISK EVALUATION Chapter 2—From Risk to Riches The concepts of risk and return are detailed in Chapter 2, illustrating their relationships in the financial world, where a higher-risk project
necessitates a higher expected return. How are uncertainties estimated and risk calculated? How do you convert a measure of uncertainty into a measure of risk? These are the topics covered in this
chapter, starting from the basics of statistics to applying them in risk analysis, and including a discussion of the different measures of risk.
Chapter 3—A Guide to Model-Building Etiquette Chapter 3 addresses some of the more common errors and pitfalls analysts make when creating a new model by explaining some of the proper modeling
etiquettes. The issues discussed range from file naming conventions and proper model aesthetics to complex data validation and Visual Basic for Applications (VBA) scripting. An appendix is provided
on some VBA modeling basics and techniques of macros and forms creation.
PART THREE—RISK QUANTIFICATION Chapter 4—On the Shores of Monaco Monte Carlo simulation in its simplest form is just a random number generator useful for forecasting, estimation, and risk analysis. A simulation calculates numerous scenarios of a model by repeatedly picking values from the probability distribution for the uncertain variables and using those values in the model to compute events such as totals, net profit, or gross expenses. Simplistically, think of the Monte Carlo simulation approach as repeatedly picking golf balls out of a large basket. Chapter 4 illustrates why simulation is important through the flaw of averages example. Excel is used to perform rudimentary simulations, and simulation is shown as a logical next-step extension to traditional approaches used in risk analysis.
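The golf-balls-from-a-basket idea can be sketched in a few lines of Python. The triangular distributions and their parameters below are illustrative assumptions, not figures from the chapter:

```python
import random

def simulate_net_profit(trials=10_000, seed=42):
    """Minimal Monte Carlo loop: each trial picks one 'golf ball' per
    uncertain variable and recomputes the model output."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(trials):
        units = rng.triangular(8, 12, 10)        # uncertain unit sales
        price = rng.triangular(9.0, 11.0, 10.0)  # uncertain sales price
        total_cost = 70.0                        # kept deterministic here
        outcomes.append(units * price - total_cost)
    return outcomes

results = simulate_net_profit()
mean = sum(results) / len(results)
low, high = min(results), max(results)  # a spread of outcomes, not one point
```

Instead of a single answer, the loop produces a distribution of outcomes from which probabilities can be read off, which is the essential difference from a one-shot spreadsheet calculation.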
Chapter 5—Test Driving Risk Simulator Chapter 5 guides the user through applying the world’s premier risk analysis and simulation software: Risk Simulator. With a few simple mouse clicks, the reader
will be on his or her way to running sophisticated Monte Carlo simulation analysis to capture both uncertainty and risks using the enclosed CD-ROM’s Risk Simulator trial software. In addition, the
interpretation of said analysis is also very important. The best analysis in the world is only as good as the analyst’s ability to understand, utilize, present, report, and convince management or
clients of the results.
Chapter 6—Pandora’s Toolbox Powerful simulation-related tools such as bootstrapping, distributional fitting, hypothesis test, correlated simulation, multidimensional simulation, tornado charts, and
sensitivity charts are discussed in detail in Chapter 6, complete with step-by-step illustrations. These tools are extremely valuable to analysts working in the realm of risk analysis. The
applicability of each tool is discussed in detail. For example, the use of nonparametric bootstrapping simulation as opposed to parametric Monte Carlo simulation approaches is discussed. An appendix
to this chapter deals with the technical specifics of goodness-of-fit tests.
PART FOUR—INDUSTRY APPLICATIONS Chapter 7—Extended Business Cases I: Pharmaceutical and Biotech Negotiations, Oil and Gas Exploration, Financial Planning with Simulation, Hospital Risk Management,
and Risk-Based Executive Compensation Valuation Chapter 7 contains the first installment of actual business cases from industry applying risk analytics. Business cases were contributed by a variety
of industry experts on applying risk analysis in the areas of oil and gas exploration, pharmaceutical biotech deal making, financial planning, hospital risk management, and executive compensation valuation.
PART FIVE—RISK PREDICTION Chapter 8—Tomorrow’s Forecast Today Chapter 8 focuses on applying Risk Simulator to run time-series forecasting methods, multivariate regressions, nonlinear extrapolation,
stochastic process forecasts, and Box-Jenkins ARIMA. In addition, the issues of seasonality and
trend are discussed, together with the eight time-series decomposition models most commonly used by analysts to forecast future events given historical data. The software applications of each method
are discussed in detail, complete with their associated measures of forecast errors and potential pitfalls.
Chapter 9—Using the Past to Predict the Future The main thrust of Chapter 9 is time-series and regression analysis made easy. Starting with some basic time-series models, including exponential
smoothing and moving averages, and moving on to more complex models, such as the Holt–Winters’ additive and multiplicative models, the reader will manage to navigate through the maze of time-series
analysis. The basics of regression analysis are also discussed, complete with pragmatic discussions of statistical validity tests as well as the pitfalls of regression analysis, including how to
identify and fix heteroskedasticity, multicollinearity, and autocorrelation. The five appendixes that accompany this chapter deal with the technical specifics of interval estimations in regression
analysis, ordinary least squares, and some pitfalls in running regressions, including detecting and fixing heteroskedasticity, multicollinearity, and autocorrelation.
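As a taste of the simpler time-series models mentioned above, single exponential smoothing can be sketched as follows; the data series and the alpha value are illustrative assumptions:

```python
def exponential_smoothing(series, alpha=0.5):
    """Single exponential smoothing: each smoothed value blends the latest
    observation with the previous smoothed value."""
    if not series:
        return []
    smoothed = [series[0]]  # initialize with the first observation
    for y in series[1:]:
        smoothed.append(alpha * y + (1 - alpha) * smoothed[-1])
    return smoothed

history = [100, 110, 105, 115, 120]
fitted = exponential_smoothing(history, alpha=0.5)
next_forecast = fitted[-1]  # one-step-ahead forecast
```

A larger alpha reacts faster to recent observations; a smaller alpha smooths more heavily, which is exactly the trade-off the chapter's more elaborate models (Holt–Winters and the like) extend to trend and seasonality.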
PART SIX—RISK DIVERSIFICATION Chapter 10—The Search for the Optimal Decision In most business or analytical models, there are variables over which you have control, such as how much to charge for a
product or how much to invest in a project. These controlled variables are called decision variables. Finding the optimal values for decision variables can make the difference between reaching an
important goal and missing that goal. Chapter 10 details the optimization process at a high level, with illustrations on solving deterministic optimization problems manually, using graphs, and
applying Excel’s Solver add-in. (Chapter 11 illustrates the solution to optimization problems under uncertainty, mirroring more closely real-life business conditions.)
Chapter 11—Optimization Under Uncertainty Chapter 11 illustrates two optimization models with step-by-step details. The first model is a discrete portfolio optimization of projects under uncertainty.
Given a set of potential projects, the model evaluates all possible discrete combinations of projects on a “go” or “no-go” basis such that a budget constraint is satisfied, while simultaneously
providing the best level of returns subject to uncertainty. The best projects will then be chosen based on these criteria. The second model evaluates a financial portfolio’s continuous
allocation of different asset classes with different levels of risks and returns. The objective of this model is to find the optimal allocation of assets subject to a 100 percent allocation
constraint that still maximizes the Sharpe ratio, or the portfolio’s return-to-risk ratio. This ratio will maximize the portfolio’s return subject to the minimum risks possible while accounting for
the cross-correlation diversification effects of the asset classes in a portfolio.
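The Sharpe ratio referred to above is the portfolio's excess return divided by its risk. A minimal two-asset sketch, with all return, volatility, and correlation figures as illustrative assumptions:

```python
import math

def portfolio_sharpe(weights, returns, vols, corr, risk_free=0.05):
    """Sharpe ratio (excess return over risk) for a two-asset portfolio,
    including the cross-correlation diversification term."""
    w1, w2 = weights
    port_return = w1 * returns[0] + w2 * returns[1]
    variance = ((w1 * vols[0]) ** 2 + (w2 * vols[1]) ** 2
                + 2 * w1 * w2 * vols[0] * vols[1] * corr)
    return (port_return - risk_free) / math.sqrt(variance)

# A 60/40 split between a riskier and a safer asset class.
sharpe = portfolio_sharpe((0.6, 0.4), (0.12, 0.07), (0.20, 0.08), corr=0.3)
```

Note that lowering the correlation raises the Sharpe ratio for the same weights: that is the diversification effect the optimization in Chapter 11 exploits when searching over allocations.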
PART SEVEN—RISK MITIGATION Chapter 12—What Is So Real about Real Options, and Why Are They Optional? Chapter 12 describes what real option analysis is, who has used the approach, how companies are
using it, and what some of the characteristics of real options are. The chapter describes real options in a nutshell, providing the reader with a solid introduction to its concepts without the need
for its theoretical underpinnings. Real options are applicable if the following requirements are met: traditional financial analysis can be performed and models can be built; uncertainty exists; the
same uncertainty drives value; management or the project has strategic options or flexibility to either take advantage of these uncertainties or to hedge them; and management must be credible to
execute the relevant strategic options when they become optimal to do so.
Chapter 13—The Black Box Made Transparent: Real Options Super Lattice Solver Software Chapter 13 introduces the readers to the world’s first true real options software applicable across all
industries. The chapter illustrates how a user can get started with the software in a few short moments after it has been installed. The reader is provided with hands-on experience with the Real
Options Super Lattice Solver to obtain immediate results—a true test when the rubber meets the road.
PART EIGHT—MORE INDUSTRY APPLICATIONS Chapter 14—Extended Business Cases II: Real Estate, Banking, Military Strategy, Automotive Aftermarkets, Global Earth Observing Systems, and Valuing Employee
Stock Options (FAS 123R) Chapter 14 contains the second installment of actual business cases from industry applying risk analytics. Business cases were contributed by a variety of
industry experts applying simulation, optimization, and real options analysis in the areas of real estate, banking, military strategy, automotive parts aftermarket, global earth observing systems,
and employee stock options.
PART NINE—RISK MANAGEMENT Chapter 15—The Warning Signs The risk analysis software applications illustrated in this book are extremely powerful tools and could prove detrimental in the hands of
untrained and unlearned novices. Management, the end user of the results from said tools, must be able to discern if quality analysis has been performed. Chapter 15 delves into the thirty-some
problematic issues most commonly encountered by analysts applying risk analysis techniques, and how management can spot these mistakes. While it might be the job of the analyst to create the models
and use the fancy analytics, it is senior management’s job to challenge the assumptions and results obtained from the analysis. Model errors, assumption and input errors, analytical errors, user
errors, and interpretation errors are some of the issues discussed in this chapter. Some of the issues and concerns raised for management’s consideration in performing due diligence include
challenging distributional assumptions, critical success factors, impact drivers, truncation, forecast validity, endpoints, extreme values, structural breaks, values at risk, a priori expectations,
back-casting, statistical validity, specification errors, out of range forecasts, heteroskedasticity, multicollinearity, omitted variables, spurious relationships, causality and correlation,
autoregressive processes, seasonality, random walks, and stochastic processes.
Chapter 16—Changing a Corporate Culture Advanced analytics is hard to explain to management. So, how do you get risk analysis accepted as the norm into a corporation, especially if your industry is
highly conservative? It is a guarantee in companies like these that an analyst showing senior management a series of fancy and mathematically sophisticated models will be thrown out of the office
together with his or her results, and have the door slammed shut. Change management is the topic of discussion in Chapter 16. Explaining the results and convincing management appropriately go hand in
hand with the characteristics of the analytical tools, which, if they satisfy certain change management requisites, can make acceptance easier. The approach that guarantees acceptance has to be three
pronged: Top, middle, and junior levels must all get in on the action. Change management specialists underscore that change comes more easily if
the methodologies to be accepted are applicable to the problems at hand, are accurate and consistent, provide value-added propositions, are easy to explain, have comparative advantage over
traditional approaches, are compatible with the old, have modeling flexibility, are backed by executive sponsorship, and are influenced and championed by external parties including competitors,
customers, counterparties, and vendors.
ADDITIONAL MATERIAL The book concludes with the ten mathematical tables used in the analyses throughout the book and the answers to the questions and exercises at the end of each chapter. The CD-ROM
included with the book holds 30-day trial versions of Risk Simulator and Real Options Super Lattice Solver software, as well as sample models and getting started videos to help the reader get a jump
start on modeling risk.
Risk Identification
Moving Beyond Uncertainty
A BRIEF HISTORY OF RISK: WHAT EXACTLY IS RISK? Since the beginning of recorded history, games of chance have been a popular pastime. Even in Biblical accounts, Roman soldiers cast lots for Christ’s
robes. In earlier times, chance was something that occurred in nature, and humans were simply subjected to it as a ship is to the capricious tosses of the waves in an ocean. Even up to the time of
the Renaissance, the future was thought to be simply a chance occurrence of completely random events and beyond the control of humans. However, with the advent of games of chance, human greed has
propelled the study of risk and chance to evermore closely mirror real-life events. Although these games initially were played with great enthusiasm, no one actually sat down and figured out the
odds. Of course, the individual who understood and mastered the concept of chance was bound to be in a better position to profit from such games of chance. It was not until the mid-1600s that the
concept of chance was properly studied, and the first such serious endeavor can be credited to Blaise Pascal, one of the fathers of modern choice, chance, and probability.1 Fortunately for us, after
many centuries of mathematical and statistical innovations from pioneers such as Pascal, Bernoulli, Bayes, Gauss, Laplace, and Fermat, our modern world of uncertainty can be explained with much more
elegance through methodological applications of risk and uncertainty. To the people who lived centuries ago, risk was simply the inevitability of chance occurrence beyond the realm of human control.
Nonetheless, many phony soothsayers profited from their ability to convincingly profess their clairvoyance by simply stating the obvious or reading the victims’ body language and telling them what
they wanted to hear. We modern-day humans, ignoring for the moment the occasional seers among us, with our fancy technological achievements, are still susceptible to risk and uncertainty. We may be
able to predict the orbital paths of planets in our solar system with astounding accuracy, or the escape velocity required to shoot a man from the Earth to the Moon, but when it comes to predicting a firm's revenues the following year, we are at a loss. Humans have been struggling with risk our entire existence but, through trial and error, and through the evolution of human knowledge and thought, have
devised ways to describe, quantify, hedge, and take advantage of risk. Clearly, the entire realm of risk analysis is vast and would most probably be intractable within the few chapters of a book.
Therefore, this book is concerned with only a small niche of risk, namely applied business risk modeling and analysis. Even in the areas of applied business risk analysis, the diversity is great. For
instance, business risk can be roughly divided into the areas of operational risk management and financial risk management. In financial risk, one can look at market risk, private risk, credit risk,
default risk, maturity risk, liquidity risk, inflationary risk, interest rate risk, country risk, and so forth. This book focuses on the application of risk analysis in the sense of how to adequately
apply the tools to identify, understand, quantify, and diversify risk such that it can be hedged and managed more effectively. These tools are generic enough that they can be applied across a whole
spectrum of business conditions, industries, and needs. Finally, understanding this text in its entirety together with Real Options Analysis, Second Edition (Wiley, 2005) and the associated Risk
Simulator and Real Options SLS software are required prerequisites for the Certified Risk Analyst or CRA certification (see www.realoptionsvaluation.com for more details).
UNCERTAINTY VERSUS RISK Risk and uncertainty are very different-looking animals, but they are of the same species; however, the lines of demarcation are often blurred. A distinction is critical at
this juncture before proceeding and worthy of segue. Suppose I am senseless enough to take a skydiving trip with a good friend and we board a plane headed for the Palm Springs desert. While airborne
at 10,000 feet and watching our lives flash before our eyes, we realize that in our haste we forgot to pack our parachutes on board. However, there is an old, dusty, and dilapidated emergency
parachute on the plane. At that point, both my friend and I have the same level of uncertainty—the uncertainty of whether the old parachute will open, and if it does not, whether we will fall to our
deaths. However, being the risk-averse, nice guy I am, I decide to let my buddy take the plunge. Clearly, he is the one taking the plunge and the same person taking the risk. I bear no risk at this
time while my friend bears all the risk.2 However, we both have the same level of uncertainty as to whether the parachute will actually fail. In fact, we both have the same level of uncertainty as to
the outcome of the day’s trading on the New York Stock Exchange—which has absolutely no impact on whether we live or die
that day. Only when he jumps and the parachute opens will the uncertainty become resolved through the passage of time, events, and action. However, even when the uncertainty is resolved with the
opening of the parachute, the risk still exists as to whether he will land safely on the ground below. Therefore, risk is something one bears and is the outcome of uncertainty. Just because there is
uncertainty, there could very well be no risk. If the only thing that bothers a U.S.-based firm’s CEO is the fluctuation in the foreign exchange market of the Zambian kwacha, then I might suggest
shorting some kwachas and shifting his portfolio to U.S.-based debt. This uncertainty, if it does not affect the firm’s bottom line in any way, is only uncertainty and not risk. This book is
concerned with risk by performing uncertainty analysis—the same uncertainty that brings about risk by its mere existence as it impacts the value of a particular project. It is further assumed that
the end user of this uncertainty analysis uses the results appropriately, whether the analysis is for identifying, adjusting, or selecting projects with respect to their risks, and so forth.
Otherwise, running millions of fancy simulation trials and letting the results “marinate” will be useless. By running simulations on the foreign exchange market of the kwacha, an analyst sitting in a
cubicle somewhere in downtown Denver will in no way reduce the risk of the kwacha in the market or the firm’s exposure to the same. Only by using the results from an uncertainty simulation analysis
and finding ways to hedge or mitigate the quantified fluctuation and downside risks of the firm’s foreign exchange exposure through the derivatives market could the analyst be construed as having
performed risk analysis and risk management. To further illustrate the differences between risk and uncertainty, suppose we are attempting to forecast the stock price of Microsoft (MSFT). Suppose
MSFT is currently priced at $25 per share, and historical prices place the stock at 21.89% volatility. Now suppose that for the next 5 years, MSFT does not engage in any risky ventures and stays
exactly the way it is, and further suppose that the entire economic and financial world remains constant. This means that risk is fixed and unchanging; that is, volatility is unchanging for the next
5 years. However, the price uncertainty still increases over time; that is, the width of the forecast intervals will still increase over time. For instance, Year 0’s forecast is known and is $25.
However, as we progress one day, MSFT will most probably vary between $24 and $26. One year later, the uncertainty bounds may be between $20 and $30. Five years into the future, the boundaries might
be between $10 and $50. So, in this example, uncertainties increase while risks remain the same. Therefore, risk is not equal to uncertainty. This idea is, of course, applicable to any forecasting
approach whereby it becomes more and more difficult to forecast the future albeit the same risk. Now, if risk changes over time, the bounds of uncertainty get more complicated (e.g., uncertainty
bounds of sinusoidal waves with discrete event jumps).
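The MSFT illustration, constant risk but widening uncertainty, can be sketched with a lognormal forecast band whose width grows with the square root of time. Zero drift and a 90 percent band are illustrative assumptions:

```python
import math

def price_bounds(spot=25.0, vol=0.2189, years=5, z=1.645):
    """Approximate 90% forecast band for a lognormal price with zero drift:
    volatility (risk) is held constant, yet the band widens with sqrt(t)."""
    bounds = []
    for t in range(1, years + 1):
        spread = z * vol * math.sqrt(t)
        bounds.append((spot * math.exp(-spread), spot * math.exp(spread)))
    return bounds

bands = price_bounds()
widths = [hi - lo for lo, hi in bands]  # uncertainty grows each year
```

Every yearly band uses the same 21.89 percent volatility, yet each band is wider than the last, which is precisely the point: risk is unchanged while uncertainty compounds over the forecast horizon.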
In other instances, risk and uncertainty are used interchangeably. For instance, suppose you play a coin-toss game—bet $0.50 and if heads come up you win $1, but you lose everything if tails appear.
The uncertainty here is whether heads or tails will appear; the risk is that tails appear and you lose everything. Given that tails appear, you lose everything; hence, uncertainty brings
with it risk. Uncertainty is the possibility of an event occurring and risk is the ramification of such an event occurring. People tend to use these two terms interchangeably. In discussing
uncertainty, there are three levels of uncertainties in the world: the known, the unknown, and the unknowable. The known is, of course, what we know will occur and are certain of its occurrence
(contractual obligations or a guaranteed event); the unknown is what we do not know and can be simulated. These events will become known through the passage of time, events, and action (the
uncertainty of whether a new drug or technology can be developed successfully will become known after spending years and millions on research programs—it will either work or not, and we will know
this in the future), and these events carry with them risks, but these risks will be reduced or eliminated over time. However, unknowable events carry both uncertainty and risk, such that the totality of
the risk and uncertainty may not change through the passage of time, events, or actions. These are events such as when the next tsunami or earthquake will hit, or when another act of terrorism will
occur around the world. When an event occurs, uncertainty becomes resolved, but risk still remains (another one may or may not hit tomorrow). In traditional analysis, we care about the known factors.
In risk analysis, we care about the unknown and unknowable factors. The unknowable factors are easy to hedge—get the appropriate insurance! That is, do not do business in a war-torn country, get away
from politically unstable economies, buy hazard and business interruption insurance, and so forth. It is for the unknown factors that risk analysis will provide the most significant amount of value.
WHY IS RISK IMPORTANT IN MAKING DECISIONS? Risk should be an important part of the decision-making process; otherwise bad decisions may be made without an assessment of risk. For instance, suppose
projects are chosen based simply on an evaluation of returns; clearly the highest-return project will be chosen over lower-return projects. In financial theory, projects with higher returns will in
most cases bear higher risks.3 Therefore, instead of relying purely on bottom-line profits, a project should be evaluated based on its returns as well as its risks. Figures 1.1 and 1.2 illustrate the
errors in judgment when risks are ignored.
The concepts of risk and uncertainty are related but different. Uncertainty involves variables that are unknown and changing, but its uncertainty will become known and resolved through the passage of
time, events, and action. Risk is something one bears and is the outcome of uncertainty. Sometimes, risk may remain constant while uncertainty increases over time.
Figure 1.1 lists three mutually exclusive projects with their respective costs to implement, expected net returns (net of the costs to implement), and risk levels (all in present values).4 Clearly,
for the budget-constrained manager, the cheaper the project the better, resulting in the selection of Project X.5 The returns-driven manager will choose Project Y with the highest returns, assuming
that budget is not an issue. Project Z will be chosen by the risk-averse manager as it provides the least amount of risk while providing a positive net return. The upshot is that with three different
projects and three different managers, three different decisions will be made. Which manager is correct and why? Figure 1.2 shows that Project Z should be chosen. For illustration purposes, suppose
all three projects are independent and mutually exclusive,6 and that an unlimited number of projects from each category can be chosen but the budget is constrained at $1,000. Therefore, with this
$1,000 budget, 20 project Xs can be chosen, yielding $1,000 in net returns and $500 risks, and so forth. It is clear from Figure 1.2 that project Z is the best project as for the same level of net
returns ($1,000), the least amount of risk is undertaken ($100). Another way of viewing this selection is that for each $1 of returns obtained, only $0.1 amount of risk is involved on average, or
that for each $1 of risk, $10 in returns are obtained on average. This example illustrates the concept of bang for the buck, or getting the best value with the least amount of risk.

FIGURE 1.1  Why is risk important?

Name of Project    Cost    Returns    Risk
Project X          $50     $50        $25
Project Y          $250    $200       $200
Project Z          $100    $100       $10

Project X for the cost- and budget-constrained manager. Project Y for the returns-driven and nonresource-constrained manager. Project Z for the risk-averse manager. Project Z for the smart manager.

FIGURE 1.2  Adding an element of risk.

Looking at bang for the buck, X (2), Y (1), Z (10), Project Z should be chosen. With a $1,000 budget, the following can be obtained:
Project X: 20 Project Xs returning $1,000, with $500 risk
Project Y: 4 Project Ys returning $800, with $800 risk
Project Z: 10 Project Zs returning $1,000, with $100 risk
Project X: for each $1 return, $0.5 risk is taken
Project Y: for each $1 return, $1.0 risk is taken
Project Z: for each $1 return, $0.1 risk is taken
Project X: for each $1 of risk taken, $2 return is obtained
Project Y: for each $1 of risk taken, $1 return is obtained
Project Z: for each $1 of risk taken, $10 return is obtained
Conclusion: Risk is important. Ignoring risks results in making the wrong decision.

An even more blatant example is if there are several different projects with identical single-point average net returns of $10 million each. Without risk analysis, a manager
should in theory be indifferent in choosing any of the projects.7 However, with risk analysis, a better decision can be made. For instance, suppose the first project has a 10 percent chance of
exceeding $10 million, the second a 15 percent chance, and the third a 55 percent chance. The third project, therefore, is the best bet.
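The arithmetic behind Figures 1.1 and 1.2 can be reproduced directly; this sketch simply recomputes the bang-for-the-buck ratios from the stated costs, returns, and risks:

```python
projects = {
    # name: (cost, net_return, risk), all in present values (Figure 1.1)
    "X": (50, 50, 25),
    "Y": (250, 200, 200),
    "Z": (100, 100, 10),
}

budget = 1000
summary = {}
for name, (cost, ret, risk) in projects.items():
    n = budget // cost                      # copies that fit the budget
    summary[name] = {
        "count": n,
        "total_return": n * ret,
        "total_risk": n * risk,
        "return_per_risk": ret / risk,      # bang for the buck
    }

best = max(summary, key=lambda k: summary[k]["return_per_risk"])
```

Running this confirms the figure: twenty X's carry $500 of risk and four Y's carry $800, while ten Z's deliver the same $1,000 of returns at only $100 of risk, so Project Z wins on return per unit of risk.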
DEALING WITH RISK THE OLD-FASHIONED WAY Businesses have been dealing with risk since the beginning of the history of commerce. In most cases, managers have looked at the risks of a particular
project, acknowledged their existence, and moved on. Little quantification was performed in the past. In fact, most decision makers look only to single-point estimates of a project's profitability.
Figure 1.3 shows an example of a single-point estimate. The estimated net revenue of $30 is simply that, a single point whose probability of occurrence is close to zero.8 Even in the simple model
shown in Figure 1.3, the effects of interdependencies are ignored, and in traditional modeling jargon, we have the problem of garbage in, garbage out (GIGO). As an example of interdependencies, the
units sold are probably negatively correlated to the price of the product,9 and positively correlated to the average variable cost;10 ignoring these effects in a single-point estimate will yield
grossly incorrect results. For instance, if the unit sales variable becomes 11 instead of 10, the resulting revenue may not simply be $35. The net revenue may actually decrease due to an increase in variable cost per unit while the sale price may actually be slightly lower to accommodate this increase in unit sales. Ignoring these interdependencies will reduce the accuracy of the model.

FIGURE 1.3  Single-point estimate.

Unit Sales: 10           Sales Price: $10
Total Revenue: $100      Variable Cost/Unit: $5
Total Fixed Cost: $20    Total Cost: $70
Net Revenue: $30

Interdependencies mean GIGO. Single-point estimate: how confident are you of the analysis outcome? This may be dead wrong!
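The single-point model of Figure 1.3 is a purely deterministic calculation, which a short sketch makes explicit:

```python
def net_revenue(units=10, price=10.0, var_cost=5.0, fixed_cost=20.0):
    """Deterministic single-point model of Figure 1.3."""
    total_revenue = units * price
    total_cost = fixed_cost + units * var_cost
    return total_revenue - total_cost

base = net_revenue()           # the $30 single-point estimate
naive = net_revenue(units=11)  # bumping units alone ignores interdependencies
```

The naive bump to 11 units returns $35, which is exactly the answer the text warns about: once price and variable cost move together with unit sales, changing one input in isolation no longer gives the right figure.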
A rational manager would choose projects based not only on returns but also on risks. The best projects tend to be those with the best bang for the buck, or the best returns subject to some specified risks.
One approach used to deal with risk and uncertainty is the application of scenario analysis, as seen in Figure 1.4. Suppose the worst-case, nominal-case, and best-case scenarios are applied to the
unit sales; the resulting three scenarios’ net revenues are obtained. As earlier, the problems of interdependencies are not addressed. The net revenues obtained are simply too variable, ranging from
$5 to $55. Not much can be determined from this analysis. A related approach is to perform what-if or sensitivity analysis as seen in Figure 1.5. Each variable is perturbed and varied a prespecified
amount and the resulting change in net revenues is captured. This approach is great for understanding which variables drive or impact the bottom line the most. A related approach is the use of
tornado and sensitivity charts as detailed in Chapter 6, Pandora's Toolbox, which looks at a series of simulation tools. These approaches were usually the extent to which risk and uncertainty analysis were traditionally performed. Clearly, a better and more robust approach is required.

FIGURE 1.4  Scenario analysis.

Unit Sales: 10           Sales Price: $10
Total Revenue: $100      Variable Cost/Unit: $5
Total Fixed Cost: $20    Total Cost: $70

Net Revenue: best case $55, most likely $30, worst case $5
Outcomes are too variable; which will occur? The best, most likely, and worst-case scenarios are usually simply wild guesses!

This is the point where simulation comes in. Figure 1.6 shows how simulation can be viewed as simply an
extension of the traditional approaches of sensitivity and scenario testing. The critical success drivers or the variables that affect the bottom-line net-revenue variable the most, which at the same
time are uncertain, are simulated. In simulation, the interdependencies are accounted for by using correlations. The uncertain variables are then simulated thousands of times to emulate all potential
permutations and combi-
• •
Unit Sales Sales Price
10 $10
• •
Total Revenue Variable Cost/Unit
$100 $5
• •
Total Fixed Cost Total Cost
$20 $70
What-If Analysis
Net Revenue
Take original $20 and change by $1
What-If Analysis Take original 10 and change by 1 unit
Captures the marginal impacts, but which condition will really occur? Great in capturing sensitivities!
FIGURE 1.5
What-if sensitivity analysis.
Moving Beyond Uncertainty
• •
Unit Sales Sales Price
10 $10
• •
Total Revenue Variable Cost/Unit
$100 $5
• •
Total Fixed Cost Total Cost
$20 $70
Net Revenue
Simulate Simulate Accounts for interrelationships Simulate
Simulate thousands of times for each variable
Results will include probabilities that a certain outcome will occur
FIGURE 1.6
Simulation approach.
nations of outcomes. The resulting net revenues from these simulated potential outcomes are tabulated and analyzed. In essence, in its most basic form, simulation is simply an enhanced version of
traditional approaches such as sensitivity and scenario analysis, but automatically performed thousands of times while accounting for all the dynamic interactions between the simulated variables.
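In its most basic form, the procedure just described can be sketched in a few lines: simulate the uncertain inputs of the Figures 1.4 to 1.6 net-revenue model thousands of times with correlations imposed, then read probabilities off the resulting distribution. The normal distributions, standard deviations, and correlation values below are illustrative assumptions, not figures from the text:

```python
import numpy as np

# A minimal sketch of a correlated Monte Carlo simulation of the simple
# net-revenue model (Unit Sales 10, Sales Price $10, Variable Cost/Unit $5,
# Total Fixed Cost $20).  The distributions and correlations are assumed.
rng = np.random.default_rng(42)
TRIALS = 5_000

means = np.array([10.0, 10.0, 5.0])    # unit sales, sales price, variable cost/unit
stdevs = np.array([2.0, 1.0, 0.5])

# Assumed correlations: price moves against unit sales; variable cost
# rises with unit sales.
corr = np.array([
    [ 1.0, -0.5,  0.3],
    [-0.5,  1.0,  0.0],
    [ 0.3,  0.0,  1.0],
])
cov = corr * np.outer(stdevs, stdevs)  # covariance matrix from correlations

draws = rng.multivariate_normal(means, cov, size=TRIALS)
units, price, var_cost = draws.T

FIXED_COST = 20.0
net_revenue = units * price - units * var_cost - FIXED_COST

lo, hi = np.percentile(net_revenue, [5, 95])
print(f"mean net revenue: ${net_revenue.mean():.2f}")
print(f"90% of trials fall between ${lo:.2f} and ${hi:.2f}")
```

Drawing the three inputs jointly from a correlated distribution, rather than perturbing them one at a time, is exactly what distinguishes this from the scenario and what-if approaches.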
The resulting net revenues from simulation, as seen in Figure 1.7, show that there is a 90 percent probability that the net revenues will fall between $19.44 and $41.25, with a 5 percent worst-case scenario of net revenues falling below $19.44. Rather than having only three scenarios, simulation created 5,000 scenarios, or trials, in which multiple variables (unit sales, sale price, and variable cost per unit) are simulated and change simultaneously, while their respective relationships or correlations are accounted for.

FIGURE 1.7 Simulation results.

THE LOOK AND FEEL OF RISK AND UNCERTAINTY

In most financial risk analyses, the first step is to create a series of free cash flows (FCF), which can take the shape of an income statement or discounted cash-flow (DCF) model. The resulting deterministic free cash flows are depicted on a time line, akin to that shown in Figure 1.8. These cash-flow figures are in most cases forecasts of the unknown future. In this simple example, the cash flows are assumed to follow a straight-line growth curve (of course, other shaped curves also can be constructed).

FIGURE 1.8 The intuition of risk: deterministic analysis. (A time line of free cash flows, FCF1 = $500 through FCF5 = $900 in Years 1 to 5, discounted at WACC = 30%; zero uncertainty equals zero volatility. This straight-line cash-flow projection is the basics of DCF analysis and assumes a static and known set of future cash flows.)

Similar forecasts can be constructed using historical data and fitting these data to a time-series model or a regression analysis. Whatever the method of obtaining said forecasts or the shape of the growth curve,
these are point estimates of the unknown future. Performing a financial analysis on these static cash flows provides an accurate value of the project if and only if all the future cash flows are
known with certainty—that is, no uncertainty exists. However, in reality, business conditions are hard to forecast. Uncertainty exists, and the actual levels of future cash flows may look more like
those in Figure 1.9; that is, at certain time periods, actual cash flows may be above, below, or at the forecast levels. For instance, at any time period, the actual cash flow may fall within a range
of figures with a certain percent probability. As an example, the first year’s cash flow may fall anywhere between $480 and $520. The actual values are shown to fluctuate around the forecast values
at an average volatility of 20 percent. (We use volatility here as a measure of uncertainty; that is, the higher the volatility, the higher the level of uncertainty, where at zero uncertainty the outcomes are 100 percent certain.) Certainly this example provides a much more accurate view of the true nature of business conditions, which are fairly difficult to predict with any amount of certainty.

FIGURE 1.9 The intuition of risk: Monte Carlo simulation. (Actual cash flows fluctuate around the forecast values at 20% volatility: FCF1 = $500 ± 20, FCF2 = $600 ± 30, FCF3 = $700 ± 35, FCF4 = $800 ± 50, FCF5 = $900 ± 70, with WACC = 30%. At different times, actual cash flows may be above, below, or at the forecast value line due to uncertainty and risk, so a static DCF analysis may over- or undervalue the project.)

FIGURE 1.10 The intuition of risk: the face of risk. (Two sample actual cash-flow paths around the straight-line forecast, at 5% and 20% volatility. The higher the risk, the higher the volatility and the higher the fluctuation of actual cash flows around the forecast value; when volatility is zero, the values collapse to the forecast straight-line static value.)

Figure 1.10 shows two sample actual cash flows around the straight-line forecast value. The
higher the uncertainty around the actual cash-flow levels, the higher the volatility. The darker line with 20 percent volatility fluctuates more wildly around the forecast values. These values can be
quantified using Monte Carlo simulation fairly easily but cannot be properly accounted for using more simplistic traditional methods such as sensitivity or scenario analyses.
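Figure 1.10's point can be sketched numerically: jitter the straight-line forecast with shocks whose size scales with volatility, and watch the deviation collapse to zero at zero volatility. The multiplicative shock model below is an illustrative assumption, not the author's exact construction:

```python
import numpy as np

# Sketch of Figure 1.10: actual cash flows fluctuate around the
# straight-line forecast, and more wildly at higher volatility.
rng = np.random.default_rng(7)
forecast = np.array([500.0, 600.0, 700.0, 800.0, 900.0])

max_devs = {}
for vol in (0.0, 0.05, 0.20):
    shocks = rng.normal(0.0, vol, size=forecast.size)
    actual = forecast * np.exp(shocks)   # vol = 0 collapses to the forecast
    max_devs[vol] = float(np.abs(actual - forecast).max())
    print(f"volatility {vol:.0%}: max deviation from forecast = {max_devs[vol]:.2f}")
```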
INTEGRATED RISK ANALYSIS FRAMEWORK

Before diving into the different risk analysis methods in the remaining chapters of the book, it is important to first understand the integrated risk analysis
framework and how these different techniques are related in a risk analysis and risk management context. This framework comprises eight distinct phases of a successful and comprehensive risk analysis
implementation, going from a qualitative management screening process to creating clear and concise reports for management. The process was developed by the author based on previous successful
implementations of risk analysis, forecasting, real options, valuation, and optimization projects both in the consulting arena and in industry-specific problems. These phases can be performed either
in isolation or together in sequence for a more robust integrated analysis.
Figure 1.11 shows the integrated risk analysis process up close. We can segregate the process into the following eight simple steps:
1. Qualitative management screening.
2. Time-series and regression forecasting.
3. Base case net present value analysis.
4. Monte Carlo simulation.
5. Real options problem framing.
6. Real options modeling and analysis.
7. Portfolio and resource optimization.
8. Reporting and update analysis.
1. Qualitative Management Screening

Qualitative management screening is the first step in any integrated risk analysis process. Management has to decide which projects, assets, initiatives, or
strategies are viable for further analysis, in accordance with the firm’s mission, vision, goal, or overall business strategy. The firm’s mission, vision, goal, or overall business strategy may
include market penetration strategies, competitive advantage, technical, acquisition, growth, synergistic, or globalization issues. That is, the initial list of projects should be qualified in terms
of meeting management’s agenda. Often at this point the most valuable insight is created as management frames the complete problem to be resolved and the various risks to the firm are identified and
flushed out.
2. Time-Series and Regression Forecasting

The future is then forecasted using time-series analysis or multivariate regression analysis if historical or comparable data exist. Otherwise, other
qualitative forecasting methods may be used (subjective guesses, growth rate assumptions, expert opinions, Delphi method, and so forth). In a financial context, this is the step where future
revenues, sale price, quantity sold, volume, production, and other key revenue and cost drivers are forecasted. See Chapters 8 and 9 for details on forecasting and using the author’s Risk Simulator
software to run time-series, extrapolation, stochastic process, ARIMA, and regression forecasts.
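A minimal sketch of this step fits a straight-line trend to hypothetical historical revenues and extrapolates it; the data below are illustrative, and the real methods (ARIMA, stochastic processes, multivariate regression) are covered in Chapters 8 and 9:

```python
import numpy as np

# Hypothetical historical revenues for five past periods (illustrative only).
history = np.array([420.0, 455.0, 510.0, 540.0, 590.0])
periods = np.arange(1, len(history) + 1)

# Fit a straight-line trend (a simple regression forecast).
slope, intercept = np.polyfit(periods, history, deg=1)

# Extrapolate the next three periods.
future = np.arange(len(history) + 1, len(history) + 4)
forecast = intercept + slope * future
print("trend slope per period:", round(slope, 2))
print("forecast:", np.round(forecast, 1))
```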
3. Base Case Net Present Value Analysis

For each project that passes the initial qualitative screens, a discounted cash flow model is created. This model serves as the base case analysis where a net
present value (NPV) is calculated for each project, using the forecasted values from the previous step. This step also applies if only a single project is under evaluation. This net present value is
calculated using the traditional approach of using the forecast revenues and costs, and discounting the net of these revenues and costs at an appropriate risk-adjusted rate. The return on investment and other metrics are also generated here.

FIGURE 1.11 Integrated risk analysis process. (A flowchart of the eight phases: 1. List of projects and strategies to evaluate, already through qualitative screening; 2. Base case projections for each project via time-series forecasting and regression analysis; 3. Static financial (discounted cash flow) models development, where traditional analysis stops; 4. Dynamic Monte Carlo simulation, whose outputs become inputs into the real options analysis and from which volatility is computed; 5. Framing real options, where the relevant projects are chosen and the project- or portfolio-level real options are framed; 6. Options analytics, simulation, and optimization, calculated through binomial lattices and closed-form partial-differential models with simulation; 7. Portfolio optimization and asset allocation, an optional step when multiple projects require efficient asset allocation under budgetary constraints; 8. Reports presentation and update analysis, done iteratively over time. The phases map onto risk identification, prediction, modeling, analysis, hedging, diversification, mitigation, and management.)
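As a small worked check of this step, the present value of the Figure 1.8 cash-flow series (FCF1 through FCF5 of $500 to $900 at a 30 percent WACC) can be computed directly. The `npv` helper below is a hypothetical name for illustration, not a function from the Risk Simulator software:

```python
# Base-case present value of the deterministic cash flows from Figure 1.8,
# discounted at WACC = 30%.  (A full model would also subtract the Year 0
# investment outlay to arrive at NPV.)

def npv(rate: float, cash_flows: list[float]) -> float:
    """Discount each year-t cash flow by (1 + rate)**t, t starting at 1."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

fcf = [500.0, 600.0, 700.0, 800.0, 900.0]
print(f"PV of future cash flows: ${npv(0.30, fcf):,.2f}")  # about $1,580.76
```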
4. Monte Carlo Simulation

Because the static discounted cash flow produces only a single-point estimate result, there is oftentimes little confidence in its accuracy given that future events that
affect forecast cash flows are highly uncertain. To better estimate the actual value of a particular project, Monte Carlo simulation should be employed next. See Chapters 4 and 5 for details on
running Monte Carlo simulations using the author’s Risk Simulator software. Usually, a sensitivity analysis is first performed on the discounted cash flow model; that is, setting the net present
value as the resulting variable, we can change each of its precedent variables and note the change in the resulting variable. Precedent variables include revenues, costs, tax rates, discount rates,
capital expenditures, depreciation, and so forth, which ultimately flow through the model to affect the net present value figure. By tracing back all these precedent variables, we can change each one
by a preset amount and see the effect on the resulting net present value. A graphical representation can then be created, which is often called a tornado chart (see Chapter 6 on using Risk
Simulator’s simulation analysis tools such as tornado charts, spider charts, and sensitivity charts), because of its shape, where the most sensitive precedent variables are listed first, in
descending order of magnitude. Armed with this information, the analyst can then decide which key variables are highly uncertain in the future and which are deterministic. The uncertain key variables
that drive the net present value and, hence, the decision are called critical success drivers. These critical success drivers are prime candidates for Monte Carlo simulation. Because some of these
critical success drivers may be correlated—for example, operating costs may increase in proportion to quantity sold of a particular product, or prices may be inversely correlated to quantity sold—a
correlated Monte Carlo simulation may be required. Typically, these correlations can be obtained through historical data. Running correlated simulations provides a much closer approximation to the
variables’ real-life behaviors.
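The tornado procedure described here can be sketched as perturbing each precedent variable by a preset amount and ranking the resulting swings in NPV. The toy NPV model and the ±10 percent perturbation size below are illustrative assumptions, not the author's spreadsheet:

```python
# Sketch of a tornado analysis: perturb each precedent variable of a toy
# NPV model by +/-10% and rank variables by the swing in the result.

def npv_model(units=10.0, price=10.0, var_cost=5.0, fixed=20.0,
              rate=0.30, years=5):
    """NPV of a constant annual net revenue stream."""
    net = units * price - units * var_cost - fixed
    return sum(net / (1.0 + rate) ** t for t in range(1, years + 1))

base = npv_model()
print(f"base NPV = {base:.2f}")

swings = {}
for name in ("units", "price", "var_cost", "fixed"):
    defaults = dict(units=10.0, price=10.0, var_cost=5.0, fixed=20.0)
    hi = npv_model(**{**defaults, name: defaults[name] * 1.10})
    lo = npv_model(**{**defaults, name: defaults[name] * 0.90})
    swings[name] = abs(hi - lo)

# Most sensitive precedent variables first, as in a tornado chart.
for name, swing in sorted(swings.items(), key=lambda kv: -kv[1]):
    print(f"{name:8s} swing = {swing:7.2f}")
```

Here price dominates the ranking, so it would be a prime candidate critical success driver for simulation.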
5. Real Options Problem Framing

The question now is: after quantifying risks in the previous step, what next? The risk information obtained somehow needs to be converted into actionable
intelligence. Just because risk has been quantified to be such and such using Monte Carlo simulation, so what, and what do we do about it? The answer is to use real options analysis to hedge these
risks, to value these risks, and to position yourself to take advantage of the risks. The first step
in real options is to generate a strategic map through the process of framing the problem. Based on the overall problem identification occurring during the initial qualitative management screening
process, certain strategic optionalities would have become apparent for each particular project. The strategic optionalities may include, among other things, the option to expand, contract, abandon,
switch, choose, and so forth. Based on the identification of strategic optionalities that exist for each project or at each stage of the project, the analyst can then choose from a list of options to
analyze in more detail. Real options are added to the projects to hedge downside risks and to take advantage of upside swings.
6. Real Options Modeling and Analysis

Through the use of Monte Carlo simulation, the resulting stochastic discounted cash flow model will have a distribution of values. Thus, simulation models,
analyzes, and quantifies the various risks and uncertainties of each project. The result is a distribution of the NPVs and the project’s volatility. In real options, we assume that the underlying
variable is the future profitability of the project, which is the future cash flow series. An implied volatility of the future free cash flow or underlying variable can be calculated through the
results of a Monte Carlo simulation previously performed. Usually, the volatility is measured as the standard deviation of the logarithmic returns on the free cash flow stream. In addition, the
present value of future cash flows for the base case discounted cash flow model is used as the initial underlying asset value in real options modeling. Using these inputs, real options analysis is
performed to obtain the projects’ strategic option values— see Chapters 12 and 13 for details on understanding the basics of real options and on using the Real Options Super Lattice Solver software.
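The volatility measure mentioned here, the standard deviation of logarithmic returns on the free cash flow stream, can be sketched directly. The Figure 1.8 series is used purely for illustration; in practice the returns would come from the simulated cash-flow paths:

```python
import math

# Volatility as the sample standard deviation of logarithmic returns
# on a free cash flow stream (illustrative data from Figure 1.8).
fcf = [500.0, 600.0, 700.0, 800.0, 900.0]

log_returns = [math.log(b / a) for a, b in zip(fcf, fcf[1:])]
mean_r = sum(log_returns) / len(log_returns)
var_r = sum((r - mean_r) ** 2 for r in log_returns) / (len(log_returns) - 1)
volatility = math.sqrt(var_r)
print(f"volatility estimate: {volatility:.2%}")
```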
7. Portfolio and Resource Optimization

Portfolio optimization is an optional step in the analysis. If the analysis is done on multiple projects, management should view the results as a portfolio of
rolled-up projects because the projects are in most cases correlated with one another, and viewing them individually will not present the true picture. As firms do not only have single projects,
portfolio optimization is crucial. Given that certain projects are related to others, there are opportunities for hedging and diversifying risks through a portfolio. Because firms have limited budgets and face time and resource constraints, while at the same time having requirements for certain overall levels of returns and risk tolerances, portfolio optimization takes into account
all these to create an optimal portfolio mix. The analysis will provide the optimal allocation of investments across multiple projects. See Chapters 10 and 11 for details on using Risk Simulator to
perform portfolio optimization.
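As a deliberately tiny caricature of this step, one can brute-force the subset of candidate projects that maximizes total NPV under a budget cap. The project names, costs, and NPVs below are hypothetical; real portfolio optimization would also handle correlations, risk tolerances, and continuous allocations:

```python
from itertools import combinations

# Hypothetical projects: (name, cost, NPV).  Illustrative numbers only.
projects = [("A", 40.0, 60.0), ("B", 30.0, 40.0),
            ("C", 20.0, 35.0), ("D", 10.0, 12.0)]
BUDGET = 60.0

best_value, best_mix = 0.0, ()
for k in range(1, len(projects) + 1):
    for mix in combinations(projects, k):
        cost = sum(p[1] for p in mix)
        value = sum(p[2] for p in mix)
        if cost <= BUDGET and value > best_value:
            best_value, best_mix = value, mix

print("chosen projects:", [p[0] for p in best_mix])
print("portfolio NPV:", best_value)
```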
8. Reporting and Update Analysis

The analysis is not complete until reports can be generated. Not only are results presented, but the process should also be shown. Clear, concise, and precise
explanations transform a difficult black-box set of analytics into transparent steps. Management will never accept results coming from black boxes if they do not understand where the assumptions or
data originate and what types of mathematical or financial massaging takes place. Risk analysis assumes that the future is uncertain and that management has the right to make midcourse corrections
when these uncertainties become resolved or risks become known; the analysis is usually done ahead of time and, thus, ahead of such uncertainty and risks. Therefore, when these risks become known,
the analysis should be revisited to incorporate the decisions made or to revise any input assumptions. Sometimes, for long-horizon projects, several iterations of the real options analysis should be
performed, where future iterations are updated with the latest data and assumptions. Understanding the steps required to undertake an integrated risk analysis is important because it provides insight
not only into the methodology itself, but also into how it evolves from traditional analyses, showing where the traditional approach ends and where the new analytics start.
QUESTIONS

1. Why is risk important in making decisions?
2. Describe the concept of bang for the buck.
3. Compare and contrast risk and uncertainty.
Part Two: Risk Evaluation
From Risk to Riches

TAMING THE BEAST

Risky ventures are the norm in the daily business world. The mere mention of names such as George Soros, John Meriwether, Paul Reichmann, and Nicholas Leeson, or
firms such as Long Term Capital Management, Metallgesellschaft, Barings Bank, Bankers Trust, Daiwa Bank, Sumitomo Corporation, Merrill Lynch, and Citibank brings a shrug of disbelief and fear. These
names are some of the biggest in the world of business and finance. Their claim to fame is not simply being the best and brightest individuals or being the largest and most respected firms, but for
bearing the stigma of being involved in highly risky ventures that turned sour almost overnight. George Soros was and still is one of the most respected names in high finance; he is known globally
for his brilliance and exploits. Paul Reichmann was a reputable and brilliant real estate and property tycoon. Between the two of them, nothing was impossible, but when they ventured into investments
in Mexican real estate, the wild fluctuations of the peso in the foreign exchange market were nothing short of a disaster. During late 1994 and early 1995, the peso hit an all-time low and their
ventures went from bad to worse, but the one thing that they did not expect was that the situation would become a lot worse before it was all over and billions would be lost as a consequence. Long
Term Capital Management was headed by Meriwether, one of the rising stars on Wall Street, with a slew of superstars on its management team, including several Nobel laureates in finance and economics
(Robert Merton and Myron Scholes). The firm was also backed by giant investment banks. A firm that seemed indestructible literally blew up with billions of dollars in the red, shaking the
international investment community with repercussions throughout Wall Street as individual investors started to lose faith in large hedge funds and wealth-management firms, forcing the eventual
massive Federal Reserve bailout. Barings was one of the oldest banks in England. It was so respected that even Queen Elizabeth II herself held a private account with it. This multibillion dollar
institution was brought down single-handedly by Nicholas Leeson, an employee halfway around the world. Leeson was a young and
brilliant investment banker who headed up Barings’ Singapore branch. His illegally doctored track record showed significant investment profits, which gave him more leeway and trust from the home
office over time. He was able to cover his losses through fancy accounting and by taking significant amounts of risk. His speculations in the Japanese yen went south and he took Barings down with
him, and the top echelon in London never knew what hit them. Had any of the managers in the boardroom at their respective headquarters bothered to look at the risk profile of their investments, they
would surely have made a very different decision much earlier on, preventing what became major embarrassments in the global investment community. Had the projected returns been adjusted for risk (that is, had anyone asked what levels of risk were required to attain such seemingly extravagant returns), it would have been clear that it was not sensible to proceed. Risks occur in everyday life that do not require investments in the
multimillions. For instance, when would one purchase a house in a fluctuating housing market? When would it be more profitable to lock in a fixed-rate mortgage rather than keep a floating variable
rate? What are the chances that there will be insufficient funds at retirement? What about the potential personal property losses when a hurricane hits? How much accident insurance is considered
sufficient? How much is a lottery ticket actually worth? Risk permeates all aspects of life and one can never avoid taking or facing risks. What we can do is understand risks better through a
systematic assessment of their impacts and repercussions. This assessment framework must also be capable of measuring, monitoring, and managing risks; otherwise, simply noting that risks exist and
moving on is not optimal. This book provides the tools and framework necessary to tackle risks head-on. Only with the added insights gained through a rigorous assessment of risk can one actively
manage and monitor risk.
Risks permeate every aspect of business, but we do not have to be passive participants. What we can do is develop a framework to better understand risks through a systematic assessment of their
impacts and repercussions. This framework also must be capable of measuring, monitoring, and managing risks.
THE BASICS OF RISK

Risk can be defined simply as any uncertainty that affects a system in an unknown fashion, whereby the ramifications are also unknown but bear with them great fluctuation in value and outcome. In every instance, for risk to be evident, the following generalities must exist:
■ Uncertainties and risks have a time horizon.
■ Uncertainties exist in the future and will evolve over time.
■ Uncertainties become risks if they affect the outcomes and scenarios of the system.
■ These changing scenarios' effects on the system can be measured.
■ The measurement has to be set against a benchmark.
Risk is never instantaneous. It has a time horizon. For instance, a firm engaged in a risky research and development venture will face significant amounts of risk but only until the product is fully
developed or has proven itself in the market. These risks are caused by uncertainties in the technology of the product under research, uncertainties about the potential market, uncertainties about
the level of competitive threats and substitutes, and so forth. These uncertainties will change over the course of the company’s research and marketing activities—some uncertainties will increase
while others will most likely decrease through the passage of time, actions, and events. However, only the uncertainties that affect the product directly will have any bearing on the risks of the
product being successful. That is, only uncertainties that change the possible scenario outcomes will make the product risky (e.g., market and economic conditions). Finally, risk exists if it can be
measured and compared against a benchmark. If no benchmark exists, then perhaps the conditions just described are the norm for research and development activities, and thus the negative results are
to be expected. These benchmarks have to be measurable and tangible, for example, gross profits, success rates, market share, time to implementation, and so forth.
Risk is any uncertainty that affects a system in an unknown fashion and its ramifications are unknown, but it brings great fluctuation in value and outcome. Risk has a time horizon, meaning that
uncertainty evolves over time, which affects measurable future outcomes and scenarios with respect to a benchmark.
THE NATURE OF RISK AND RETURN

Nobel Laureate Harry Markowitz's groundbreaking research into the nature of risk and return has revolutionized the world of finance. His seminal work, which is now known
all over the world as the Markowitz Efficient Frontier,
looks at the nature of risk and return. Markowitz did not look at risk as the enemy but as a condition that should be embraced and balanced out through its expected returns. The concept of risk and
return was then refined through later works by William Sharpe and others, who stated that a heightened risk necessitates a higher return, as elegantly expressed through the capital asset pricing
model (CAPM), where the required rate of return on a marketable risky equity is equivalent to the return on an equivalent riskless asset plus a beta systematic and undiversifiable risk measure
multiplied by the market risk’s return premium. In essence, a higher risk asset requires a higher return. In Markowitz’s model, one could strike a balance between risk and return. Depending on the
risk appetite of an investor, the optimal or best-case returns can be obtained through the efficient frontier. Should the investor require a higher level of returns, he or she would have to face a
higher level of risk. Markowitz’s work carried over to finding combinations of individual projects or assets in a portfolio that would provide the best bang for the buck, striking an elegant balance
between risk and return. In order to better understand this balance, also known as risk adjustment in modern risk analysis language, risks must first be measured and understood. The following section
illustrates how risk can be measured.
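The CAPM relationship stated above, required return equals the riskless rate plus beta times the market risk premium, can be checked numerically. The rates below are illustrative inputs, not figures from the text:

```python
def capm_required_return(risk_free: float, beta: float, market_return: float) -> float:
    """CAPM: required return = riskless rate + beta * market risk premium."""
    return risk_free + beta * (market_return - risk_free)

# Illustrative inputs: 5% riskless rate, 10% expected market return.
for beta in (0.5, 1.0, 1.5):
    req = capm_required_return(0.05, beta, 0.10)
    print(f"beta {beta:.1f} -> required return {req:.1%}")
```

As the loop shows, a higher beta (more systematic, undiversifiable risk) demands a higher required return.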
THE STATISTICS OF RISK

The study of statistics refers to the collection, presentation, analysis, and utilization of numerical data to infer and make decisions in the face of uncertainty, where the
actual population data is unknown. There are two branches in the study of statistics: descriptive statistics, where data is summarized and described, and inferential statistics, where the population
is generalized through a small random sample, such that the sample becomes useful for making predictions or decisions when the population characteristics are unknown. A sample can be defined as a
subset of the population being measured, whereas the population can be defined as all possible observations of interest of a variable. For instance, if one is interested in the voting practices of
all U.S. registered voters, the entire pool of a hundred million registered voters is considered the population, whereas a small survey of one thousand registered voters taken from several small
towns across the nation is the sample. The calculated characteristics of the sample (e.g., mean, median, standard deviation) are termed statistics, while parameters imply that the entire population
has been surveyed and the results tabulated. Thus, in decision making, the statistic is of vital importance, seeing that sometimes the entire population is yet unknown (e.g., who are all your
customers, what is the total market share, etc.) or it is very difficult to obtain all relevant
information on the population, seeing that it would be too time- or resource-consuming. In inferential statistics, the usual steps undertaken include:
■ Designing the experiment: this phase includes designing the ways to collect all possible and relevant data.
■ Collection of sample data: data is gathered and tabulated.
■ Analysis of data: statistical analysis is performed.
■ Estimation or prediction: inferences are made based on the statistics obtained.
■ Hypothesis testing: decisions are tested against the data to see the outcomes.
■ Goodness-of-fit: actual data is compared to historical data to see how accurate, valid, and reliable the inference is.
■ Decision making: decisions are made based on the outcome of the inference.
Measuring the Center of the Distribution: The First Moment

The first moment of a distribution measures the expected rate of return on a particular project. It measures the location of the project's scenarios and possible outcomes on average. The common statistics for the first moment include the mean (average), median (center of a distribution), and mode (most commonly occurring value). Figure 2.1 illustrates the first moment, where, in this case, the first moment of this distribution is measured by the mean (μ) or average value.
FIGURE 2.1 First moment. (Two distributions with identical spreads, σ1 = σ2, and zero skew and kurtosis, but different central locations, μ1 ≠ μ2.)

Measuring the Spread of the Distribution: The Second Moment

FIGURE 2.2 Second moment. (Two distributions with identical central locations, μ1 = μ2, and zero skew and kurtosis, but different spreads, σ1 < σ2.)

The second moment measures the spread of a distribution, which is a measure of risk. The spread or width of a distribution measures the variability of
a variable, that is, the potential that the variable can fall into different regions of the distribution—in other words, the potential scenarios of outcomes. Figure 2.2 illustrates two distributions
with identical first moments (identical means) but very different second moments or risks. The visualization becomes clearer in Figure 2.3. As an example, suppose there are two stocks and the first
stock's movements (illustrated by the darker line) with the smaller fluctuation are compared against the second stock's movements (illustrated by the dotted line) with a much higher price fluctuation.
Clearly an investor would view the stock with the wilder fluctuation as riskier because the outcomes of the more risky stock are relatively more unknown than the less risky stock.

FIGURE 2.3 Stock price fluctuations. (Vertical axis: stock prices; the riskier stock's price path fluctuates over a much wider range than the less risky stock's.)

The vertical axis in Figure 2.3 measures the stock prices; thus, the more risky stock has a wider range of potential outcomes. This range is translated into a
distribution’s width (the horizontal axis) in Figure 2.2, where the wider distribution represents the riskier asset. Hence, width or spread of a distribution measures a variable’s risks. Notice that
in Figure 2.2, both distributions have identical first moments or central tendencies, but clearly the distributions are very different. This difference in the distributional width is measurable.
Mathematically and statistically, the width or risk of a variable can be measured through several different statistics, including the range, standard deviation (s), variance, coefficient of
variation, volatility, and percentiles.
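These spread statistics can be computed directly; the two price series below stand in for the calm and wildly fluctuating stocks of Figure 2.3 and are made-up numbers:

```python
import numpy as np

# Two illustrative price series with similar central tendency (first
# moment) but very different spread (second moment).
series = {
    "calm": np.array([100.0, 101.0, 99.5, 100.5, 100.0, 99.0, 101.0]),
    "wild": np.array([100.0, 110.0, 92.0, 108.0, 95.0, 90.0, 107.0]),
}

stats = {}
for name, prices in series.items():
    sd = prices.std(ddof=1)                      # sample standard deviation
    stats[name] = {
        "range": prices.max() - prices.min(),    # range
        "std": sd,
        "variance": sd ** 2,
        "cv": sd / prices.mean(),                # coefficient of variation
    }
    print(name, {k: round(float(v), 3) for k, v in stats[name].items()})
```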
Measuring the Skew of the Distribution—The Third Moment The third moment measures a distribution’s skewness, that is, how the distribution is pulled to one side or the other. Figure 2.4 illustrates a
negative or left skew (the tail of the distribution points to the left) and Figure 2.5 illustrates a positive or right skew (the tail of the distribution points to the right). The mean is always
skewed toward the tail of the distribution while the median remains constant. Another way of seeing this is that the mean
FIGURE 2.4 Third moment (left skew): μ1 ≠ μ2, σ1 = σ2, Skew < 0, Kurtosis = 0.

FIGURE 2.5 Third moment (right skew): μ1 ≠ μ2, σ1 = σ2, Skew > 0, Kurtosis = 0.
moves, but the standard deviation, variance, or width may still remain constant. If the third moment is not considered, then looking only at the expected returns (e.g., mean or median) and risk
(standard deviation), a positively skewed project might be incorrectly chosen! For example, if the horizontal axis represents the net revenues of a project, then clearly a left or negatively skewed
distribution might be preferred as there is a higher probability of greater returns (Figure 2.4) as compared to a higher probability for lower level returns (Figure 2.5). Thus, in a skewed
distribution, the median is a better measure of returns, as the medians for both Figures 2.4 and 2.5 are identical, risks are identical, and, hence, a project with a negatively skewed distribution of
net profits is a better choice. Failure to account for a project’s distributional skewness may mean that the incorrect project may be chosen (e.g., two projects may have identical first and second
moments, that is, they both have identical returns and risk profiles, but their distributional skews may be very different).
Measuring the Catastrophic Tail Events of the Distribution—The Fourth Moment
The fourth moment, or kurtosis, measures the peakedness of a distribution. Figure 2.6 illustrates this effect. The
background (denoted by the dotted line) is a normal distribution with an excess kurtosis of 0. The new distribution has a higher kurtosis; thus the area under the curve is thicker at the tails with
less area in the central body. This condition has major impacts on risk analysis as for the two distributions in Figure 2.6; the first three moments (mean, standard deviation, and skewness) can be
identical, but the fourth moment (kurtosis) is different. This condition means that, although the returns and risks are identical, the probabilities of extreme and catastrophic events (potential large losses or large gains) occurring are higher for a high-kurtosis distribution (e.g., stock market returns are leptokurtic, or have high kurtosis).

FIGURE 2.6 Fourth moment: μ1 = μ2, σ1 = σ2, Skew = 0, Kurtosis > 0.

Ignoring a project’s return’s
kurtosis may be detrimental. Note that sometimes a normal kurtosis is denoted as 3.0, but in this book we use the measure of excess kurtosis, henceforth simply known as kurtosis. In other words, a
kurtosis of 3.5 is also known as an excess kurtosis of 0.5, indicating that the distribution has 0.5 additional kurtosis above the normal distribution. The use of excess kurtosis is more prevalent in
academic literature and is, hence, used here. Finally, the normalization of kurtosis to a base of 0 makes for easier interpretation of the statistic (e.g., a positive kurtosis indicates fatter-tailed
distributions while negative kurtosis indicates thinner-tailed distributions).
Most distributions can be defined up to four moments. The first moment describes the distribution’s location or central tendency (expected returns), the second moment describes its width or spread
(risks), the third moment its directional skew (most probable events), and the fourth moment its peakedness or thickness in the tails (catastrophic losses or gains). All four moments should be
calculated and interpreted to provide a more comprehensive view of the project under analysis.
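The book performs these computations in Excel and Risk Simulator; as a cross-check, the four moments can also be computed directly. The following Python sketch is an illustration only — the sample data are made up, with a single large loss dragging the tail to the left:

```python
import math

def four_moments(xs):
    """Return (mean, sample std, skewness, excess kurtosis) of a data set."""
    n = len(xs)
    mean = sum(xs) / n
    devs = [x - mean for x in xs]
    std = math.sqrt(sum(d * d for d in devs) / (n - 1))  # second moment (sample)
    m2 = sum(d ** 2 for d in devs) / n
    m3 = sum(d ** 3 for d in devs) / n
    m4 = sum(d ** 4 for d in devs) / n
    skew = m3 / m2 ** 1.5            # third moment, standardized
    excess_kurt = m4 / m2 ** 2 - 3.0  # fourth moment; a normal distribution gives 0
    return mean, std, skew, excess_kurt

# Hypothetical project returns; the outlier loss produces a negative skew
# and a fat left tail (positive excess kurtosis).
returns = [9.0, 10.0, 10.5, 11.0, 11.5, 12.0, 2.0]
mean, std, skew, kurt = four_moments(returns)
print(f"mean={mean:.2f}  std={std:.2f}  skew={skew:.2f}  excess kurtosis={kurt:.2f}")
```

For this data set the skew comes out negative and the excess kurtosis positive, which is exactly the combination the text flags as risky despite an acceptable mean and standard deviation.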
THE MEASUREMENTS OF RISK
There are multiple ways to measure risk in projects. This section summarizes some of the more common measures of risk and lists their potential benefits and pitfalls. The measures include:

■ Probability of Occurrence. This approach is simplistic and yet effective. As an example, there is a 10 percent probability that a project will not break even (it will return a negative net present value indicating losses) within the next 5 years. Further, suppose two similar projects have identical implementation costs and expected returns. Based on a single-point estimate, management should be indifferent between them. However, if risk analysis such as Monte Carlo simulation is performed, the first project might reveal a 70 percent probability of losses compared to only a 5 percent probability of losses on the second project. Clearly, the second project is better when risks are analyzed.
■ Standard Deviation and Variance. Standard deviation is a measure of the average of each data point’s deviation from the mean.2 This is the most popular measure of risk, where a higher standard deviation implies a wider distributional width and, thus, carries a higher risk. The drawback of this measure is that both the upside and downside variations are included in the computation of the standard deviation. Some analysts define risks as the potential losses or downside; thus, standard deviation and variance will penalize upswings as well as downsides.
■ Semi-Standard Deviation. The semi-standard deviation only measures the standard deviation of the downside risks and ignores the upside fluctuations. Modifications of the semi-standard deviation include calculating only the values below the mean, or values below a threshold (e.g., negative profits or negative cash flows). This provides a better picture of downside risk but is more difficult to estimate.
■ Volatility. The concept of volatility is widely used in the applications of real options and can be defined briefly as a measure of uncertainty and risks.3 Volatility can be estimated using multiple methods, including simulation of the uncertain variables impacting a particular project and estimating the standard deviation of the resulting asset’s logarithmic returns over time. This concept is more difficult to define and estimate but is more powerful than most other risk measures in that this single value incorporates all sources of uncertainty rolled into one value.
■ Beta. Beta is another common measure of risk in the investment finance arena. Beta can be defined simply as the undiversifiable, systematic risk of a financial asset. This concept is made famous through the CAPM, where a higher beta means a higher risk, which in turn requires a higher expected return on the asset.
■ Coefficient of Variation. The coefficient of variation is simply defined as the ratio of standard deviation to the mean, which means that the risks are common-sized. For example, the distribution of a group of students’ heights (measured in meters) can be compared to the distribution of the students’ weights (measured in kilograms).4 This measure of risk or dispersion is applicable when the variables’ estimates, measures, magnitudes, or units differ.
■ Value at Risk. Value at Risk (VaR) was made famous by J. P. Morgan in the mid-1990s through the introduction of its RiskMetrics approach, and has thus far been sanctioned by several bank governing bodies around the world. Briefly, it measures the amount of capital reserves at risk given a particular holding period at a particular probability of loss. This measurement can be modified to risk applications by stating, for example, the amount of potential losses a certain percent of the time during the period of the economic life of the project—clearly, a project with a smaller VaR is better.
■ Worst-Case Scenario and Regret. Another simple measure is the value of the worst-case scenario given catastrophic losses. Another definition is regret. That is, if a decision is made to pursue a particular project, but the project becomes unprofitable and suffers a loss, the level of regret is simply the difference between the actual losses compared to doing nothing at all.
■ Risk-Adjusted Return on Capital. Risk-adjusted return on capital (RAROC) takes the ratio of the difference between the fiftieth percentile (median) return and the fifth percentile return on a project to its standard deviation. This approach is used mostly by banks to estimate returns subject to their risks by measuring only the potential downside effects and ignoring the positive upswings.
The following appendix details the computations of some of these risk measures and is worthy of review before proceeding through the book.
APPENDIX—COMPUTING RISK
This appendix illustrates how some of the more common measures of risk are computed. Each risk measurement has its own computations and uses. For example, certain risk measures are applicable only to time-series data (e.g., volatility), others are applicable to both cross-sectional and time-series data (e.g., variance, standard deviation, and covariance), while others require a consistent holding period (e.g., Value at Risk) or a market comparable or benchmark (e.g., the beta coefficient).
Probability of Occurrence
This approach is simplistic yet effective. The probability of success or failure can be determined several ways. The first is through management expectations and
assumptions, also known as expert opinion, based on historical occurrences or experience of the expert. Another approach is simply to gather available historical or comparable data, industry
averages, academic research, or other third-party sources, indicating the historical probabilities of success or failure (e.g., pharmaceutical R&D’s probability of technical success based on various
drug indications can be obtained from external research consulting groups). Finally, Monte Carlo simulation can be run on a model with multiple interacting input assumptions and the output of
interest (e.g., net present value, gross margin, tolerance ratios, and development success rates) can be captured as a simulation forecast and the relevant probabilities can be obtained, such as the
probability of breaking even, probability of failure, probability of making a profit, and so forth. See Chapter 5 for step-by-step instructions on running and interpreting simulations.
Standard Deviation and Variance
Standard deviation is a measure of the average of each data point’s deviation from the mean. A higher standard deviation or variance implies a wider distributional
width and, thus, a higher risk. The standard deviation can be measured in terms of the population or sample, and for illustration purposes, is shown in the following list, where we define xi as the
individual data points, μ as the population mean, N as the population size, x̄ as the sample mean, and n as the sample size:

Population standard deviation:

σ = √( Σᵢ (xᵢ − μ)² / N ), with the sum taken over all N data points,

and the population variance is simply the square of the standard deviation, or σ². Alternatively, use Excel’s STDEVP and VARP functions for the population standard deviation and variance respectively.

Sample standard deviation:

s = √( Σᵢ (xᵢ − x̄)² / (n − 1) ), with the sum taken over all n sample points,

and the sample variance is similarly the square of the standard deviation, or s². Alternatively, use Excel’s STDEV and VAR functions for the sample standard deviation and variance respectively. Figure 2.7
shows the step-by-step computations. The drawbacks of this measure are that both the upside and downside variations are included in the computation of the standard deviation, and that it depends on the units (e.g., values of x in thousands of dollars versus millions of dollars are not comparable). Some analysts define risks as the potential losses or downside; thus, standard deviation and variance
penalize upswings as well as downsides. An alternative is the semi-standard deviation.
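As a cross-check on the Excel functions named above, the same statistics can be computed in a few lines of Python. This sketch is an illustration using the data set from Figure 2.7 and reproduces both the population and sample results:

```python
import math

X = [-10.50, 12.25, -11.50, 13.25, -14.65, 15.65, -14.50]  # Figure 2.7 data
N = len(X)
mean = sum(X) / N                                  # about -1.43
ss = sum((x - mean) ** 2 for x in X)               # sum of squared deviations

pop_var = ss / N               # Excel VARP   -> about 174.8049
pop_std = math.sqrt(pop_var)   # Excel STDEVP -> about 13.2214
smp_var = ss / (N - 1)         # Excel VAR    -> about 203.9390
smp_std = math.sqrt(smp_var)   # Excel STDEV  -> about 14.2807
print(round(pop_std, 4), round(smp_std, 4))
```

The only difference between the two versions is the divisor: N for the population statistics and N − 1 for the sample statistics.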
Semi-Standard Deviation
The semi-standard deviation only measures the standard deviation of the downside risks and ignores the upside fluctuations. Modifications of the semi-standard deviation include calculating only the values below the mean, or values below a threshold (e.g., negative profits or negative cash flows). This
FIGURE 2.7 Standard deviation and variance computation.

        X       X – Mean   Square of (X – Mean)
     –10.50      –9.07          82.2908
      12.25      13.68         187.1033
     –11.50     –10.07         101.4337
      13.25      14.68         215.4605
     –14.65     –13.22         174.8062
      15.65      17.08         291.6776
     –14.50     –13.07         170.8622
Sum  –10.00
Mean  –1.43

Population Standard Deviation and Variance
Sum of Square (X – Mean):                                              1223.6343
Variance = Sum of Square (X – Mean)/N:                                  174.8049
Using Excel’s VARP function:                                            174.8049
Standard Deviation = Square Root of (Sum of Square (X – Mean)/N):        13.2214
Using Excel’s STDEVP function:                                           13.2214

Sample Standard Deviation and Variance
Sum of Square (X – Mean):                                              1223.6343
Variance = Sum of Square (X – Mean)/(N – 1):                            203.9390
Using Excel’s VAR function:                                             203.9390
Standard Deviation = Square Root of (Sum of Square (X – Mean)/(N – 1)):  14.2807
Using Excel’s STDEV function:                                            14.2807
approach provides a better picture of downside risk but is more difficult to estimate. Figure 2.8 shows how a sample semi-standard deviation and semi-variance are computed. Note that the computation
must be performed manually.
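Because Excel has no built-in semi-standard deviation function, the manual computation can be sketched directly. This Python illustration uses the Figure 2.8 data and, matching that figure, treats zero as the threshold so only the negative values are kept:

```python
import math

X = [-10.50, 12.25, -11.50, 13.25, -14.65, 15.65, -14.50]

# Keep only the downside observations (values below the zero threshold).
downside = [x for x in X if x < 0]
n = len(downside)
mean_d = sum(downside) / n                    # mean of the downside values only
ss = sum((x - mean_d) ** 2 for x in downside)

semi_std_pop = math.sqrt(ss / n)        # population semi-standard deviation, about 1.8229
semi_std_smp = math.sqrt(ss / (n - 1))  # sample semi-standard deviation, about 2.1049
print(round(semi_std_pop, 4), round(semi_std_smp, 4))
```

Note that the mean and the divisor both refer to the downside subset, not to the full data set; that is what distinguishes the computation from an ordinary standard deviation.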
Volatility
The concept of volatility is widely used in the applications of real options and can be defined briefly as a measure of uncertainty and risks. Volatility can be estimated using multiple methods, including simulation of the uncertain variables impacting a particular project and estimating the standard deviation of the resulting asset’s logarithmic returns over time. This concept is more difficult to define and estimate but is more powerful than most other risk measures in that this single value incorporates all sources of uncertainty
FIGURE 2.8 Semi-standard deviation and semi-variance computation.

        X       X – Mean   Square of (X – Mean)
     –10.50       2.29           5.2327
      12.25     Ignore      (Ignore the positive values)
     –11.50       1.29           1.6577
      13.25     Ignore      (Ignore the positive values)
     –14.65      –1.86           3.4689
      15.65     Ignore      (Ignore the positive values)
     –14.50      –1.71           2.9327
Sum  –51.1500
Mean –12.7875

Population Standard Deviation and Variance
Sum of Square (X – Mean):                                               13.2919
Variance = Sum of Square (X – Mean)/N:                                   3.3230
Using Excel’s VARP function:                                             3.3230
Standard Deviation = Square Root of (Sum of Square (X – Mean)/N):        1.8229
Using Excel’s STDEVP function:                                           1.8229

Sample Standard Deviation and Variance
Sum of Square (X – Mean):                                               13.2919
Variance = Sum of Square (X – Mean)/(N – 1):                             4.4306
Using Excel’s VAR function:                                              4.4306
Standard Deviation = Square Root of (Sum of Square (X – Mean)/(N – 1)):  2.1049
Using Excel’s STDEV function:                                            2.1049

(Sum, Mean, and N refer to the negative values only.)
rolled into one value. Figure 2.9 illustrates the computation of an annualized volatility. Volatility is typically computed for time-series data only (i.e., data that follows a time series such as
stock price, price of oil, interest rates, and so forth). The first step is to determine the relative returns from period to period, take their natural logarithms (ln), and then compute the sample
standard deviation of these logged values. The result is the periodic volatility. Then, annualize the volatility by multiplying this periodic volatility by the square root of the number of periods in
a year (e.g., 1 if annual data, 4 if quarterly data, and 12 if monthly data are used). For a more detailed discussion of volatility computation as well as other methods for computing volatility such
as using logarithmic present value approach, management assumptions, and GARCH, or generalized autoregressive conditional heteroskedasticity models, and how a discount rate can be determined from
volatility, see Real Options Analysis, Second Edition, by Johnathan Mun (Wiley 2005).
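The three steps just described (relative returns, natural logarithms, sample standard deviation, then annualization) can be sketched in Python. This illustration uses the same price series as Figure 2.9 and treats it as monthly data:

```python
import math

prices = [10.50, 12.25, 11.50, 13.25, 14.65, 15.65, 14.50]  # monthly closes

# Step 1: natural log of the period-to-period relative returns ln(P_t / P_{t-1})
log_rets = [math.log(b / a) for a, b in zip(prices, prices[1:])]

# Step 2: sample standard deviation of the logged returns = periodic volatility
n = len(log_rets)
avg = sum(log_rets) / n
periodic_vol = math.sqrt(sum((r - avg) ** 2 for r in log_rets) / (n - 1))

# Step 3: annualize by the square root of the periods per year (12 for monthly data)
annual_vol = periodic_vol * math.sqrt(12)
print(f"periodic {periodic_vol:.2%}, annualized {annual_vol:.2%}")
```

The periodic volatility comes out near 10.07 percent and the annualized figure near 34.89 percent, matching the values shown in Figure 2.9.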
FIGURE 2.9 Volatility computation.

Period   Price   Relative Returns   LN(Relative Returns)   Square of (LN Rel. Returns – Average)
  0      10.50
  1      12.25        1.17               0.1542                     0.0101
  2      11.50        0.94              –0.0632                     0.0137
  3      13.25        1.15               0.1417                     0.0077
  4      14.65        1.11               0.1004                     0.0022
  5      15.65        1.07               0.0660                     0.0001
  6      14.50        0.93              –0.0763                     0.0169
Sum                                      0.3228
Average                                  0.0538

Sample Standard Deviation and Variance
Sum of Square (LN Relative Returns – Average):                                        0.0507
Volatility = Square Root of (Sum of Square (LN Relative Returns – Average)/(N – 1)):  10.07%
Using Excel’s STDEV function on LN(Relative Returns):                                 10.07%
Annualized Volatility (Periodic Volatility × Square Root (Periods in a Year)):        34.89%
Beta
Beta is another common measure of risk in the investment finance arena. Beta can be defined simply as the undiversifiable, systematic risk of a financial asset. This concept is made famous
through the CAPM, where a higher beta means a higher risk, which in turn requires a higher expected return on the asset. The beta coefficient measures the relative movements of one asset value to a
comparable benchmark or market portfolio; that is, we define the beta coefficient as:
β = Cov(x, m)/Var(m) = (ρx,m σx σm)/σm²
where Cov(x, m) is the population covariance between the asset x and the market or comparable benchmark m, and Var(m) is the population variance of m; both can be computed in Excel using the COVAR and VARP functions. The computed beta will be for the population. In contrast, the sample beta coefficient is computed using the correlation coefficient between x and m, or rx,m, and the sample standard deviations of x and m, using sx and sm instead of σx and σm. A beta of 1.0 implies that the relative movements or risk of x is identical to the relative movements of the benchmark (see
Example 1 in Figure 2.10, where the asset x is simply one unit less than the market asset m, but they both fluctuate at the same levels).

FIGURE 2.10 Beta coefficient computation.

Example 1: Similar fluctuations with the market
    X     Market Comparable M
  10.50        11.50
  12.25        13.25
  11.50        12.50
  13.25        14.25
  14.65        15.65
  15.65        16.65
  14.50        15.50

Population Beta
Covariance population using Excel’s COVAR:                     2.9827
Variance of M using Excel’s VARP:                              2.9827
Population Beta (Covariance population (X, M)/Variance (M)):   1.0000

Sample Beta
Correlation between X and M using Excel’s CORREL:              1.0000
Standard deviation of X using Excel’s STDEV:                   1.8654
Standard deviation of M using Excel’s STDEV:                   1.8654
Beta Coefficient (Correlation X and M × Stdev X × Stdev M)/(Stdev M × Stdev M): 1.0000

Example 2: Half the fluctuations of the market
    X     Market Comparable M
  10.50        21.00
  12.25        24.50
  11.50        23.00
  13.25        26.50
  14.65        29.30
  15.65        31.30
  14.50        29.00

Population Beta
Covariance population using Excel’s COVAR:                     5.9653
Variance of M using Excel’s VARP:                             11.9306
Population Beta (Covariance population (X, M)/Variance (M)):   0.5000

Sample Beta
Correlation between X and M using Excel’s CORREL:              1.0000
Standard deviation of X using Excel’s STDEV:                   1.8654
Standard deviation of M using Excel’s STDEV:                   3.7308
Beta Coefficient (Correlation X and M × Stdev X × Stdev M)/(Stdev M × Stdev M): 0.5000

Similarly, a beta of 0.5 implies that the relative movements or risk of x is half of
the relative movements of the benchmark (see Example 2 in Figure 2.10 where the asset x is simply half the market’s fluctuations m). Therefore, beta is a powerful measure but requires a comparable to
which to benchmark its fluctuations.
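The population beta formula above reduces to a few lines of Python. This sketch reproduces the two examples in Figure 2.10, where the market series are M = X + 1 and M = 2X:

```python
def beta(x, m):
    """Population beta: Cov(x, m) / Var(m)."""
    n = len(x)
    mean_x, mean_m = sum(x) / n, sum(m) / n
    cov = sum((a - mean_x) * (b - mean_m) for a, b in zip(x, m)) / n
    var_m = sum((b - mean_m) ** 2 for b in m) / n
    return cov / var_m

asset = [10.50, 12.25, 11.50, 13.25, 14.65, 15.65, 14.50]
market1 = [x + 1 for x in asset]  # Example 1: same fluctuations as the market
market2 = [2 * x for x in asset]  # Example 2: asset moves half as much as the market
print(beta(asset, market1), beta(asset, market2))
```

Shifting the market by a constant leaves the covariance and variance unchanged (beta of 1.0), while doubling the market's swings doubles the covariance but quadruples the variance (beta of 0.5).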
Coefficient of Variation
The coefficient of variation (CV) is simply defined as the ratio of standard deviation to the mean, which means that the risks are common sized. For example, a distribution
of a group of students’ heights (measured in meters) can be compared to the distribution of the students’ weights (measured in kilograms). This measure of risk or dispersion is applicable when the
variables’ estimates, measures, magnitudes, or units differ. For example, in the computations in Figure 2.7, the CV for the population is –9.25 or –9.99 for the sample. The CV is useful as a measure
of risk per unit of return, or when inverted, can be used as a measure of bang for the buck or returns per unit of risk. Thus, in portfolio optimization, one would be interested in minimizing the CV
or maximizing the inverse of the CV.
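The CV figures quoted above can be reproduced directly. This Python sketch is an illustration using the Figure 2.7 data; note that the negative mean makes both CVs negative:

```python
import math

X = [-10.50, 12.25, -11.50, 13.25, -14.65, 15.65, -14.50]  # Figure 2.7 data
N = len(X)
mean = sum(X) / N
ss = sum((x - mean) ** 2 for x in X)

cv_pop = math.sqrt(ss / N) / mean        # about -9.25
cv_smp = math.sqrt(ss / (N - 1)) / mean  # about -10.0 (quoted as -9.99 in the text)
print(round(cv_pop, 2), round(cv_smp, 2))
```

Because the CV divides by the mean, it is unstable when the mean is near zero, which is worth keeping in mind before using it as a common-sized risk measure.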
Value at Risk
Value at Risk (VaR) measures the amount of capital reserves at risk given a particular holding period at a particular probability of loss. This measurement can be modified to risk
applications by stating, for example, the amount of potential losses a certain percent of the time during the period of the economic life of the project—clearly, a project with a smaller VaR is
better. VaR has a holding time period requirement, typically one year or one month. It also has a percentile requirement, for example, a 99.9 percent one-tail confidence. There are also modifications
for daily risk measures such as DEaR or Daily Earnings at Risk. The VaR or DEaR can be determined very easily using Risk Simulator; that is, create your risk model, run a simulation, look at the
forecast chart, and enter in 99.9 percent as the right-tail probability of the distribution or 0.01 percent as the left-tail probability of the distribution, then read the VaR or DEaR directly off
the forecast chart.
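Outside of Risk Simulator, the same tail-reading can be sketched with a plain Monte Carlo loop. The distribution and its parameters below are hypothetical, chosen only to illustrate reading a loss off the left tail of simulated outcomes:

```python
import random

random.seed(42)

# Hypothetical one-year P&L outcomes (in $ thousands): normal, mean +10, std 30.
outcomes = sorted(random.gauss(10, 30) for _ in range(100_000))

confidence = 0.99
tail_index = int((1 - confidence) * len(outcomes))  # index of the 1st percentile
var_99 = -outcomes[tail_index]                      # VaR is the loss at that percentile
print(f"99% one-year VaR: about {var_99:.0f} ($ thousands)")
```

For a normal distribution this lands near mean − 2.33 × std, i.e., a loss of roughly 60 here; with a simulation from a real model, the same percentile read-off applies regardless of the distribution's shape.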
Worst-Case Scenario and Regret
Another simple measure is the value of the worst-case scenario given catastrophic losses. An additional definition is regret; that is, if a decision is made to pursue a
particular project, but if the project becomes unprofitable and suffers a loss, the level of regret is simply the difference between the actual losses compared to doing nothing at all. This analysis
is very similar
to the VaR but is not time dependent. For instance, a financial return on investment model can be created and a simulation is run. The 5 percent worst-case scenario can be read directly from the
forecast chart in Risk Simulator.
Risk-Adjusted Return on Capital
Risk-adjusted return on capital (RAROC) takes the ratio of the difference between the fiftieth percentile (P50, the median) return and the fifth percentile (P5) return on a project to its standard deviation σ, written as:

RAROC = (P50 − P5)/σ
This approach is used mostly by banks to estimate returns subject to their risks by measuring only the potential downside effects and truncating the distribution to the worst-case 5 percent of the
time, ignoring the positive upswings, while at the same time common sizing to the risk measure of standard deviation. Thus, RAROC can be seen as a measure that combines standard deviation, CV,
semi-standard deviation, and worst-case scenario analysis. This measure is useful when applied with Monte Carlo simulation, where the percentiles and standard deviation measurements required can be
obtained through the forecast chart’s statistics view in Risk Simulator.
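Under the definition above, RAROC is straightforward to compute from simulated outcomes. In this Python sketch the simulated returns are hypothetical (normally distributed), for which (P50 − P5)/σ should land near the theoretical value of about 1.645:

```python
import math
import random

random.seed(7)

def percentile(sorted_xs, p):
    """Nearest-rank percentile of an already sorted list (0 < p < 1)."""
    return sorted_xs[min(len(sorted_xs) - 1, int(p * len(sorted_xs)))]

# Hypothetical simulated project returns, in percent.
returns = sorted(random.gauss(12.0, 5.0) for _ in range(50_000))
n = len(returns)
mean = sum(returns) / n
std = math.sqrt(sum((r - mean) ** 2 for r in returns) / (n - 1))

raroc = (percentile(returns, 0.50) - percentile(returns, 0.05)) / std
print(f"RAROC = {raroc:.3f}")
```

In practice the percentiles and standard deviation would come from the forecast chart statistics rather than a hand-rolled simulation; the arithmetic is the same.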
QUESTIONS
1. What is the efficient frontier and when is it used?
2. What are inferential statistics and what steps are required in making inferences?
3. When is using standard deviation less desirable than using semi-standard deviation as a measure of risk?
4. If comparing three projects with similar first, second, and fourth moments, would you prefer a project that has no skew, a positive skew, or a negative skew?
5. If comparing three projects with similar first to third moments, would you prefer a project that is leptokurtic (high kurtosis), mesokurtic (average kurtosis), or platykurtic (low kurtosis)? Explain your reasoning with respect to a distribution’s tail area. Under what conditions would your answer change?
6. What are the differences and similarities between Value at Risk and worst-case scenario as a measure of risk?
A Guide to Model-Building Etiquette
The first step in risk analysis is the creation of a model. A model can range from a simple three-line calculation in an Excel spreadsheet (e.g., A + B = C) to a highly complicated and oftentimes convoluted series of interconnected spreadsheets. Creating a proper model takes time, patience, strategy, and practice. Evaluating or learning a complicated model passed down to you that was previously created by another analyst may be rather cumbersome. Even the person who built the model can find it challenging to revisit it weeks or months later and remember what was created. It is indeed difficult to understand what the model originator was thinking of when the model was first built. As most readers of this book are Excel users, this chapter lists some model-building blocks that every professional model builder should at least consider implementing in his or her Excel spreadsheets.
As a rule of thumb, always remember to document the model; separate the inputs from the calculations and the results; protect the models against tampering; make the model user-friendly; track changes
made in the model; automate the model whenever possible; and consider model aesthetics.
DOCUMENT THE MODEL
One of the major considerations in model building is its documentation. Although this step is often overlooked, it is crucial in order to allow continuity, survivorship, and
knowledge transfer from one generation of model builders to the next. Inheriting a model that is not documented from a
predecessor will only frustrate the new user. Some items to consider in model documentation include the following:

■ Strategize the Look and Feel of the Model. Before the model is built, the overall structure of the model should be considered. This conceptualization includes how many sections the model will contain (e.g., each workbook file applies to a division; each workbook has 10 worksheets representing each department in the division; and each worksheet has three sections, representing the revenues, costs, and miscellaneous items) as well as how each of these sections is related, linked, or replicated from the others.
■ Naming Conventions. Each of these workbooks and worksheets should have a proper name. The recommended approach is simply to give each workbook and worksheet a descriptive name. However, one should always consider brevity in the naming convention while still providing a sufficient description of the model. If multiple iterations of the model are required, especially when the model is created by several individuals over time, the date and version numbers should be part of the model’s file name for proper archiving, backup, and identification purposes.
■ Executive Summary. In the first section of the model, there should always be a welcome page with an executive summary of the model. The summary may include the file name, location on a shared drive, version of the model, developers of the model, and any other pertinent information, including instructions, assumptions, caveats, warnings, or suggestions on using the model.
■ File Properties. Make full use of Excel’s file properties (File | Properties). This simple action may make the difference between an orphaned model and a model that users will have more faith in as to how current or updated it is (Figure 3.1).
■ Document Changes and Tweaks. If multiple developers work on the model, then whenever the model is saved, the changes, tweaks, edits, and modifications should always be documented so that any past actions can be undone should it become necessary. This simple practice also provides a method to track the changes that have been made versus a list of bugs or development requirements.
■ Illustrate Formulas. Consider illustrating and documenting the formulas used in the model, especially when complicated equations and calculations are required. Use Excel’s Equation Editor to do this (Insert | Object | Create New | Microsoft Equation), but also remember to provide a reference for more advanced models.
■ Results Interpretation. In the executive summary, on the reports or results summary page, include instructions on how the final analytical
FIGURE 3.1 Excel’s file properties dialog box.
results should be interpreted, including what assumptions are used when building the model, any theory the results pertain to, any reference material detailing the technical aspects of the model,
data sources, and any conjectures made to obtain certain input parameters.
■ Reporting Structure. A good model should have a final report after the inputs have been entered and the analysis is performed. This report may be as simple as a printable results worksheet or as sophisticated as a macro that creates a new document (e.g., Risk Simulator has a reporting function that provides detailed analysis on the input parameters and output results).
■ Model Navigation. Consider how a novice user will navigate between modules, worksheets, or input cells. One consideration is to include navigational capabilities in the model. These navigational capabilities range from a simple set of naming conventions (e.g., sheets in a workbook can be named “1. Input Data,” “2. Analysis,” and “3. Results”), where the user can quickly and easily identify the relevant worksheets by their tab names (Figure 3.2), to more sophisticated methods. More sophisticated navigational methods include using hyperlinks and Visual Basic for Applications (VBA) code.
FIGURE 3.2 Worksheet tab names.
For instance, in order to create hyperlinks to other sheets from a main navigational sheet, click on Insert | Hyperlink | Place in This Document in Excel. Choose the relevant worksheet to link to
within the workbook (Figure 3.3). Place all these links in the main navigational sheet and place only the relevant links in each sheet (e.g., only the main menu and Step 2 in the analysis are
available in the Step 1 worksheet). These links can also be named as “next” or “previous,” to further assist the user in navigating a large model. The second and more protracted approach is to use
VBA codes to navigate the model. Refer to the appendix at the end of this chapter—A Primer on VBA Modeling and Writing Macros—for sample VBA codes used in said navigation and automation.
Document the model by strategizing the look and feel of the model, have an adequate naming convention, have an executive summary, include model property descriptions, indicate the changes and tweaks
made, illustrate difficult formulas, document how to interpret results, provide a reporting structure, and make sure the model is easy to navigate.
FIGURE 3.3 Insert hyperlink dialog box.
A Guide to Model-Building Etiquette
■ Different Worksheets for Different Functions. Consider using a different worksheet within a workbook for the model’s input assumptions (these assumptions should all be accumulated into a single sheet), a set of calculation worksheets, and a final set of worksheets summarizing the results. These sheets should then be appropriately named and grouped for easy identification. Sometimes, the input worksheet also has some key model results—this arrangement is very useful as a management dashboard, where slight tweaks and changes to the inputs can be made by management and the fluctuations in key results can be quickly viewed and captured.
■ Describe Input Variables. In the input parameter worksheet, consider providing a summary of each input parameter, including where it is used in the model. Sometimes, this can be done through cell comments instead (Insert | Comment).
■ Name Input Parameter Cells. Consider naming individual cells by selecting an input cell, typing the relevant name in the Name Box on the upper left corner of the spreadsheet, and hitting Enter (the arrow in Figure 3.4 shows the location of the Name Box). Also, consider naming ranges by selecting a range of cells and typing the relevant name in the Name Box. For more complicated models where multiple input parameters with similar functions exist, consider grouping these names. For instance, if the inputs “cost” and “revenues” exist in two different divisions, consider using the following hierarchical naming conventions (separated by periods in the names) for the Excel cells:

Cost.Division.A
Cost.Division.B
Revenues.Division.A
Revenues.Division.B
FIGURE 3.4 Name box in Excel.
■ Color Coding Inputs and Results. Another form of identification is simply to color code the input cells one consistent color, while the results, which are usually mathematical functions based on the
input assumptions and other intermediate calculations, should be color coded differently. Model Growth and Modification. A good model should always provide room for growth, enhancement, and update
analysis over time. When additional divisions are added to the model, or other constraints and input assumptions are added at a later date, there should be room to maneuver. Another situation
involves data updating, where, in the future, previous sales forecasts have now become reality and the actual sales now replace the forecasts. The model should be able to accommodate this situation.
Providing the ability for data updating and model growth is where modeling strategy and experience count. Report and Model Printing. Always consider checking the overall model, results, summary, and
report pages for their print layouts. Use Excel’s File | Print Preview capability to set up the page appropriately for printing. Set up the headers and footers to reflect the dates of the analysis as
well as the model version for easy comparison later. Use links, automatic fields, and formulas whenever appropriate (e.g., the Excel formula "=TODAY()" is a volatile function that updates automatically to the current date whenever the workbook is recalculated or reopened).
Separate inputs, calculations, and results by creating different worksheets for different functions, describing input variables, naming input parameters, color coding inputs and results, providing
room for model growth and subsequent modifications, and considering report and model printing layouts.
Protect Workbook and Worksheets. Consider using spreadsheet protection (Tools | Protection) in your intermediate and final results summary sheet to prevent user tampering or accidental manipulation.
Passwords are also recommended here for more sensitive models.1 Hiding and Protecting Formulas. Consider setting cell properties to hide, lock, or both hide and lock cells (Format | Cells |
Protection), then protect the worksheet (Tools | Protection) to prevent the user from accidentally overriding a formula (by locking a cell and protecting the
A Guide to Model-Building Etiquette
sheet), or still allow the user to see the formula without the ability to irreparably break the model by deleting the contents of a cell (by locking but not hiding the cell and protecting the sheet),
or to prevent tampering with and viewing the formulas in the cell (by both locking and hiding the cell and then protecting the sheet).
Protect the models from user tampering at the workbook and worksheet levels through password protecting workbooks, or through hiding and protecting formulas in the individual worksheet cells.
Data Validation. Consider preventing the user from entering bad inputs through data validation (Data | Validation | Settings), where only
specific inputs are allowed. Figure 3.5 illustrates data validation for a cell accepting only positive inputs. The Edit | Copy and Edit | Paste Special functions can be used to replicate the data
validation if validation is chosen in the paste special command. Error Alerts. Provide error alerts to let the user know when an incorrect value is entered through data validation (Data | Validation
| Error Alert) as
FIGURE 3.5
Data validation dialog box.
FIGURE 3.6
Error message setup for data validation.
FIGURE 3.7
Error message for data validation.
shown in Figure 3.6. If the validation is violated, an error message box will be executed (Figure 3.7). Cell Warnings and Input Messages. Provide warnings and input messages when a cell is selected
where the inputs required are ambiguous (Data | Validation | Input Message). The message box can be set up to appear whenever the cell is selected, regardless of the data validation. This message box
can be used to provide additional information to the user about the specific input parameter or to provide suggested input values. Define All Inputs. Consider including a worksheet with named cells
and ranges, complete with their respective definitions and where each variable is used in the model.
Make the model user-friendly through data validation, error alerts, cell warnings, and input messages, as well as defining all the inputs required in the model.
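For comparison, the "positive numbers only" rule illustrated in Figure 3.5 can be expressed as a small guard function. This Python sketch (the function name and messages are ours, not Excel's) rejects anything that is not a strictly positive number:

```python
def validate_positive(raw):
    """Mimic a 'greater than zero' Data | Validation rule: accept only
    strictly positive numbers and reject everything else."""
    try:
        value = float(raw)
    except (TypeError, ValueError):
        raise ValueError(f"not a number: {raw!r}")
    if value <= 0:
        raise ValueError(f"must be positive, got {value}")
    return value

validate_positive("2.5")  # accepted, returns 2.5
```

The design point is identical to Excel's: reject bad input at the boundary, before it can silently corrupt downstream calculations.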
TRACK THE MODEL
Insert Comments. Consider inserting comments for key variables (Insert | Comment) for easy recognition and for quick reference. Comments can be easily copied into different cells through the Edit |
Paste Special | Comments procedure. Track Changes. Consider tracking changes if collaborating with other modelers (Tools | Track Changes | Highlight Changes). Tracking all changes is not only
important, but it is also a courtesy to other model developers to note the changes and tweaks that were made. Avoid Hard-Coding Values. Consider using formulas whenever possible and avoid hard-coding
numbers into cells other than assumptions and inputs. In complex models, it would be extremely difficult to track down where a model breaks because a few values are hard-coded instead of linked
through equations. If a value needs to be hard-coded, it is by definition an input parameter and should be listed as such. Use Linking and Embedding. Consider object linking and embedding of files
and objects (Edit | Paste Special) rather than using a simple paste function. This way, any changes in the source files can be reflected in the linked file. If linking between spreadsheets, Excel
automatically updates these linked sheets every time the target sheet is opened. However, to avoid the irritating dialog pop-ups to update links every time the model is executed, simply turn off the
warnings through Edit | Links | Startup Prompt.
Track the model by inserting comments, using the track changes functionality, avoiding hard-coded values, and using the linking and embedding functionality.
AUTOMATE THE MODEL WITH VBA
Visual Basic for Applications is a powerful Excel tool that can assist in automating a significant amount of work. Although detailed VBA coding is beyond the scope of this book, an introduction to some VBA applications is provided in the appendix to this chapter—A Primer on VBA Modeling and Writing Macros—specifically addressing the following six automation issues:
1. Consider creating VBA modules for repetitive tasks (Alt-F11 or Tools | Macro | Visual Basic Editor).
2. Add custom equations in place of complex and extended Excel equations.
3. Consider recording macros (Tools | Macro | Record New Macro) for repetitive tasks or calculations.
4. Consider placing automation forms in your model (View | Toolbar | Forms) and the relevant codes to support the desired actions.
5. Consider constraining users to only choosing specific inputs (View | Toolbar | Forms) and insert drop-list boxes and the relevant codes to support the desired actions.
6. Consider adding custom buttons and menu items on the user's model within Excel to locate and execute macros.
Use VBA to automate the model, including adding custom equations, macros, automation forms, and predefined buttons.
MODEL AESTHETICS
Units. Consider the input assumption’s units and preset them accordingly in the cell to avoid any confusion. For instance, if a discount-rate input cell is required, the inputs can either be typed in
as 20 or 0.2 to represent 20 percent. By avoiding a simple input ambiguity through preformatting the cells with the relevant units, user and model errors can be easily avoided. Magnitude. Consider
the input’s potential magnitude, where a large input value may obfuscate the cell’s view by using the cell’s default width. Change the format of the cell either to automatically reduce the font size
to accommodate the higher magnitude input (Format | Cells | Alignment | Shrink to Fit) or have the cell width sufficiently large to accommodate all possible magnitudes of the input. Text Wrapping and
Zooming. Consider wrapping long text in a cell (Format | Cells | Alignment | Wrap Text) for better aesthetics and view. This suggestion also applies to the zoom size of the spreadsheet. Remember that
zoom size is worksheet specific and not workbook specific. Merging Cells. Consider merging cells in titles (Format | Cells | Alignment | Merge Cells) for a better look and feel. Colors and Graphics.
Colors and graphics are an integral part of a model’s aesthetics as well as a functional piece to determine if a cell is an input, a calculation, or a result. A careful blend of background colors and
foreground graphics goes a long way in terms of model aesthetics. Grouping. Consider grouping repetitive columns or insignificant intermediate calculations (Data | Group and Outline | Group).
Hiding Rows and Columns. Consider hiding extra rows and columns (select the relevant rows and columns to hide by selecting their row or column headers, and then choose Format | Rows or Columns |
Hide) that are deemed as irrelevant intermediate calculations. Conditional Formatting. Consider conditional formatting such that if a cell’s calculated result is a particular value (e.g., positive
versus negative profits), the cell or font changes to a different color (Format | Conditional Formatting). Auto Formatting. Consider using Excel’s auto formatting for tables (Format | Auto Format).
Auto formatting will maintain the same look and feel throughout the entire Excel model for consistency. Custom Styles. The default Excel formatting can be easily altered, or alternatively, new styles
can be added (Format | Styles | New). Styles can facilitate the model-building process in that consistent formatting is applied throughout the entire model by default and the modeler does not have to
worry about specific cell formatting (e.g., shrink to fit and font size can be applied consistently throughout the model). Custom Views. In larger models where data inputs and output results are all
over the place, consider using custom views (View | Custom Views | Add). This custom view feature allows the user to navigate through a large model spreadsheet with ease, especially when navigational
macros are added to these views (see the appendix to this chapter—A Primer on VBA Modeling and Writing Macros—for navigating custom views using macros). In addition, different size zooms on areas of
interest can be created within the same spreadsheet through custom views.
Model aesthetics are preserved by considering the input units and magnitude, text wrapping and zooming views, cell merges, colors and graphics, grouping items, hiding excess rows and columns,
conditional formatting, auto formatting, custom styles, and custom views.
APPENDIX—A PRIMER ON VBA MODELING AND WRITING MACROS

The Visual Basic Environment (VBE)
In Excel, access the VBE by hitting Alt-F11 or Tools | Macro | Visual Basic Environment. The VBE looks like
Figure 3.8. Select the VBA project pertaining to the opened Excel file (in this case, it is the Risk Analysis.xls file).
FIGURE 3.8
Visual basic environment.
Click on Insert | Module and double-click on the Module icon on the left window to open the module. You are now ready to start coding in VBA.
Custom Equations and Macros

Two Basic Equations
The following example illustrates two basic equations. They are simple combination and permutation functions. Suppose that there are three variables,
A, B, and C. Further suppose that two of these variables are chosen randomly. How many pairs of outcomes are possible? In a combination, order is not important and the following three pairs of
outcomes are possible: AB, AC, and BC. In a permutation, order is important and matters; thus, the following six pairs of outcomes are possible: AB, AC, BA, BC, CA, and CB. The equations are:
$$\text{Combination} = \frac{(\text{Variable})!}{(\text{Choose})!\,(\text{Variable} - \text{Choose})!} = \frac{3!}{2!\,(3-2)!} = 3$$

$$\text{Permutation} = \frac{(\text{Variable})!}{(\text{Variable} - \text{Choose})!} = \frac{3!}{(3-2)!} = 6$$
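As a quick arithmetic cross-check of the two counts (3 combinations, 6 permutations), the same factorial formulas can be sketched in Python (purely for illustration; the book's own implementation is the VBA that follows):

```python
from math import factorial

def combine(variable, choose):
    # Order does not matter: n! / (k! (n - k)!)
    return factorial(variable) // (factorial(choose) *
                                   factorial(variable - choose))

def permute(variable, choose):
    # Order matters: n! / (n - k)!
    return factorial(variable) // factorial(variable - choose)

print(combine(3, 2), permute(3, 2))  # 3 6, matching AB/AC/BC and AB/AC/BA/BC/CA/CB
```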
If these two equations are widely used, then creating a VBA function will be more efficient and will avoid any unnecessary errors in larger models when Excel equations have to be created repeatedly.
For instance, the manually inputted equation will have to be: =FACT(A1)/(FACT(A2)*FACT(A1-A2)) as
compared to a custom function created in VBA where the function in Excel will now be =combine(A1,A2). The mathematical expression is exaggerated if the function is more complex, as will be seen
later. The VBA code to be entered into the previous module (Figure 3.8) for the two simple equations is:
Public Function Combine(Variable As Double, Choose As Double) As Double
    Combine = Application.Fact(Variable) / (Application.Fact(Choose) * _
        Application.Fact(Variable - Choose))
End Function

Public Function Permute(Variable As Double, Choose As Double) As Double
    Permute = Application.Fact(Variable) / _
        Application.Fact(Variable - Choose)
End Function

Once the code is entered, the functions
can be executed in the spreadsheet. The underscore at the end of a line of code indicates the continuation of the line of code on the next line. Figure 3.9 shows the spreadsheet environment with the
custom function. If multiple functions were entered, the user can also get access to those functions through the Insert | Function dialog wizard by choosing the user-defined category and scrolling
down to the relevant functions (Figure 3.10). The functions arguments box comes up for the custom function chosen (Figure 3.11), and entering the relevant inputs or linking to input cells can be
accomplished here. Following are the VBA codes for the Black–Scholes models for estimating call and put options. The equations for the Black–Scholes are shown below and are simplified to functions in
Excel named "BlackScholesCall" and "BlackScholesPut."

$$\text{Call} = S\,\Phi\!\left[\frac{\ln(S/X) + (r_f + \sigma^2/2)\,T}{\sigma\sqrt{T}}\right] - X e^{-r_f T}\,\Phi\!\left[\frac{\ln(S/X) + (r_f - \sigma^2/2)\,T}{\sigma\sqrt{T}}\right]$$

$$\text{Put} = X e^{-r_f T}\,\Phi\!\left[-\frac{\ln(S/X) + (r_f - \sigma^2/2)\,T}{\sigma\sqrt{T}}\right] - S\,\Phi\!\left[-\frac{\ln(S/X) + (r_f + \sigma^2/2)\,T}{\sigma\sqrt{T}}\right]$$
FIGURE 3.9
Excel spreadsheet with custom functions.
FIGURE 3.10
Insert function dialog box.
FIGURE 3.11
Function arguments box.
Public Function BlackScholesCall(Stock As Double, Strike As Double, _
    Time As Double, Riskfree As Double, Volatility As Double) As Double
    Dim D1 As Double, D2 As Double
    D1 = (Log(Stock / Strike) + (Riskfree + 0.5 * Volatility ^ 2) * _
        Time) / (Volatility * Sqr(Time))
    D2 = D1 - Volatility * Sqr(Time)
    BlackScholesCall = Stock * Application.NormSDist(D1) - Strike * _
        Exp(-Time * Riskfree) * Application.NormSDist(D2)
End Function

Public Function BlackScholesPut(Stock As Double, Strike As Double, _
    Time As Double, Riskfree As Double, Volatility As Double) As Double
    Dim D1 As Double, D2 As Double
    D1 = (Log(Stock / Strike) + (Riskfree + 0.5 * Volatility ^ 2) * _
        Time) / (Volatility * Sqr(Time))
    D2 = D1 - Volatility * Sqr(Time)
    BlackScholesPut = Strike * Exp(-Time * Riskfree) * _
        Application.NormSDist(-D2) - Stock * Application.NormSDist(-D1)
End Function

As an example, the function BlackScholesCall(100,100,1,5%,25%) results in approximately 12.34 and BlackScholesPut(100,100,1,5%,25%) results in approximately 7.46. Note that Log is a natural logarithm function in VBA and that Sqr is square root, and make sure there is a space before the underscore in the code. The underscore at the end of a
line of code indicates the continuation of the line of code on the next line.
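The standard Black–Scholes formulas above can be cross-checked outside Excel with a short Python sketch (Python is used purely for illustration; the function names here are ours, not the book's):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal cumulative distribution via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(s, x, t, rf, vol):
    d1 = (log(s / x) + (rf + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return s * norm_cdf(d1) - x * exp(-rf * t) * norm_cdf(d2)

def black_scholes_put(s, x, t, rf, vol):
    # d1 and d2 are identical to the call's
    d1 = (log(s / x) + (rf + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return x * exp(-rf * t) * norm_cdf(-d2) - s * norm_cdf(-d1)

call = black_scholes_call(100, 100, 1, 0.05, 0.25)  # ≈ 12.34
put = black_scholes_put(100, 100, 1, 0.05, 0.25)    # ≈ 7.46
```

Put–call parity (Call − Put = S − Xe^(−rf·T)) provides a quick sanity check that the two functions are mutually consistent.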
Form Macros
Another type of automation is form macros. In Excel, select View | Toolbars | Forms and the forms toolbar will appear. Click on the insert drop-list icon as shown in Figure 3.12 and drag
it into an area in the spreadsheet to insert the drop list. Then create a drop-list table as seen in Figure 3.13 (cells B10 to D17). Point at the drop list and use the right mouse click to select
Format Control | Control. Enter the input range as cells C11 to C15, cell link at C16, and five drop-down lines (Figure 3.14).
FIGURE 3.12
Forms icon bar.
FIGURE 3.13
Creating a drop-down box.
In Figure 3.13, the index column simply lists numbers 1 to n, where n is the total number of items in the drop-down list (in this example, n is 5). Here, the index simply converts the items
(annually, semiannually, quarterly, monthly, and weekly) into corresponding indexes. The choices column in the input range is the named elements in the drop list. The value column lists the variables
associated with the choice (semiannually means there are 2 periods in a year, or monthly means there are 12 periods in a year). Cell
FIGURE 3.14
Format object dialog box.
C16 is the choice of the user selection; that is, if the user chooses monthly on the drop list, cell C16 will become 4, and so forth, as it is linked to the drop list in Figure 3.14. Cell C17 in
Figure 3.13 is the equation =VLookup($C$16,$B$11:$D$15, 3) where the VLookup function will look up the value in cell C16 (the cell that changes in value depending on the drop-list item chosen) with
respect to the first column in the area B11:D15, matches the corresponding row with the same value as in cell C16, and returns the value in the third column (3). In Figure 3.13, the value is 12. In
other words, if the user chooses quarterly, then cell C16 will be 3, and cell C17 will be 4. Clearly, in proper model building, this entire table will be hidden somewhere out of the user’s sight
(placed in an extreme corner of the spreadsheet with its font color changed to match the background, making it disappear, or placed in a hidden worksheet). Only the drop list
will be shown and the models will link to cell C17 as an input parameter. This situation forces the user to choose only from a list of predefined inputs and prevents any accidental insertion of
invalid inputs.

Navigational VBA Codes
A simple macro to navigate to sheet "2. Analysis" is shown here. This macro can be written in the VBA environment or recorded through Tools | Macros | Record New Macro: perform the relevant navigational actions (i.e., clicking on the "2. Analysis" sheet and hitting the stop recording button), return to the VBA environment, and open up the newly recorded macro.

Sub MoveToSheet2()
    Sheets("2. Analysis").Select
End Sub

However, if custom views (View | Custom Views | Add) are created in Excel worksheets (to facilitate finding or viewing certain parts of
the model such as inputs, outputs, etc.), navigation can also be created through the following, where a custom view named "Results" had been previously created:

Sub CustomView()
    ActiveWorkbook.CustomViews("Results").Show
End Sub

Form buttons can then be created and these navigational codes can be attached to the buttons. For instance, click on the fourth icon in the forms
icon bar (Figure 3.12) and insert a form button in the spreadsheet and assign
FIGURE 3.15
Simple automated model.
the relevant macros created previously. (If the select macro dialog does not appear, right-click the form button and select Assign Macro.)

Input Boxes
Input boxes are also recommended for their ease
of use. The following illustrates some sample input boxes created in VBA, where the user is prompted to enter certain restrictive inputs in different steps or wizards. For instance, Figure 3.15
illustrates a simple sales commission calculation model, where the user inputs are the colored and boxed cells. The resulting commissions (cell B11 times cell B13) will be calculated in cell B14. The
user would start using the model by clicking on the Calculate form button. A series of input prompts will then walk the user through inputting the relevant assumptions (Figure 3.16). The code can
also be set up to check for relevant inputs, that is, sales commissions have to be between 0.01 and 0.99. The full VBA code is shown next. The code is first written in VBA, and then the form button
is placed in the worksheet that calls the VBA code.
FIGURE 3.16
Sample input box.
Sub UserInputs()
    Dim User As Variant, Today As String, Sales As Double, _
        Commissions As Double
    Range("B1").Select
    User = InputBox("Enter your name:")
    ActiveCell.FormulaR1C1 = User
    Range("B2").Select
    Today = InputBox("Enter today's date:")
    ActiveCell.FormulaR1C1 = Today
    Range("B5").Select
    Sales = InputBox("Enter the sales revenue:")
    ActiveCell.FormulaR1C1 = Sales
    Dim N As Double
    For N = 1 To 5
        ActiveCell.Offset(1, 0).Select
        Sales = InputBox("Enter the sales revenue for the following period:")
        ActiveCell.FormulaR1C1 = Sales
    Next N
    Range("B13").Select
    Commissions = 0
    Do While Commissions < 0.01 Or Commissions > 0.99
        Commissions = InputBox("Enter recommended commission rate between 1% and 99%:")
    Loop
    ActiveCell.FormulaR1C1 = Commissions
    Range("B1").Select
End Sub

Forms and Icons
Sometimes, for globally used macros and VBA scripts, a menu item or an icon can be added to the user’s spreadsheet. Insert a new menu item by clicking on Tools | Customize | Commands | New Menu and
dragging the New Menu item list to the Excel menu bar to a location right before the Help menu. Click on Modify Selection and rename the menu item accordingly (e.g., Risk Analysis). Also, an
ampersand (“&”) can be placed before a letter in the menu item name to underline the next letter such that the menu can be accessed through the keyboard by hitting the Alternate key and then the
corresponding letter key. Next, click on Modify Selection | Begin a Group and then drag the New Menu item list again to the menu bar,
FIGURE 3.17
Custom menu and icon.
but this time, right under the Risk Analysis group. Now, select this submenu item and click on Modify Selection | Name and rename it Run Commissions. Then, Modify Selection | Assign Macro and assign
it to the User Input macro created previously. Another method to access macros (other than using menu items or Tools | Macro | Macros, or Alt-F8) is to create an icon on the icon toolbar. To do this,
click on Tools | Customize | Toolbars | New. Name the new toolbar accordingly and drag it to its new location anywhere on the icon bar. Then, select the Commands | Macros | Custom Button. Drag the
custom button icon to the new toolbar location. Select the new icon on the toolbar and click on Modify Selection | Assign Macro. Assign the User Input macro created previously. The default button
image can also be changed by clicking on Modify Selection | Change Button Image and selecting the relevant icon accordingly, or from an external image file. Figure 3.17 illustrates the new menu item
(Risk Analysis) and the new icon in the shape of a calculator, where selecting either the menu item or the icon will evoke the User Input macro, which walks the user through the simple input wizard.
EXERCISES
1. Create an Excel worksheet with each of the following components activated:
   a. Cells in an Excel spreadsheet with the following data validations: no negative numbers are allowed, only positive integers are allowed, only numerical values are allowed.
   b. Create a form macro drop list (see the appendix to this chapter) with the following 12 items in the drop list: January, February, March, . . . December. Make sure the selection of any item in the drop list will change a corresponding cell's value.
2. Go through the VBA examples in the appendix to this chapter and re-create the following macros and functions for use in an Excel spreadsheet:
   a. Create a column of future sales with the following equation for future sales (Years 2 to 11): Future sales = (1 + RAND()) * (Past year's sales), for 11 future periods starting with the current year's sales of $100 (Year 1). Then, in VBA, create a macro using the For . . . Next loop to simulate this calculation 1,000 times and insert a form button to activate the macro in the Excel worksheet.
   b. Create the following income function in VBA for use in the Excel spreadsheet: Income = Benefits – Cost. Try out different benefits and cost inputs to make sure the function works properly.
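Exercise 2a can also be prototyped outside VBA. The following Python sketch (our illustration, not part of the book's Excel model) implements the same recursion and the 1,000-run loop:

```python
import random

def future_sales(start=100.0, periods=11, seed=0):
    """Year 1 holds the starting sales; every later year multiplies the
    previous year's sales by (1 + U[0,1)), mirroring (1 + RAND())."""
    rng = random.Random(seed)
    path = [start]
    for _ in range(periods - 1):
        path.append(path[-1] * (1 + rng.random()))
    return path

# The For...Next idea: repeat the whole calculation 1,000 times and
# average the terminal (Year 11) sales across the runs.
terminal_avg = sum(future_sales(seed=s)[-1] for s in range(1000)) / 1000
```

Because each yearly growth factor lies in [1, 2), sales can never shrink, and the average terminal value grows roughly like 100 × 1.5 per period compounded.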
Risk Quantification
On the Shores of Monaco
Monte Carlo simulation, named for the famous gambling capital of Monaco, is a very potent methodology. For the practitioner, simulation opens the door for solving difficult and complex but practical
problems with great ease. Perhaps the most famous early use of Monte Carlo simulation was by the Nobel physicist Enrico Fermi (sometimes referred to as the father of the atomic bomb) in 1930, when he
used a random method to calculate the properties of the newly discovered neutron. Monte Carlo methods were central to the simulations required for the Manhattan Project, where in the 1950s Monte
Carlo simulation was used at Los Alamos for early work relating to the development of the hydrogen bomb, and became popularized in the fields of physics and operations research. The Rand Corporation
and the U.S. Air Force were two of the major organizations responsible for funding and disseminating information on Monte Carlo methods during this time, and today there is a wide application of
Monte Carlo simulation in many different fields including engineering, physics, research and development, business, and finance. Simplistically, Monte Carlo simulation creates artificial futures by
generating thousands and even hundreds of thousands of sample paths of outcomes and analyzes their prevalent characteristics. In practice, Monte Carlo simulation methods are used for risk analysis,
risk quantification, sensitivity analysis, and prediction. An alternative to simulation is the use of highly complex stochastic closed-form mathematical models. For analysts in a company, taking
graduate-level advanced math and statistics courses is just not logical or practical. A brilliant analyst would use all available tools at his or her disposal to obtain the same answer the easiest
and most practical way possible. And in all cases, when modeled correctly, Monte Carlo simulation provides similar answers to the more mathematically elegant methods. In addition, there are many
real-life applications where closed-form models do not exist and the only recourse is to apply simulation methods. So, what exactly is Monte Carlo simulation and how does it work?
WHAT IS MONTE CARLO SIMULATION?
Today, fast computers have made possible many complex computations that were seemingly intractable in past years. For scientists, engineers, statisticians, managers,
business analysts, and others, computers have made it possible to create models that simulate reality and aid in making predictions, one of which is used in simulating real systems by accounting for
randomness and future uncertainties through investigating hundreds and even thousands of different scenarios. The results are then compiled and used to make decisions. This is what Monte Carlo
simulation is all about. Monte Carlo simulation in its simplest form is a random number generator that is useful for forecasting, estimation, and risk analysis. A simulation calculates numerous
scenarios of a model by repeatedly picking values from a user-predefined probability distribution for the uncertain variables and using those values for the model. As all those scenarios produce
associated results in a model, each scenario can have a forecast. Forecasts are events (usually with formulas or functions) that you define as important outputs of the model. Think of the Monte Carlo
simulation approach as picking golf balls out of a large basket repeatedly with replacement. The size and shape of the basket depend on the distributional input assumption (e.g., a normal
distribution with a mean of 100 and a standard deviation of 10, versus a uniform distribution or a triangular distribution) where some baskets are deeper or more symmetrical than others, allowing
certain balls to be pulled out more frequently than others. The number of balls pulled repeatedly depends on the number of trials simulated. For a large model with multiple related assumptions,
imagine the large model as a very large basket, where many baby baskets reside. Each baby basket has its own set of colored golf balls that are bouncing around. Sometimes these baby baskets are
linked with each other (if there is a correlation between the variables), forcing the golf balls to bounce in tandem, whereas in other uncorrelated cases, the balls are bouncing independently of one
another. The balls that are picked each time from these interactions within the model (the large basket) are tabulated and recorded, providing a forecast output result of the simulation.
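The golf-ball analogy can be made concrete with a minimal Monte Carlo loop. The Python sketch below draws from the normal distribution mentioned above (mean 100, standard deviation 10) and tabulates a forecast; the output formula 0.3x + 5 is a made-up placeholder, not something from the text:

```python
import random

rng = random.Random(42)

# Input assumption (the "basket" shape): a normal distribution with
# mean 100 and standard deviation 10, as in the example above.
trials = 100_000
draws = [rng.gauss(100, 10) for _ in range(trials)]

# Forecast: an output formula evaluated on every sampled scenario.
# The formula 0.3*x + 5 is an illustrative placeholder.
forecast = [0.3 * x + 5 for x in draws]

estimate = sum(forecast) / trials  # converges to 0.3*100 + 5 = 35
```

Each draw is one "golf ball"; the distribution of the tabulated forecasts, not just its average, is the simulation's real output.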
WHY ARE SIMULATIONS IMPORTANT?
An example of why simulation is important can be seen in the case illustration in Figures 4.1 and 4.2, termed the Flaw of Averages.1 The example is most certainly
worthy of more detailed study. It shows how an analyst may be misled into making the wrong decisions without the use of simulation. Suppose you are the owner of a shop that sells perishable goods and
you need to make a decision on the optimal inventory to have on hand. Your
FIGURE 4.1
The flaw of averages example. The figure shows the cost inputs (a perishable cost of $100 and a FedEx cost of $175, both per unit), 60 months of historical demand data (rows 19 through 57 hidden to conserve space), and a frequency histogram of actual demand levels. The figure's narrative reads: "Your company is a retailer in perishable goods and you were tasked with finding the optimal level of inventory to have on hand. If your inventory exceeds actual demand, there is a $100 perishable cost, while a $175 FedEx cost is incurred if your inventory is insufficient to cover the actual level of demand. These costs are on a per-unit basis. Your first inclination is to collect historical demand data, as seen on the right, for the past 60 months. You then take a simple average, which was found to be 5 units. Hence, you select 5 units as the optimal inventory level. You have just committed a major mistake called the Flaw of Averages! Being the analyst, what must you then do?" The visible demand observations are 12, 11, 7, 0, 0, 2, 7, 0, 11, 12, 0, 9, 3, 5, 0, 2, 1, 10 (months 1–18) and 3, 2, 17 (months 58–60).
new-hire analyst was successful in downloading 5 years' worth of monthly historical sales levels and she estimates the average to be five units. You then make the decision that the optimal inventory
to have on hand is five units. You have just committed the flaw of averages. As the example shows, the
FIGURE 4.2
Fixing the flaw of averages with simulation. The figure shows a simulated average actual demand of 8.53 units against an inventory held of 9.00 units, with the same per-unit costs as before (perishable $100, FedEx $175), a minimum total cost of $46.88, a simulated demand range from 7.21 to 9.85, and a simulated cost range from 178.91 to 149. The figure's narrative reads: "The best method is to perform a nonparametric simulation where we use the actual historical demand levels as inputs to simulate the most probable level of demand going forward, which we found to be 8.53 units. Given this demand, the lowest cost is obtained through a trial inventory of 9 units, a far cry from the original Flaw of Averages estimate of 5 units." The figure's trial-inventory table is:

Trial Inventory   Total Cost
1                 $1,318
2                 $1,143
3                 $968
4                 $793
5                 $618
6                 $443
7                 $268
8                 $93
9                 $47
10                $147
11                $247
12                $347
13                $447
14                $547
15                $647
16                $747

A companion chart, "Simulated Distribution of Total Cost," plots total cost (from $0 to $1,400) against trial inventory, bottoming out at 9 units.
obvious reason why this error occurs is that the distribution of historical demand is highly skewed while the cost structure is asymmetrical. For example, suppose you are in a meeting, and your boss
asks what everyone made last year. You take a quick poll and realize that the salaries range from $60,000 to $150,000. You perform a quick calculation and find the average to be $100,000. Then, your
boss tells you that he made $20 million last year! Suddenly, the average for the group becomes $1.5 million. This value of $1.5 million clearly in no way represents how much each of your peers made
last year. In this case, the median may be more appropriate. Here you see that simply using the average will provide highly misleading results.2 Continuing with the example, Figure 4.2 shows how the
right inventory level is calculated using simulation. The approach used here is called nonparametric bootstrap simulation. It is nonparametric because in this simulation approach, no distributional
parameters are assigned. Instead of assuming some preset distribution (normal, triangular, lognormal, or the like) and its required parameters (mean, standard deviation, and so forth) as
required in a Monte Carlo parametric simulation, nonparametric simulation uses the data themselves to tell the story. Imagine that you collect 5 years' worth of historical demand levels and write down
the demand quantity on a golf ball for each month. Throw all 60 golf balls into a large basket and mix the basket randomly. Pick a golf ball out at random and write down its value on a piece of
paper, then replace the ball in the basket and mix the basket again. Do this 60 times and calculate the average. This process is a single grouped trial. Perform this entire process several thousand
times, with replacement. The distribution of these thousands of averages represents the outcome of the simulation forecast. The expected value of the simulation is simply the average value of these
thousands of averages. Figure 4.2 shows an example of the distribution stemming from a nonparametric simulation. As you can see, the optimal inventory rate that minimizes carrying costs is nine
units, far from the average value of five units previously calculated in Figure 4.1. Clearly, each approach has its merits and disadvantages. Nonparametric simulation, which can be easily applied
using Risk Simulator’s custom distribution,3 uses historical data to tell the story and to predict the future. Parametric simulation, however, forces the simulated outcomes to follow well-behaved distributions, which is desirable in most cases. Instead of having to worry about cleaning up any messy data (e.g., outliers and nonsensical values) as is required for nonparametric simulation, parametric simulation starts fresh every time.
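The golf-ball procedure described above maps directly onto a resampling loop. The demand data below are synthetic stand-ins for the 5 years of monthly history:

```python
import random
import statistics

# A minimal sketch of the golf-ball bootstrap: resample 60 historical
# demand values with replacement, average each resample, and repeat
# thousands of times. The demand numbers here are made-up placeholders.
random.seed(42)
historical_demand = [random.randint(5, 12) for _ in range(60)]  # 5 years, monthly

def bootstrap_means(data, num_sets=5000):
    means = []
    for _ in range(num_sets):
        resample = [random.choice(data) for _ in data]  # draw with replacement
        means.append(statistics.mean(resample))
    return means

means = bootstrap_means(historical_demand)
# The expected value of the simulation is the average of these averages;
# it converges on the sample mean of the raw historical data.
print(round(statistics.mean(means), 2))
```

The distribution of `means` is the simulation forecast; its spread also gives a direct read on the uncertainty around the expected demand.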
Monte Carlo simulation is a type of parametric simulation, where specific distributional parameters are required before a simulation can begin. The alternative approach is nonparametric simulation
where the raw historical data is used to tell the story and no distributional parameters are required for the simulation to run.
COMPARING SIMULATION WITH TRADITIONAL ANALYSES

Figure 4.3 illustrates some traditional approaches used to deal with uncertainty and risk: performing sensitivity analysis, scenario analysis, and probabilistic scenarios. Monte Carlo simulation can be seen as the natural next step in uncertainty and risk analysis, extending these approaches. Figure 4.4 shows a more advanced use
FIGURE 4.3 Point estimates, sensitivity analysis, scenario analysis, and simulation.

Point Estimates. This is a simple example of a Point Estimate approach. The issues that arise may include the risk of how confident you are in the unit sales projections, the sales price, and the variable unit cost.

   Unit Sales          10
   Unit Price          $10
   Total Revenue       $100   (10 units × $10 per unit)
   Unit Variable Cost  $5
   Fixed Cost          $20
   Total Cost          $70    ($20 Fixed + ($5 × 10) Variable)
   Net Income          $30    ($100 − $70)

Recall the Flaw of Averages example, where a simple point estimate could yield disastrous conclusions. Since the bottom-line Net Income is the key financial performance indicator here, any uncertainty in future sales volume will be impounded into the Net Income calculation. How much faith do you have in your calculation based on a simple point estimate?

Sensitivity Analysis. Here, we can make unit changes to the variables in our simple model to see the final effects of such a change. Looking at the simple example, we know that only Unit Sales, Unit Price, and Unit Variable Cost can change, since Total Revenue, Total Cost, and Net Income are calculated values, while Fixed Cost is assumed to be fixed and unchanging regardless of the number of units sold or the sales price. Changing each of these three variables by one unit shows that, from the original Net Income of $30, Net Income increases $5 (to $35) when Unit Sales rises by one unit, increases $10 (to $40) when Unit Price rises by $1, and decreases $10 (to $20) when Unit Variable Cost rises by $1. Hence, we know that Unit Price has the most positive impact on the Net Income bottom line and Unit Variable Cost the most negative impact. In terms of making assumptions, we know that additional care must be taken when forecasting and estimating these variables. However, we are still in the dark concerning which set of sensitivity results we should be looking at or using.

Scenario Analysis. In order to provide an added element of variability, using the simple example above, you can perform a Scenario Analysis, where you change the values of key variables by certain units given certain assumed scenarios. For instance, you may assume three economic scenarios in which unit sales and unit sale prices vary. Under a good economic condition, unit sales go up to 14 at $11 per unit. Under a nominal economic scenario, unit sales stay at 10 units at $10 per unit. Under a bleak economic scenario, unit sales decrease to 8 units but the price per unit stays at $10.

                        Good Economy   Average Economy   Bad Economy
   Unit Sales                14              10               8
   Unit Price               $11             $10             $10
   Total Revenue           $154            $100             $80
   Unit Variable Cost        $5              $5              $5
   Fixed Cost               $20             $20             $20
   Total Cost               $90             $70             $60
   Net Income               $64             $30             $20

Looking at the Net Income results, we have $64, $30, and $20. The problem here is that the variation is too large. Which condition do I think will most likely occur, and which result do I use in my budget forecast for the firm? Although Scenario Analysis is useful in ascertaining the impact of different conditions, both advantageous and adverse, the analysis provides little insight into which result to use.

Probabilistic Scenario Analysis. We can always assign probabilities that each scenario will occur, creating a Probabilistic Scenario Analysis, and simply calculate the Expected Monetary Value (EMV) of the forecasts. The results here are more robust and reliable than a simple scenario analysis, since we have collapsed the entire range of potential outcomes of $64, $30, and $20 into a single expected value. This value is what you would expect to get on average.

                     Probability   Net Income
   Good Economy          35%         $64.00
   Average Economy       40%         $30.00
   Bad Economy           25%         $20.00
   EMV                               $39.40

Simulation Analysis. Looking at the original model, we know through Sensitivity Analysis that Unit Sales, Unit Price, and Unit Variable Cost are three highly uncertain variables. We can then very easily simulate these three unknowns thousands of times (based on certain distributional assumptions) to see what the final Net Income value looks like. By performing the simulation thousands of times, we essentially perform thousands of sensitivity analyses and scenario analyses given different sets of probabilities, all set in the original simulation assumptions (the types of probability distributions, the parameters of the distributions, and which variables to simulate). Discussions about the types of distributional assumptions to use and the actual simulation approach appear later. The results calculated from the simulated distribution of Net Income can then be interpreted as follows:

   Average                   $40.04
   Median                    $39.98
   Mode                      $46.63
   Standard Deviation         $8.20
   95% Confidence Interval   between $24.09 and $56.16
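As a quick check of the EMV arithmetic, a sketch using the scenario probabilities and Net Income outcomes stated above:

```python
# Expected Monetary Value for the three-scenario example: Net Income of
# $64, $30, and $20 weighted by the assumed probabilities 35%, 40%, 25%.
scenarios = {
    "good economy": (0.35, 64.0),
    "average economy": (0.40, 30.0),
    "bad economy": (0.25, 20.0),
}
emv = sum(p * net_income for p, net_income in scenarios.values())
print(round(emv, 2))  # 39.4
```

The single expected value of $39.40 is what the probabilistic scenario analysis collapses the three outcomes into.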
A Simple Simulation Example. Here we see the effects of performing a simulation of stock price paths following a Geometric Brownian Motion model for daily closing prices. The model's inputs are the mean, sigma, timing, and a starting value of 100; each simulated day's closing price is computed from the previous day's price and a randomly drawn normal deviate over 250 trading days (in the worksheet, rows 31 through 246 have been hidden to conserve space). Three sample paths are shown (Simulated Stock Price Paths I, II, and III). In reality, thousands of simulations are performed and their distributional properties are analyzed; we need to perform many simulations to obtain a valid distribution. Frequently, the average closing prices of these thousands of simulations are analyzed, based on these simulated price paths.

The thousands of simulated price paths are then tabulated into probability distributions. Shown here are sample distributions at three different points in time, for periods 1, 20, and 250; there will be a total of 250 distributions, one for each time period, corresponding to the number of trading days in a year. Each forecast frequency chart is based on 5,000 trials: the average for Period 1 has a 95.00% certainty range from 74.05 to 125.81, the average for Period 20 has a 90.00% certainty range from 83.53 to 127.51, and the average for Period 250 has a 99.02% certainty range from 77.38 to +Infinity.

We can also analyze each of these time-specific probability distributions and calculate relevant, statistically valid confidence intervals for decision-making purposes. We can then graph the confidence intervals (lower and upper bounds) together with the expected values of each forecasted time period to trace the expected price path. Notice that as time increases, the confidence interval widens, since there will be more risk and uncertainty as more time passes.

FIGURE 4.4 Conceptualizing the lognormal distribution.
of Monte Carlo simulation for forecasting.4 The examples in Figure 4.4 show how complicated Monte Carlo simulation can become, depending on its use. The enclosed CD-ROM's Risk Simulator software has a stochastic process module that applies some of these more complex stochastic forecasting models, including Brownian Motion, mean-reversion, and random-walk models.
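The daily Geometric Brownian Motion recursion behind Figure 4.4 can be sketched directly. The drift and volatility values below are illustrative assumptions; the figure's actual Mean and Sigma inputs are not given in the text:

```python
import math
import random

# Sketch of a Geometric Brownian Motion daily price path. The drift (mu)
# and volatility (sigma) are illustrative assumptions, not the book's inputs.
def gbm_path(s0=100.0, mu=0.15, sigma=0.30, days=250, rng=None):
    rng = rng or random.Random(7)
    dt = 1.0 / days
    prices = [s0]
    for _ in range(days):
        z = rng.normalvariate(0.0, 1.0)  # the "normal deviate" per trading day
        # Discretized GBM step:
        # S(t+dt) = S(t) * exp((mu - sigma^2/2)*dt + sigma*sqrt(dt)*z)
        step = (mu - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z
        prices.append(prices[-1] * math.exp(step))
    return prices

# Simulate many paths and inspect the distribution of terminal prices.
paths = [gbm_path(rng=random.Random(seed)) for seed in range(1000)]
closing = [p[-1] for p in paths]
expected_close = sum(closing) / len(closing)
# Under GBM the expected terminal price is s0*exp(mu*T), i.e. about
# 100 * e^0.15 ~ 116.2; the simulated average should land near that.
print(round(expected_close, 1))
```

Sorting `closing` and reading off percentiles gives the widening confidence intervals the figure annotations describe.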
USING RISK SIMULATOR AND EXCEL TO PERFORM SIMULATIONS

Simulations can be performed using Excel. However, more advanced simulation packages such as Risk Simulator perform the task more efficiently and
have additional features preset in each simulation. We now present both Monte Carlo parametric simulation and nonparametric bootstrap simulation using Excel and Risk Simulator. The examples in
Figures 4.5 and 4.6 are created using Excel to perform a limited number of simulations on a set of probabilistic assumptions. We assume that having performed a series of scenario analyses, we obtain
a set of nine resulting values, complete with their respective probabilities of occurrence. The first step in setting up a simulation in Excel for such a scenario analysis is to understand the
function “RAND()” within Excel. This function is simply a random number generator that Excel uses to create random numbers from a uniform distribution between 0 and 1. This 0-to-1 range is then translated into ranges, or bins, using the probabilities assigned in our assumptions. For instance, if the value $362,995 occurs with a 55 percent probability, we can create a bin with a range of 0.00 to 0.55.
Similarly, we can create a bin range of 0.56 to 0.65 for the next value of $363,522, which occurs 10 percent of the time, and so forth. Based on these ranges and bins, the nonparametric simulation
can now be set up. Figure 4.5 illustrates an example with 5,000 sets of trials. Each set of trials is simulated 100 times; that is, in each simulation trial set, the original numbers are picked
randomly with replacement by using the Excel formula VLOOKUP(RAND(), $D$16:$F$24, 3), which picks up the third column of data from the D16 to F24 area by matching the results from the RAND() function
and data from the first column. The average of the data sampled is then calculated for each trial set. The distribution of these 5,000 trial sets’ averages is obtained and the frequency distribution
is shown at the bottom of Figure 4.5. According to the Central Limit Theorem, the average of these sample averages approaches the true population mean in the limit. In addition, the distribution will most likely approach normality when a sufficient number of trials is performed.
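The same bin-lookup simulation can be sketched outside Excel. The values, probabilities, and trial counts below are taken from the example; the cumulative bin edges mirror the Step 2 table:

```python
import random
import statistics

# Python analogue of the Excel VLOOKUP(RAND(), ...) bin lookup: draw a
# uniform number in [0, 1) and map it to a value via cumulative-probability
# bins, average 100 draws per trial set, and repeat for 5,000 sets.
values = [362995, 363522, 252094, 122922, 23572, 305721, 61877, 147322, 179360]
probs  = [0.55,   0.10,   0.10,   0.10,   0.03,  0.03,   0.03,  0.03,   0.03]

cum = []
running = 0.0
for v, p in zip(values, probs):
    running += p
    cum.append((running, v))  # upper edge of each bin, as in Step 2

def draw(rng):
    u = rng.random()
    for upper, v in cum:
        if u <= upper:
            return v
    return values[-1]  # guard against floating-point round-off at 1.0

rng = random.Random(1)
set_averages = [statistics.mean(draw(rng) for _ in range(100)) for _ in range(5000)]
# By the Central Limit Theorem these set averages cluster tightly around
# the probability-weighted mean of the assumed values (roughly 295,000 here).
print(round(statistics.mean(set_averages)))
```

The histogram of `set_averages` is the frequency distribution at the bottom of Figure 4.5, and it looks approximately normal even though the underlying values are not.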
Simulation (Probability Assumptions)

Step 1: The Assumptions. Here are the assumed values and their corresponding probabilities of occurrence; the probabilities have to sum to 100%.

   Value      Probability
   362,995       55%
   363,522       10%
   252,094       10%
   122,922       10%
    23,572        3%
   305,721        3%
    61,877        3%
   147,322        3%
   179,360        3%

Step 2: The Table Setup. We then translate the assumed values into a set of bins on a random number bounded by [0, 1]. For instance, the probability of drawing a random number between 0.00 and 0.55 is 55%, and between 0.56 and 0.65 is 10%, and so forth:

   Minimum   Maximum   Implied Value
    0.00      0.55       362,995
    0.56      0.65       363,522
    0.66      0.75       252,094
    0.76      0.85       122,922
    0.86      0.88        23,572
    0.89      0.91       305,721
    0.92      0.94        61,877
    0.95      0.97       147,322
    0.98      1.00       179,360

Step 3: Simulate. Each draw uses the formula VLOOKUP(RAND(), $D$16:$F$24, 3). Simulate this for 100 trials and take the average; then repeat for several thousand sets, taking the average of every set (the worksheet shows sample Sets 1 through 5,000, with trial rows 13 to 94 hidden to conserve space). Then, using these thousands of simulated set averages, create a probability distribution and calculate its corresponding descriptive statistics (mean, standard deviation, confidence intervals, probabilities, et cetera). The descriptive statistics reported for the simulated output are: Mean 279.50, Median 279.34, Mode 313.66, Standard Deviation 20.42, Skew 0.05, 5th Percentile 245.34, 10th Percentile 253.16, 90th Percentile 306.00, and 95th Percentile 312.71.

FIGURE 4.5 Simulation using Excel I.
Clearly, running this nonparametric simulation manually in Excel is fairly tedious. An alternative is to use Risk Simulator’s custom distribution, which does the same thing but in an infinitely
faster and more efficient fashion. Chapter 6, Pandora’s Tool Box, illustrates some of these simulation tools in more detail. Nonparametric simulation is a very powerful tool but it is only applicable
if data are available. Clearly, the more data there are, the higher the level of precision and confidence in the simulation results. However, when no data exist or when a valid systematic process
underlies the data set (e.g., physics, engineering, economic relationship), parametric simulation may be more appropriate, where exact probabilistic distributions are used.
Example Simulations Using Excel

Recall that previously we had three highly uncertain variables on which we would like to perform a Monte Carlo simulation: Unit Sales, Unit Price, and Unit Variable Cost. Before starting the simulation, we first need to make several distributional assumptions about these variables. Using historical data, we have ascertained that historical sales follow a Normal Distribution with a mean of 10.5 and a standard deviation of 4.25 units. In addition, we have seen that Unit Price has historically fluctuated between $5 and $15, with an almost equal probability of any price in between occurring (a Uniform Distribution). Finally, management came up with a set of Unit Variable Cost assumptions with their corresponding probabilities of occurrence:

   Probability   Variable Cost      Bin Min   Bin Max   Variable Cost
      0.3            $3               0.0       0.3         $3
      0.5            $4               0.4       0.8         $4
      0.2            $5               0.9       1.0         $5

Using the assumptions given, we set up the simulation model around the base case (Unit Sales 10, Unit Price $10, Total Revenue $100, Unit Variable Cost $5, Fixed Cost $20, Total Cost $70, Net Income $30), with the Excel formulas NORMINV(RAND(), 10.5, 4.25) for Unit Sales, RAND()*(15-5)+5 for Unit Price, and VLOOKUP(RAND(), $H$19:$J$21, 3) for Unit Variable Cost. Notice that for each simulation trial, a new Unit Sales, Unit Price, and Unit Variable Cost are obtained, and hence a new Net Income is calculated; for example, one sample trial draws Unit Sales of 5.65 at a Unit Price of $12.50 (Total Revenue $71), and another draws 14.12 units at $5.49 (Total Revenue $78). The new levels of sales, price, and cost are obtained based on the distributional assumptions alluded to above. We perform these trials several thousand times to obtain a probability distribution of the outcomes; after thousands of combinations of sales, price, and cost, we obtain several thousand calculated Net Income values, which were shown previously in the probability histogram entitled Simulated Distribution of Net Income.

FIGURE 4.6 Simulation using Excel II.
The RAND() function in Excel is used to generate random numbers for a uniform distribution between 0 and 1. RAND()*(B-A)+A is used to generate random numbers for a uniform distribution between A and
B. NORMSINV(RAND()) generates random numbers from a standard normal distribution with mean of zero and variance of one.
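These three Excel formulas have direct Python analogues; the last one is an instance of inverse-transform sampling, sketched here:

```python
import random
import statistics

# Python analogues of the Excel formulas described above:
#   RAND()            -> rng.random(), uniform on [0, 1)
#   RAND()*(B-A)+A    -> uniform on [A, B)
#   NORMSINV(RAND())  -> inverse-transform sampling from the standard normal
rng = random.Random(0)

def rand():
    return rng.random()

def uniform_between(a, b):
    return rand() * (b - a) + a  # RAND()*(B-A)+A

def standard_normal():
    # statistics.NormalDist().inv_cdf is the NORMSINV equivalent
    return statistics.NormalDist().inv_cdf(rand())

draws = [standard_normal() for _ in range(20000)]
# The sample mean should be near 0 and the sample standard deviation near 1.
print(round(statistics.mean(draws), 2), round(statistics.stdev(draws), 2))
```

Feeding a uniform draw through the inverse cumulative distribution function is exactly why NORMSINV(RAND()) yields standard normal random numbers.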
Using Excel to perform simulations is easy and effective for simple problems. However, when more complicated problems arise, such as the one to be presented next, the use of more specialized
simulation packages is warranted. Risk Simulator is such a package. In the example shown in Figure 4.7, the cells for “Revenues,” “Opex,” “FCF/EBITDA Multiple,” and “Revenue Growth Rates” (dark gray)
are the assumption cells, where we enter our distributional input assumptions, such as the type of distribution the
Monte Carlo Simulation on Financial Analysis. Four projects (A through D) are modeled over 2001 to 2005, each with rows for Revenues, Opex/Revenue Multiple, Operating Expenses, EBITDA, FCF/EBITDA Multiple, Free Cash Flows, Initial Investment, and Revenue Growth Rates. Project A, for example:

   Year                    2001    2002    2003    2004    2005
   Revenues               $1,010  $1,111  $1,233  $1,384  $1,573
   Opex/Revenue Multiple    0.09    0.10    0.11    0.12    0.13
   Operating Expenses        $91    $109    $133    $165    $210
   EBITDA                   $919  $1,002  $1,100  $1,219  $1,363
   FCF/EBITDA Multiple      0.20    0.25    0.31    0.40    0.56
   Free Cash Flows          $187    $246    $336    $486    $760

Projects B, C, and D follow the same layout, with initial investments of $400, $1,100, and $750, respectively (Project A's initial investment is $1,200). Summary metrics for each project:

   Metric                         Project A   Project B   Project C   Project D
   NPV                               $126        $149         $29         $26
   IRR                              15.68%      33.74%      15.99%      21.57%
   Risk-Adjusted Discount Rate      12.00%      19.00%      15.00%      20.00%
   Growth Rate                       3.00%       3.75%       5.50%       1.50%
   Terminal Value                   $8,692      $2,480      $7,935      $2,648
   Terminal Risk Adjustment         30.00%      30.00%      30.00%      30.00%
   Discounted Terminal Value        $2,341        $668      $2,137        $713
   Terminal to NPV Ratio             18.52        4.49       74.73       26.98
   Payback Period                     3.89        2.83        3.88        3.38
   Simulated Risk Value               $390        $122         $53         $56

Portfolio summary:

   Measure               Project A   Project B   Project C   Project D    Total
   Implementation Cost     $1,200        $400      $1,100        $750    $3,450
   Sharpe Ratio              0.02        0.31        0.19        0.17      0.17
   Weight                   5.14%      25.27%      34.59%      35.00%   100.00%
   Project Cost               $62        $101        $380        $263      $806
   Project NPV                 $6         $38         $10          $9       $63
   Risk Parameter             29%         15%         21%         17%       28%
   Payback Period            3.89        2.83        3.88        3.38      3.49
   Technology Level             5           3           2           4       3.5
   Tech Mix                  0.26        0.76        0.69        1.40      3.11

   Constraint          Lower Barrier   Upper Barrier
   Budget                    $0             $900
   Payback Mix              0.10            1.00
   Technology Mix           0.40            4.00
   Per Project Mix            5%              35%

FIGURE 4.7 Simulation using Risk Simulator.
variable follows and what the parameters are. For instance, we can say that revenues follow a normal distribution with a mean of $1,010 and a standard deviation of $100, based on analyzing historical
revenue data for the firm. The net present value (NPV) cells are the forecast output cells, that is, the results of these cells are the results we ultimately wish to analyze. Refer to Chapter 5, Test
Driving Risk Simulator, for details on setting up and getting started with using the Risk Simulator software.
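As a rough sketch of the assumption-cell-to-forecast-cell loop the software automates, the Normal($1,010, $100) revenue assumption comes from the text, while the cash-flow margin, horizon, and discount rate below are illustrative stand-ins rather than the full Figure 4.7 model:

```python
import random

# Simplified sketch of simulating an NPV forecast cell. Revenues are
# drawn from Normal(mean=$1,010, sd=$100) as stated in the text; the
# free-cash-flow margin, 5-year horizon, and 12% discount rate are
# illustrative assumptions, not the book's Project A model.
rng = random.Random(123)

def simulate_npv(initial_investment=1200.0, years=5, fcf_margin=0.35, rate=0.12):
    npv = -initial_investment
    for t in range(1, years + 1):
        revenue = rng.normalvariate(1010.0, 100.0)  # the assumption cell
        fcf = revenue * fcf_margin                  # stand-in cash-flow model
        npv += fcf / (1.0 + rate) ** t
    return npv

npvs = sorted(simulate_npv() for _ in range(5000))
mean_npv = sum(npvs) / len(npvs)
p5, p95 = npvs[249], npvs[4749]  # rough 90% confidence interval
print(round(mean_npv), round(p5), round(p95))
```

The sorted `npvs` list plays the role of the forecast chart: its mean is the expected NPV and its percentiles give the certainty ranges the software reports.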
QUESTIONS

1. Compare and contrast parametric and nonparametric simulation.
2. What is a stochastic process (e.g., Brownian Motion)?
3. What does the RAND() function do in Excel?
4. What does the NORMSINV() function do in Excel?
5. What happens when both functions are used together, that is, NORMSINV(RAND())?
Test Driving Risk Simulator
This chapter provides the novice risk analyst with an introduction to the Risk Simulator software for performing Monte Carlo simulation, a trial version of which is included in the book's CD-ROM. The
chapter begins by illustrating what Risk Simulator does and what steps are taken in a Monte Carlo simulation, as well as some of the more basic elements in a simulation analysis. The chapter then
continues with how to interpret the results from a simulation and ends with a discussion of correlating variables in a simulation as well as applying precision and error control. As software versions
with new enhancements are continually released, please review the software's user manual for more up-to-date details on using the latest version. Risk Simulator version 1.1 is a Monte Carlo simulation, forecasting, and optimization package. It is written in Microsoft .NET C# and functions together with Excel as an add-in. This software is also compatible with, and often used with, the Real Options Super Lattice Solver software (see Chapters 12 and 13), both developed by the author. The different functions or modules in the two software packages are:
■ The Simulation Module allows you to run simulations in your existing Excel-based models, generate and extract simulation forecasts (distributions of results), perform distributional fitting (automatically finding the best-fitting statistical distribution), compute correlations (maintaining relationships among simulated random variables), identify sensitivities (creating tornado and sensitivity charts), test statistical hypotheses (finding statistical differences between pairs of forecasts), run bootstrap simulation (testing the robustness of result statistics), and run custom and nonparametric simulations (simulations that use historical data without specifying any distributions or their parameters, or that apply expert opinion forecasts when no data exist).
■ The Forecasting Module can be used to generate automatic time-series forecasts (with and without seasonality and trend), multivariate regressions (modeling relationships among variables), nonlinear extrapolation (curve fitting), stochastic processes (random walks, mean-reversions, jump-diffusion, and mixed processes), and Box-Jenkins ARIMA (econometric forecasts).
■ The Optimization Module is used for optimizing multiple decision variables subject to constraints to maximize or minimize an objective. It can be run as a static optimization, as a dynamic optimization under uncertainty together with Monte Carlo simulation, or as a stochastic optimization. The software can handle linear and nonlinear optimizations with integer and continuous variables.
■ The Real Options Super Lattice Solver is a standalone software package that complements Risk Simulator, used for solving simple to complex real options problems. See Chapters 12 and 13 for more details on the concept, software, and applications of real options analysis.
GETTING STARTED WITH RISK SIMULATOR

To install the software, insert the accompanying CD-ROM, click on the Install Risk Simulator link, and follow the onscreen instructions. You will need to be online to download the latest version of the software. The software requires Windows 2000 or XP, administrative privileges, and Microsoft .NET Framework 1.1 installed on the computer. Most new computers
come with Microsoft .NET Framework 1.1 already preinstalled. However, if an error message pertaining to requiring .NET Framework 1.1 occurs during the installation of Risk Simulator, exit the
installation. Then, install the relevant .NET Framework 1.1 software also included in the CD (found in the DOT NET Framework folder). Complete the .NET installation, restart the computer, and then
reinstall the Risk Simulator software. Once installation is complete, start Microsoft Excel, and if the installation was successful, you should see an additional Simulation item on the menu bar in
Excel and a new icon bar as seen in Figure 5.1. Figure 5.2 shows the icon toolbar in more detail. You are now ready to start using the software. The following sections provide step-by-step
instructions for using the software. As the software is continually updated and improved, the examples in this book might be slightly different than the latest version downloaded from the Internet.
There is a default 30-day trial license file that comes with the software. To obtain a full corporate license, please contact the author’s firm, Real Options Valuation, Inc., at
[email protected]
. Professors at accredited universities can obtain complimentary renewable semester-long copies of the software both for themselves and for installation in computer labs if both the software and this
book are adopted and used in an entire course.
FIGURE 5.1
Risk Simulator menu and icon toolbar.
RUNNING A MONTE CARLO SIMULATION

Typically, to run a simulation in your existing Excel model, the following steps must be performed:

1. Start a new or open an existing simulation profile.
2. Define input assumptions in the relevant cells.
3. Define output forecasts in the relevant cells.
4. Run the simulation.
5. Interpret the results.
FIGURE 5.2 Risk Simulator icon toolbar. The icons include New Profile, Edit Profile, Set Input Assumption, Set Output Forecast, Copy, Paste, Delete, Run Simulation, Step Simulation, Reset Simulation, Multiple Regression, Nonlinear Extrapolation, Stochastic Processes, Time Series Analysis, Distribution Fitting, Nonparametric Bootstrap, Hypothesis Testing, Sensitivity Analysis, Tornado Analysis, Optimization, and Online Help.
If desired, and for practice, open the example file called Basic Simulation Model and follow along the examples below on creating a simulation. The example file can be found on the start menu at
Start | Real Options Valuation | Risk Simulator | Examples.
1. Starting a New Simulation Profile

To start a new simulation, you must first create a simulation profile. A simulation profile contains a complete set of instructions on how you would like to run a
simulation, that is, all the assumptions, forecasts, simulation run preferences, and so forth. Having profiles facilitates creating multiple scenarios of simulations; that is, using the same exact
model, several profiles can be created, each with its own specific simulation assumptions, forecasts, properties, and requirements. The same analyst can create different test scenarios using
different distributional assumptions and inputs or multiple users can test their own assumptions and inputs on the same model. Instead of having to make duplicates of the same model, the same model
can be used and different simulations can be run through this model profiling process. The following list provides the procedure for starting a new simulation profile:

1. Start Excel and create a new or open an existing model (you can use the Basic Simulation Model example to follow along).
2. Click on Simulation | New Simulation Profile.
3. Enter all pertinent information, including a title for your simulation (Figure 5.3).
The callouts in the dialog read:

■ Enter a relevant title for this simulation.
■ Enter the desired number of simulation trials (default is 1,000).
■ Select if you want the simulation to stop when an error is encountered (default is unchecked).
■ Select if you want correlations to be considered in the simulation (default is checked).
■ Select and enter a seed value if you want the simulation to follow a specified random number sequence (default is unchecked).

FIGURE 5.3 New simulation profile.
The following are the elements in the new simulation profile dialog box (Figure 5.3):
■ Title. Specifying a simulation profile name or title allows you to create multiple simulation profiles in a single Excel model, which means that you can now save different simulation scenario profiles within the same model without having to delete existing assumptions and change them each time a new simulation scenario is required.
■ Number of trials. The number of simulation trials required is entered; that is, running 1,000 trials means that 1,000 different iterations of outcomes based on the input assumptions will be generated. You can change this number as desired, but the input has to be a positive integer. The default number of runs is 1,000 trials.
■ Pause simulation on error. If checked, the simulation stops every time an error is encountered in the Excel model; that is, if your model encounters a computation error (e.g., some input values generated in a simulation trial may yield a divide-by-zero error in one of your spreadsheet cells), the simulation stops. This feature is important in helping audit your model to make sure there are no computational errors in your Excel model. However, if you are sure the model works, there is no need for this preference to be checked.
■ Turn on correlations. If checked, correlations between paired input assumptions will be computed. Otherwise, correlations will all be set to zero and the simulation is run assuming no cross-correlations between input assumptions. Applying correlations will yield more accurate results if correlations do indeed exist, and will tend to yield a lower forecast confidence if negative correlations exist.
■ Specify random number sequence. By definition, simulation yields slightly different results every time it is run, by virtue of the random number generation routine in Monte Carlo simulation; this is a theoretical fact in all random number generators. However, you may sometimes require the same results every time (e.g., when the report being presented shows one set of results and during a live presentation you would like to show the same results being generated, or when you are sharing models with others and would like the same results to be obtained every time). In that case, check this preference and enter an initial seed number. The seed number can be any positive integer. Using the same initial seed value, the same number of trials, and the same input assumptions will always yield the same sequence of random numbers, guaranteeing the same final set of results.
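The reproducibility guaranteed by a fixed seed can be illustrated outside Risk Simulator with a short sketch (NumPy is used here purely for illustration; the seed values, distribution, and parameters are arbitrary assumptions, not part of the software):

```python
import numpy as np

# Two simulation runs with the same seed produce identical trial values;
# a different seed produces a different sequence.
def run_trials(seed, trials=1000):
    rng = np.random.default_rng(seed)
    return rng.normal(loc=100, scale=10, size=trials)

run_a = run_trials(seed=123)
run_b = run_trials(seed=123)
run_c = run_trials(seed=456)

assert np.array_equal(run_a, run_b)      # same seed -> same sequence
assert not np.array_equal(run_a, run_c)  # different seed -> different sequence
```

This is the same behavior the Specify random number sequence preference provides: seed, trial count, and input assumptions together fully determine the simulated values.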
Test Driving Risk Simulator

Note that once a new simulation profile has been created, you can come back later and modify these selections. In order to do this, make sure that the current active profile is the profile you wish to modify; otherwise, click on Simulation | Change Simulation Profile, select the profile you wish to change, and click OK (Figure 5.4 shows an example where there are multiple profiles and how to activate, duplicate, or delete a selected profile). Then, click on Simulation | Edit Simulation Profile and make the required changes.

FIGURE 5.4
Change active simulation.
2. Defining Input Assumptions
The next step is to set input assumptions in your model. Note that assumptions can only be assigned to cells without any equations or functions, that is, typed-in
numerical values that are inputs in a model, whereas output forecasts can only be assigned to cells with equations and functions, that is, outputs of a model. Recall that assumptions and forecasts
cannot be set unless a simulation profile already exists. Follow this procedure to set new input assumptions in your model:
1. Select the cell you wish to set an assumption on (e.g., cell G8 in the Basic Simulation Model example).
2. Click on Simulation | Set Input Assumption or click on the set assumption icon in the Risk Simulator icon toolbar.
3. Select the relevant distribution you want, enter the relevant distribution parameters, and hit OK to insert the input assumption into your model (Figure 5.5).

FIGURE 5.5
Setting an input assumption.

Several key areas are worthy of mention in the Assumption Properties. Figure 5.6 shows the different areas:
■ Assumption Name. This optional area allows you to enter unique names for the assumptions to help track what each of the assumptions represents. Good modeling practice is to use short but precise assumption names.
■ Distribution Gallery. This area to the left shows all of the different distributions available in the software. To change the views, right-click anywhere in the gallery and select large icons, small icons, or list. More than two dozen distributions are available.
■ Input Parameters. Depending on the distribution selected, the required relevant parameters are shown. You may either enter the parameters directly or link them to specific cells in your worksheet (click on the link icon to link an input parameter to a worksheet cell). Hard coding or typing the parameters is useful when the assumption parameters are assumed not to change. Linking to worksheet cells is useful when the input parameters need to be visible on the worksheets themselves or are allowed to be changed, as in a dynamic simulation (where the input parameters themselves are linked to assumptions in the worksheet, creating a multidimensional simulation, or simulation of simulations).
■ Data Boundary. Distributional or data boundary truncation is typically not used by the average analyst but exists for truncating the distributional assumptions. For instance, if a normal distribution is selected, the theoretical boundaries are between negative infinity and positive infinity. However, in practice, the simulated variable exists only within some smaller range, and this range can then be entered to truncate the distribution appropriately.
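Truncation can be sketched as simple rejection resampling (an illustrative approach only; Risk Simulator performs truncation internally, and the distribution parameters and bounds below are arbitrary):

```python
import numpy as np

def truncated_normal(mean, sd, lower, upper, size, seed=0):
    # Draw from a normal distribution and keep only values inside the boundary.
    rng = np.random.default_rng(seed)
    out = []
    while len(out) < size:
        draws = rng.normal(mean, sd, size)
        out.extend(d for d in draws if lower <= d <= upper)
    return np.array(out[:size])

# A normal assumption truncated to the practical range [70, 130].
sample = truncated_normal(mean=100, sd=15, lower=70, upper=130, size=1000)
assert sample.min() >= 70 and sample.max() <= 130
```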
FIGURE 5.6
Assumption properties.
■
Correlations. Pairwise correlations can be assigned to input assumptions here. If correlations are required, remember to check the Turn on Correlations preference by clicking on Simulation | Edit Simulation Profile. See the discussion on correlations later in this chapter for more details about assigning correlations and the effects correlations will have on a model.
■ Short Descriptions. These exist for each of the distributions in the gallery. The short descriptions explain when a certain distribution is used as well as the input parameter requirements. See the section in the appendix, Understanding Probability Distributions, for details about each distribution type available in the software.
Note: If you are following along with the example, continue by setting another assumption on cell G9. This time use the Uniform distribution with a minimum value of 0.9 and a maximum value of 1.1.
Then, proceed to defining the output forecasts in the next step.
3. Defining Output Forecasts
The next step is to define output forecasts in the model. Forecasts can only be defined on output cells with equations or functions.
Use the following procedure to define the forecasts:
1. Select the cell on which you wish to set a forecast (e.g., cell G10 in the Basic Simulation Model example).
2. Click on Simulation | Set Output Forecast or click on the set forecast icon on the Risk Simulator icon toolbar.
3. Enter the relevant information and click OK.

Figure 5.7 illustrates the set forecast properties:
■ Forecast Name. Specify the name of the forecast cell. This is important because when you have a large model with multiple forecast cells, naming the forecast cells individually allows you to access the right results quickly. Do not underestimate the importance of this simple step. Good modeling practice is to use short but precise forecast names.
■ Forecast Precision. Instead of relying on a guesstimate of how many trials to run in your simulation, you can set up precision and error controls. When an error–precision combination has been achieved in the simulation, the simulation will pause and inform you of the precision achieved, making the number of simulation trials an automated process rather than relying on guesses of the required number of trials to simulate. Review the section on error and precision control for more specific details.
■ Show Forecast Window. This property allows the user to show or not show a particular forecast window. The default is to always show a forecast chart.
FIGURE 5.7
Set output forecast.
4. Run Simulation
If everything looks right, simply click on Simulation | Run Simulation or click on the Run icon on the Risk Simulator toolbar, and the simulation will proceed. You may also reset a
simulation after it has run to rerun it (Simulation | Reset Simulation or the reset icon on the toolbar), or to pause it during a run. Also, the step function (Simulation | Step Simulation or the
step icon on the toolbar) allows you to simulate a single trial, one at a time, useful for educating others on simulation (i.e., you can show that at each trial, all the values in the assumption
cells are being replaced and the entire model is recalculated each time).
5. Interpreting the Forecast Results
The final step in Monte Carlo simulation is to interpret the resulting forecast charts. Figures 5.8 to 5.15 show the forecast chart and the corresponding statistics generated after running the simulation. Typically, the following sections on the forecast window are important in interpreting the results of a simulation:
■ Forecast Chart. The forecast chart shown in Figure 5.8 is a probability histogram that shows the frequency counts of values occurring and the total number of trials simulated. The vertical bars show the frequency of a particular x value occurring out of the total number of trials, while the cumulative frequency (smooth line) shows the total probabilities of all values at and below x occurring in the forecast.
■ Forecast Statistics. The forecast statistics shown in Figure 5.9 summarize the distribution of the forecast values in terms of the four moments
FIGURE 5.8
Forecast chart.
FIGURE 5.9
Forecast statistics.
of a distribution. See The Statistics of Risk in Chapter 2 for more details on what some of these statistics mean. You can rotate between the histogram and statistics tabs by pressing the space bar.
■ Preferences. The preferences tab in the forecast chart (Figure 5.10) allows you to change the look and feel of the charts. For instance, if Always Show Window On Top is selected, the forecast charts will always be visible regardless of what other software is running on your computer. Semitransparent When Inactive is a powerful option used to compare or overlay multiple forecast charts at once (e.g., enable this option on several forecast charts and drag them on top of one another to visually see the similarities or differences). Histogram Resolution allows
FIGURE 5.10
Forecast chart preferences.
FIGURE 5.11
Forecast chart options.
you to change the number of bins of the histogram, anywhere from 5 bins to 100 bins. Also, the Data Update Interval section allows you to control how fast the simulation runs versus how often the
forecast chart is updated; that is, if you wish to see the forecast chart updated at almost every trial, this feature will slow down the simulation as more memory is being allocated to updating the
chart versus running the simulation. This section is merely a user preference and in no way changes the results of the simulation, just the speed of completing the simulation. You can also click on
Close All and Minimize All to close or minimize the existing forecast windows.
■ Options. This forecast chart option (Figure 5.11) allows you to show all the forecast data or to filter in or out values that fall within some specified interval, or within some standard deviation that you choose. Also, the precision level can be set here for this specific forecast to show the error levels in the statistics view. See the section Correlations and Precision Control for more details.
USING FORECAST CHARTS AND CONFIDENCE INTERVALS
In forecast charts, you can determine the probability of occurrence, called a confidence interval; that is, given two values, what are the chances that
the outcome will fall between these two values? Figure 5.12 illustrates that there is a 90 percent probability that the final outcome (in this case, the level of income) will be between $0.2647 and
$1.3230. The two-tailed confidence interval can be obtained by first selecting Two-Tail as the type, entering the desired certainty value (e.g., 90), and hitting Tab on the keyboard. The two
FIGURE 5.12
Forecast chart two-tailed confidence interval.
computed values corresponding to the certainty value will then be displayed. In this example, there is a 5 percent probability that income will be below $0.2647 and another 5 percent probability that
income will be above $1.3230; that is, the two-tailed confidence interval is a symmetrical interval centered on the median or 50th percentile value. Thus, both tails will have the same probability.
Alternatively, a one-tail probability can be computed. Figure 5.13 shows a Left-Tail selection at 95 percent confidence (i.e., choose Left-Tail as the type, enter 95 as the certainty level, and hit
Tab on the keyboard). This means that there is a 95 percent probability that the income will be below $1.3230 (i.e., 95 percent on the left-tail of $1.3230) or a 5 percent probability that
FIGURE 5.13
Forecast chart one-tailed confidence interval.
FIGURE 5.14
Forecast chart left tail probability evaluation.
income will be above $1.3230, corresponding perfectly with the results seen in Figure 5.12. In addition to evaluating the confidence interval (i.e., given a probability level and finding the relevant
income values), you can determine the probability of a given income value (Figure 5.14). For instance, what is the probability that income will be less than $1? To do this, select the Left-Tail
probability type, enter 1 into the value input box, and hit Tab. The corresponding certainty will then be computed (in this case, there is a 67.70 percent probability income will be below $1). For
the sake of completeness, you can select the Right-Tail probability type and enter the value 1 in the value input box, and hit Tab (Figure 5.15).
FIGURE 5.15
Forecast chart right tail probability evaluation.
The resulting probability indicates the right-tail probability past the value 1, that is, the probability of income exceeding $1 (in this case, we see that there is a 32.30 percent probability of
income exceeding $1). Note that the forecast window is resizable by clicking on and dragging the bottom right corner of the forecast window. Finally, it is always advisable that before rerunning a
simulation, the current simulation should be reset by selecting Simulation | Reset Simulation.
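The percentile arithmetic behind these confidence-interval readings can be sketched as follows. This mimics what the forecast chart computes, using a stand-in array of simulated income values (the distribution and its parameters are arbitrary assumptions, not Risk Simulator's internals):

```python
import numpy as np

rng = np.random.default_rng(1)
income = rng.normal(0.8, 0.32, size=10_000)  # stand-in forecast values

# Two-tail 90% interval: cut 5% off each tail.
lo, hi = np.percentile(income, [5, 95])

# Left-tail probability that income falls below $1, and its right-tail complement.
p_below_1 = np.mean(income < 1.0)
p_above_1 = 1 - p_below_1
```

The Left-Tail reading at a given value is simply the fraction of simulated trials below that value, and the two tails always sum to 100 percent, just as in Figures 5.14 and 5.15.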
CORRELATIONS AND PRECISION CONTROL

The Basics of Correlations
The correlation coefficient is a measure of the strength and direction of the relationship between two variables, and can take on any values between –1.0 and +1.0; that is, the correlation coefficient can be decomposed into its direction or sign (positive or negative relationship between two variables) and the magnitude or strength of the relationship (the higher the absolute value of the correlation coefficient, the stronger the relationship). The correlation coefficient can be computed in several ways. The first approach is to manually compute the correlation coefficient r of a pair of variables x and y using:

r_{x,y} = \frac{n\sum x_i y_i - \sum x_i \sum y_i}{\sqrt{n\sum x_i^2 - \left(\sum x_i\right)^2}\,\sqrt{n\sum y_i^2 - \left(\sum y_i\right)^2}}
The second approach is to use Excel’s CORREL function. For instance, if the 10 data points for x and y are listed in cells A1:B10, then the Excel function to use is CORREL (A1:A10, B1:B10). The third
approach is to run Risk Simulator’s Multi-Variable Distributional Fitting Tool and the resulting correlation matrix will be computed and displayed. It is important to note that correlation does not
imply causation. Two completely unrelated random variables might display some correlation, but this does not imply any causation between the two (e.g., sunspot activity and events in the stock market
are correlated, but there is no causation between the two). There are two general types of correlations: parametric and nonparametric correlations. Pearson’s correlation coefficient is the most
common correlation measure, and is usually referred to simply as the correlation coefficient. However, Pearson’s correlation is a parametric measure, which means that it requires both correlated
variables to have an underlying
normal distribution and that the relationship between the variables is linear. When these conditions are violated, which is often the case in Monte Carlo simulation, the nonparametric counterparts
become more important. Spearman’s rank correlation and Kendall’s tau are the two nonparametric alternatives. The Spearman correlation is most commonly used and is most appropriate when applied in the
context of Monte Carlo simulation—there is no dependence on normal distributions or linearity, meaning that correlations between different variables with different distributions can be applied. In
order to compute the Spearman correlation, first rank all the x and y variable values and then apply the Pearson’s correlation computation. In the case of Risk Simulator, the correlation used is the
more robust nonparametric Spearman’s rank correlation. However, to simplify the simulation process and to be consistent with Excel’s correlation function, the correlation user inputs required are the
Pearson’s correlation coefficient. Risk Simulator will then apply its own algorithms to convert them into Spearman’s rank correlation, thereby simplifying the process.
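The rank-then-Pearson construction of Spearman's correlation described above can be sketched as follows (illustrative only; the ranking shortcut assumes no tied values, and the data are made up):

```python
import numpy as np

def pearson(x, y):
    # Standard Pearson product-moment correlation.
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return (xm * ym).sum() / np.sqrt((xm**2).sum() * (ym**2).sum())

def spearman(x, y):
    # Rank both variables, then apply the Pearson computation to the ranks.
    rank = lambda v: np.argsort(np.argsort(v)) + 1  # 1-based ranks (no ties here)
    return pearson(rank(x), rank(y))

x = [1, 2, 3, 4, 5]
y = [1, 4, 9, 16, 25]   # monotone but nonlinear relationship
# spearman(x, y) is exactly 1.0, while pearson(x, y) is below 1,
# which is why the rank correlation suits nonlinear simulated variables.
```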
Applying Correlations in Risk Simulator
Correlations can be applied in Risk Simulator in several ways:
■ When defining assumptions, simply enter the correlations into the correlation grid in the Distribution Gallery.
■ With existing data, run the Multi-Variable Distribution Fitting tool to perform distributional fitting and to obtain the correlation matrix between pairwise variables. If a simulation profile exists, the assumptions fitted will automatically contain the relevant correlation values.
■ With the use of a direct-input correlation matrix, click on Simulation | Edit Correlations to view and edit the correlation matrix used in the simulation.
Note that the correlation matrix must be positive definite; that is, the correlation must be mathematically valid. For instance, suppose you are trying to correlate three variables: grades of
graduate students in a particular year, the number of beers they consume a week, and the number of hours they study a week. One would assume that the following correlation relationships exist:

Grades and Beer: –    The more they drink, the lower the grades (no show on exams).
Grades and Study: +   The more they study, the higher the grades.
Beer and Study: –     The more they drink, the less they study (drunk and partying all the time).
However, if you input a negative correlation between Grades and Study and assuming that the correlation coefficients have high magnitudes, the correlation matrix will be nonpositive definite. It
would defy logic, correlation requirements, and matrix mathematics. However, smaller coefficients can sometimes still work even with the bad logic. When a nonpositive definite or bad correlation
matrix is entered, Risk Simulator automatically informs you of the error and offers to adjust these correlations to something that is semipositive definite while still maintaining the overall
structure of the correlation relationship (the same signs as well as the same relative strengths).
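A quick way to check whether a proposed correlation matrix is mathematically valid is to inspect its eigenvalues: a valid correlation matrix has no negative eigenvalues. This sketch only detects the problem (Risk Simulator applies its own adjustment), and the coefficient magnitudes are assumptions for illustration:

```python
import numpy as np

def is_valid_correlation_matrix(corr):
    # Symmetric with no negative eigenvalues (positive semidefinite).
    corr = np.asarray(corr, float)
    return bool(np.allclose(corr, corr.T)
                and np.linalg.eigvalsh(corr).min() >= -1e-10)

# Grades / Beer / Study with logically consistent signs: valid.
ok = is_valid_correlation_matrix([[ 1.0, -0.6,  0.7],
                                  [-0.6,  1.0, -0.5],
                                  [ 0.7, -0.5,  1.0]])

# Flip Grades-and-Study to a strong negative: all three pairwise
# correlations at -0.9 cannot coexist, so the matrix is invalid.
bad = is_valid_correlation_matrix([[ 1.0, -0.9, -0.9],
                                   [-0.9,  1.0, -0.9],
                                   [-0.9, -0.9,  1.0]])
```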
The Effects of Correlations in Monte Carlo Simulation
Although the computations required to correlate variables in a simulation are complex, the resulting effects are fairly clear. Table 5.1 shows a
simple correlation model (Correlation Effects Model in the example folder). The calculation for revenue is simply price multiplied by quantity. The same model is replicated for no correlations,
positive correlation (+0.9), and negative correlation (–0.9) between price and quantity. The resulting statistics are shown in Figure 5.16. Notice that the standard deviation of the model without
correlations is 0.23, compared to 0.30 for the positive correlation, and 0.12 for the negative correlation; that is, for simple models with positive relationships (e.g., additions and
multiplications), negative correlations tend to reduce the average spread of the distribution and create a tighter and more concentrated forecast distribution as compared to positive correlations
with larger average spreads. However, the mean remains relatively stable. This implies that correlations do little to change the expected value of projects but can reduce or increase a project’s
risk. Recall in financial theory that negatively correlated variables, projects, or assets when combined in a portfolio tend to create a diversification effect where the overall risk is reduced.
Therefore, we see a smaller standard deviation for the negatively correlated model. Table 5.2 illustrates the results after running a simulation, extracting the raw data of the assumptions, and
computing the correlations between the variables. The table shows that the input assumptions are recovered in the simulation; that is, you enter +0.9 and –0.9 correlations and the resulting simulated values have the same correlations. Clearly there will be minor differences from one simulation run to another, but when enough trials are run, the resulting recovered correlations approach those that were input.

TABLE 5.1  Simple Correlation Model

            Without        Positive       Negative
            Correlation    Correlation    Correlation
Price       $2.00          $2.00          $2.00
Quantity     1.00           1.00           1.00
Revenue     $2.00          $2.00          $2.00

FIGURE 5.16
Correlation results.

TABLE 5.2  Correlations Recovered

Price (Positive     Quantity (Positive
Correlation)        Correlation)
102                 158
461                 515
515                 477
874                 833
769                 792
481                 471
627                 446
 82                 190
659                 674
188                 286
458                 439
981                 972
528                 569
865                 812

Correlation (negative-correlation pair): –0.90
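The effect of correlation on the spread of revenue = price × quantity can be reproduced with a short sketch using bivariate normal draws. The means of $2.00 and 1.00 and the 0.1 standard deviations are assumptions chosen to mirror the model's scale, not values taken from Risk Simulator:

```python
import numpy as np

def revenue_sd(rho, trials=100_000, seed=7):
    # Draw correlated (price, quantity) pairs from a bivariate normal.
    sd_p, sd_q = 0.1, 0.1
    cov = [[sd_p**2, rho * sd_p * sd_q],
           [rho * sd_p * sd_q, sd_q**2]]
    rng = np.random.default_rng(seed)
    price, qty = rng.multivariate_normal([2.0, 1.0], cov, size=trials).T
    return (price * qty).std()

sd_zero = revenue_sd(0.0)
sd_pos  = revenue_sd(+0.9)
sd_neg  = revenue_sd(-0.9)
# Negative correlation tightens the revenue distribution, positive widens it,
# while the mean stays close to $2.00 in all three cases.
```

With these assumed parameters the three standard deviations come out close to the 0.23, 0.30, and 0.12 quoted for Figure 5.16, illustrating the diversification effect of negative correlation.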
Precision and Error Control
One very powerful tool in Monte Carlo simulation is that of precision control. For instance, how many trials are considered sufficient to run in a complex model? Precision
control takes the guesswork out of estimating the relevant number of trials by allowing the simulation to stop if the level of the prespecified precision is reached. The precision control
functionality lets you set how precise you want your forecast to be. Generally speaking, as more trials are calculated, the confidence interval narrows and the statistics become more accurate. The
precision control feature in Risk Simulator uses the characteristic of confidence intervals to determine when a specified accuracy of a statistic has been reached. For each forecast, you can specify
the specific confidence interval for the precision level. Make sure that you do not confuse three very different terms: error, precision, and confidence. Although they sound similar, the concepts are
significantly different from one another. A simple illustration is in order. Suppose you are a taco shell manufacturer and are interested in finding out how
many broken taco shells there are on average in a single box of 100 shells. One way to do this is to collect a sample of prepackaged boxes of 100 taco shells, open them, and count how many of them
are actually broken. You manufacture 1 million boxes a day (this is your population), but you randomly open only 10 boxes (this is your sample size, also known as your number of trials in a
simulation). The number of broken shells in each box is as follows: 24, 22, 4, 15, 33, 32, 4, 1, 45, and 2. The calculated average number of broken shells is 18.2. Based on these 10 samples or
trials, the average is 18.2 units, while based on the sample, the 80 percent confidence interval is between 2 and 33 units (that is, 80 percent of the time, the number of broken shells is between 2
and 33 based on this sample size or number of trials run). However, how sure are you that 18.2 is the correct average? Are 10 trials sufficient to establish this average and confidence level? The
confidence interval between 2 and 33 is too wide and too variable. Suppose you require a more accurate average value where the error is ±2 taco shells 90 percent of the time—this means that if you
open all 1 million boxes manufactured in a day, 900,000 of these boxes will have broken taco shells on average at some mean unit ±2 tacos. How many more taco shell boxes would you then need to sample
(or trials run) to obtain this level of precision? Here, the 2 tacos is the error level while the 90 percent is the level of precision. If sufficient numbers of trials are run, then the 90 percent
confidence interval will be identical to the 90 percent precision level, where a more precise measure of the average is obtained such that 90 percent of the time, the error, and hence, the confidence
will be ±2 tacos. As an example, say the average is 20 units, then the 90 percent confidence interval will be between 18 and 22 units, where this interval is precise 90 percent of the time, where in
opening all 1 million boxes, 900,000 of them will have between 18 and 22 broken tacos. Stated differently, we have a 10 percent error level with respect to the mean (i.e., 2 divided by 20) at the 90
percent confidence level. The terms percent error and percent confidence level are standard terms used in statistics and in Risk Simulator. The number of trials required to hit this precision is
based on the sampling error equation

\bar{x} \pm Z \frac{s}{\sqrt{n}}

where
Z \frac{s}{\sqrt{n}} is the error of 2 tacos,
\bar{x} is the sample average,
Z is the standard-normal Z-score obtained from the 90 percent precision level,
s is the sample standard deviation, and
n is the number of trials required to hit this level of error with the specified precision.1
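The trial-count arithmetic implied by this equation can be sketched as follows (the z-value of 1.645 assumes a two-sided 90 percent interval, and the sample is the ten box counts from the taco example):

```python
import math
import statistics

shells = [24, 22, 4, 15, 33, 32, 4, 1, 45, 2]
s = statistics.stdev(shells)              # sample standard deviation

def required_trials(s, error, z=1.645):
    # Rearranging x_bar +/- Z*s/sqrt(n) for n:  n = (Z*s / error)^2
    return math.ceil((z * s / error) ** 2)

# Boxes (trials) needed to pin the mean to within +/-2 shells, 90 percent.
n = required_trials(s, error=2)
```

On this small sample the answer is on the order of 160 boxes, far more than the 10 originally opened, which is exactly the kind of guesswork the precision control feature removes.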
FIGURE 5.17
Setting the forecast’s precision level.
Figures 5.17 and 5.18 illustrate how precision control can be performed on multiple simulated forecasts in Risk Simulator. This feature prevents the user from having to decide how many trials to run
in a simulation and eliminates all possibilities of guesswork. Figure 5.18 shows that there is a 0.01 percent error with respect to the mean at a 95 percent confidence level. Using the simple
techniques outlined in this chapter, you are well on your way to running Monte Carlo simulations with Risk Simulator. Later chapters continue with additional techniques and tools available in Risk
Simulator to further enhance your analysis.
FIGURE 5.18
Computing the error.
APPENDIX—UNDERSTANDING PROBABILITY DISTRIBUTIONS
This chapter demonstrates the power of Monte Carlo simulation, but in order to get started with simulation, one first needs to understand the concept
of probability distributions. This appendix continues with the use of the author’s Risk Simulator software and shows how simulation can be very easily and effortlessly implemented in an existing
Excel model. A limited trial version of the Risk Simulator software is available in the enclosed CDROM (to obtain a permanent version, please visit the author’s web site at
www.realoptionsvaluation.com). Professors can obtain free semester-long computer lab licenses for their students and themselves if this book and the simulation/options valuation software are used and
taught in an entire class. To begin to understand probability, consider this example: You want to look at the distribution of nonexempt wages within one department of a large company. First, you
gather raw data—in this case, the wages of each nonexempt employee in the department. Second, you organize the data into a meaningful format and plot the data as a frequency distribution on a chart.
To create a frequency distribution, you divide the wages into group intervals and list these intervals on the chart’s horizontal axis. Then you list the number or frequency of employees in each
interval on the chart’s vertical axis. Now you can easily see the distribution of nonexempt wages within the department. A glance at the chart illustrated in Figure 5.19 reveals that the employees
earn from $7.00 to $9.00 per hour. You can chart this data as a probability distribution. A probability distribution shows the number of employees
FIGURE 5.19
Frequency histogram I.
FIGURE 5.20
Frequency histogram II.
in each interval as a fraction of the total number of employees. To create a probability distribution, you divide the number of employees in each interval by the total number of employees and list
the results on the chart’s vertical axis. The chart in Figure 5.20 shows the number of employees in each wage group as a fraction of all employees; you can estimate the likelihood or probability that
an employee drawn at random from the whole group earns a wage within a given interval. For example, assuming the same conditions exist at the time the sample was taken, the probability is 0.20 (a one
in five chance) that an employee drawn at random from the whole group earns $8.50 an hour. Probability distributions are either discrete or continuous. Discrete probability distributions describe
distinct values, usually integers, with no intermediate values and are shown as a series of vertical bars. A discrete distribution, for example, might describe the number of heads in four flips of a
coin as 0, 1, 2, 3, or 4. Continuous probability distributions are actually mathematical abstractions because they assume the existence of every possible intermediate value between two numbers; that
is, a continuous distribution assumes there is an infinite number of values between any two points in the distribution. However, in many situations, you can effectively use a continuous distribution
to approximate a discrete distribution even though the continuous model does not necessarily describe the situation exactly.
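The frequency-to-probability conversion described above can be sketched with hypothetical wage data (the values below are illustrative, not the department data shown in the figures):

```python
from collections import Counter

# Hypothetical hourly wages for a small department.
wages = [7.00, 7.50, 7.50, 8.00, 8.00, 8.00, 8.50, 8.50, 9.00, 8.00]

freq = Counter(wages)                            # frequency distribution
total = len(wages)
prob = {w: n / total for w, n in freq.items()}   # probability distribution

# The probabilities across all intervals must sum to 1.
assert abs(sum(prob.values()) - 1.0) < 1e-12
```

Dividing each frequency by the total head count is all that separates Figure 5.19 from Figure 5.20.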
Selecting a Probability Distribution
Plotting data is one method for selecting a probability distribution. The following steps provide another process for selecting probability distributions that
best describe the uncertain variables in your spreadsheets.
To select the correct probability distribution, use the following steps: 1. Look at the variable in question. List everything you know about the conditions surrounding this variable. You might be
able to gather valuable information about the uncertain variable from historical data. If historical data are not available, use your own judgment, based on experience, listing everything you know
about the uncertain variable. 2. Review the descriptions of the probability distributions. 3. Select the distribution that characterizes this variable. A distribution characterizes a variable when
the conditions of the distribution match those of the variable. Alternatively, if you have historical, comparable, contemporaneous, or forecast data, you can use Risk Simulator’s distributional
fitting modules to find the best statistical fit for your existing data. This fitting process will apply some advanced statistical techniques to find the best distribution and its relevant parameters
that describe the data.
Probability Density Functions, Cumulative Distribution Functions, and Probability Mass Functions In mathematics and Monte Carlo simulation, a probability density function (PDF) represents a
continuous probability distribution in terms of integrals. If a probability distribution has a density of f(x), then intuitively the infinitesimal interval of [x, x + dx] has a probability of f(x)
dx. The PDF can therefore be seen as a smoothed version of a probability histogram: given an empirically large sample of a continuous random variable, a histogram with very narrow bins will resemble the random variable’s PDF. The probability of the interval [a, b] is given by ∫[a,b] f(x)dx, which means that the total integral of the function f must be 1.0. It is a common mistake to think of f(a) as the probability of a. This is incorrect. In fact, f(a) can sometimes be larger
than 1—consider a uniform distribution between 0.0 and 0.5. The random variable x within this distribution will have f(x) greater than 1. The probability in reality is the function f(x)dx discussed
previously, where dx is an infinitesimal amount. The cumulative distribution function (CDF) is denoted as F(x) = P(X ≤ x), indicating the probability of X taking on a less than or equal value to x.
Every CDF is monotonically increasing, is continuous from the right, and at the limits has the following properties: lim x→−∞ F(x) = 0 and lim x→+∞ F(x) = 1
Further, the CDF is related to the PDF by
F(b) − F(a) = P(a ≤ X ≤ b) = ∫[a,b] f(x)dx
where the PDF function f is the derivative of the CDF function F. In probability theory, a probability mass function or PMF gives the probability that a discrete random variable is exactly equal to
some value. The PMF differs from the PDF in that the values of the latter, defined only for continuous random variables, are not probabilities; rather, its integral over a set of possible values of
the random variable is a probability. A random variable is discrete if its probability distribution is discrete and can be characterized by a PMF. Therefore, X is a discrete random variable if
∑u P(X = u) = 1 as u runs through all possible values of the random variable X.
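These definitions are easy to verify numerically. The short Python sketch below (illustrative only) uses the uniform distribution on [0, 0.5] mentioned above: its density is 2 everywhere on its support, showing that f(x) is not itself a probability, while the integral of f over [a, b] reproduces F(b) − F(a):

```python
# Numerical check of the PDF/CDF relationship for a uniform distribution on
# [0, 0.5], whose density f(x) = 2 exceeds 1 everywhere on its support.
def f(x):
    return 2.0 if 0.0 <= x <= 0.5 else 0.0

def F(x):  # the CDF: F(x) = P(X <= x)
    return min(max(x / 0.5, 0.0), 1.0)

def integral(a, b, n=10000):
    # P(a <= X <= b) as the integral of f over [a, b] (midpoint rule)
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

a, b = 0.1, 0.3
print(abs(integral(a, b) - (F(b) - F(a))) < 1e-9)  # True: both equal 0.4
print(f(0.2) > 1)  # True: a density value is not a probability
```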
Discrete Distributions Following is a detailed listing of the different types of probability distributions that can be used in Monte Carlo simulation. This listing is included in the appendix for the
reader’s reference. Bernoulli or Yes/No Distribution The Bernoulli distribution is a discrete distribution with two outcomes (e.g., heads or tails, success or failure, 0 or 1). The Bernoulli
distribution is the binomial distribution with one trial and can be used to simulate Yes/No or Success/Failure conditions. This distribution is the fundamental building block of other more complex
distributions. For instance:
■ Binomial distribution. Bernoulli distribution with a higher number of n total trials; computes the probability of x successes within this total number of trials.
■ Geometric distribution. Bernoulli distribution with a higher number of trials; computes the number of failures required before the first success occurs.
■ Negative binomial distribution. Bernoulli distribution with a higher number of trials; computes the number of failures before the xth success occurs.
The mathematical constructs for the Bernoulli distribution are as follows:
P(x) = 1 − p for x = 0, and P(x) = p for x = 1
or P(x) = p^x (1 − p)^(1−x)
Mean = p
Standard Deviation = √(p(1 − p))
Skewness = (1 − 2p)/√(p(1 − p))
Excess Kurtosis = (6p² − 6p + 1)/(p(1 − p))
The probability of success (p) is the only distributional parameter. Also, it is important to note that there is only one trial in the Bernoulli distribution, and the resulting simulated value is
either 0 or 1. Input requirements: Probability of success > 0 and < 1 (that is, 0.0001 ≤ p ≤ 0.9999) Binomial Distribution The binomial distribution describes the number of times a particular event
occurs in a fixed number of trials, such as the number of heads in 10 flips of a coin or the number of defective items out of 50 items chosen. The three conditions underlying the binomial
distribution are: 1. For each trial, only two outcomes are possible that are mutually exclusive. 2. The trials are independent—what happens in the first trial does not affect the next trial. 3. The
probability of an event occurring remains the same from trial to trial. The mathematical constructs for the binomial distribution are as follows:
P(x) = [n!/(x!(n − x)!)] p^x (1 − p)^(n−x) for n > 0; x = 0, 1, 2, . . . , n; and 0 < p < 1
Mean = np
Standard Deviation = √(np(1 − p))
Skewness = (1 − 2p)/√(np(1 − p))
Excess Kurtosis = (6p² − 6p + 1)/(np(1 − p))
The probability of success (p) and the integer number of total trials (n) are the distributional parameters. The number of successful trials is denoted x. It is important to note that a probability of success (p) of 0 or 1 is a trivial condition that does not require any simulation and, hence, is not allowed in the software. Input requirements: Probability of success > 0 and < 1 (that is, 0.0001 ≤
p ≤ 0.9999). Number of trials ≥ 1 or positive integers and ≤ 1,000 (for larger trials, use the normal distribution with the relevant computed binomial mean and standard deviation as the normal
distribution’s parameters). Discrete Uniform The discrete uniform distribution is also known as the equally likely outcomes distribution, where the distribution has a set of N elements, and each
element has the same probability. This distribution is related to the uniform distribution, but its elements are discrete and not continuous. The mathematical constructs for the discrete uniform
distribution are as follows:
P(x) = 1/N (ranked value)
Mean = (N + 1)/2 (ranked value)
Standard Deviation = √[(N − 1)(N + 1)/12] (ranked value)
Skewness = 0 (that is, the distribution is perfectly symmetrical)
Excess Kurtosis = −6(N² + 1)/[5(N − 1)(N + 1)] (ranked value)
Input requirements: Minimum < Maximum and both must be integers (negative integers and zero are allowed) Geometric Distribution The geometric distribution describes the number of trials until the
first successful occurrence, such as the number of times you need to spin a roulette wheel before you win. The three conditions underlying the geometric distribution are:
1. The number of trials is not fixed. 2. The trials continue until the first success. 3. The probability of success is the same from trial to trial. The mathematical constructs for the geometric
distribution are as follows:
P(x) = p(1 − p)^(x−1) for 0 < p < 1 and x = 1, 2, . . . , n
Mean = 1/p − 1
Standard Deviation = √[(1 − p)/p²]
Skewness = (2 − p)/√(1 − p)
Excess Kurtosis = (p² − 6p + 6)/(1 − p)
The probability of success (p) is the only distributional parameter. The number of successful trials simulated is denoted x, which can only take on positive integers. Input requirements: Probability
of success > 0 and < 1 (that is, 0.0001 ≤ p ≤ 0.9999). It is important to note that a probability of success (p) of 0 or 1 is a trivial condition that does not require any simulation and, hence, is not allowed in the software. Hypergeometric Distribution The hypergeometric distribution is similar to the binomial distribution in that both describe the number of times a particular event occurs in a
fixed number of trials. The difference is that binomial distribution trials are independent, whereas hypergeometric distribution trials change the probability for each subsequent trial and are called
trials without replacement. For example, suppose a box of manufactured parts is known to contain some defective parts. You choose a part from the box, find it is defective, and remove the part from
the box. If you choose another part from the box, the probability that it is defective is somewhat lower than for the first part because you have removed a defective part. If you had replaced the
defective part, the probabilities would have remained the same, and the process would have satisfied the conditions for a binomial distribution.
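The effect of sampling without replacement can be made concrete with exact fractions. In this hypothetical sketch (a box of 20 parts, 5 of them defective; the numbers are arbitrary), removing a defective part lowers the probability on the next draw, whereas replacing it keeps the probability constant, which is the binomial case:

```python
from fractions import Fraction

# Exact illustration of "trials without replacement": a box of N = 20 parts
# containing Nx = 5 defectives.
N, Nx = 20, 5
p_first = Fraction(Nx, N)                    # first draw: 5/20 = 1/4
p_second_without = Fraction(Nx - 1, N - 1)   # hypergeometric: no replacement
p_second_with = Fraction(Nx, N)              # binomial: with replacement

print(p_first)           # 1/4
print(p_second_without)  # 4/19 (lower than 1/4)
print(p_second_with)     # 1/4  (unchanged)
```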
The three conditions underlying the hypergeometric distribution are: 1. The total number of items or elements (the population size) is a fixed number, a finite population. The population size must be
less than or equal to 1,750. 2. The sample size (the number of trials) represents a portion of the population. 3. The known initial probability of success in the population changes after each trial.
The mathematical constructs for the hypergeometric distribution are as follows:
P(x) = [Nx!/(x!(Nx − x)!)] [(N − Nx)!/((n − x)!(N − Nx − n + x)!)] / [N!/(n!(N − n)!)] for x = Max(n − (N − Nx), 0), . . . , Min(n, Nx)
Mean = Nx n/N
Standard Deviation = √[(N − Nx)Nx n(N − n)/(N²(N − 1))]
Skewness = [(N − 2Nx)(N − 2n)/(N − 2)] √[(N − 1)/((N − Nx)Nx n(N − n))]
Excess Kurtosis = V(N, Nx, n)/[(N − Nx)Nx n(−3 + N)(−2 + N)(−N + n)], where V(N, Nx, n) is a lengthy polynomial in N, Nx, and n (not reproduced here)
The number of items in the population (N), trials sampled (n), and number of items in the population that have the successful trait (Nx) are the distributional parameters. The number of successful
trials is denoted x. Input requirements: Population ≥ 2 and integer Trials > 0 and integer
Successes > 0 and integer Population > Successes Trials < Population Population < 1,750 Negative Binomial Distribution The negative binomial distribution is useful for modeling the distribution of
the number of trials until the rth successful occurrence, such as the number of sales calls you need to make to close a total of 10 orders. It is essentially a superdistribution of the geometric
distribution. This distribution shows the probabilities of each number of trials in excess of r to produce the required success r. The three conditions underlying the negative binomial distribution
are: 1. The number of trials is not fixed. 2. The trials continue until the rth success. 3. The probability of success is the same from trial to trial. The mathematical constructs for the negative
binomial distribution are as follows:
P(x) = [(x + r − 1)!/((r − 1)! x!)] p^r (1 − p)^x for x = r, r + 1, . . . ; and 0 < p < 1
Mean = r(1 − p)/p
Standard Deviation = √[r(1 − p)/p²]
Skewness = (2 − p)/√(r(1 − p))
Excess Kurtosis = (p² − 6p + 6)/(r(1 − p))
The probability of success (p) and required successes (r) are the distributional parameters. Input requirements: Successes required must be positive integers > 0 and < 8,000. Probability of success >
0 and < 1 (that is, 0.0001 ≤ p ≤ 0.9999). It is important to note that a probability of success (p) of 0 or 1 is a trivial condition that does not require any simulation and, hence, is not allowed in the software.
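A quick Monte Carlo check of the mean formula above: the sketch below (illustrative only, with arbitrary parameter choices r = 10 and p = 0.4) counts the failures before the rth success over repeated Bernoulli trials and compares the sample mean with r(1 − p)/p:

```python
import random

# Monte Carlo sketch: failures before the r-th success in Bernoulli trials,
# compared against the negative binomial mean r(1 - p)/p.
random.seed(0)

def failures_before_rth_success(r, p):
    successes = failures = 0
    while successes < r:
        if random.random() < p:
            successes += 1
        else:
            failures += 1
    return failures

r, p = 10, 0.4
draws = [failures_before_rth_success(r, p) for _ in range(20000)]
sample_mean = sum(draws) / len(draws)
print(r * (1 - p) / p)                # theoretical mean: 15.0
print(abs(sample_mean - 15.0) < 0.3)  # True (within Monte Carlo error)
```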
Poisson Distribution The Poisson distribution describes the number of times an event occurs in a given interval, such as the number of telephone calls per minute or the number of errors per page in a
document. The three conditions underlying the Poisson distribution are: 1. The number of possible occurrences in any interval is unlimited. 2. The occurrences are independent. The number of
occurrences in one interval does not affect the number of occurrences in other intervals. 3. The average number of occurrences must remain the same from interval to interval. The mathematical
constructs for the Poisson distribution are as follows:
P(x) = (e^(−λ) λ^x)/x! for x and λ > 0
Mean = λ
Standard Deviation = √λ
Skewness = 1/√λ
Excess Kurtosis = 1/λ
Rate (λ) is the only distributional parameter. Input requirements: Rate > 0 and ≤ 1,000 (that is, 0.0001 ≤ rate ≤ 1,000)
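As a sanity check on these moments, the following sketch draws Poisson samples with Knuth's multiplication method (a standard textbook sampler, not necessarily what Risk Simulator uses) and confirms the distribution's signature property that the mean and variance both approximate λ:

```python
import math
import random

# Sample Poisson(lam) via Knuth's multiplication method, then check that the
# sample mean and sample variance both approximate lam.
random.seed(1)

def poisson_sample(lam):
    limit = math.exp(-lam)
    k, prod = 0, random.random()
    while prod > limit:
        k += 1
        prod *= random.random()
    return k

lam = 4.0
draws = [poisson_sample(lam) for _ in range(20000)]
mean = sum(draws) / len(draws)
var = sum((x - mean) ** 2 for x in draws) / len(draws)
print(abs(mean - lam) < 0.1, abs(var - lam) < 0.2)  # True True
```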
Continuous Distributions Beta Distribution The beta distribution is very flexible and is commonly used to represent variability over a fixed range. One of the more important applications of the beta
distribution is its use as a conjugate distribution for the parameter of a Bernoulli distribution. In this application, the beta distribution is used to represent the uncertainty in the probability
of occurrence of an event. It is also used to describe empirical data and predict the random behavior of percentages and fractions, as the range of outcomes is typically between 0 and 1. The value of
the beta distribution lies in the wide variety of shapes it can assume when you vary the two parameters, alpha and beta. If the parameters are equal, the distribution is symmetrical. If either
parameter is 1 and the other parameter is greater than 1, the distribution is J-shaped. If alpha is less than beta, the distribution is said to be positively skewed (most of the
values are near the minimum value). If alpha is greater than beta, the distribution is negatively skewed (most of the values are near the maximum value). The mathematical constructs for the beta
distribution are as follows:
f(x) = [x^(α−1) (1 − x)^(β−1)] / [Γ(α)Γ(β)/Γ(α + β)] for α > 0; β > 0; x > 0
Mean = α/(α + β)
Standard Deviation = √[αβ/((α + β)²(1 + α + β))]
Skewness = [2(β − α)√(1 + α + β)] / [(2 + α + β)√(αβ)]
Excess Kurtosis = 3(α + β + 1)[αβ(α + β − 6) + 2(α + β)²] / [αβ(α + β + 2)(α + β + 3)] − 3
Alpha (α) and beta (β) are the two distributional shape parameters, and Γ is the gamma function. The two conditions underlying the beta distribution are: 1. The uncertain variable is a random value
between 0 and a positive value. 2. The shape of the distribution can be specified using two positive values. Input requirements: Alpha and beta > 0 and can be any positive value Cauchy Distribution
or Lorentzian Distribution or Breit–Wigner Distribution The Cauchy distribution, also called the Lorentzian distribution or Breit–Wigner distribution, is a continuous distribution describing
resonance behavior. It also describes the distribution of horizontal distances at which a line segment tilted at a random angle cuts the x-axis. The mathematical constructs for the Cauchy or
Lorentzian distribution are as follows:
f(x) = (1/π) (γ/2)/[(x − m)² + γ²/4]
The Cauchy distribution is a special case where it does not have any theoretical moments (mean, standard deviation, skewness, and kurtosis) as they are all undefined.
Mode location (m) and scale (γ) are the only two parameters in this distribution. The location parameter specifies the peak or mode of the distribution, while the scale parameter specifies the half-width at half-maximum of the distribution. The Cauchy distribution is also the Student’s t distribution with only 1 degree of freedom, and it can be constructed by taking the ratio of two standard normal distributions (normal distributions with a mean of zero and a variance of one) that are independent of one another. Input requirements: Location can be any value Scale > 0 and can be any positive value Chi-Square Distribution The chi-square distribution is a probability
distribution used predominantly in hypothesis testing, and is related to the gamma distribution and the standard normal distribution. For instance, the sum of squares of k independent standard normal distributions is distributed as a chi-square (χ²) with k degrees of freedom:
Z₁² + Z₂² + . . . + Zk² ~ χk²
The mathematical constructs for the chi-square distribution are as follows:
f(x) = [2^(−k/2)/Γ(k/2)] x^(k/2 − 1) e^(−x/2) for all x > 0
Mean = k
Standard Deviation = √(2k)
Skewness = 2√(2/k)
Excess Kurtosis = 12/k
The gamma function is written as Γ. Degrees of freedom k is the only distributional parameter. The chi-square distribution can also be modeled using a gamma distribution by setting the shape parameter as k/2 and scale as 2S² where S is the scale. Input requirements: Degrees of freedom > 1 and must be an integer < 1,000
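The relationship to sums of squared standard normals can be checked empirically. This sketch (illustrative only, with k = 5 chosen arbitrarily) simulates Z₁² + . . . + Zk² and verifies the chi-square moments quoted above (mean k, variance 2k):

```python
import random

# Empirical check: the sum of k squared standard normals has mean k and
# variance 2k, matching the chi-square moments.
random.seed(2)
k, n = 5, 20000
draws = [sum(random.gauss(0, 1) ** 2 for _ in range(k)) for _ in range(n)]
mean = sum(draws) / n
var = sum((x - mean) ** 2 for x in draws) / n
print(abs(mean - k) < 0.1)      # True: mean is approximately k = 5
print(abs(var - 2 * k) < 1.0)   # True: variance is approximately 2k = 10
```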
Exponential Distribution The exponential distribution is widely used to describe events recurring at random points in time, such as the time between failures of electronic equipment or the time
between arrivals at a service booth. It is related to the Poisson distribution, which describes the number of occurrences of an event in a given interval of time. An important characteristic of the
exponential distribution is the “memoryless” property, which means that the future lifetime of a given object has the same distribution, regardless of the time it existed. In other words, time has no
effect on future outcomes. The mathematical constructs for the exponential distribution are as follows:
f(x) = λe^(−λx) for x ≥ 0; λ > 0
Mean = 1/λ
Standard Deviation = 1/λ
Skewness = 2 (this value applies to all success rate λ inputs)
Excess Kurtosis = 6 (this value applies to all success rate λ inputs)
Success rate (λ) is the only distributional parameter. The number of successful trials is denoted x. The condition underlying the exponential distribution is: 1. The exponential distribution describes the amount of time between occurrences. Input requirements: Rate > 0 and ≤ 300 Extreme
Value Distribution or Gumbel Distribution The extreme value distribution (Type 1) is commonly used to describe the largest value of a response over a period of time, for example, in flood flows,
rainfall, and earthquakes. Other applications include the breaking strengths of materials, construction design, and aircraft loads and tolerances. The extreme value distribution is also known as the
Gumbel distribution. The mathematical constructs for the extreme value distribution are as follows:
f(x) = (1/β) z e^(−z) where z = e^(−(x − m)/β) for β > 0; and any value of x and m
Mean = m + 0.577215β
Standard Deviation = √(π²β²/6)
Skewness = 12√6(1.2020569)/π³ = 1.13955 (this applies for all values of mode and scale)
Excess Kurtosis = 5.4 (this applies for all values of mode and scale)
Mode (m) and scale (β) are the distributional parameters. There are two standard parameters for the extreme value distribution: mode and scale. The mode parameter is the most likely value for the
variable (the highest point on the probability distribution). The scale parameter is a number greater than 0. The larger the scale parameter, the greater the variance. Input requirements: Mode can be
any value Scale > 0 F Distribution or Fisher–Snedecor Distribution The F distribution, also known as the Fisher–Snedecor distribution, is another continuous distribution used most frequently for
hypothesis testing. Specifically, it is used to test the statistical difference between two variances in analysis of variance tests and likelihood ratio tests. The F distribution with the numerator
degree of freedom n and denominator degree of freedom m is related to the chi-square distribution in that:
(χn²/n)/(χm²/m) ~ Fn,m, or
f(x) = [Γ((n + m)/2) (n/m)^(n/2) x^(n/2 − 1)] / [Γ(n/2) Γ(m/2) ((n/m)x + 1)^((n + m)/2)]
Mean = m/(m − 2)
Standard Deviation = √[2m²(m + n − 2)/(n(m − 2)²(m − 4))] for all m > 4
Skewness = [2(m + 2n − 2)/(m − 6)] √[2(m − 4)/(n(m + n − 2))]
Excess Kurtosis = 12(−16 + 20m − 8m² + m³ + 44n − 32mn + 5m²n − 22n² + 5mn²)/(n(m − 6)(m − 8)(n + m − 2))
The numerator degree of freedom n and denominator degree of freedom m are the only distributional parameters. Input requirements: Degrees of freedom numerator and degrees of freedom denominator both
> 0 integers Gamma Distribution (Erlang Distribution) The gamma distribution applies to a wide range of physical quantities and is related to other distributions: lognormal, exponential, Pascal,
Erlang, Poisson, and chi-square. It is used in meteorological processes to represent pollutant concentrations and precipitation quantities. The gamma distribution is also used to measure the time
between the occurrence of events when the event process is not completely random. Other applications of the gamma distribution include inventory control, economic theory, and insurance risk theory.
The gamma distribution is most often used as the distribution of the amount of time until the rth occurrence of an event in a Poisson process. When used in this fashion, the three conditions
underlying the gamma distribution are: 1. The number of possible occurrences in any unit of measurement is not limited to a fixed number. 2. The occurrences are independent. The number of occurrences
in one unit of measurement does not affect the number of occurrences in other units. 3. The average number of occurrences must remain the same from unit to unit. The mathematical constructs for the
gamma distribution are as follows:
f(x) = [(x/β)^(α−1) e^(−x/β)] / [Γ(α)β] with any value of α > 0 and β > 0
Mean = αβ
Standard Deviation = √(αβ²)
Skewness = 2/√α
Excess Kurtosis = 6/α
Shape parameter alpha (a) and scale parameter beta (b) are the distributional parameters, and G is the gamma function. When the alpha parameter is a positive integer, the gamma distribution is called
the Erlang distribution, used to predict waiting times in queuing systems, where the Erlang distribution is the sum of independent and identically distributed random variables each having a
memoryless exponential distribution. Setting n as the number of these random variables, the mathematical construct of the Erlang distribution is:
f(x) = [x^(n−1) e^(−x)] / (n − 1)! for all x > 0 and all positive integers of n
Input requirements: Scale beta > 0 and can be any positive value Shape alpha ≥ 0.05 and any positive value Location can be any value Logistic Distribution The logistic distribution is commonly used
to describe growth, that is, the size of a population expressed as a function of a time variable. It also can be used to describe chemical reactions and the course of growth for a population or
individual. The mathematical constructs for the logistic distribution are as follows:
f(x) = e^((μ − x)/α) / [α(1 + e^((μ − x)/α))²] for any value of α and μ
Mean = μ
Standard Deviation = √(π²α²/3)
Skewness = 0 (this applies to all mean and scale inputs)
Excess Kurtosis = 1.2 (this applies to all mean and scale inputs)
Mean (μ) and scale (α) are the distributional parameters. There are two standard parameters for the logistic distribution: mean and scale. The mean parameter is the average value, which for this
distribution is the same as the mode, because this distribution is symmetrical. The scale parameter is a number greater than 0. The larger the scale parameter, the greater the variance.
Input requirements: Scale > 0 and can be any positive value Mean can be any value Lognormal Distribution The lognormal distribution is widely used in situations where values are positively skewed,
for example, in financial analysis for security valuation or in real estate for property valuation, and where values cannot fall below zero. Stock prices are usually positively skewed rather than
normally (symmetrically) distributed. Stock prices exhibit this trend because they cannot fall below the lower limit of zero but might increase to any price without limit. Similarly, real estate
prices illustrate positive skewness and are lognormally distributed as property values cannot become negative. The three conditions underlying the lognormal distribution are: 1. The uncertain
variable can increase without limits but cannot fall below zero. 2. The uncertain variable is positively skewed, with most of the values near the lower limit. 3. The natural logarithm of the
uncertain variable yields a normal distribution. Generally, if the coefficient of variability is greater than 30 percent, use a lognormal distribution. Otherwise, use the normal distribution. The
mathematical constructs for the lognormal distribution are as follows:
f(x) = [1/(x√(2π) ln(σ))] exp(−[ln(x) − ln(μ)]²/(2[ln(σ)]²)) for x > 0; μ > 0 and σ > 0
Mean = exp(μ + σ²/2)
Standard Deviation = √(exp(σ² + 2μ)[exp(σ²) − 1])
Skewness = √(exp(σ²) − 1) (2 + exp(σ²))
Excess Kurtosis = exp(4σ²) + 2exp(3σ²) + 3exp(2σ²) − 6
Mean (μ) and standard deviation (σ) are the distributional parameters. Input requirements: Mean and standard deviation both > 0 and can be any positive value
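Condition 3 above (that the natural logarithm of a lognormal variable is normally distributed) and the mean formula exp(μ + σ²/2) can both be checked by simulation. This sketch (illustrative only, with arbitrary logarithmic-scale parameters μ = 1.0 and σ = 0.4) draws lognormal samples and inspects both properties:

```python
import math
import random

# If ln(X) ~ N(mu, sigma), then X is lognormal with E[X] = exp(mu + sigma^2/2).
random.seed(3)
mu, sigma = 1.0, 0.4
draws = [random.lognormvariate(mu, sigma) for _ in range(50000)]

sample_mean = sum(draws) / len(draws)
theoretical_mean = math.exp(mu + sigma ** 2 / 2)
print(abs(sample_mean - theoretical_mean) < 0.05)  # True

logs = [math.log(x) for x in draws]
log_mean = sum(logs) / len(logs)
print(abs(log_mean - mu) < 0.02)  # True: the logs center on mu
```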
Lognormal Parameter Sets By default, the lognormal distribution uses the arithmetic mean and standard deviation. For applications for which historical data are available, it is more appropriate to
use either the logarithmic mean and standard deviation, or the geometric mean and standard deviation. Normal Distribution The normal distribution is the most important distribution in probability
theory because it describes many natural phenomena, such as people’s IQs or heights. Decision makers can use the normal distribution to describe uncertain variables such as the inflation rate or the
future price of gasoline. The three conditions underlying the normal distribution are: 1. Some value of the uncertain variable is the most likely (the mean of the distribution). 2. The uncertain
variable could as likely be above the mean as it could be below the mean (symmetrical about the mean). 3. The uncertain variable is more likely to be in the vicinity of the mean than further away.
The mathematical constructs for the normal distribution are as follows:
f(x) = [1/(√(2π)σ)] e^(−(x − μ)²/(2σ²)) for all values of x and μ; while σ > 0
Mean = μ
Standard Deviation = σ
Skewness = 0 (this applies to all inputs of mean and standard deviation)
Excess Kurtosis = 0 (this applies to all inputs of mean and standard deviation)
Mean (μ) and standard deviation (σ) are the distributional parameters. Input requirements: Standard deviation > 0 and can be any positive value Mean can be any value Pareto Distribution The Pareto
distribution is widely used for the investigation of distributions associated with such empirical phenomena as city population sizes, the occurrence of natural resources, the size of companies,
personal incomes, stock price fluctuations, and error clustering in communication circuits. The mathematical constructs for the Pareto distribution are as follows:
f(x) = βL^β / x^(1+β) for x > L
Mean = βL/(β − 1)
Standard Deviation = √[βL²/((β − 1)²(β − 2))]
Skewness = √[(β − 2)/β] [2(β + 1)/(β − 3)]
Excess Kurtosis = 6(β³ + β² − 6β − 2)/[β(β − 3)(β − 4)]
Location (L) and shape (β) are the distributional parameters. There are two standard parameters for the Pareto distribution: location and shape. The location parameter is the lower bound for the
variable. After you select the location parameter, you can estimate the shape parameter. The shape parameter is a number greater than 0, usually greater than 1. The larger the shape parameter, the
smaller the variance and the thicker the right tail of the distribution. Input requirements: Location > 0 and can be any positive value Shape ≥ 0.05 Student’s t Distribution The Student’s t
distribution is the most widely used distribution in hypothesis testing. This distribution is used to estimate the mean of a normally distributed population when the sample size is small, and is used
to test the statistical significance of the difference between two sample means or confidence intervals for small sample sizes. The mathematical constructs for the t distribution are as follows:
f(t) = [Γ((r + 1)/2) / (√(rπ) Γ(r/2))] (1 + t²/r)^(−(r + 1)/2)
where t = (x − x̄)/s and Γ is the gamma function
Mean = 0 (this applies to all degrees of freedom r except if the distribution is shifted to another nonzero central location)
Standard Deviation = √[r/(r − 2)]
Skewness = 0 (this applies to all degrees of freedom r)
Excess Kurtosis = 6/(r − 4) for all r > 4
Degree of freedom r is the only distributional parameter. The t distribution is related to the F-distribution as follows: The square of a value of t with r degrees of freedom is distributed as F with
1 and r degrees of freedom. The overall shape of the probability density function of the t distribution also resembles the bell shape of a normally distributed variable with mean 0 and variance 1,
except that it is a bit lower and wider or is leptokurtic (fat tails at the ends and peaked center). As the number of degrees of freedom grows (say, above 30), the t distribution approaches the
normal distribution with mean 0 and variance 1. Input requirements: Degrees of freedom ≥ 1 and must be an integer Triangular Distribution The triangular distribution describes a situation where you
know the minimum, maximum, and most likely values to occur. For example, you could describe the number of cars sold per week when past sales show the minimum, maximum, and usual number of cars sold.
The three conditions underlying the triangular distribution are: 1. The minimum number of items is fixed. 2. The maximum number of items is fixed. 3. The most likely number of items falls between the
minimum and maximum values, forming a triangular-shaped distribution, which shows that values near the minimum and maximum are less likely to occur than those near the most likely value. The
mathematical constructs for the triangular distribution are as follows:
f(x) = 2(x − Min)/[(Max − Min)(Likely − Min)] for Min < x < Likely
f(x) = 2(Max − x)/[(Max − Min)(Max − Likely)] for Likely < x < Max
Mean = (Min + Likely + Max)/3
Standard Deviation = √[(Min² + Likely² + Max² − MinMax − MinLikely − MaxLikely)/18]
Skewness = √2(Min + Max − 2Likely)(2Min − Max − Likely)(Min − 2Max + Likely) / [5(Min² + Max² + Likely² − MinMax − MinLikely − MaxLikely)^(3/2)]
Excess Kurtosis = −0.6 (this applies to all inputs of Min, Max, and Likely)
Minimum value (Min), most likely value (Likely), and maximum value (Max) are the distributional parameters. Input requirements: Min ≤ Most Likely ≤ Max, where each can take any value; however, Min < Max. Uniform Distribution With the uniform distribution, all values fall between the minimum and maximum and occur with equal likelihood. The three conditions underlying the
uniform distribution are: 1. The minimum value is fixed. 2. The maximum value is fixed. 3. All values between the minimum and maximum occur with equal likelihood. The mathematical constructs for the
uniform distribution are as follows:
f(x) = 1/(Max − Min) for all values such that Min < Max
Mean = (Min + Max)/2
Standard Deviation = √[(Max − Min)²/12]
Skewness = 0 (this applies to all inputs of Min and Max)
Excess Kurtosis = −1.2 (this applies to all inputs of Min and Max) Maximum value (Max) and minimum value (Min) are the distributional parameters. Input requirements: Min < Max and can take any value
Weibull Distribution (Rayleigh Distribution) The Weibull distribution describes data resulting from life and fatigue tests. It is commonly used to describe failure time in reliability studies as well
as the breaking strengths of materials in reliability and quality control tests. Weibull distributions are also used to represent various physical quantities, such as wind speed. The Weibull
distribution is a family of distributions that can assume the properties of several other distributions. For example, depending on the shape parameter you define, the Weibull distribution can be used
to model the exponential and Rayleigh distributions, among others. The Weibull
distribution is very flexible. When the Weibull shape parameter is equal to 1.0, the Weibull distribution is identical to the exponential distribution. The Weibull location parameter lets you set up
an exponential distribution to start at a location other than 0.0. When the shape parameter is less than 1.0, the Weibull distribution becomes a steeply declining curve. A manufacturer might find
this effect useful in describing part failures during a burn-in period. The mathematical constructs for the Weibull distribution are as follows:
f(x) = (α/β)(x/β)^(α−1) e^(−(x/β)^α)
Mean = βΓ(1 + α⁻¹)
Standard Deviation = √(β²[Γ(1 + 2α⁻¹) − Γ²(1 + α⁻¹)])
Skewness = [2Γ³(1 + α⁻¹) − 3Γ(1 + α⁻¹)Γ(1 + 2α⁻¹) + Γ(1 + 3α⁻¹)] / [Γ(1 + 2α⁻¹) − Γ²(1 + α⁻¹)]^(3/2)
Excess Kurtosis = [−6Γ⁴(1 + α⁻¹) + 12Γ²(1 + α⁻¹)Γ(1 + 2α⁻¹) − 3Γ²(1 + 2α⁻¹) − 4Γ(1 + α⁻¹)Γ(1 + 3α⁻¹) + Γ(1 + 4α⁻¹)] / [Γ(1 + 2α⁻¹) − Γ²(1 + α⁻¹)]²
Location (L), shape (α), and scale (β) are the distributional parameters, and Γ is the gamma function. Input requirements: Scale > 0 and can be any positive value Shape ≥ 0.05 Location can take on
any value
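The claim that a Weibull distribution with a shape parameter of 1.0 is identical to the exponential distribution can be verified by simulation, as in the sketch below (illustrative only; note that Python's random.weibullvariate takes the scale argument first and the shape second, the reverse of the α, β naming above):

```python
import random

# A Weibull with shape 1 collapses to the exponential: both samplers below
# target the same mean of 2.0 (scale 2.0, rate 1/2.0).
random.seed(4)
n = 50000
weib = [random.weibullvariate(2.0, 1.0) for _ in range(n)]  # scale=2, shape=1
expo = [random.expovariate(1.0 / 2.0) for _ in range(n)]    # rate = 1/2
mean_w = sum(weib) / n
mean_e = sum(expo) / n
print(abs(mean_w - 2.0) < 0.05, abs(mean_e - 2.0) < 0.05)  # True True
```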
QUESTIONS
1. Why do you need to have profiles in a simulation?
2. Explain the differences between Pearson’s product moment correlation coefficient and Spearman’s rank-based correlation.
3. Will more or fewer trials be required to obtain: higher error levels, higher precision levels, and a wider confidence interval?
4. Explain the differences between error and precision and how these two concepts are linked.
5. If you know that two simulated variables are correlated but do not have the relevant correlation value, should you still go ahead and correlate them in a simulation?
Following are some hands-on exercises using Risk Simulator. The example files are located on Start, Programs, Real Options Valuation, Risk Simulator, Examples.
Pandora’s Toolbox
This chapter deals with the Risk Simulator software’s analytical tools. These analytical tools are discussed through example applications of the Risk Simulator software, complete with step-by-step
illustrations. These tools are very valuable to analysts working in the realm of risk analysis. The applicability of each tool is discussed in detail in this chapter. All of the example files used in
this chapter are found by going to Start, Programs, Real Options Valuation, Risk Simulator, Examples.
TORNADO AND SENSITIVITY TOOLS IN SIMULATION

Theory
One of the powerful simulation tools is tornado analysis—it captures the static impacts of each variable on the outcome of the model; that is, the
tool automatically perturbs each variable in the model a preset amount, captures the fluctuation on the model’s forecast or final result, and lists the resulting perturbations ranked from the most
significant to the least. Figures 6.1 through 6.6 illustrate the application of a tornado analysis. For instance, Figure 6.1 is a sample discounted cash-flow model where the input assumptions in the
model are shown. The question is, what are the critical success drivers that affect the model’s output the most? That is, what really drives the net present value of $96.63 or which input variable
impacts this value the most? The tornado chart tool can be obtained through Simulation | Tools | Tornado Analysis. To follow along the first example, open the Tornado and Sensitivity Charts (Linear)
file in the examples folder. Figure 6.2 shows this sample model where cell G6 containing the net present value is chosen as the target result to be analyzed. The target cell’s precedents in the model
are used in creating the tornado chart. Precedents are all the input and intermediate variables that affect the outcome of the model. For instance, if the model consists of A = B + C, and where C = D
+ E, then B, D, and E are the precedents for A (C is not a precedent as it is only an intermediate calculated
FIGURE 6.1 Sample discounted cash flow model. [The figure shows the full spreadsheet: a 2005 base year; a 15.00% market risk-adjusted discount rate; a 5.00% private-risk discount rate; 2.00% annualized sales growth; 5.00% price erosion; a 40.00% effective tax rate; and, for 2005–2009, the prices, quantities, revenues, costs, and free cash flows for products A, B, and C. The summary measures are Sum PV Net Benefits of $1,896.63, Sum PV Investments of $1,800.00, Net Present Value of $96.63, an 18.80% internal rate of return, and a 5.37% return on investment.]
value). Figure 6.2 also shows the testing range of each precedent variable used to estimate the target result. If the precedent variables are simple inputs, then the testing range will be a simple
perturbation based on the range chosen (e.g., the default is ±10 percent). Each precedent variable can be perturbed at different percentages if required. A wider range is important as it is better
able to test extreme values rather than smaller perturbations around the expected values. In certain circumstances, extreme values may have a larger, smaller, or unbalanced impact (e.g.,
nonlinearities may occur where increasing or decreasing economies of scale and scope creep in for larger or smaller values of a variable) and only a wider range will capture this nonlinear impact.

FIGURE 6.2 Running tornado analysis.
Procedure
Use the following steps to create a tornado analysis:
1. Select the single output cell (i.e., a cell with a function or equation) in an Excel model (e.g., cell G6 is selected in our example).
2. Select Simulation | Tools | Tornado Analysis.
3. Review the precedents and rename them as appropriate (renaming the precedents to shorter names allows a more visually pleasing tornado and spider chart) and click OK. Alternatively, click on Use Cell Address to apply cell locations as the variable names.
Results Interpretation
Figure 6.3 shows the resulting tornado analysis report, which indicates that capital investment has the largest impact on net present value (NPV), followed by tax rate, average sale price and quantity demanded of the product lines, and so forth. The report contains four distinct elements:
1. A statistical summary listing the procedure performed.
2. A sensitivity table (Table 6.1) shows the starting NPV base value of $96.63 and how each input is changed (e.g., investment is changed from $1,800 to $1,980 on the upside with a +10 percent swing, and from $1,800 to $1,620 on the downside with a –10 percent swing). The resulting upside and downside values on NPV are –$83.37 and $276.63, with a total change of $360, making it the variable with the highest impact on NPV. The precedent variables are ranked from the highest impact to the lowest impact.
3. The spider chart (Figure 6.4) illustrates these effects graphically. The y-axis is the NPV target value whereas the x-axis depicts the percentage change in each of the precedent values (the central point is the base case value of $96.63 at 0 percent change from the base value of each precedent). Positively sloped lines indicate a positive relationship or effect while negatively sloped lines indicate a negative relationship (e.g., investment is negatively sloped, which means that the higher the investment level, the lower the NPV). The absolute value of the slope indicates the magnitude of the effect, computed as the percentage change in the result given a percentage change in the precedent (a steep line indicates a higher impact on the NPV y-axis given a change in the precedent x-axis).
4. The tornado chart (Figure 6.5) illustrates the results in another graphical manner, where the highest-impacting precedent is listed first. The x-axis is the NPV value with the center of the chart being the base case condition. Green (lighter) bars in the chart indicate a positive effect while red (darker) bars indicate a negative effect. Therefore, for investments, the red (darker) bar on the right side indicates a negative effect of investment on higher NPV—in other words, capital investment and NPV are negatively correlated. The opposite is true for price and quantity of products A to C (their green or lighter bars are on the right side of the chart).
Statistical Summary One of the powerful simulation tools is the tornado chart—it captures the static impacts of each variable on the outcome of the model. That is, the tool automatically perturbs
each precedent variable in the model a user-specified preset amount, captures the fluctuation on the model’s forecast or final result, and lists the resulting perturbations ranked from the most
significant to the least. Precedents are all the input and intermediate variables that affect the outcome of the model. For instance, if the model consists of A = B + C, where C = D + E, then B, D,
and E are the precedents for A (C is not a precedent as it is only an intermediate calculated value). The range and number of values perturbed is user-specified and can be set to test extreme values
rather than smaller perturbations around the expected values. In certain circumstances, extreme values may have a larger, smaller, or unbalanced impact (e.g., nonlinearities may occur where
increasing or decreasing economies of scale and scope creep occurs for larger or smaller values of a variable) and only a wider range will capture this nonlinear impact. A tornado chart lists all the
inputs that drive the model, starting from the input variable that has the most effect on the results. The chart is obtained by perturbing each precedent input at some consistent range (e.g., ± 10%
from the base case) one at a time, and comparing their results to the base case. A spider chart looks like a spider with a central body and its many legs protruding. The positively sloped lines
indicate a positive relationship, while a negatively sloped line indicates a negative relationship. Further, spider charts can be used to visualize linear and nonlinear relationships. The tornado and
spider charts help identify the critical success factors of an output cell in order to identify the inputs to simulate. The identified critical variables that are uncertain are the ones that should
be simulated. Do not waste time simulating variables that are neither uncertain nor have little impact on the results.
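The static, one-at-a-time perturbation just described can be sketched in a few lines. The toy model and input names below are hypothetical stand-ins for the spreadsheet precedents, not the example workbook's cells, and the ±10 percent swing matches the default mentioned earlier.

```python
def tornado(model, base_inputs, swing=0.10):
    """One-at-a-time static perturbation: move each input +/- swing from its
    base while holding the others fixed, and record the output range."""
    base = model(base_inputs)
    rows = []
    for name, value in base_inputs.items():
        low = model({**base_inputs, name: value * (1 - swing)})
        high = model({**base_inputs, name: value * (1 + swing)})
        rows.append((name, low, high, abs(high - low)))
    # Rank from the widest output range (most impact) to the narrowest.
    return base, sorted(rows, key=lambda r: r[3], reverse=True)

# Toy NPV-like model: output = price * quantity - investment.
npv = lambda x: x["price"] * x["quantity"] - x["investment"]
base, ranked = tornado(npv, {"price": 10.0, "quantity": 50.0, "investment": 400.0})
```

Each tuple holds the precedent name, its downside and upside outputs, and the effective range, mirroring the columns of the sensitivity table.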
FIGURE 6.3 Tornado analysis report. [The report reproduces the statistical summary, the sensitivity table, and the spider chart; the tabulated values appear in Table 6.1.]
TABLE 6.1 Sensitivity Table (Base Value: 96.6261638553219)

Precedent Cell    Output Downside   Output Upside   Effective Range   Input Downside   Input Upside   Base Case Value
Investment        $276.63           ($83.37)        360.00            $1,620.00        $1,980.00      $1,800.00
Tax Rate          $219.73           ($26.47)        246.20            36.00%           44.00%         40.00%
A Price           $3.43             $189.83         186.40            $9.00            $11.00         $10.00
B Price           $16.71            $176.55         159.84            $11.03           $13.48         $12.25
A Quantity        $23.18            $170.07         146.90            45.00            55.00          50.00
B Quantity        $30.53            $162.72         132.19            31.50            38.50          35.00
C Price           $40.15            $153.11         112.96            $13.64           $16.67         $15.15
C Quantity        $48.05            $145.20         97.16             18.00            22.00          20.00
Discount Rate     $138.24           $57.03          81.21             13.50%           16.50%         15.00%
Price Erosion     $116.80           $76.64          40.16             4.50%            5.50%          5.00%
Sales Growth      $90.59            $102.69         12.10             1.80%            2.20%          2.00%
Depreciation      $95.08            $98.17          3.08              $9.00            $11.00         $10.00
Interest          $97.09            $96.16          0.93              $1.80            $2.20          $2.00
Amortization      $96.16            $97.09          0.93              $2.70            $3.30          $3.00
Capex             $96.63            $96.63          0.00              $0.00            $0.00          $0.00
Net Capital       $96.63            $96.63          0.00              $0.00            $0.00          $0.00
FIGURE 6.4 Spider chart.
Notes
Remember that tornado analysis is a static sensitivity analysis applied to each input variable in the model; that is, each variable is perturbed individually and the resulting effects are tabulated. This makes tornado analysis a key component to execute before running a simulation. It is one of the very first steps in risk analysis, in which the most important impact drivers in the model are captured and identified. The next step is to identify which of these important impact drivers are uncertain. These uncertain impact drivers are the critical success drivers of a project, as the results of the model depend on them. These variables are the ones that should be simulated. Do not waste time simulating variables that are either not uncertain or have little impact on the results. Tornado charts assist in identifying these critical success drivers quickly and easily. Following this example, it might be that price and quantity should be simulated, assuming that the required investment and effective tax rate are both known in advance and unchanging. Although the tornado chart is easier to read, the spider chart is important to determine if there
are any nonlinearities in the model. For instance, Figure 6.6 shows another spider chart where nonlinearities are fairly evident (the lines on the graph are not straight but curved). The example
model used is Tornado and Sensitivity Charts (Nonlinear), which applies the Black–Scholes option pricing model.

FIGURE 6.5 Tornado chart.

Such nonlinearities cannot be ascertained from a tornado chart and may be important information in the model or provide decision makers important insight into the model’s dynamics. For instance, in this Black–Scholes model, the fact
that stock price and strike price are nonlinearly related to the option value is important to know. This characteristic implies that option value will not increase or decrease proportionally to the
changes in stock or strike price, and that there might be some interactions between these two prices as well as other variables. As another example, an engineering model depicting nonlinearities
might indicate that a particular part or component, when subjected to a high enough force or tension, will break. Clearly, it is important to understand such nonlinearities.
FIGURE 6.6 Nonlinear spider chart (precedents: Stock Price, Strike Price, Maturity, Risk-free Rate, Volatility, Dividend Yield).
SENSITIVITY ANALYSIS

Theory
A related feature is sensitivity analysis. While tornado analysis (tornado charts and spider charts) applies static perturbations before a simulation run, sensitivity
analysis applies dynamic perturbations created after the simulation run. Tornado and spider charts are the results of static perturbations, meaning that each precedent or assumption variable is
perturbed a preset amount one at a time, and the fluctuations in the results are tabulated. In contrast, sensitivity charts are the results of dynamic perturbations in the sense that multiple
assumptions are perturbed simultaneously and their interactions in the model and correlations among variables are captured in the fluctuations of the results. Tornado charts therefore identify which
variables drive the results the most and hence are suitable for simulation, whereas sensitivity charts identify the impact to the results when multiple interacting variables are simulated together in
the model. This effect is clearly illustrated in Figure 6.7. Notice that the ranking of critical success drivers is similar to that of the tornado chart in the previous examples. However, if correlations are added between the assumptions, Figure 6.8 shows a very different picture. Notice, for instance, that price erosion had little impact on NPV before, but when some of the input assumptions are correlated, the interaction between these correlated variables makes price erosion have more impact. Note that
FIGURE 6.7 Sensitivity chart without correlations (nonlinear rank correlations with NPV: 0.56 B Quantity, 0.51 A Quantity, 0.35 C Quantity, 0.33 A Price, 0.31 B Price, 0.22 C Price, –0.17 Tax Rate, –0.05 Price Erosion, 0.03 Sales Growth).
tornado analysis cannot capture these correlated dynamic relationships. Only after a simulation is run will such relationships become evident in a sensitivity analysis. A tornado chart’s presimulation critical success factors will therefore sometimes be different from a sensitivity chart’s postsimulation critical success factors. The postsimulation critical success factors should be the ones of interest, as these more readily capture the model precedents’ interactions.
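A minimal sketch of this postsimulation idea, assuming Spearman rank correlation as the nonlinear measure: simulate the assumptions together, compute the forecast, then rank the assumption–forecast correlations. The toy model and variable names are illustrative, not the example workbook's.

```python
import random
from statistics import mean

def rank(xs):
    """Plain 1..n ranks; ties are essentially impossible with continuous draws."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    for position, i in enumerate(order):
        ranks[i] = float(position + 1)
    return ranks

def spearman(x, y):
    """Spearman rank correlation: the Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    mx, my = mean(rx), mean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

random.seed(1)
qty = [random.gauss(50, 8) for _ in range(1000)]     # hypothetical assumption draws
price = [random.gauss(10, 1) for _ in range(1000)]
npv = [q * p - 400 for q, p in zip(qty, price)]      # toy forecast
ranked = sorted([("Quantity", spearman(qty, npv)),
                 ("Price", spearman(price, npv))],
                key=lambda t: abs(t[1]), reverse=True)
```

Here quantity is given the wider relative spread, so it ranks first, mirroring how the chart orders assumption–forecast pairs from highest to lowest.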
FIGURE 6.8 Sensitivity chart with correlations (nonlinear rank correlations with NPV: 0.57 B Quantity, 0.52 A Quantity, 0.36 C Quantity, 0.34 A Price, 0.26 B Price, 0.22 C Price, 0.21 Price Erosion, –0.18 Tax Rate, 0.03 Sales Growth).
Procedure
Use the following steps to create a sensitivity analysis:
1. Open or create a model, define assumptions and forecasts, and run the simulation—the example here uses the Tornado and Sensitivity Charts (Linear) file.
2. Select Simulation | Tools | Sensitivity Analysis.
3. Select the forecast of choice to analyze and click OK (Figure 6.9).
Note that sensitivity analysis cannot be run unless assumptions and forecasts have been defined, and a simulation has been run.
FIGURE 6.9
Running sensitivity analysis.
FIGURE 6.10 Rank correlation chart (nonlinear rank correlations with NPV: 0.57 B Quantity, 0.52 A Quantity, 0.36 C Quantity, 0.34 A Price, 0.26 B Price, 0.22 C Price, 0.21 Price Erosion, –0.18 Tax Rate, 0.03 Sales Growth).
Results Interpretation
The results of the sensitivity analysis comprise a report and two key charts. The first is a nonlinear rank correlation chart (Figure 6.10) that ranks from highest to lowest
the assumption–forecast correlation pairs. These correlations are nonlinear and nonparametric, making them free of any distributional requirements (i.e., an assumption with a Weibull distribution can
be compared to another with a Beta distribution). The results from this chart are fairly similar to those of the tornado analysis seen previously (of course without the capital investment value, which
we decided was a known value and hence was not simulated), with one special exception. Tax rate was relegated to a much lower position in the sensitivity analysis chart (Figure 6.10) as compared to
the tornado chart (Figure 6.5). This is because by itself, tax rate will have a significant impact, but once the other variables are interacting in the model, it appears that tax rate has less of a
dominant effect (because tax rate has a smaller distribution as historical tax rates tend not to fluctuate too much, and also because tax rate is a straight percentage value of the income before
taxes, where other precedent variables have a larger effect on NPV). This example proves that performing sensitivity analysis after a simulation run is important to ascertain if there are any
interactions in the model and if the effects of certain variables still hold. The second chart (Figure 6.11) illustrates the percent variation explained; that is, of the fluctuations in the forecast, how much of the variation can be explained by each of the assumptions after accounting for all the interactions among variables.

FIGURE 6.11 Contribution to variance chart (percent variation in NPV explained: 32.27% B Quantity, 26.98% A Quantity, 12.74% C Quantity, 11.65% A Price, 6.79% B Price, 4.71% C Price, 4.55% Price Erosion, 3.14% Tax Rate, 0.11% Sales Growth).

Notice that the sum of all variations explained is usually close to 100 percent (sometimes other elements impact the model but cannot be captured here directly), and if correlations exist, the sum may sometimes exceed 100 percent (because the interaction effects are cumulative).
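One simple way to approximate such a contribution-to-variance chart (an assumption here, not necessarily Risk Simulator's exact algorithm) is to square each assumption–forecast rank correlation and normalize the squares to 100 percent:

```python
def percent_variation_explained(rank_corrs):
    """Square each assumption-forecast rank correlation and normalize the
    squares so the shares sum to 100 percent (a simplified approximation)."""
    squares = {name: r * r for name, r in rank_corrs.items()}
    total = sum(squares.values())
    return {name: 100.0 * s / total for name, s in squares.items()}

# Illustrative correlations taken from the chart in Figure 6.7.
shares = percent_variation_explained(
    {"B Quantity": 0.56, "A Quantity": 0.51, "Tax Rate": -0.17})
```

Squaring makes the sign of a correlation irrelevant, which is why a negatively correlated variable such as tax rate still receives a positive share.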
Notes
Tornado analysis is performed before a simulation run, while sensitivity analysis is performed after a simulation run. Spider charts in tornado analysis can reveal nonlinearities, while rank correlation charts in sensitivity analysis can account for nonlinear and distribution-free conditions.
DISTRIBUTIONAL FITTING: SINGLE VARIABLE AND MULTIPLE VARIABLES

Theory
Another powerful simulation tool is distributional fitting; that is, which distribution does an analyst or engineer use for a
particular input variable in a model? What are the relevant distributional parameters? If no historical data exist, then the analyst must make assumptions about the variables in question. One
approach is to use the Delphi method, where a group of experts are tasked with estimating the behavior of each variable. For instance, a group of mechanical engineers can be tasked with evaluating
the extreme
Pandora’s Toolbox
possibilities of a spring coil’s diameter through rigorous experimentation or guesstimates. These values can be used as the variable’s input parameters (e.g., uniform distribution with extreme values
between 0.5 and 1.2). When testing is not possible (e.g., market share and revenue growth rate), management can still make estimates of potential outcomes and provide the best-case, most-likely-case, and worst-case scenarios, whereupon a triangular or custom distribution can be created. However, if reliable historical data are available, distributional fitting can be accomplished. Assuming that
historical patterns hold and that history tends to repeat itself, then historical data can be used to find the best-fitting distribution with their relevant parameters to better define the variables
to be simulated. Figure 6.12, Figure 6.13, and Figure 6.14 illustrate a distributional-fitting example. The following illustration uses the Data Fitting file in the examples folder.
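The three-point management estimates mentioned above can be turned into a triangular distribution directly; the scenario values below are hypothetical, and `random.triangular` stands in for defining the assumption in the software.

```python
import random

# Hypothetical worst-case / most-likely / best-case revenue growth estimates.
worst, likely, best = 0.05, 0.10, 0.20

random.seed(7)
draws = [random.triangular(worst, best, likely) for _ in range(10_000)]
avg = sum(draws) / len(draws)   # should approach (worst + likely + best) / 3
```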
FIGURE 6.12
Single-variable distributional fitting.
FIGURE 6.13
Distributional fitting result.
Procedure
Use the following steps to perform a distributional fitting:
1. Open a spreadsheet with existing data for fitting (e.g., use the Data Fitting example file).
2. Select the data you wish to fit, not including the variable name (data should be in a single column with multiple rows).
3. Select Simulation | Tools | Distributional Fitting (Single-Variable).
4. Select the specific distributions you wish to fit to, or keep the default where all distributions are selected, and click OK (Figure 6.12).
5. Review the results of the fit, choose the relevant distribution you want, and click OK (Figure 6.13).
Results Interpretation
The null hypothesis (H0) being tested is that the fitted distribution is the same distribution as the population from which the sample data to be fitted come. Thus, if
the computed p-value is lower than a critical alpha level (typically 0.10 or 0.05), then the distribution is the wrong distribution. Conversely, the higher the p-value, the better the distribution fits the data.

FIGURE 6.14 Distributional fitting report. [The report lists the original data points used in the fit, the fitted normal distribution (Mu = 99.28, Sigma = 10.17), the Kolmogorov–Smirnov statistic (0.03) with its p-value (0.9727), and matching theoretical versus actual moments (mean 99.28, standard deviation 10.17, skewness 0.00, excess kurtosis 0.00).]

Roughly, you can think of the p-value as a
percentage explained; that is, if the p-value is 0.9727 (Figure 6.13), then setting a normal distribution with a mean of 99.28 and a standard deviation of 10.17 explains about 97.27 percent of the
variation in the data, indicating an especially good fit. The data was from a 1,000-trial simulation in Risk Simulator based on a normal distribution with a mean of 100 and a standard deviation of
10. Because only 1,000 trials were simulated, the resulting distribution is fairly close to the specified distributional parameters, and in this case, about a 97.27 percent precision. Both the
results (Figure 6.13) and the report (Figure 6.14) show the test statistic, p-value, theoretical statistics (based on the selected distribution), empirical statistics (based on the raw data), the
original data (to maintain a record of the data used), and the assumption complete with the relevant distributional parameters (i.e., if you selected the option to automatically generate assumption
and if a simulation profile already exists). The results also rank all the selected distributions and how well they fit the data.
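The core of the single-variable fit can be sketched with the standard library alone, assuming a normal candidate distribution: estimate mu and sigma by maximum likelihood (the sample mean and standard deviation) and compute the Kolmogorov–Smirnov D statistic against the fitted CDF. This is a simplified sketch of the test described above, not Risk Simulator's implementation; because the parameters are estimated from the same sample, textbook KS critical values are only approximate here.

```python
import math
import random

def norm_cdf(x, mu, sigma):
    """CDF of a normal distribution via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def ks_statistic(data, mu, sigma):
    """Kolmogorov-Smirnov D: the largest gap between the empirical CDF of
    the sample and the fitted normal CDF."""
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        cdf = norm_cdf(x, mu, sigma)
        d = max(d, abs((i + 1) / n - cdf), abs(cdf - i / n))
    return d

random.seed(42)
sample = [random.gauss(100, 10) for _ in range(1000)]   # stand-in raw data

# Maximum likelihood fit of a normal: sample mean and standard deviation.
mu = sum(sample) / len(sample)
sigma = (sum((x - mu) ** 2 for x in sample) / len(sample)) ** 0.5
d = ks_statistic(sample, mu, sigma)   # small D -> good fit
```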
Fitting Multiple Variables
For fitting multiple variables, the process is fairly similar to fitting individual variables. However, the data should be arranged in columns (i.e., each variable is
arranged as a column) and all the variables are fitted. The same analysis is performed when fitting multiple variables as when single variables are fitted. The difference here is that only the final
report will be generated and you do not get to review each variable’s distributional rankings. If the rankings are important, run the single-variable fitting procedure instead, on one variable at a time.
Procedure
The procedure for fitting multiple variables is as follows:
1. Open a spreadsheet with existing data for fitting.
2. Select the data you wish to fit (data should be in multiple columns with multiple rows).
3. Select Simulation | Tools | Distributional Fitting (Multi-Variable).
4. Review the data, choose the types of distributions you want to fit to, and click OK.
Notes
Notice that the statistical ranking methods used in the distributional fitting routines are the chi-square test and the Kolmogorov–Smirnov test. The former is used to test discrete distributions and the latter continuous distributions. Briefly, a hypothesis test coupled with the maximum likelihood procedure and an internal optimization routine is used to find the best-fitting parameters of each distribution tested, and the results are ranked from the best fit to the worst fit. There are other distributional fitting tests, such as the Anderson–Darling and Shapiro–Wilk tests; however, these are very sensitive parametric tests and are highly inappropriate in Monte Carlo simulation distribution-fitting routines when different distributions are being tested. Due to their parametric requirements, these tests are most suited for testing normal distributions and distributions with normal-like behaviors (e.g., a binomial distribution with a high number of trials and symmetrical probabilities) and will provide less accurate results when performed on nonnormal distributions. Take great care when using such parametric tests. The Kolmogorov–Smirnov and chi-square tests employed in Risk Simulator are nonparametric and semiparametric in nature and are better suited for fitting normal and nonnormal distributions.
BOOTSTRAP SIMULATION

Theory
Bootstrap simulation is a simple technique that estimates the reliability or accuracy of forecast statistics or other sample raw data. Bootstrap simulation can be used to
answer a lot of confidence and precision-based questions in simulation. For instance, suppose an identical model (with identical assumptions and forecasts but without any random seeds) is run by 100
different people. The results will clearly be slightly different. The question is, if we collected all the statistics from these 100 people, how will the mean be distributed, or the median, or the
skewness, or excess kurtosis? Suppose one person has a mean value of, say, 1.50, while another has 1.52. Are these two values statistically significantly different from one another, or are they
statistically similar and the slight difference is due entirely to random chance? What about 1.53? So, how far is far enough to say that the values are statistically different? In addition, if a
model’s resulting skewness is –0.19, is this forecast distribution negatively skewed or is it statistically close enough to zero to state that this distribution is symmetrical and not skewed? Thus,
if we bootstrapped this forecast 100 times, that is, ran a 1,000-trial simulation 100 times and collected the 100 skewness coefficients, the skewness distribution would indicate how far zero is
away from –0.19. If the 90 percent confidence on the bootstrapped skewness distribution contains the value zero, then we can state that on a 90 percent confidence level, this distribution is
symmetrical and not skewed, and the value –0.19 is statistically close enough to zero. Otherwise, if zero falls outside of this 90 percent
confidence area, then this distribution is negatively skewed. The same analysis can be applied to excess kurtosis and other statistics. Essentially, bootstrap simulation is a hypothesis testing tool.
Classical methods used in the past relied on mathematical formulas to describe the accuracy of sample statistics. These methods assume that the distribution of a sample statistic approaches a normal
distribution, making the calculation of the statistic’s standard error or confidence interval relatively easy. However, when a statistic’s sampling distribution is not normally distributed or easily
found, these classical methods are difficult to use. In contrast, bootstrapping analyzes sample statistics empirically by repeatedly sampling the data and creating distributions of the different
statistics from each sampling. The classical methods of hypothesis testing are available in Risk Simulator and are explained in the next section. Classical methods provide higher power in their tests
but rely on normality assumptions and can only be used to test the mean and variance of a distribution, as compared to bootstrap simulation, which provides lower power but is nonparametric and
distribution-free, and can be used to test any distributional statistic.
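The zero-in-the-confidence-interval check described above can be sketched in a few lines of Python; the forecast here is a hypothetical stand-in generated from a symmetric distribution, not actual Risk Simulator output:

```python
import random
import statistics

def skewness(xs):
    # Population-form sample skewness: mean of cubed z-scores.
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return sum(((x - m) / s) ** 3 for x in xs) / len(xs)

random.seed(42)
# Stand-in forecast: 1,000 trials drawn from a symmetric distribution.
forecast = [random.gauss(100, 15) for _ in range(1000)]

# Bootstrap: resample the forecast with replacement 100 times and
# collect the skewness coefficient of each resample.
boot_skews = sorted(
    skewness(random.choices(forecast, k=len(forecast))) for _ in range(100)
)

# 90 percent confidence interval: 5th and 95th percentiles.
lo, hi = boot_skews[4], boot_skews[94]
print("Symmetric at 90% confidence (0 inside CI)?", lo <= 0.0 <= hi)
```

If zero falls inside the interval (lo, hi), the forecast's skewness is not statistically different from zero at the 90 percent level.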
Procedure Use the following steps to run a bootstrap simulation: 1. Run a simulation with assumptions and forecasts. 2. Select Simulation | Tools | Nonparametric Bootstrap. 3. Select only one
forecast to bootstrap, select the statistic(s) to bootstrap, and enter the number of bootstrap trials and click OK (Figure 6.15).
Results Interpretation Figure 6.16 illustrates some sample bootstrap results. The example file used was Hypothesis Testing and Bootstrap Simulation. For instance, the 90 percent confidence for the
skewness statistic is between –0.0189 and 0.0952, such that the value 0 falls within this confidence, indicating that on a 90 percent confidence, the skewness of this forecast is not statistically
significantly different from zero, or that this distribution can be considered as symmetrical and not skewed. Conversely, if the value 0 falls outside of this confidence, then the opposite is true:
The distribution is skewed (positively skewed if the forecast statistic is positive, and negatively skewed if the forecast statistic is negative).
Notes The term bootstrap comes from the saying, “to pull oneself up by one’s own bootstraps,” and is applicable because this method uses the distribution of statistics themselves to analyze the
statistics’ accuracy. Nonparametric simulation is simply randomly picking golf balls from a large basket with
FIGURE 6.15 Nonparametric bootstrap simulation.
FIGURE 6.16 Bootstrap simulation results.
Pandora’s Toolbox
replacement where each golf ball is based on a historical data point. Suppose there are 365 golf balls in the basket (representing 365 historical data points). Imagine if you will that the value of
each golf ball picked at random is written on a large whiteboard. The results of the 365 balls picked with replacement are written in the first column of the board with 365 rows of numbers. Relevant
statistics (mean, median, mode, standard deviation, and so forth) are calculated on these 365 rows. The process is then repeated, say, five thousand times. The whiteboard will now be filled with 365
rows and 5,000 columns. Hence, 5,000 sets of statistics (that is, there will be 5,000 means, 5,000 medians, 5,000 modes, 5,000 standard deviations, and so forth) are tabulated and their distributions
shown. The relevant statistics of the statistics are then tabulated, where from these results one can ascertain how confident the simulated statistics are. Finally, bootstrap results are important
because according to the Law of Large Numbers and Central Limit Theorem in statistics, the mean of the sample means is an unbiased estimator and approaches the true population mean when the sample
size increases.
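The golf-ball-and-whiteboard procedure above translates almost literally into code; this sketch uses hypothetical uniform data in place of real historical points:

```python
import random
import statistics

random.seed(0)
# The basket: 365 "golf balls," one per historical data point (hypothetical).
history = [random.uniform(50, 150) for _ in range(365)]

# Pick 365 balls with replacement and record that column's mean;
# repeat 5,000 times to fill the whiteboard's 5,000 columns.
boot_means = [
    statistics.fmean(random.choices(history, k=365)) for _ in range(5000)
]

# Statistics of the statistics: the mean of the 5,000 sample means
# sits very close to the mean of the original sample.
print(round(statistics.fmean(history), 2), round(statistics.fmean(boot_means), 2))
```

The same loop can collect medians, standard deviations, or any other statistic per column, which is what makes the method distribution-free.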
HYPOTHESIS TESTING Theory A hypothesis test is performed when testing the means and variances of two distributions to determine if they are statistically identical or statistically different from one
another; that is, to see if the differences between the means and variances of two different forecasts that occur are based on random chance or if they are, in fact, statistically significantly
different from one another. This analysis is related to bootstrap simulation with several differences. Classical hypothesis testing uses mathematical models and is based on theoretical distributions.
This means that the precision and power of the test are higher than bootstrap simulation's empirically based method of simulating a simulation and letting the data tell the story. However, classical hypothesis testing is applicable only for testing two distributions' means and variances (and, by extension, standard deviations) to see if they are statistically identical or different. In contrast,
nonparametric bootstrap simulation can be used to test for any distributional statistics, making it more useful, but the drawback is its lower testing power. Risk Simulator provides both techniques
from which to choose.
Procedure Use the following steps to run a hypothesis test: 1. Run a simulation with at least two forecasts.
2. Select Simulation | Tools | Hypothesis Testing. 3. Select the two forecasts to test, select the type of hypothesis test you wish to run, and click OK (Figure 6.17).
Results Interpretation A two-tailed hypothesis test is performed on the null hypothesis (Ho) such that the two variables’ population means are statistically identical to one another. The alternative
hypothesis (Ha) is such that the population means are statistically different from one another. If the calculated p-values are less than or equal to 0.01, 0.05, or 0.10 alpha test levels, it means
that the null hypothesis is rejected, which implies that the forecast means are statistically significantly different at the 1 percent, 5 percent, and 10 percent significance levels. If the null
hypothesis is not rejected when the p-values are high, the means of the two forecast distributions are statistically similar to one another. The same analysis is performed on variances of two
forecasts at a time using the pairwise F-test. If the p-values are small, then the variances (and standard deviations) are statistically different from one another. Otherwise, for large p-values, the
variances are statistically identical to one another. See Figure 6.18. The example file used was Hypothesis Testing and Bootstrap Simulation.
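Risk Simulator performs these tests internally, but the same pair of tests can be sketched outside the tool, assuming SciPy is available; the two forecasts below are hypothetical normal samples, and Welch's t-test stands in for the unequal-variance t-test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Two hypothetical forecast distributions with similar means and variances.
forecast1 = rng.normal(100.0, 10.0, 1000)
forecast2 = rng.normal(100.5, 10.0, 1000)

# Two-variable t-test with unequal variances (Welch's t-test) on the means.
t_stat, t_p = stats.ttest_ind(forecast1, forecast2, equal_var=False)

# Pairwise F-test on the variances: F = s1^2 / s2^2, two-tailed p-value.
f_stat = np.var(forecast1, ddof=1) / np.var(forecast2, ddof=1)
df1 = df2 = len(forecast1) - 1
f_p = 2.0 * min(stats.f.cdf(f_stat, df1, df2), stats.f.sf(f_stat, df1, df2))

# Large p-values: fail to reject Ho, so the means and variances are
# statistically similar at the usual alpha levels.
print(f"t = {t_stat:.4f} (p = {t_p:.4f}); F = {f_stat:.4f} (p = {f_p:.4f})")
```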
Notes The two-variable t-test with unequal variances (the population variance of forecast 1 is expected to be different from the population variance of forecast 2) is appropriate when the forecast
distributions are from different populations (e.g., data collected from two different geographical locations or two different operating business units). The two-variable t-test with equal variances
(the population variance of forecast 1 is expected to be equal to the population variance of forecast 2) is appropriate when the forecast distributions are from similar populations (e.g., data
collected from two different engine designs with similar specifications). The paired dependent two-variable t-test is appropriate when the forecast distributions are from exactly the same population
and subjects (e.g., data collected from the same group of patients before an experimental drug was used and after the drug was applied).
DATA EXTRACTION, SAVING SIMULATION RESULTS, AND GENERATING REPORTS A simulation’s raw data can be very easily extracted using Risk Simulator’s Data Extraction routine. Both assumptions and forecasts
can be extracted,
FIGURE 6.17 Hypothesis testing.
FIGURE 6.18 Hypothesis testing results.
Hypothesis Test on the Means and Variances of Two Forecasts — Result: t-test with unequal variances assumption; computed t-statistic –0.32947 (p-value 0.74181); computed F-statistic 1.026723 (p-value 0.351212).
but a simulation must first be run. The extracted data can then be used for a variety of other analyses and the data can be extracted to different formats—for use in spreadsheets, databases, and
other software products.
Procedure To extract a simulation’s raw data, use the following steps: 1. Open or create a model, define assumptions and forecasts, and run the simulation. 2. Select Simulation | Tools | Data
Extraction. 3. Select the assumptions and/or forecasts you wish to extract the data from and click OK. The simulated data can be extracted and saved to an Excel worksheet, a text file (for easy
import into other software applications), or as a RiskSim file, which can be reopened as Risk Simulator forecast charts at a later date. Finally, you can create a simulation report of all the
assumptions and forecasts in your model by going to Simulation | Create Report. This is an efficient way to gather all the simulation inputs in one concise report.
CUSTOM MACROS Simulation can also be run while harnessing the power of Visual Basic for Applications (VBA) in Excel. For instance, the examples in Chapter 2 on running models with VBA codes can be
used in tandem with Risk Simulator. For an illustration of how to set the macros or customized functions to run with simulation, see the VBA Macro hands-on exercise (Retirement Funding with
Inflation) at the end of this chapter.
APPENDIX—GOODNESS-OF-FIT TESTS Several statistical tests exist for deciding if a sample set of data comes from a specific distribution. The most commonly used are the Kolmogorov–Smirnov test and the
chi-square test. Each test has its advantages and disadvantages. The following sections detail the specifics of these tests as applied in distributional fitting in Monte Carlo simulation analysis.
These two tests are used in Risk Simulator’s distributional fitting routines. Other goodness-of-fit tests such as the Anderson–Darling, Lilliefors, Jarque–Bera, Shapiro–Wilk, and others are not
used as these are parametric tests and their accuracy depends on the data set being normal or near-normal. Therefore, the results of these tests are oftentimes suspect or yield inconsistent results.
Kolmogorov–Smirnov Test The Kolmogorov–Smirnov (KS) test is based on the empirical distribution function of a sample data set and belongs to a class of nonparametric tests. This nonparametric
characteristic is the key to understanding the KS test, which simply means that the distribution of the KS test statistic does not depend on the underlying cumulative distribution function being
tested. Nonparametric simply means no predefined distributional parameters are required. In other words, the KS test is applicable across a multitude of underlying distributions. Another advantage is
that it is an exact test as compared to the chi-square test, which depends on an adequate sample size for the approximations to be valid. Despite these advantages, the KS test has several important
limitations. It only applies to continuous distributions, and it tends to be more sensitive near the center of the distribution than at the distribution’s tails. Also, the distribution must be fully
specified. Given N ordered data points Y1, Y2, . . . YN, the empirical distribution function is defined as En = n(i)/N where n(i) is the number of points less than Yi where Yi values are ordered from
the smallest to the largest value. This is a step function that increases by 1/N at the value of each ordered data point.
The null hypothesis is such that the data set follows a specified distribution while the alternate hypothesis is that the data set does not follow the specified distribution. The hypothesis is tested
using the KS statistic defined as

KS = max |F(Yi) − i/N|, taken over 1 ≤ i ≤ N
where F is the theoretical cumulative distribution of the continuous distribution being tested that must be fully specified (i.e., the location, scale, and shape parameters cannot be estimated from
the data). As the null hypothesis is that the data follows some specified distribution, when applied to distributional fitting in Risk Simulator, a low p-value (e.g., less than 0.10, 0.05, or 0.01)
indicates a bad fit (the null hypothesis is rejected) while a high p-value indicates a statistically good fit.
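The KS statistic as defined above can be computed directly; the sample below is hypothetical, and the fully specified theoretical distribution is the standard normal:

```python
import math

def std_normal_cdf(x):
    # Fully specified theoretical CDF (no parameters estimated from the data).
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ks_statistic(data, cdf):
    # KS = max over the ordered points of |F(Yi) - i/N|.
    ys = sorted(data)
    n = len(ys)
    return max(abs(cdf(y) - i / n) for i, y in enumerate(ys, start=1))

# Hypothetical sample tested against the standard normal.
sample = [-1.2, -0.4, -0.1, 0.3, 0.8, 1.5]
ks = ks_statistic(sample, std_normal_cdf)
print(round(ks, 4))  # a small value: the sample tracks the standard normal
```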
Chi-Square Test The chi-square (CS) goodness-of-fit test is applied to binned data (i.e., data put into classes), and an attractive feature of the CS test is that it can be applied to any univariate
distribution for which you can calculate the cumulative distribution function. However, the values of the CS test statistic are dependent on how the data is binned and the test requires a sufficient
sample size in order for the CS approximation to be valid. This test is sensitive to the choice of bins. The test can be applied to discrete distributions such as the binomial and the Poisson, while
the KS test is restricted to continuous distributions. The null hypothesis is such that the data set follows a specified distribution while the alternate hypothesis is that the data set does not
follow the specified distribution. The hypothesis is tested using the CS statistic defined as
χ² = Σ (Oi − Ei)² / Ei, summed over i = 1, . . . , k
where Oi is the observed frequency for bin i and Ei is the expected frequency for bin i. The expected frequency is calculated by Ei = N(F(YU) – F(YL)) where F is the cumulative distribution function
for the distribution being tested, YU is the upper limit for class i, YL is the lower limit for class i, and N is the sample size. The test statistic follows a CS distribution with (k – c) degrees of freedom, where k is the number of nonempty cells and c is the number of estimated parameters (including location, scale, and shape parameters) for the distribution, plus 1. For example, for a three-parameter Weibull distribution, c = 4. Therefore, the hypothesis that the data are from a population with the specified distribution is rejected if χ² > χ²(α, k – c), where χ²(α, k – c) is the CS percent point function with k – c degrees of freedom and a significance level of α (see Table 6.2). Again, as the null hypothesis is such that the data follows some specified distribution, when applied to distributional fitting in Risk Simulator, a low p-value (e.g., less than 0.10, 0.05, or 0.01) indicates a bad fit (the null hypothesis is rejected) while a high p-value indicates a statistically good fit.
TABLE 6.2 Chi-Square Test
Alpha Level (%): 10 / 5 / 1
Critical Value: 32.00690 / 35.17246 / 41.63840
Note: Chi-square goodness-of-fit test sample critical values. Degrees of freedom 23.
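A minimal sketch of the CS statistic, assuming a fully specified standard normal as the tested distribution (so c = 1 and no parameters are estimated from the data); the data and bin edges are hypothetical:

```python
import math
import random

def normal_cdf(x):
    # Standard normal CDF.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def chi_square_gof(data, edges, cdf):
    # CS = sum of (Oi - Ei)^2 / Ei over the bins defined by `edges`,
    # with expected counts Ei = N * (F(YU) - F(YL)).
    n = len(data)
    stat = 0.0
    for lo, hi in zip(edges, edges[1:]):
        observed = sum(1 for x in data if lo <= x < hi)
        expected = n * (cdf(hi) - cdf(lo))
        stat += (observed - expected) ** 2 / expected
    return stat

random.seed(3)
data = [random.gauss(0.0, 1.0) for _ in range(500)]
edges = [float("-inf"), -1.0, 0.0, 1.0, float("inf")]
stat = chi_square_gof(data, edges, normal_cdf)
# k = 4 nonempty bins, c = 1 (no estimated parameters), so df = 3;
# compare stat against the chi-square critical value at the chosen alpha.
print(round(stat, 3))
```

Note how the result depends on the choice of `edges`, which is exactly the binning sensitivity described above.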
QUESTIONS
1. Name the key similarities and differences between a tornado chart and a spider chart. Then, compare tornado and spider charts with sensitivity analysis.
2. In distributional fitting, sometimes you may not get the distribution you thought is the right fit as the best choice. Why is this so? Also, why does the beta distribution usually come up as one of the top few candidates as the best-fitting distribution?
3. Briefly explain what a hypothesis test is.
4. How is bootstrap simulation related to precision and error control in simulation?
5. In sensitivity analysis, how is percent variation explained linked to rank correlation?
Additional hands-on exercises are presented in the following pages. These exercises require Risk Simulator to be installed and application of the techniques presented in this chapter.
Industry Applications
Extended Business Cases I: Pharmaceutical and Biotech Negotiations, Oil and Gas Exploration, Financial Planning with Simulation, Hospital Risk Management, and Risk-Based Executive Compensation
This chapter provides the first installment of five extended business cases. The first case pertains to the application of Monte Carlo simulation and risk analysis in the biotech and pharmaceutical
industries. The case details the use of risk analysis for deal making and structuring, and is contributed by Dr. Charles Hardy. The second case in this chapter is contributed by Steve Hoye, a veteran
of the oil and gas industry. Steve details the risks involved in oil exploration and production by illustrating a comprehensive oil exploration case from cradle to grave. Then, a financial planning
case is presented by Tony Jurado, in considering the risks involved in retirement planning. The next case illustrates how Monte Carlo simulation coupled with queuing theory can be applied to hospital
planning, and is contributed by Larry Pixley, an expert consultant in the health-care sector. Finally, Patrick Haggerty illustrates how simulation can be used to engineer a risk-based executive
compensation plan.
CASE STUDY: PHARMACEUTICAL AND BIOTECH DEAL STRUCTURING This business case is contributed by Dr. Charles Hardy, principal of BioAxia Incorporated of Foster City, California, a consulting firm
specializing in valuation and quantitative deal structuring for bioscience firms. He is also chief financial officer and director of business development at Panorama Research, a biotechnology
incubator in the San Francisco Bay Area. Dr. Hardy has a Ph.D. in pathobiology from the University of Washington in Seattle, Washington, and an MBA in finance and entrepreneurship from the University
of Iowa in Iowa City, Iowa. He has functioned in a variety of roles for several start-up companies, including being CEO of Pulmogen, an early-stage medical device company. Dr. Hardy lives and works
in the San Francisco Bay Area. Smaller companies in the biotechnology industry rely heavily on alliances with pharmaceutical and larger companies to finance their R&D expenditures. Pharmaceutical and
larger organizations in turn depend on these alliances to supplement their internal R&D programs. In order for smaller organizations to realize the cash flows associated with these alliances, they
must have a competent and experienced business development component to negotiate and structure these crucial deals. In fact, the importance of these business collaborations to the survival of most
young companies is so great that deal-making experience, polished business-development skills, and a substantial network of contacts are all frequent assets of the most successful executives of
start-up and early-stage biotechnology companies. Although deal-making opportunities for biotech companies are abundant because of the pharmaceutical industry’s need to keep a healthy pipeline of new
products in development, in recent years deal-making opportunities have lessened. Intuitively, then, firms have to be much more careful in the way they structure and value the deals in which they do
get the opportunity to participate. However, despite this importance, a large number of executives prefer to go with comparable business deal structures for these collaborations in the hope of
maximizing shareholder value for their firms, or by developing deal terms using their own intuition rather than developing a quantitative methodology for deal valuation and optimization to supplement
their negotiation skills and strategies. For companies doing only one deal or less a year, perhaps the risk might be lower by structuring a collaboration based on comparable deal structures; at least
they will get as much as the average company, or will they? As described in this case study, Monte Carlo simulation, stochastic optimization, and real options are ideal tools for valuing and
optimizing the
financial terms of collaborative biomedical business deals focused on the development of human therapeutics. A large amount of data associated with clinical trial stage lengths and completion
probabilities are publicly available. By quantitatively valuing and structuring deals, companies of all sizes can gain maximum shareholder value at all stages of development, and, most importantly,
future cash flows can be defined based on expected cashflow needs and risk preference.
Deal Types Most deals between two biotechnology companies or a biotechnology company and pharmaceutical company are strategic alliances where a cooperative agreement is made between two organizations
to work together in defined ways with the goal of successfully developing or commercializing products. As the following list describes, there are several different types of strategic alliances:
■ Product Licensing. A highly flexible and widely applicable arrangement where one party wishes to access the technology of another organization with no other close cooperation. This type of alliance carries very low risk and these types of agreements are made at nearly every stage of pharmaceutical development.
■ Product Acquisition. A company purchases an existing product license from another company and thus obtains the right to market a fully or partially developed product.
■ Product Fostering. A short-term exclusive license for a technology or product in a specific market that will typically include hand-back provisions.
■ Comarketing. Two companies market the same product under different trade names.
■ Copromotion. Two parties promote the same product under the same brand name.
■ Minority Investment Alliance. One company buys stock in another as part of a mutually desired strategic relationship.
The historical agreement valued and optimized in this case study is an example of a product-licensing deal.
Financial Terms Each business deal is decidedly unique, which explains why no “generic” financial model is sufficient to value and optimize every opportunity and collaboration. A biomedical
collaborative agreement is the culmination of the
combined goals, desires, requirements, and pressures from both sides of the bargaining table, possibly biased in favor of one party by exceptional negotiating skills, good preparation, more thorough
due diligence, and accurate assumptions, and less of a need for immediate cash. The financial terms agreed on for licensing or acquiring a new product or technology depend on a variety of factors,
most of which impact the value of the deal. These include but are not limited to:
■ Strength of the intellectual property position.
■ Exclusivity of the rights agreed on.
■ Territorial exclusivity granted.
■ Uniqueness of the technology transferred.
■ Competitive position of the company.
■ Stage of technology developed.
■ Risk of the project being licensed or sold.
Although every deal is different, most include: (1) licensing and R&D fees; (2) milestone payments; (3) product royalty payments; and (4) equity investments.
Primary Financial Models All calculations described in this case study are based on discounted cash flow (DCF) principles using risk-adjusted discount rates. Here, assets under uncertainty are valued using the following basic financial equation:

NPV = Σ E(CFt) / (1 + rt + πt)^t, summed over periods t = 0, 1, 2, . . .

where NPV is the net present value, E(CFt) is the expected value of the cash flow at time t, rt is the risk-free rate, and πt is the risk premium appropriate for the risk of CFt. All subcomponents of
models described here use different discount rates if they are subject to different risks. In the case of biomedical collaborative agreements, all major subcomponents (licensing fees, R&D costs and
funding, clinical costs, milestone payments, and royalties) are frequently subject to many different distinct risks, and thus are all assigned their own discount rates based on a combination of
factors, with the subject company’s weighted average cost of capital (WACC) used as the base value. To incorporate the uncertain and dynamic nature of these risk assumptions into the model, all of
these discount rates are themselves Monte Carlo variables. This discounting supplementation is critical to valuing the deal accurately, and most important for later stochastic optimization.
Historical Deal Background and Negotiated Deal Structure The deal valued and optimized in this case study was a preclinical, exclusive product-licensing agreement between a small biotechnology
company and a larger organization. The biopharmaceutical being valued had one major therapeutic indication, with an estimated market size of $1 billion at the date the deal was signed. The licensee
negotiated the right to sublicense. The deal had a variety of funding provisions, with a summary of the financial terms presented in Table 7.1. The licensor estimated they were approximately 2 years
away from filing an investigational new drug (IND) application that would initiate clinical trials in humans. For the purposes of the deal valuation and optimization described here, it is assumed
that no information asymmetries exist between the companies forming the collaboration (i.e., both groups feel there is an equally strong likelihood their candidate biopharmaceutical will be a
commercial success). Licensing fees for the historical deal consisted of an up-front fee followed by licensing maintenance fees including multipliers (Table 7.1). Licensing maintenance fees will
terminate on any one of the following events: (1) first IND filing by licensor; (2) tenth anniversary of the effective date; and (3) termination of the agreement. Milestone values for the historical
deal numbered only three, with a $500,000 payment awarded on IND filing, a $1,500,000 payment on new drug application (NDA) filing, and a $4,000,000 payment on NDA approval (Table 7.1). The
negotiated royalties for the historical deal were a flat 2.0 percent of net sales. As described later in this case, two additional deal scenarios were constructed and stochastically optimized from
the historical structure: a highervalue, lower-risk (HVLR) scenario and a higher-value, higher-risk (HVHR) scenario (Table 7.1). Major Assumptions Figure 7.1 shows a time line for all three deal
scenarios evaluated. Also shown are the milestone schedules for all three scenarios, along with major assumption data. The total time frame for all deal calculations was 307.9 months, where the
candidate pharmaceutical gains a 20 percent maximum market share of a 1 billion dollar market, with a 20 percent standard deviation during the projected 15-year sales period of the pharmaceutical.
The market is assumed to grow 1.0 percent annually starting at the effective date of the agreement and throughout the valuation period. The manufacturing and marketing costs of the potential
pharmaceutical were estimated to be 58 percent, an important assumption considering that royalties are paid on net sales, not gross sales. The total market size, market growth rate, maximum market
share, and manufacturing and marketing offset are all Monte Carlo variables following lognormal distributions where
TABLE 7.1 Historical Financial Terms Granted to the Licensor of the Signed Biomedical Collaborative Deal Valued and Optimized in This Case Study

Deal Scenario Component (Timing): Historical / Higher-Value, Lower-Risk / Higher-Value, Higher-Risk
Licensing Fees (30 days from effective date): $85,000
Licensing Maintenance Fees (first through fifth anniversaries): $100,000; $200,000; $300,000; $400,000; $500,000 / $125,000; $250,000; $375,000; $500,000; $500,000 / $75,000; $150,000; $225,000; $300,000; $300,000
R&D Funding (per year): $250,000 / $275,000 / $165,000
Milestone Payments (first IND filing in the United States or European equivalent): $500,000 / $660,000 / $910,000
Other milestone events: successful conclusion of Phase I clinical trials in the United States or European equivalent; successful conclusion of Phase II clinical trials in the United States or European equivalent; first PLA(a) (or NDA(b)) filing or European equivalent; NDA approval in the United States or European equivalent
Royalties: 2.0% Net Sales / 0.5% Net Sales / 5.5% Net Sales
(a) Product license application. (b) New drug application.
FIGURE 7.1 Time line for the biomedical licensing deal. Milestone and royalty values for all deal scenarios evaluated are shown. R&D, licensing, and licensing maintenance fees are not shown.
extreme values are unlikely. Assumptions regarding clinical trial length, completion probabilities, and major variables in the valuation model are also shown in Figure 7.1. All of these values are
Monte Carlo assumptions. Throughout this case study, deal values were based on royalties from 15 years of net sales. Royalties were paid on a quarterly basis, not at the end of each sales year. Total
R&D costs for the licensor were $200,000 annually, again estimated with a Monte Carlo assumption. Inflation during the period was assumed to be 1.95 percent annually and average annual pharmaceutical
price increases (APPIs) were assumed to be 5.8 percent. Thus, milestones were deflated in value, and royalties inflated by APPI less inflation. For the deal valuation described here, the licensor was
assumed to be unprofitable preceding and during the clinical trial process and milestone payments were not subject to taxes. However, royalties from the licensee paid to the licensor were taxed at a
33.0 percent rate.
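One way to set up such a lognormal assumption, sketched here for the $1 billion market size with a hypothetical 20 percent standard deviation, is to back out the underlying normal parameters from the desired arithmetic mean and standard deviation:

```python
import math
import random

def lognormal_params(mean, sd):
    # Underlying normal (mu, sigma) for a lognormal with the given
    # arithmetic mean and standard deviation.
    sigma2 = math.log(1.0 + (sd / mean) ** 2)
    return math.log(mean) - sigma2 / 2.0, math.sqrt(sigma2)

random.seed(11)
# Market-size assumption: mean $1 billion, hypothetical 20% standard
# deviation; the lognormal keeps draws positive and extreme values unlikely.
mu, sigma = lognormal_params(1_000_000_000.0, 200_000_000.0)
draws = [random.lognormvariate(mu, sigma) for _ in range(10_000)]

sample_mean = sum(draws) / len(draws)
print(round(sample_mean / 1e9, 3))  # close to the target mean of 1.0 billion
```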
Deal Valuations Historical Deal Valuation Figure 7.2 illustrates the Monte Carlo summary of the historical deal, while Figure 7.3 shows a comparative illustration of each major component of the
historical scenario. Mean deal present value was
FIGURE 7.2 Historical deal scenario Monte Carlo summary. (10,000 trials; mean $1,432,128; median $1,422,229; standard deviation $134,449; skewness 0.46; kurtosis 3.47; coefficient of variability 9.38%; range $994,954 to $2,037,413; mean standard error $1,344; 50 percent certainty range $1,338,115 to $1,516,020.)
FIGURE 7.3
A comparative illustration I. This is an illustration of the Monte Carlo distributions of the cash-flow present value of the historical deal scenario, along with the distributions of the deal’s
individual components. Each component has a clearly definable distribution that differs considerably from other deal components, both in value and risk characteristics. The percentage of each
component to total deal present value is also shown.
$1,432,128 with a standard deviation of $134,449 (Figure 7.2). The distribution describing the mean was relatively symmetric with a skewness of 0.46. The kurtosis of the distribution, the
“peakedness,” was 3.47 (excess kurtosis of 0.47), limiting the deal range from $994,954 to $2,037,413. The coefficient of variation (CV), the primary measure of risk for the deal, was low at 9.38
percent. R&D/licensing contributed the most to total deal value with a mean present value of $722,108, while royalties contributed the least with a mean value of $131,092 (Figure 7.3). Milestones in
the historical scenario also contributed greatly to the historical deal value with a mean present value of $578,927. The riskiness of the cash flows varied greatly among individual historical deal
components. R&D/licensing cash flows varied the least and had by far the lowest risk with a CV of only 7.48 percent and, proportional to the distribution’s mean, had the smallest range among any deal
component (data not shown). The present value of milestone cash flows was much more volatile, with a CV of 14.58 percent. Here the range was greater ($315,103 to $1,004,563) with a symmetric
distribution having a skewness of only 0.40 (data not shown).
Royalty present value was by far the most volatile with a CV of 45.71 percent (data not shown). The kurtosis of royalty present value was large (5.98; data not shown), illustrating the proportionally
wide distribution to the small royalty mean ($131,093; Figure 7.3). These data should not be surprising as the royalty cash flows are subject to variability of nearly all Monte Carlo assumptions in
the model and are thus highly volatile.

Monte Carlo Assumption and Decision Variable Sensitivities

Figure 7.4 shows a tornado chart of historical deal assumptions and decision variables. The
probability of IND filing had the largest influence on variation of total deal present value, as all milestones and royalties are dependent on this variable. Interestingly, next came the annual
research cost for each full-time equivalent (FTE) for the licensor performing the remaining preclinical work in preparation for an IND filing, followed by the negotiated funding amount of each FTE
(Figure 7.4). Thus, an area for the licensor to create shareholder value is to overestimate R&D costs in negotiating the financial terms for the deal, considering R&D/licensing funding contributed
50.42 percent of total deal present value (Figure 7.3). Variables impacting royalty cash flows, such as the royalty discount rate and manufacturing and marketing offset percentages, were more
important than the negotiated milestone amounts, although the milestone discount rate was 10th in contribution to variance to the historical deal (Figure 7.4).
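To make the mechanics concrete, the kind of probability- and time-discounted milestone valuation underlying these figures can be sketched as follows. The schedule, cumulative probabilities, and 12 percent discount rate are illustrative assumptions, not the case study’s actual terms.

```python
# Hypothetical milestone schedule: (year paid, payment, cumulative probability
# of reaching that milestone). All values are illustrative placeholders.
def milestone_pv(milestones, discount_rate):
    """Probability-weighted present value of a milestone schedule."""
    return sum(p_cum * payment / (1 + discount_rate) ** year
               for year, payment, p_cum in milestones)

schedule = [
    (1, 1_000_000, 0.85),  # e.g., IND filing
    (3, 2_000_000, 0.55),  # e.g., start of Phase 2
    (6, 3_000_000, 0.25),  # e.g., regulatory approval
]
print(round(milestone_pv(schedule, 0.12)))  # risk-adjusted present value
```

Shifting value toward earlier milestones, as in the HVLR restructuring, raises present value through both lighter discounting and higher cumulative probabilities of payment.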
Higher-Value, Lower-Risk Deal Valuation

Changes in Key Assumptions and Parameters Differing from the Historical, Signed Deal

The financial structure for the HVLR deal scenario was considerably
different from the historical deal (Table 7.1). Indeed, R&D and licensing funding were significantly increased and the milestone schedule was reorganized with five payments instead of the three in
the historical deal. In the HVLR scenario, the value of each individual milestone was stochastically optimized using individual restrictions for each payment. While the future value of the milestone
payments was actually $300,000 less than the historical deal (Table 7.1), the present value as determined by Monte Carlo analysis was 93.6 percent higher. In devising this scenario, to compensate the
licensee for increased R&D/licensing fees and milestone restructuring, the royalty value in the HVLR scenario was reduced to only a 0.5 percent flat rate (Table 7.1).

Deal Valuation, Statistics, and Sensitivities

Figure 7.5 shows the Monte Carlo summary of the HVLR scenario, and Figure 7.6 shows an illustration of the present value of the HVLR deal and its three components. The Monte Carlo mean deal
value for this scenario was $2,092,617, an increase of 46.1 percent over
the historical deal, while total risk was reduced by 16.3 percent as measured by changes in the CV of cash-flow present value (Figures 7.2 and 7.5). This gain in total deal value was achieved by a 93.6 percent increase in the present value of milestone payments (Figures 7.3 and 7.6), along with a 9.6 percent reduction in milestone risk (data not shown). The present value of R&D/licensing funding also increased (30.1 percent), with a 22.5 percent reduction in risk. These gains came at the cost of royalty income being reduced by 75.1 percent (Figures 7.3 and 7.6). The royalty component was so small and the mean so tightly concentrated that the other distributions were comparatively distorted (Panel A, Figure 7.6). If the royalty component is removed, the total deal, milestone, and R&D/licensing distributions are more clearly presented (Panel B, Figure 7.6). The milestone percentage of the total HVLR scenario was much higher than the milestone component of the historical deal, while the R&D/licensing fees of the HVLR structure were less than those of the historical structure (Figures 7.3 and 7.7). Cumulatively, the HVLR scenario had a 16.9 percent reduction in risk in comparison with the historical deal (Figures 7.2 and 7.5), where the R&D/licensing and milestone cash flows of the HVLR structure were considerably less risky than in the historical scenario (data not shown). However, not surprisingly, the risk for the royalty cash flows of the HVLR structure remained nearly identical to that of the historical deal’s royalties (data not shown).

FIGURE 7.4
Historical deal Monte Carlo assumption and decision variable tornado chart.

FIGURE 7.5
Higher-value, lower-risk deal scenario Monte Carlo summary. [Forecast statistics, 10,000 trials: mean $2,092,617; median $2,087,697; standard deviation $164,274; variance $26,986,218,809; skewness 0.18; kurtosis 3.06; coefficient of variability 7.85%; range $1,475,620 to $2,777,047; range width $1,301,427; mean standard error $1,643; 50.00 percent certainty range $1,980,294 to $2,200,228.]

Monte Carlo Assumption and Decision Variable Sensitivities

The tornado chart for the HVLR deal is presented in Figure 7.8. As with the historical deal, the
probability of IND filing produced the largest variation in the HVLR deal. The annual research cost for each FTE for the licensor performing the remaining preclinical work in preparation for IND filing was third, while the negotiated annual funding amount for each FTE was fourth. The value of each milestone ranked higher in importance than in the historical deal (Figures 7.4 and 7.8). This result should not be surprising, as the present value of total milestones increased 93.6 percent over the historical structure. The probabilities of completing the various clinical trial stages were not clustered as with the historical deal (Figures 7.4 and 7.8). Indeed, the probability of completing Phase 1 was 2nd, the probability of Phase 2 completion 5th, and the probability of Phase 3 completion 10th in predicting variation in total HVLR deal value (Figure 7.8), whereas in the historical deal these three variables were clustered and ranked 4th through 6th (Figure 7.4). This reorganization is probably a result of the milestone restructuring: in the HVLR deal structure, early milestone payments are worth much more (Table 7.1 and Figure 7.1). Among the top 20 most important variables inducing variation in the HVLR deal are the lengths of the Phase 1, Phase 2, and Phase 3 clinical trials (13th–15th; Figure 7.8), although their importance was considerably less than in the historical deal (Figure 7.4). This is probably because of the reduced royalty component of the HVLR scenario (Table 7.1).

FIGURE 7.6
A comparative illustration II. These figures illustrate the Monte Carlo distributions of the cash-flow present value of the HVLR deal scenario, along with the distributions of the deal’s individual components. Because the royalty cash flows greatly distort the other distributions (Panel A), removing the royalties from the overlay chart allows the other distributions to be more clearly presented (Panel B). The data in Panel B are comparable to the similar representation of the historical deal (Figure 7.3). Here, proportionally, milestones contributed the most to deal value (53.56 percent), followed by R&D/licensing (44.88 percent), while royalties contributed very little (1.56 percent; Panel A).

FIGURE 7.7
A comparative illustration III. Illustrations of the Monte Carlo distributions of the cash-flow present value of the HVHR deal scenario, along with the distributions of the deal’s individual components. Here, proportionally, milestones contributed the most to deal value (56.30 percent), followed by R&D/licensing (22.98 percent), while royalties contributed 20.72 percent of total deal value.

FIGURE 7.8
Higher-value, lower-risk deal scenario Monte Carlo tornado.
Higher-Value, Higher-Risk Deal Valuation

Changes in Key Assumptions and Parameters Differing from the Historical and HVLR Deal Structures

A variety of financial terms were changed for the HVHR deal
structure. First, licensing and licensing maintenance fees were reduced, sometimes substantially (Table 7.1). R&D fees were reduced across the board from the historical deal and the milestone
schedule was completely restructured. The historical structure had three payments and the HVLR structure five, with the HVHR deal having only four (Figure 7.1). As shown, the milestone future value
for the HVHR deal was reduced to $5,850,000 from $6,000,000 in the historical deal. Like the HVLR deal, the milestone values for the HVHR scenario were stochastically optimized based on specific
ranges. The sacrifice of lower licensing fees, reduced R&D funding, and the milestone restructuring was compensated for by a higher flat royalty rate of 5.5 percent of net sales (Table 7.1).

Deal Valuation, Statistics, and Sensitivities

Figure 7.7 shows an illustration of the total HVHR deal along with its three components. Total deal value for the HVHR scenario was $1,739,028, a 21.4 percent
increase from the historical deal and a 16.9 percent decrease from the HVLR structure. R&D/licensing present value decreased by 44.7 percent and 57.4 percent from the historical and HVLR deals,
respectively (Figures 7.3 through 7.7). The royalty distribution is much more pronounced and noticeably positively skewed, and illustrates the large downside potential of this deal component. Changes
in the royalty percentage also significantly expanded the range maximum for the total deal ($3,462,679) with a range width of $2,402,076, a 130.4 percent increase from the historical and 84.6 percent
increase over the HVLR deal widths, respectively (Table 7.2). Milestone present value increased by 69.1 percent from the historical deal and decreased 12.6 percent from the HVLR scenario, while
royalty present value increased 175 percent and 1,002 percent, respectively (Figures 7.3 through 7.7). Both the skewness and kurtosis of total deal value under the
HVHR scenario were greater than those of the other deal structures evaluated (Figures 7.3 through 7.7). This result has to do with the greater royalty component in the HVHR scenario and its associated large cash-flow volatility. The overall deal risk under the HVHR scenario was the greatest (14.33 percent cash-flow CV) in comparison with the historical deal’s 9.38 percent and the HVLR scenario’s 7.85 percent, again illustrating the strong royalty component of this deal structure with its greater volatility. With the HVHR deal, R&D/licensing cash flows had much higher risk than in either the historical or HVLR deals (data not shown). This increased risk is surely because negotiated R&D funding per FTE and licensing fees were considerably less than the estimated cost per FTE, resulting in more R&D/licensing cash-flow volatility in the HVHR structure. This result again shows the importance of accurate accounting and finance in estimating R&D costs when maximizing this type of licensing deal value.

TABLE 7.2
Deal Scenario Summary Table as Calculated by Monte Carlo Analysis

Deal Structure    Historical    Higher-Value, Lower-Risk    Higher-Value, Higher-Risk
Expected Value    $1,432,128    $2,092,617                  $1,739,028
CV                9.38%         7.85%                       14.33%
Range Minimum     $994,954      $1,475,620                  $1,060,603
Range Maximum     $2,037,413    $2,777,047                  $3,462,679
Range Width       $1,042,459    $1,301,427                  $2,402,076

Monte Carlo Assumption and Decision Variable Sensitivities

The tornado chart for the HVHR deal scenario emphasized the importance of variables directly impacting royalty cash flows (Figure
7.9). Here, the royalty discount rate was 4th, manufacturing and marketing offset 5th, and maximum market share capture 6th in impacting total deal present value variation. Total market size and the
average APPI were 11th and 12th, respectively. Interestingly, the negotiated royalty percentage was only 19th in contribution to deal variance. Cost per FTE ranked 8th, showing this assumption is
important in all deal scenarios (Figures 7.4, 7.8, and 7.9). Figure 7.10 shows the Monte Carlo simulation results for HVHR. The negotiated first milestone value was the only milestone listed on the
sensitivity chart (13th, Figure 7.9), illustrating the importance of milestone structuring (Table 7.1 and Figure 7.1). The first milestone is impacted the least by the time value of money and the
probability of completion of each clinical trial stage.
A Structural Comparison of Deal Scenario Returns and Risks

Total deal expected value and risk as measured by the CV of cash-flow present value are shown in Table 7.2. As illustrated here, higher expected value is not necessarily correlated with higher risk, which is contrary to a basic principle in finance whereby investments of higher risk should yield higher expected returns. Thus, these data
show why quantitative deal valuation and optimization is critical for all companies as higher deal values can be constructed with significantly less risk. Also shown in Table 7.2 are the range
minimums, maximums, and widths of the total deal value distributions as calculated by Monte Carlo analysis
for each scenario evaluated. The range minimum is the smallest number and the range maximum the largest number in a distribution, while the range width is the difference between the range minimum and maximum.

FIGURE 7.9
Higher-value, higher-risk deal scenario Monte Carlo tornado.

Collaborative business deals in the biotechnology and pharmaceutical industries formed during strategic alliances, such as the one described here, are
in fact risky asset portfolios. As such, the standard deviation of a portfolio of assets is less than the weighted average of the component asset standard deviations.

FIGURE 7.10
Higher-value, higher-risk deal scenario Monte Carlo summary. [Forecast statistics, 10,000 trials: mean $1,739,028; median $1,712,532; standard deviation $249,257; variance $62,129,317,618; skewness 0.77; kurtosis 4.39; coefficient of variability 14.33%; range $1,060,603 to $3,462,679; range width $2,402,076; mean standard error $2,493; 50.00 percent certainty range $1,563,891 to $1,882,975.]

To view the impact of
diversification of cash-flow streams with the various deal scenarios evaluated in this case study, the weight of each deal component was determined and the weighted average CV of cash-flow present
value calculated for each deal scenario (Table 7.3). The CV is used as the primary risk measure because of differences in the scale of the cash flows from individual deal components. As expected with
a portfolio of risky assets, the weighted average of the CV of individual deal components (R&D/licensing funding, milestone payments, and royalties) was always greater than the CV of the total deal
present value, illustrating the impact of diversification (Table 7.3). Thus, portfolios of less than perfectly correlated assets always offer better risk–return opportunities than the individual
component assets on their own. As such, companies would probably not want to completely forgo receiving milestone payments and royalties for only R&D funding and licensing fees, if these deal
components can be valued and optimized with reasonable accuracy as described here. By combining assets whose returns are uncorrelated or partially correlated, such as cash flows from milestone
payments, royalties, licensing, and R&D funding, risk is reduced (Table 7.3). Risk can be eliminated most rapidly while keeping expected returns as high as possible if a
company’s cumulative deal repertoire is valued, structured, and balanced from the beginning of a company’s evolution and development.

TABLE 7.3
Deal Component Weights, Component CVs, Weighted Average Deal CVs, and Calculated Deal CVs

Deal Structure               Historical    Higher-Value, Lower-Risk    Higher-Value, Higher-Risk
Weight: R&D/Licensing (a)    50.42%        44.88%                      22.98%
Weight: Milestones (b)       40.42%        53.56%                      56.30%
Weight: Royalties (c)        9.17%         1.56%                       20.72%
CV: R&D/Licensing (d)        7.47%         5.79%                       13.40%
CV: Milestones               14.57%        13.18%                      12.69%
CV: Royalties                45.70%        45.95%                      46.21%
CV: Weighted Average (e)     13.84%        10.38%                      19.80%
CV: Calculated Deal (f)      9.38%         7.85%                       14.33%

(a) Proportion of total deal present value attributable to R&D and licensing fees.
(b) Proportion of total deal present value attributable to milestone payments.
(c) Proportion of total deal present value attributable to royalty payments.
(d) CV in the present value of cash flows from R&D and licensing fees.
(e) Weighted average of the CV of total deal value.
(f) Calculated deal CV by Monte Carlo simulation.
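The diversification effect can be checked directly from Table 7.3. Recomputing the weighted-average CV for the historical deal from the published (rounded) weights and component CVs reproduces the roughly 13.84 percent shown, well above the simulated total-deal CV of 9.38 percent:

```python
# Historical-deal component weights and CVs as reported in Table 7.3
# (rounded to the published precision, so the result matches only approximately).
weights = {"rd_licensing": 0.5042, "milestones": 0.4042, "royalties": 0.0917}
cvs     = {"rd_licensing": 0.0747, "milestones": 0.1457, "royalties": 0.4570}

# Weighted average of the component CVs; diversification pulls the simulated
# total-deal CV (9.38%) well below this figure.
weighted_avg_cv = sum(weights[k] * cvs[k] for k in weights)
print(f"{weighted_avg_cv:.2%}")
```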
Discussion and Conclusion

The historical deal evaluated in this case study was a preclinical, product-licensing deal for a biopharmaceutical with one major therapeutic indication. For collaborative
deal structures containing licensing fees, R&D funding, milestone payments, and royalties, each deal component has definable expected values, variances, and widely varying risk characteristics.
Alternative deal structures were developed and optimized, all of which had different expected returns and risk levels with the primary risk measure being the CV of cash-flow present values. Thus,
nearly any biomedical collaborative deal with the types of financial terms described here can be quantitatively valued, structured, and optimized using financial models, Monte Carlo analysis,
stochastic optimization, real options, and portfolio theory. During this study, the author was at a considerable disadvantage because the historical deal valued and optimized here had already been
signed, and he was not present during the negotiation process. Therefore, the author had to make a large number of assumptions when restructuring the financial terms of the agreement. Considering
these limitations, this case is not about what is appropriate in the comparative financial terms for a biomedical licensing deal and what is not; rather, the data described here are valuable in
showing the quantitative influence of different deal structures on the overall valuation of a biomedical collaborative agreement, and most importantly on the level of overall deal risk, as well as
the risk of the individual deal components. The most effective approach using this technique is to work with a negotiator during the development and due diligence, and through the closing process of
a collaborative agreement. During this time, data should be continually gathered and the financial models refined as negotiations and due diligence proceed.
CASE STUDY: OIL AND GAS EXPLORATION AND PRODUCTION

This case study was contributed by Steve Hoye. Steve is an independent business consultant with more than 23 years of oil and gas industry
experience, specializing in Monte Carlo simulation for the oil and gas industry. Starting with a bachelor of science degree from Purdue University in 1980, he served as a geophysicist with Texaco in
Houston, Denver, and Midland, Texas, before earning the MBA degree from the University of Denver in 1997. Since then, Steve has held leadership roles with Texaco as the midcontinent
BU technology team leader, and as asset team manager in Texaco’s Permian Basin business unit, before starting his consultancy in 2002. Steve can be reached at [email protected].

The oil and gas industry is an excellent place to examine and discuss techniques for analyzing risk. The basic business model discussed involves making investments in land rights, geologic data,
drilling (services and hardware), and human expertise in return for a stream of oil or gas production that can be sold at a profit. This model is beset with multiple, significant risk factors that
determine the resulting project’s profitability, including:

■ Dry-Hole Risk. Investing drilling dollars with no resulting revenue from oil or gas because none is found in the penetrated geologic formation.
■ Drilling Risk. High drilling costs can often ruin a project’s profitability. Although companies do their best to estimate them accurately, unforeseeable geological or mechanical difficulties can cause significant variability in actual costs.
■ Production Risk. Even when oil or gas reservoirs are discovered by drilling, there is a high probability that point estimates of the size and recoverability of the hydrocarbon reserves over time are wrong.
■ Price Risk. Along with the cyclical nature of the oil and gas industry, product prices can also vary unexpectedly during significant political events such as war in the Middle East, overproduction and cheating by the OPEC cartel, interruptions in supply such as large refinery fires, labor strikes, or political uprisings in large producing nations (e.g., Venezuela in 2002), and changes in world demand.
■ Political Risk. Significant amounts of the world’s hydrocarbon reserves are controlled by nations with unstable governments. Companies that invest in projects in these countries take significant risks that the governments and leaders with whom they have signed contracts will no longer be in power when earned revenue streams should be shared contractually. In many well-documented cases, corporate investments in property, plant, and equipment (PPE) are simply nationalized by local governments, leaving companies without revenue or the equipment and facilities that they built to earn that revenue.
Oil and gas investments generally are very capital-intensive, often making these risks more than just of passing interest. Business units and entire companies stake their survival on their ability to
properly account for these risks as they apportion their capital budgets in a manner that ensures value to their stakeholders. To underline the importance of risk management in the industry, many
large oil companies commission high-level corporate panels of experts to review and endorse risk assessments done across all of their
Extended Business Cases I
business units for large capital projects. These reviews attempt to ensure consistency of risk assessment across departments and divisions that are often under pressure to make their investment
portfolios look attractive to corporate leadership as they compete for capital. Monte Carlo simulation is a preferred approach to the evaluation of the multiple, complex risk factors in the model we
discuss. Because of the inherent complexity of these risk factors and their interactions, deterministic solutions are not practical, and point forecasts are of limited use and, at worst, are
misleading. In contrast, Monte Carlo simulation is ideal for economic evaluations under these circumstances. Domain experts can individually quantify and describe the project risks associated with
their areas of expertise without having to define their overall effect on project economics.1 Cash-flow models that integrate the diverse risk assumptions for each of the prospect team’s experts are
relatively straightforward to construct and analyze. Most importantly, the resulting predictions of performance do not result in a simple single-point estimate of the profitability of a given oil and
gas prospect. Instead, they provide management with a spectrum of possible outcomes and their related probabilities. Best of all, Monte Carlo simulation provides estimates of the sensitivities of
their investment outcomes to the critical assumptions in their models, allowing them to focus money and people on the critical factors that will determine whether they meet the financial goals
defined in their business plans. Ultimately, Monte Carlo simulation becomes a project management tool that decreases risk while increasing profits. In this case study, we explore a practical model of
an oil-drilling prospect, taking into account many of the risk factors described earlier. While the model is hypothetical, the general parameters we use are consistent with those encountered drilling
in a mature, oil-rich basin in the United States (e.g., Permian Basin of West Texas) in terms of the risk factors and related revenues and expenses. This model is of greater interest as a framework
and approach than it is as an evaluation of any particular drilling prospect. Its value is in demonstrating the approach to quantifying important risk assumptions in an oil prospect using Monte Carlo
simulation, and analyzing their effects on the profitability forecasts of the project. The techniques described herein are extensible to many other styles and types of oil and gas prospects.
Cash-Flow Model

The model was constructed using Risk Simulator, which provides all of the necessary Monte Carlo simulation tools as an easy-to-use, comprehensive add-in to Microsoft Excel. The model
simulates the drilling outcome as being a dry-hole or an oil discovery using dry-hole risk factors for the particular
geologic formation and basin. Drilling, seismic, and land-lease costs are incurred whether the well is dry or a discovery. If the well is a discovery, a revenue stream is computed for the produced
oil over time using assumptions for product price, and for the oil production rate as it declines over time from its initial value. Expenses are deducted for royalty payments to landowners, operating
costs associated with producing the oil, and severance taxes levied by states on the produced oil. Finally, the resulting net cash flows are discounted at the weighted average cost of capital (WACC)
for the firm and summed to a net present value (NPV) for the project. Each of these sections of the model is now discussed in more detail.
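The final discounting step can be sketched in a few lines. The 10 percent WACC and the $1.2 million up-front drilling outlay are hypothetical, while the yearly net sales echo the first three years of Figure 7.12.

```python
# Discount yearly net cash flows at the WACC and sum them to an NPV.
def npv(cash_flows, wacc):
    """cash_flows[0] occurs at Year 0 (the drilling outlay, negative)."""
    return sum(cf / (1 + wacc) ** t for t, cf in enumerate(cash_flows))

# Hypothetical outlay followed by three years of net sales (from Figure 7.12).
flows = [-1_200_000, 1_417_214, 1_112_400, 873_145]
print(round(npv(flows, 0.10)))  # ≈ $1,663,722
```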
Dry-Hole Risk

Companies often have proprietary schemes for quantifying the risk associated with not finding any oil or gas in their drilled well. In general, though, there are four primary and independent conditions that must all be encountered in order for hydrocarbons to be found by the drill bit:

1. Hydrocarbons must be present.
2. A reservoir must be developed in the rock formation to hold the hydrocarbons.
3. An impermeable seal must be available to trap the hydrocarbons in the reservoir and prevent them from migrating somewhere else.
4. A structure or closure must be present that will cause the hydrocarbons (sealed in the reservoir) to pool in a field where the drill bit will penetrate.

Because these four factors are independent and must each be true in order for
hydrocarbons to be encountered by the drill bit (and a dry hole to be avoided), the probability of a producing well is defined as:

P(Producing Well) = P(Hydrocarbons) × P(Reservoir) × P(Seal) × P(Structure)
Figure 7.11 shows the model section labeled “Dry-Hole Risk,” along with the probability distributions for each factor’s Monte Carlo assumption. While a project team most often describes each of these
factors as a single-point estimate, other methods are sometimes used to quantify these risks. The most effective process the author has witnessed involved the presentation of the geological, geophysical, and engineering factors by the prospect team to a group of expert peers with wide experience in the proposed area. These peer experts then rated each of the risk factors. The resulting distribution of risk factors often appeared near-normally distributed, with strong central tendencies and symmetrical tails. This approach was very amenable
to Monte Carlo simulation. It highlighted those factors where there was general agreement about risk and brought the riskiest factors to the foreground, where they were examined and specifically addressed. Accordingly, the assumptions regarding dry-hole risk in this model reflect a relatively low risk profile.2

FIGURE 7.11
Dry-hole risk. [Model section listing the four risk-factor assumptions (Hydrocarbons, Structure, Reservoir, Seal) with their truncated normal distribution parameters, the Net Producing Well Probability, and the Producing Well indicator (0 = no, 1 = yes).]

The four risk factor assumptions in Figure 7.11 (dark shaded area) are
described as normally distributed variables, with the mean and standard deviations for each distribution to the right of the assumption fields. The ranges of these normal distributions are confined
and truncated between the min and max fields, and random samples for any simulation trial outside this range are ignored as unrealistic. As described earlier, the Net Producing Well Probability field
in the model corresponds to the product of the four previously described risk factors. These four risk factors are drawn as random samples from their respective normal distributions for each trial or
iteration of the simulation. Finally, as each iteration of the Monte Carlo simulation is conducted, the field labeled Producing Well generates a random number between zero and one to determine if
that simulation resulted in a discovery of oil or a dry hole. If the random number is less than the Net Producing Well Probability, it is a producing well and shows the number one. Conversely, if the
random number is greater than the Net Producing Well Probability, the simulated well is a dry hole and shows zero.
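One trial of this dry-hole logic can be sketched as follows. The means, standard deviations, and truncation bounds are hypothetical stand-ins for the model’s actual assumptions.

```python
import random

# Hypothetical (mean, std dev, min, max) for each risk factor's probability
# of success; placeholders, not the model's actual inputs.
RISK_FACTORS = {
    "hydrocarbons": (0.99, 0.05, 0.0, 1.0),
    "reservoir":    (0.75, 0.10, 0.0, 1.0),
    "seal":         (1.00, 0.00, 0.0, 1.0),
    "structure":    (1.00, 0.00, 0.0, 1.0),
}

def sample_truncated_normal(mean, std, lo, hi):
    """Draw from a normal distribution, rejecting samples outside [lo, hi]."""
    if std == 0.0:
        return mean
    while True:
        x = random.gauss(mean, std)
        if lo <= x <= hi:
            return x

def simulate_well():
    """One trial: multiply the four sampled factors into a net producing-well
    probability, then compare a uniform draw (1 = producer, 0 = dry hole)."""
    p_producing = 1.0
    for mean, std, lo, hi in RISK_FACTORS.values():
        p_producing *= sample_truncated_normal(mean, std, lo, hi)
    return 1 if random.random() < p_producing else 0

trials = 10_000
producers = sum(simulate_well() for _ in range(trials))
print(f"Producing wells: {producers}/{trials} ({producers / trials:.1%})")
```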
Production Risk

A multiyear stream of oil can be characterized as an initial oil production rate (measured in barrels of oil per day, BOPD), followed by a decline in production rates as the natural
reservoir energy and volumes are depleted over time. Reservoir engineers can characterize production declines using a wide array of mathematical models, choosing those that most closely match
the geology and producing characteristics of the reservoir. Our hypothetical production stream is described with two parameters:

1. IP. The initial production rate tested from the drilled well.
2. Decline Rate. An exponentially declining production rate that describes the annual decrease in production from the beginning of the year to the end of the same year.

Production rates in BOPD for our model are calculated by:

Rate(Year End) = (1 − Decline Rate) × Rate(Year Begin)

Yearly production volumes in barrels of oil are approximated as:

Oil Volume(Year) = 365 × (Rate(Year Begin) + Rate(Year End)) / 2
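The two formulas above can be combined into a small production-stream sketch. The 10-BOPD economic limit used here is a hypothetical cutoff, not the model’s actual value.

```python
# Yearly oil volumes (barrels) from an exponentially declining well,
# truncated when the rate falls below a hypothetical economic limit.
def production_stream(ip_bopd, decline_rate, years=25, economic_limit_bopd=10.0):
    volumes = []
    rate_begin = ip_bopd
    for _ in range(years):
        rate_end = (1 - decline_rate) * rate_begin
        if rate_end < economic_limit_bopd:
            break  # below the economic limit: the producing life of the well ends
        volumes.append(365 * (rate_begin + rate_end) / 2)
        rate_begin = rate_end
    return volumes

vols = production_stream(ip_bopd=442, decline_rate=0.215)
print(round(vols[0]))  # first-year volume, close to Figure 7.12's 143,866 barrels
```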
For Monte Carlo simulation, our model represents the IPs with a lognormal distribution with a mean of 441 BOPD and a standard deviation of 165 BOPD. The decline rate was modeled with a uniform
probability of occurrence between 15 percent and 28 percent. To add interest and realism to our hypothetical model, we incorporated an additional constraint in the production model that simulates a
situation that might occur for a particular reservoir where higher IPs imply that the production decline rate will be higher. This constraint is implemented by imposing a correlation coefficient of
0.60 between the IP and decline rate assumptions that are drawn from their respective distributions during each trial of the simulation. The production and operating expense sections of the model are
shown in Figure 7.12. Although only the first 3 years are shown, the model accounts for up to 25 years of production. However, when production declines below the economic limit,3 it will be zeroed
for that year and every subsequent year, ending the producing life of the well. As shown, the IP is assumed
to occur at the end of Year 0, with the first full year of production accounted for at the end of Year 1.

Extended Business Cases I

FIGURE 7.12 The production and operating expense sections of the model (decline rate 21.5 percent; first 3 of 25 years shown):

End of Year:                          0           1           2           3
BOPD                                442         347         272         214
Net BBLS / Yr                               143,866     112,924      88,636
Price / BBl                                  $20.14      $20.14      $20.14
Net Revenue Interest              77.4%       77.4%       77.4%       77.4%
Revenue                                  $2,242,311  $1,760,035  $1,381,487
Operating Costs ($4.80/Barrel)           $(690,558)  $(542,033)  $(425,453)
Severance Taxes (6.0% of revenue)        $(134,539)  $(105,602)   $(82,889)
Net Sales                                $1,417,214  $1,112,400    $873,145
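The book builds this production stream in Risk Simulator; as a rough stand-alone sketch, the correlated IP and decline-rate draws can be written in Python using a Gaussian copula to impose the 0.60 correlation. The copula choice and the omission of the economic-limit cutoff are simplifying assumptions of this sketch, not the book's implementation:

```python
import math
import random

random.seed(42)

RHO = 0.60                      # IP/decline correlation from the text
IP_MEAN, IP_SD = 441.0, 165.0   # lognormal IP parameters, BOPD
DECLINE_LO, DECLINE_HI = 0.15, 0.28

# Convert the stated arithmetic mean/SD to the underlying normal parameters.
SIGMA = math.sqrt(math.log(1 + (IP_SD / IP_MEAN) ** 2))
MU = math.log(IP_MEAN) - SIGMA ** 2 / 2

def production_trial(years=25):
    """One Monte Carlo trial: correlated IP and decline rate, then volumes."""
    z1 = random.gauss(0, 1)
    z2 = RHO * z1 + math.sqrt(1 - RHO ** 2) * random.gauss(0, 1)
    ip = math.exp(MU + SIGMA * z1)                    # lognormal IP draw
    u = 0.5 * (1 + math.erf(z2 / math.sqrt(2)))       # normal CDF -> uniform
    decline = DECLINE_LO + u * (DECLINE_HI - DECLINE_LO)
    rate, volumes = ip, []
    for _ in range(years):
        end_rate = (1 - decline) * rate               # Rate(Year End)
        volumes.append(365 * (rate + end_rate) / 2)   # average-rate volume
        rate = end_rate
    return ip, decline, volumes
```

Each call returns one sampled IP, its correlated decline rate, and the 25-year volume stream; a full simulation would repeat this thousands of times and push each stream through the revenue and cost logic below.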
Revenue Section Revenues from the model flow literally from the sale of the oil production computed earlier. Again there are two assumptions in our model that represent risks in our prospect: 1.
Price. Over the past 10 years, oil prices have varied from $13.63/barrel in 1998 to nearly $30/barrel in 2000.4 Consistent with the data, our model assumes a normal price distribution with a mean of
$20.14 and a standard deviation of $4.43/barrel. 2. Net Revenue Interest. Oil companies must purchase leases from mineral interest holders. Along with paying cash to retain the drilling and
production rights to a property for a specified time period, the lessee also generally retains some percentage of the oil revenue produced in the form of a royalty. The percentage that the producing
company retains after paying all royalties is the net revenue interest (NRI). Our model represents a typical West Texas scenario with an assumed NRI distributed normally with a mean of 75 percent and
a standard deviation of 2 percent. The revenue portion of the model is also shown in Figure 7.12 immediately below the production stream. The yearly production volumes are multiplied by sampled price
per barrel, and then multiplied by the assumed NRI to reflect dilution of revenues from royalty payments to lessees.
Operating Expense Section Below the revenue portion are operating expenses, which include two assumptions: 1. Operating Costs. Companies must pay for manpower and hardware involved in the production
process. These expenses are generally described as a dollar amount per barrel. A reasonable West Texas cost would be $4.80 per barrel with a standard deviation of $0.60 per barrel. 2. Severance
Taxes. State taxes levied on produced oil and gas are assumed to be a constant value of 6 percent of revenue. Operating expenses are subtracted from the gross sales to arrive at net sales, as shown
in Figure 7.12.
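The per-year arithmetic of Figure 7.12, revenue diluted by the NRI, less per-barrel operating costs and the 6 percent severance tax, can be sketched as follows. The figure's printed values are rounded from the sampled assumptions, so a recomputation lands close to, but not exactly on, its totals:

```python
def year_net_sales(volume_bbl, price, nri, opex_per_bbl, severance_rate=0.06):
    """Net sales for one production year, following Figure 7.12's layout."""
    revenue = volume_bbl * price * nri       # gross sales net of royalties
    opex = volume_bbl * opex_per_bbl         # lifting / operating costs
    severance = severance_rate * revenue     # state production tax
    return revenue - opex - severance

# Year 1 of Figure 7.12: 143,866 bbl at $20.14/bbl, 77.4% NRI, $4.80/bbl opex
print(round(year_net_sales(143_866, 20.14, 0.774, 4.80)))  # within rounding of the figure's $1,417,214
```

In a full trial, the price would itself be sampled (normal, mean $20.14, SD $4.43) and the NRI drawn from its normal distribution rather than fixed at the 77.4 percent shown.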
FIGURE 7.13 Year 0 expenses:

Drilling Costs            $ 1,209,632
Completion Cost           $   287,000
Professional Overhead     $   160,000
Lease Costs / Well        $   469,408
Seismic Costs / Well      $    81,195
Year 0 Expenses Figure 7.13 shows the Year 0 expenses assumed to be incurred before oil production from the well (and revenue) is realized. These expenses are: 1. Drilling Costs. These costs can vary
significantly as previously discussed, due to geologic, engineering, and mechanical uncertainty. It is reasonable to skew the distribution of drilling costs to account for a high-end tail consisting
of a small number of wells with very large drilling costs due to mechanical failure and unforeseen geologic or serendipitous occurrences. Accordingly, our distribution is assumed to be lognormal,
with a mean of $1.2 million and a standard deviation of $200,000. 2. Completion Costs. If it is determined that there is oil present in the reservoir (and we have not drilled a dry hole), engineers
must prepare the well (mechanically/chemically) to produce oil at the optimum sustainable rates.5 For this particular well, we hypothesize our engineers believe this cost is normally distributed with
a mean of $287,000 and a standard deviation of $30,000. 3. Professional Overhead. This project team costs about $320,000 per year in salary and benefits, and we believe the time they have spent is
best represented by a triangular distribution, with a most likely percentage of time spent of 50 percent, a minimum of 40 percent, and a maximum of 65 percent. 4. Seismic and Lease Costs. To develop
the proposal, our team needed to purchase seismic data to choose the optimum well location, and to purchase the right to drill on much of the land in the vicinity of the well. Because this well is
not the only well to be drilled on this seismic data and land, the cost of these items is distributed over the planned number of wells in the project. Uncertain assumptions are shown in Figure 7.14,
and include leased acres, which were assumed to be normally distributed with a mean of 12,000 and a standard deviation of 1,000 acres. The total number of planned wells over which to distribute the
costs was assumed to be uniform between 10 and 30. The number of seismic sections acquired was also assumed to be normally distributed with a mean
of 50 sections and a standard deviation of 7. These costs are represented as the final two lines of Year 0 expenses in Figure 7.13.

FIGURE 7.14 Uncertain assumptions (values shown are from one sampled trial):

Lease Expense
  Project Lease Acres            12,800
  Planned Wells                      20
  Acres / Well                      640
  Acreage Price              $   733.45 / acre
  Acreage Cost / Well        $  469,408

Seismic Expense
  Seismic Sections Acquired        50.0
  Seismic Sections / Well          2.50
  Seismic Cost               $32,478.18 / section
  Seismic Cost / Well        $   81,195
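The per-well allocation behind Figure 7.14 is straightforward division; a minimal sketch using the sampled unit prices shown in the figure:

```python
def per_well_costs(acres, wells, sections,
                   acre_price=733.45, section_cost=32_478.18):
    """Spread lease and seismic spending over the planned number of wells."""
    lease = acres / wells * acre_price          # acreage cost per well
    seismic = sections / wells * section_cost   # seismic cost per well
    return lease, seismic

# The sampled trial shown in Figure 7.14: 12,800 acres, 20 wells, 50 sections
lease, seismic = per_well_costs(12_800, 20, 50)
print(round(lease), round(seismic))   # 469408 81195
```

During simulation, acres, wells, and sections would each be drawn from the distributions described above rather than fixed at these values.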
Net Present Value Section The final section of the model sums all revenues and expenses for each year starting at Year 0, discounted at the weighted average cost of capital (WACC—which we assume for
this model is 9 percent per year), and summed across years to compute the forecast of NPV for the project. In addition, NPV/I is computed,6 as it can be used as a threshold and ranking mechanism for
portfolio decisions as the company determines how this project fits with its other investment opportunities given a limited capital budget.
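The discounting step is a plain NPV at the 9 percent WACC. The cash-flow list below is a truncated illustration, Year 0 outlays summed from Figure 7.13 followed by the three years of net sales shown in Figure 7.12, not the model's full 25-year stream:

```python
def npv(cash_flows, wacc=0.09):
    """Discounted sum of yearly net cash flows, Year 0 first."""
    return sum(cf / (1 + wacc) ** t for t, cf in enumerate(cash_flows))

# Truncated illustration: Year 0 outlays, then three years of net sales
flows = [-2_207_235, 1_417_214, 1_112_400, 873_145]
project_npv = npv(flows)
npv_to_i = project_npv / abs(flows[0])   # NPV/I, used for portfolio ranking
```

NPV/I here is simply the discounted value per dollar of Year 0 investment, which is what makes it usable as a hurdle and ranking metric across competing projects.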
Monte Carlo Simulation Results As we assess the results of running the simulation with the assumptions defined previously, it is useful to define and contrast the point estimate of project value
computed from our model using the mean or most likely values of the earlier assumptions. The expected value of the project is defined as: E(Project) = E(Dry Hole) + E(Producing Well) = P(Dry Hole) × NPV(Dry Hole) + P(Producing Well) × NPV(Producing Well), where P(Producing Well) is the probability of a producing well and P(Dry Hole) = 1 – P(Producing Well) is the probability of a dry hole. Using the mean or most likely point
estimate values from our model, the expected NPV of the project is $1,250,000, which might be a very attractive prospect in the firm’s portfolio.
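The expected-value identity is a one-liner. The probabilities and branch NPVs below are hypothetical stand-ins, since the excerpt does not state the prospect's dry-hole chance:

```python
def expected_npv(p_producer, npv_producer, npv_dry_hole):
    """E[Project] = P(dry) * NPV(dry) + P(producer) * NPV(producer)."""
    return (1 - p_producer) * npv_dry_hole + p_producer * npv_producer

# Hypothetical: 60% chance of a $3.3M producer vs. a $2.2M dry-hole loss
print(round(expected_npv(0.60, 3_300_000, -2_200_000)))   # 1100000
```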
FIGURE 7.15
Frequency distribution of NPV outcomes.
In contrast, we can now examine the spectrum of outcomes and their probability of occurrence. Our simulation was run with 8,450 trials (trial size selected by precision control) to forecast NPV,
which provided a mean NPV plus or minus $50,000 with 95 percent confidence. Figure 7.15 is the frequency distribution of NPV outcomes. The distribution is obviously bimodal, with the large, sharp
negative NPV peak to the left representing the outcome of a dry hole. The smaller, broader peak toward the higher NPV ranges represents the wider range of more positive NPVs associated with a
producing well. All negative NPV outcomes are to the left of the NPV = 0 line (with a lighter shade) in Figure 7.15, while positive outcome NPVs are represented by the area to the right of the NPV =
0 line with the probability of a positive outcome (breakeven or better) shown as 69.33 percent. Of interest, the negative outcome possibilities include not only the dry-hole population of outcomes as
shown, but also a small but significant portion of producing-well outcomes that could still lose money for the firm. From this information, we can conclude that there is a 30.67 percent chance that
this project will have a negative NPV. It is obviously not good enough for a project of this sort to avoid a negative NPV. The project must return to shareholders something higher than its cost of
capital, and, further, must be competitive with other investment opportunities that the firm has. If our hypothetical firm had a hurdle rate of NPV/I greater than 25 percent for its yearly budget, we
would want to test our simulated project outcomes against the probability that the project could clear that hurdle rate.
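Testing against the hurdle is just a tail count over the simulated trials; a minimal sketch with made-up sample values:

```python
def p_above(samples, hurdle):
    """Fraction of simulated outcomes that clear a threshold."""
    return sum(1 for x in samples if x > hurdle) / len(samples)

# Made-up NPV/I outcomes: one dry hole (-1.0) plus three producing-well cases
print(p_above([-1.0, 0.10, 0.30, 0.50], 0.25))   # 0.5
```

Applied to the full set of simulated NPV/I outcomes, this is the 64 percent figure reported below.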
FIGURE 7.16
Forecast distribution of NPV to I ratio.
Figure 7.16 shows the forecast distribution of outcomes for NPV/I. The large peak at negative 100 percent again represents the dry-hole case, where in fact the NPV of the outcome is negative in the
amount of Year 0 costs incurred, making NPV/I equal to –1. Counting all outcomes with NPV/I greater than the hurdle rate of 25 percent shows that there is a 64 percent probability that the project will exceed
that rate. To a risk-sensitive organization, this outcome implies a probability of greater than one in three that the project will fail to clear the firm’s hurdle rate—significant risk indeed.
Finally, our simulation gives us the power to explore the sensitivity of our project outcomes to the risks and assumptions that have been made by our experts in building the model. Figure 7.17 shows
a sensitivity analysis of the NPV of our project to the assumptions made in our model. This chart shows the correlation coefficient of the top 10 model assumptions to the NPV forecast in order of
decreasing correlation. At this point, the project manager is empowered to focus resources on the issues that will have an impact on the profitability of this project. Given the information from
Figure 7.17, we could hypothesize the following actions to address the top risks in this project in order of importance:

■ IP. The initial production rate of the well has a driving influence on the value of this project, and our uncertainty in predicting this rate is causing the largest swing in predicted project outcomes. Accordingly, we could have our team of reservoir and production engineers further examine known production IPs from analogous reservoirs in this area, and perhaps attempt to stratify the data to further refine predictions of IPs based on drilling or completion techniques, geological factors, or geophysical data.
■ Reservoir Risk. This assumption is the driver of whether the well is a dry hole or a producer, and as such it is not surprising that it is a major driving factor. Among many approaches, the project team could investigate the possibility that inadequate analysis of subsurface data is causing many companies to declare dry holes in reservoirs that have hidden producing potential.
■ Oil Price (Year 1) and Drilling Costs. Both of these items are closely related in their power to affect NPV. Price uncertainty could best be addressed by having a standard price prediction for the firm against which all projects would be compared.7 Drilling costs could be minimized by process improvements in the drilling team that would tighten the variation of predicted costs from actual costs. The firm could seek out companies with strong track records in their project area for reliable, low-cost drilling.
■ Decline Rate. The observant reader will note a positive-signed correlation between decline rate and project NPV. At first glance this is unexpected, because we would normally expect that a higher decline rate would reduce the volumes of oil to be sold and hurt the revenue realized by our project. Recall, however, that we correlated higher IPs with higher decline rates in our model assumptions, which is an indirect indication of the power of the IP on the NPV of our project: Despite higher decline rates, the positive impact of higher IPs on our project value overrides the lost production that occurs because of the rapid reservoir decline. We should redouble our efforts to better predict IPs in our model.

FIGURE 7.17 NPV sensitivity analysis: correlation coefficients of the top 10 model assumptions with the NPV forecast; assumptions shown include Reservoir, Price/BBl (Years 1 to 3), Drilling Costs, Decline Rate, Planned Wells, Acreage Price, and Operating Costs, with coefficient magnitudes ranging from .40 down to .07.
Conclusion Monte Carlo simulation can be an ideal tool for evaluating oil and gas prospects under conditions of significant and complex uncertainty in the
assumptions that would render any single-point estimate of the project outcome nearly useless. The technique provides each member of multidisciplinary work teams a straightforward and effective
framework for quantifying and accounting for each of the risk factors that will influence the outcome of his or her drilling project. In addition, Monte Carlo simulation provides management and team
leadership something much more valuable than a single forecast of the project’s NPV: It provides a probability distribution of the entire spectrum of project outcomes, allowing decision makers to
explore any pertinent scenarios associated with the project value. These scenarios could include break-even probabilities as well as scenarios associated with extremely poor project results that
could damage the project team’s credibility and future access to capital, or outcomes that resulted in highly successful outcomes. Finally, Monte Carlo simulation of oil and gas prospects provides
managers and team leaders critical information on which risk factors and assumptions are driving the projected probability of project outcomes, giving them the all-important feedback they need to
focus their people and financial resources on addressing those risk assumptions that will have the greatest positive impact on their business, improving their efficiency and adding profits to their
bottom line.
CASE STUDY: FINANCIAL PLANNING WITH SIMULATION Tony Jurado is a financial planner in northern California. He has a BA from Dartmouth College and is a candidate for the Certified Financial Planner
designation. Tony specializes in the design and implementation of comprehensive financial plans for high-net-worth individuals. He can be contacted at [email protected].

Corporate America has increasingly altered the retirement landscape by shifting from defined benefit to defined contribution plans. As the baby boomers retire, they will have different financial
planning needs than those of previous generations because they must manage their own retirement funds. A thoughtful financial planner has the ability to positively impact the lives of these retirees.
A Deterministic Plan Today was the last day of work for Henry Tirement, and, until just now, he and his financial planner, Mr. Determinist, had never seriously discussed
what to do with his 401k rollover. After a moment of fact gathering with Henry, Mr. D obtains the following information:

■ Current assets are $1,000,000 in various mutual funds.
■ Current age is 65.
■ Desired retirement salary is $60,000 before-tax.
■ Expected return on investments is 10 percent.
■ Expected inflation is 3 percent.
■ Life expectancy is age 95.
■ No inheritance considerations.
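Mr. D's deterministic arithmetic is easy to reproduce: withdraw at the start of each year, grow the remainder at a constant 10 percent, and inflate the withdrawal by 3 percent for the following year. Run for 30 years, it lands on Table 7.4's ending balance:

```python
balance, withdrawal = 1_000_000.0, 60_000.0
for year in range(30):
    balance = (balance - withdrawal) * 1.10   # withdraw, then grow 10%
    withdrawal *= 1.03                        # next year's inflation bump
print(round(balance))   # 3285670, Table 7.4's Year 30 ending balance
```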
With his financial calculator, Mr. D concludes that Henry can meet his retirement goals and, in fact, if he died at age 95, he’d have over $3.2 million in his portfolio. Mr. D knows that past
performance does not guarantee future results, but past performance is all that we have to go by. With the stock market averaging over 10 percent for the past 75 years, Mr. D feels certain that this
return is reasonable. As inflation has averaged 3 percent over the same time period, he feels that this assumption is also realistic. Mr. D delivers the good news to Henry and the plan is put into
motion (Table 7.4). Fast forward to 10 years later. Henry is not so thrilled anymore. He visits the office of Mr. D with his statements in hand and they sit down to discuss the portfolio performance.
Writing down the return of each of the past 10 years, Mr. D calculates the average performance of Henry’s portfolio (Table 7.5). “You’ve averaged 10 percent per year!” Mr. D tells Henry. Befuddled,
Henry scratches his head. He hands Mr. D his last statement, which shows a portfolio balance of $501,490.82. Once again, Mr. D uses his spreadsheet program and obtains the results in Table 7.6. Mr.
D is not certain what has happened. Henry took out $60,000 at the beginning of each year and increased this amount by 3 percent annually. The portfolio return averaged 10 percent. Henry should have
over $1.4 million by now. Sequence of Returns Sitting in his office later that night, Mr. D thinks hard about what went wrong in the planning. He wonders what would have happened if the annual
returns had occurred in reverse order (Table 7.7). The average return is still 10 percent and the withdrawal rate has not changed, but the portfolio ending balance is now $1.4 million. The only
difference between the two situations is the sequence of returns. Enlightenment overcomes Mr. D, and he realizes that he has been employing a deterministic planning paradigm during a period of
TABLE 7.4 The Deterministic Plan

Year  Returns (%)  Beginning Balance ($)  Withdrawal ($)  Ending Balance ($)
  1     10.00          1,000,000.00         60,000.00       1,034,000.00
  2     10.00          1,034,000.00         61,800.00       1,069,420.00
  3     10.00          1,069,420.00         63,654.00       1,106,342.60
  4     10.00          1,106,342.60         65,563.62       1,144,856.88
  5     10.00          1,144,856.88         67,530.53       1,185,058.98
  6     10.00          1,185,058.98         69,556.44       1,227,052.79
  7     10.00          1,227,052.79         71,643.14       1,270,950.62
  8     10.00          1,270,950.62         73,792.43       1,316,874.01
  9     10.00          1,316,874.01         76,006.20       1,364,954.58
 10     10.00          1,364,954.58         78,286.39       1,415,335.01
 11     10.00          1,415,335.01         80,634.98       1,468,170.03
 12     10.00          1,468,170.03         83,054.03       1,523,627.60
 13     10.00          1,523,627.60         85,545.65       1,581,890.14
 14     10.00          1,581,890.14         88,112.02       1,643,155.93
 15     10.00          1,643,155.93         90,755.38       1,707,640.60
 16     10.00          1,707,640.60         93,478.04       1,775,578.81
 17     10.00          1,775,578.81         96,282.39       1,847,226.07
 18     10.00          1,847,226.07         99,170.86       1,922,860.73
 19     10.00          1,922,860.73        102,145.98       2,002,786.22
 20     10.00          2,002,786.22        105,210.36       2,087,333.45
 21     10.00          2,087,333.45        108,366.67       2,176,863.45
 22     10.00          2,176,863.45        111,617.67       2,271,770.35
 23     10.00          2,271,770.35        114,966.20       2,372,484.56
 24     10.00          2,372,484.56        118,415.19       2,479,476.31
 25     10.00          2,479,476.31        121,967.65       2,593,259.53
 26     10.00          2,593,259.53        125,626.68       2,714,396.14
 27     10.00          2,714,396.14        129,395.48       2,843,500.73
 28     10.00          2,843,500.73        133,277.34       2,981,245.73
 29     10.00          2,981,245.73        137,275.66       3,128,367.08
 30     10.00          3,128,367.08        141,393.93       3,285,670.46
Withdrawals Versus No Withdrawals Most financial planners understand the story of Henry. The important point of Henry’s situation is that he took withdrawals from his portfolio during an unfortunate
sequence of returns. During a period of regular withdrawals, it doesn’t matter that his portfolio returns averaged 10 percent over the long run. It is the sequence of returns combined with regular
withdrawals that was devastating to his portfolio. To
TABLE 7.5 The Actual Results

Year  Return (%)
  1     –20.00
  2     –10.00
  3       9.00
  4       8.00
  5      12.00
  6     –10.00
  7      –2.00
  8      25.00
  9      27.00
 10      61.00
Average Return:  10.00
TABLE 7.6 Portfolio Balance Analysis

Year  Returns (%)  Withdrawal ($)  Ending Balance ($)
  1     –20.00       60,000.00        752,000.00
  2     –10.00       61,800.00        621,180.00
  3       9.00       63,654.00        607,703.34
  4       8.00       65,563.62        585,510.90
  5      12.00       67,530.53        580,138.01
  6     –10.00       69,556.44        459,523.41
  7      –2.00       71,643.14        380,122.67
  8      25.00       73,792.43        382,912.80
  9      27.00       76,006.20        389,771.37
 10      61.00       78,286.39        501,490.82
TABLE 7.7 Reversed Returns

Year  Return (%)  Withdrawal ($)  Ending Balance ($)
  1      61.00       60,000.00      1,513,400.00
  2      27.00       61,800.00      1,843,532.00
  3      25.00       63,654.00      2,224,847.50
  4      –2.00       65,563.62      2,116,098.20
  5     –10.00       67,530.53      1,843,710.91
  6      12.00       69,556.44      1,987,053.00
  7       8.00       71,643.14      2,068,642.65
  8       9.00       73,792.43      2,174,386.74
  9     –10.00       76,006.20      1,888,542.48
 10     –20.00       78,286.39      1,448,204.87
illustrate this point, imagine that Henry never took withdrawals from his portfolio (Table 7.8). The time value of money comes into play when withdrawals are taken. When Henry experienced negative
returns early in retirement while taking withdrawals, he had less money in his portfolio to grow over time. To maintain his inflation-adjusted withdrawal rate, Henry needed a bull market at the
beginning of retirement.
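The two withdrawal scenarios of Tables 7.6 and 7.7 can be replayed in a few lines: same 10 percent average return and the same withdrawals, yet dramatically different endings depending only on the ordering of the returns:

```python
def run_sequence(returns, balance=1_000_000.0, withdrawal=60_000.0, infl=0.03):
    """Beginning-of-year withdrawals, then the year's market return."""
    for r in returns:
        balance = (balance - withdrawal) * (1 + r)
        withdrawal *= 1 + infl
    return balance

actual = [-0.20, -0.10, 0.09, 0.08, 0.12, -0.10, -0.02, 0.25, 0.27, 0.61]
print(round(run_sequence(actual)))         # 501491  (Table 7.6)
print(round(run_sequence(actual[::-1])))   # 1448205 (Table 7.7)
```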
TABLE 7.8 Returns Analysis Without Withdrawals

Actual Return Sequence with No Withdrawals
Year  Return (%)  Ending Balance ($)
  1     –20.00        800,000.00
  2     –10.00        720,000.00
  3       9.00        784,800.00
  4       8.00        847,584.00
  5      12.00        949,294.08
  6     –10.00        854,364.67
  7      –2.00        837,277.38
  8      25.00      1,046,596.72
  9      27.00      1,329,177.84
 10      61.00      2,139,976.32
Average Return:  10.00

Reverse Return Sequence with No Withdrawals
Year  Return (%)  End Balance ($)
  1      61.00      1,610,000.00
  2      27.00      2,044,700.00
  3      25.00      2,555,875.00
  4      –2.00      2,504,757.50
  5     –10.00      2,254,281.75
  6      12.00      2,524,795.56
  7       8.00      2,726,779.20
  8       9.00      2,972,189.33
  9     –10.00      2,674,970.40
 10     –20.00      2,139,976.32
Average Return:  10.00
Henry’s retirement plan is deterministic because it assumes that returns will be the same each and every year. What Henry and Mr. D didn’t understand was that averaging 10 percent over time is very
different than getting 10 percent each and every year. As Henry left the office, Mr. D wished he had a more dynamic retirement planning process—one that allowed for varying variables.
Stochastic Planning Using Monte Carlo Simulation Monte Carlo is a stochastic tool that helps people think in terms of probability and not certainty. As opposed to using a deterministic process,
financial planners can use Monte Carlo to simulate risk in investment returns. A financial plan’s probability of success can be tested by simulating the variability of investment returns. Typically,
to measure this variability, the expected mean and standard deviation of the portfolio’s investment returns are used in a Monte Carlo model. What would Mr. D have told Henry had this approach been
used? Using Henry’s same information but an expected return of 10 percent with a standard deviation of 17.5 percent, Mr. D can assign success probabilities for how long Henry’s money will last. Henry
has a 64 percent chance that his portfolio will last 30 years (Figure 7.18). If Henry is not comfortable with that success rate, then Mr. D can increase both expected return and standard deviation,
or decrease withdrawals. Mr. D could change the return to 20 percent, but this is obviously not realistic. In Henry’s case, it makes more sense to decrease the withdrawal rate. Assuming that Henry
will be comfortable with a 70 percent chance of success, then Mr. D needs to lower the annual withdrawal to $55,000 (Figure 7.19).
FIGURE 7.18
A 64 percent chance of portfolio survival at $60,000 withdrawals.
FIGURE 7.19
A 70 percent chance of portfolio survival at $55,000 withdrawals.
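A minimal stochastic version of the plan draws each year's return from a normal distribution (mean 10 percent, SD 17.5 percent, as in the text) and counts the trials in which the money lasts. Exact probabilities depend on modeling details such as the return distribution, withdrawal timing, and trial count, so this sketch should land near, not exactly on, the 64 percent figure:

```python
import random

def survival_prob(withdrawal0, years=30, trials=5_000, balance0=1_000_000.0,
                  mean=0.10, sd=0.175, infl=0.03, seed=1):
    """Probability the portfolio funds every withdrawal for `years` years."""
    random.seed(seed)
    survived = 0
    for _ in range(trials):
        balance, w, ruined = balance0, withdrawal0, False
        for _ in range(years):
            balance = (balance - w) * (1 + random.gauss(mean, sd))
            w *= 1 + infl
            if balance <= 0:
                ruined = True
        survived += not ruined
    return survived / trials

print(survival_prob(60_000))   # typically lands in the 0.55-0.75 range
```

Rerunning with a $55,000 starting withdrawal raises the survival probability, which is exactly the trade-off Mr. D presents to Henry.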
Expenses Lower Returns It is truly a misuse of Monte Carlo and unfair to the client to illustrate a plan without fees if an advisory fee is to be charged. If Mr. Determinist charges Henry a 1 percent
advisory fee, then this figure must be deducted from the annual return assumption, which will lower the plan’s 30-year success probability to 54 percent. In Henry’s case, the standard deviation will
still be 17.5 percent, which is higher than a standard deviation of a portfolio that averages 9 percent. One can simply modify the Monte Carlo simulation to allow an advisory fee to be included by
maintaining the return and standard deviation assumptions and deducting the advisory fee. For Henry’s plan to still have a 70 percent success ratio after a 1 percent fee, he can withdraw an
inflation-adjusted $47,000 annually, which is notably different from the $55,000 withdrawal rate before fees.
Success Probability Monte Carlo educates the client about the trade-off between risk and return with respect to withdrawals. The risk is the success probability with which the client is comfortable.
The return is the withdrawal rate. The financial planner should understand that a higher success rate amounts to lower withdrawals. A by-product of this understanding is that a higher success rate
also increases the chance of leaving money in the portfolio at the client’s death. In other words, Henry may be sacrificing lifestyle for an excessive probability of success. For Henry to have a 90
percent chance that his portfolio will
FIGURE 7.20
A 90 percent chance of portfolio survival at $32,000 withdrawals.
last 30 years, he needs to lower his withdrawals to $32,000 (Figure 7.20). An equally important interpretation of this result is that Henry has a 90 percent chance of dying with money in his
portfolio. This is money he could have used for vacation, fancy dinners, gifts for his family, or circus tickets.
Success Tolerance Going back to Henry’s example of withdrawing $47,000 each year, if 5,000 simulation trials are run, a 70 percent success rate means that 3,500 times the plan worked. The 1,500 times
the plan failed resulted in Henry being unable to take out $47,000 each and every year for 30 years. What is unclear about the 1,500 failures is how many of these resulted in a withdrawal amount
marginally less than $47,000. If Henry takes out $47,000 for 29 years and then only withdraws $46,000 in the last year, is this a failure? Monte Carlo says yes. Most people are more flexible.
Establishing a success tolerance alleviates this problem. If Henry’s goal is to take out $47,000 but he would be quite happy with $42,000, then he has a success tolerance of $5,000. This is the same
as running a simulation using $42,000 with a zero success tolerance; however, the purpose of the success tolerance is to clearly illustrate to Henry the likelihood that a range of withdrawals will be
achieved. By accounting for both the complexities of the market and the flexibility of human response to those complexities, Monte Carlo helps Henry understand, prepare for, and properly choose his
risk tolerance.
Bear Markets and Monte Carlo No matter what financial planning method is used, the reality is that a bear market early in retirement will drastically affect the plan. If Mr. D had used Monte Carlo
when Henry first came to him and Henry took out $47,000 in Year 1 and $48,410 in Year 2, the portfolio balance at the end of the second year would have been $642,591. For the portfolio to last
another 28 years and to preserve a 70 percent success rate, Henry must reduce his withdrawal amount to $31,500! The difficulty of this situation is obvious; however, Mr. D is in a position to help
Henry make a decision about maintaining his standard of living versus increasing the chances of running out of money. Table 7.9 illustrates running a Monte Carlo simulation at the end of each year to
determine the withdrawal amount that preserves a 70 percent success rate for Henry’s plan. Like most people, Henry will not be enthusiastic about lowering his retirement salary by as much as 22
percent in any year. Without changing the return assumption, Henry’s alternative is to accept a lower success rate. If Henry never adjusted his withdrawal rate from the initial $47,000, after 10
years his portfolio value would be $856,496 and his withdrawal would be $61,324 ($47,000 × 1.03^9). The success probability is 60 percent for a portfolio life of 20 years.
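The quoted withdrawal is simply the initial $47,000 compounded through nine annual 3 percent inflation adjustments:

```python
withdrawal_year_10 = 47_000 * 1.03 ** 9   # nine 3% inflation adjustments
print(round(withdrawal_year_10))          # 61324
```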
Other Monte Carlo Variables Monte Carlo can simulate more than just investment returns. Other variables that are frequently simulated by financial planners using Monte Carlo include inflation and
life expectancy.
TABLE 7.9 Simulation-Based Withdrawal Rates

Year  Return (%)  Beginning ($)  End Balance ($)  Monte Carlo Withdrawal ($)  Withdrawal Change (%)
  1     –20.00      1,000,000        762,400              47,000                       0
  2     –10.00        762,400        653,310              36,500                     –22
  3       9.00        653,310        676,683              32,500                     –11
  4       8.00        676,683        693,558              34,500                       6
  5      12.00        693,558        735,904              36,500                       6
  6     –10.00        735,904        627,214              39,000                       7
  7      –2.00        627,214        580,860              34,500                     –12
  8      25.00        580,860        685,137              32,750                      –5
  9      27.00        685,137        819,324              40,000                      22
 10      61.00        819,324      1,239,014              49,750                      24
Inflation Since 1926, inflation has averaged approximately 3 percent annually with a standard deviation of 4.3 percent. In a plan with inflation-adjusted withdrawals, the change in inflation is
significant. According to Ibbotson and Associates, inflation averaged 8.7 percent from the beginning of 1973 until the end of 1982. If such a period of inflation occurred at the beginning of
retirement, the effect on a financial plan would be terrible. Life Expectancy Using mortality tables, a financial planner can randomize the life expectancy of any client to provide a more realistic
plan. According to the National Center for Health Statistics, the average American born in 2002 has a life expectancy of 77.3 years with a standard deviation of 10. However, financial planners should
be more concerned with the specific probability that their clients will survive the duration of the plan.
Monte Carlo Suggestions Financial plans created using Monte Carlo should not be placed on autopilot. As with most forecasting methods, Monte Carlo is not capable of simulating real-life adjustments
that individuals make. As previously discussed, if a portfolio experienced severe negative returns early in retirement, the retiree can change the withdrawal amount. It is also important to realize
that Monte Carlo plans are only as good as the input assumptions. Distributions If Henry is invested in various asset classes, it is important for Mr. D to determine the distinct distribution
characteristics of each asset class. The most effective approach to modeling these differences is by utilizing a distribution-fitting analysis in Risk Simulator. Taxes Henry Tirement’s situation
involved a tax-deferred account and a pretax salary. For individuals with taxable accounts, rebalancing may trigger taxes. In this case, a financial planner using Monte Carlo might employ a tax-adjusted return and a post-tax salary. The after-tax account balance should be used in the assumptions for clients with highly concentrated positions and a low tax basis who plan to
diversify their investments. Correlations It is important to consider any correlations between variables being modeled within Monte Carlo. Cross-correlations, serial correlations, or cross-serial
correlations must be simulated for realistic results. For example, it may be shown that a correlation exists between investment returns and inflation. If this is true, then these variables should not
be treated as independent of each other.
CASE STUDY: HOSPITAL RISK MANAGEMENT This case is contributed by Lawrence Pixley, a founding partner of Stroudwater Associates, a management consulting firm for the health-care industry. Larry
specializes in analyzing risk and uncertainty for hospitals and physician practices in the context of strategic planning and operational performance analyses. His expertise includes hospital facility
planning, hospital/physician joint ventures, medical staff development, physician compensation packages utilizing a balanced scorecard approach, practice operations assessment, and practice
valuations. Larry spent 15 years in health-care management, and has been a consultant for the past 23 years, specializing in demand forecasting using scientific management tools including real
options analysis, Monte Carlo simulation, simulation-optimization, data envelopment analysis (DEA), queuing theory, and optimization theory. He can be reached at [email protected].
Hospitals today face a wide range of risk factors that can determine success or failure, including:

■ Competitive responses both from other hospitals and physician groups.
■ Changes in government rules and regulations.
■ Razor-thin profit margins.
■ Community relations as expressed through zoning and permitting resistance.
■ State of the bond market and the cost of borrowing.
■ Oligopsony (a market with few buyers) of a few large payers, for example, the state and federal governments.
■ Success at fund-raising and generating community support.
■ Dependence on key physicians, admitting preferences, and age of medical staff.
■ High fixed cost structure.
■ Advances in medical technology and their subsequent influence on admissions and lengths of stay.
In addition, hundreds of hospitals across the country are faced with aging facilities. Their dilemma is whether to renovate or relocate to a new site and build an entirely new facility. Many of these
hospitals were first constructed in the early 1900s. Residential neighborhoods have grown up around them, locking them into a relatively small footprint, which severely hampers their options for expansion.
The Problem Located in a large metropolitan area, CMC is a 425-bed community hospital. The region is highly competitive, with 12 other hospitals located within a 20-mile radius. Like most hospitals
of similar size, CMC consists of a series of buildings constructed over a 50-year time span, with three major buildings 50, 30 and 15 years old. All three facilities house patients in double
occupancy (or two-bed) rooms. The hospital has been rapidly outgrowing its current facilities. In the last year alone, CMC had to divert 450 admissions to other hospitals, which meant a loss of $1.6
M in incremental revenue. Figure 7.21 shows CMC’s average daily census and demonstrates why the hospital is running out of bed space. Because of this growing capacity issue, the hospital CEO asked
his planning team to project discharges for the next 10 years. The planning department performed a trend line analysis using the linear regression function in Excel and developed the chart shown in
Figure 7.22. Applying a Poisson distribution to the projected 35,000 discharges, the planners projected a total bed need of 514. They made no adjustment for a change in the average length of stay
over that 10-year period, assuming that it would remain constant. See Figure 7.23. Confronted with the potential need to add 95 beds, the board of directors asked the CEO to prepare an initial
feasibility study. To estimate the cost of adding 95 beds to the existing campus, the administrative staff first consulted with a local architect who had designed several small projects for the
hospital. The architect estimated a cost of $260M to renovate the
FIGURE 7.21 Histogram of CMC bed occupancy by number of days beds were occupied (CMC average daily census, 2004). The x-axis runs from roughly 280 to 436 beds occupied; the y-axis counts days occupied, up to about 60.
Extended Business Cases I
FIGURE 7.22 Trend line projections of CMC discharges for next 10 years (provided by CMC planning department). A linear trend fitted to 1995–2004 discharges projects roughly 35,000 discharges by 2014.
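The planning department's Excel trend line is ordinary least squares. A minimal sketch of the same calculation, using hypothetical yearly discharge counts chosen only to land near the chart's 35,000 figure (CMC's actual data are not reproduced here):

```python
# Least-squares trend line, as the planning department did in Excel.
# The discharge counts below are hypothetical stand-ins constructed to
# match Figure 7.22's general shape, not CMC's actual data.
years = list(range(1995, 2005))
discharges = [25500, 26000, 26500, 27000, 27500, 28000, 28500, 29000, 29500, 30000]

n = len(years)
mean_x = sum(years) / n
mean_y = sum(discharges) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, discharges)) / \
        sum((x - mean_x) ** 2 for x in years)
intercept = mean_y - slope * mean_x

projection_2014 = intercept + slope * 2014
print(f"projected 2014 discharges: {projection_2014:,.0f}")
```

With these stand-in numbers the extrapolation lands at 35,000 by construction; the point is only that a ten-year straight-line extrapolation rests entirely on the fitted slope, which is why Stroudwater later replaced it with distribution-based forecasts.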
existing structure and build a new addition, both of which were required to fit 95 more beds within the hospital’s current footprint. To accommodate the additional beds on the current site, however,
all beds would have to be double occupancy. Single occupancy rooms—the most marketable today—simply could not be accommodated on the present campus.
FIGURE 7.23 Projected CMC bed needs based on estimated average daily census of 463 patients for year 2014, modeled as a Poisson distribution (provided by CMC planning department). 514 beds are needed to meet demand 99 percent of the time; an estimated 220 patients per year would otherwise be diverted to another hospital.
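The planners' Poisson calculation is easy to reproduce: find the smallest bed count n such that a Poisson-distributed census with mean 463 stays at or below n at least 99 percent of the time. A stdlib-only sketch:

```python
import math

def beds_for_service_level(mean_adc: float, target: float = 0.99) -> int:
    """Smallest bed count n such that P(Poisson(mean_adc) <= n) >= target.

    Accumulates the Poisson CDF via the pmf recurrence in log space,
    pmf(k) = pmf(k-1) * mean_adc / k, to stay numerically stable.
    """
    log_pmf = -mean_adc          # log P(X = 0)
    cdf = math.exp(log_pmf)
    k = 0
    while cdf < target:
        k += 1
        log_pmf += math.log(mean_adc) - math.log(k)
        cdf += math.exp(log_pmf)
    return k

print(beds_for_service_level(463))
```

For a mean daily census of 463 this yields roughly 514 beds, in line with the planning department's figure.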
In 1990, the hospital board faced a similar decision, whether to build a needed addition on the present campus or to relocate. The board opted to invest $90 million in a major expansion on the
current site. Faced with the current dilemma, many of those same board members wished that in 1990 they had been able to better analyze their future options. A number of them expressed regrets that
they did not relocate to another campus then. They clearly understood that their current decision—to renovate and add to the existing campus or to relocate—would be a decision the hospital would live
with for the next 30 to 50 years. There was no available site in the town (25 acres minimum), but there was space available in the adjacent town near a new $110 million ambulatory care center the
hospital built five years ago. Yet, given the amount invested in the current campus and the uncertainty of how a new location would affect market share, there was real hesitancy to relocate. The
board had other considerations as well. Historically there had been litigation involved every time the hospital tried to expand. The neighboring property owners unsuccessfully opposed the Emergency
Department expansion in 1999, but had managed through various legal actions to delay the construction three years. This delay added significantly to the cost of construction, in addition to the
revenue lost from not having the modernized facility available as projected. Two members of the board had attended a conference on the future of hospitals and noted that building more double
occupancy rooms was not a good decision for the following reasons:

■ By the time the facility was ready for construction, code requirements for new hospital construction would likely dictate single occupancy rooms.
■ Patients prefer single rooms and CMC would be at a competitive disadvantage with other hospitals in the area that were already converting to single occupancy.
■ Single occupancy rooms require fewer patient transfers and therefore fewer staff.
■ Rates of infection were found to be considerably lower.
After receiving a preliminary cost estimate from the architect on a replacement hospital, the CFO presented the analysis shown in Figure 7.24 to the Finance Committee as an initial test of the
project’s viability. The initial projections for a new hospital estimated construction costs at $670 million. The study estimated a $50 million savings by not funding further capital improvements in
the existing buildings. The CFO projected that the hospital would have a debt service capacity of an additional $95 million, assuming that the planning department’s volume projections were accurate
and that
Initial Capital Analysis for New Hospital ($ in M)
Cost of Project                                  $ 670
Less: Unrestricted Cash                          $ (150)
    : Deferred Maintenance                       $ (50)
    : Existing Debt Capacity                     $ (100)
    : Future Debt Capacity Based on New Volume   $ (95)
    : Sale of Assets                             $ (56)
    : Capital Campaign                           $ (150)
Capital Shortfall                                $ 69

FIGURE 7.24 Capital position analysis for new hospital as prepared by CMC chief financial officer.
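The CFO's arithmetic can be checked in a few lines:

```python
# Reproduce the CFO's capital position analysis (Figure 7.24), $ in millions.
cost_of_project = 670
sources = {
    "unrestricted cash": 150,
    "deferred maintenance avoided": 50,
    "existing debt capacity": 100,
    "future debt capacity (new volume)": 95,
    "sale of assets": 56,
    "capital campaign": 150,
}
shortfall = cost_of_project - sum(sources.values())
print(f"capital shortfall: ${shortfall}M")  # → $69M
```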
revenue and expense per admission remained static. The balance would have to come from the sale of various properties owned by the hospital and a major capital campaign. Over the years, the hospital
had acquired a number of outlying buildings for administrative functions and various clinics that could be consolidated into a new facility. In addition, there was a demand for additional residential
property within the town limits, making the hospital’s current site worth an estimated $17 million. Although skeptical, the CFO felt that with additional analysis, it could be possible to overcome
the projected $69 million shortfall. The board authorized the administration to seek proposals from architectural firms outside their area. The Selection Committee felt that given the risks of
potentially building the wrong-sized facility in the wrong location, they needed firms that could better assess both risks and options. At the same time, as a hedge pending the completion of the
analysis, the committee took a one-year option on the 25-acre property in the adjacent town. After a nationwide review, CMC awarded the project analysis to a nationally recognized architectural firm
and Stroudwater Associates, with the strategic planning and analytics in Stroudwater’s hands.
The Analysis

Stroudwater first needed to test the trend line projections completed by CMC’s planning department. Rather than taking simple trend line projections based on past admissions, Stroudwater
used a combination of both qualitative and quantitative forecasting methodologies. Before financial projections could be completed, a better estimate of actual bed need was required. Stroudwater
segmented the bed need calculation into five key decision areas: population trends, utilization changes, market share, length of stay, and queuing decisions. Given the rapid changes in health-care
technology in particular, it was determined that forecasting beyond 10 years was
too speculative, and the board agreed that 10 years was an appropriate period for the analysis. In addition, the hospital wanted to project a minimum of 3 years beyond completion of hospital
construction. Because projections were required for a minimum of 10 years, and because of the large number of variables involved, Stroudwater employed Monte Carlo simulation techniques in each of
these five decision areas. See Figure 7.25. For qualitative input to this process, the hospital formed a 15-person steering committee composed of medical staff, board directors, and key
administrative staff. The committee met every three weeks during the four-month study and was regularly polled by Stroudwater on key decision areas through the entire process. In addition, Stroudwater
conducted 60 interviews with physicians, board members, and key administrative staff. During the interviews with key physicians in each major service line, Stroudwater consultants were struck by the
number of aging physicians that were in solo practice and not planning to replace themselves, a significant risk factor for CMC. The CFO identified another issue: A majority of physicians in key
specialties had recently stopped accepting insurance assignments, further putting the hospital at risk vis-à-vis its major competitor whose employed physicians accepted assignment from all payers.
FIGURE 7.25 Stroudwater Associates methodology for forecasting hospital bed requirements. Five decision areas feed the bed-need estimate: population trends (increase or decrease, age cohort changes), utilization changes (inpatient vs. outpatient, medical technology, consumer demand, changes in disease prevalence), market share (competing hospitals, physician relations, consumer preference, services offered), length of stay (discharge resources, technology, operations efficiency, payer demands), and queuing decisions (size of unit, expected delay, combination of units, patient flow efficiency, probability of having a bed available). Population, utilization, and market share drive inpatient admissions (arrival rates); length of stay sets service rates; both carry uncertainty into the queuing model.
FIGURE 7.26 Bubble chart highlighting service lines considered most at risk (upper right quadrant). Operating margin is represented by the size of the bubble; net revenue runs along the x-axis ($0 to $30M) and the composite risk factor score along the y-axis. Plotted service lines include thoracic, cardiology, open heart, orthopedics, general surgery, OB, vascular, neurology/neurosurgery, urology, pulmonology, gastroenterology, and nephrology.
To understand better what service lines were at risk, Stroudwater developed a bubble diagram (Figure 7.26) to highlight areas that needed further business planning before making market share
estimates. The three variables were net revenue, operating margin, and a subjective risk factor rating system. The following risk factors were identified, assigned a weight, rated on a scale of one
to five, and plotted on the y-axis:

■ Size of practice—percentage of solo and two-physician practices in specialty.
■ Average age of physicians in specialty.
■ Potential competitive threat from other hospitals.
■ Percentage of admissions coming from outside of service area.
■ Percentage of physicians in the specialty accepting assignment from major insurance carriers.
The analysis revealed five key specialties—orthopedics, obstetrics, general surgery, open-heart surgery, and cardiology—in which CMC’s bottom line was at risk, but which also afforded the greatest
opportunity for future profitability. To better inform market share estimates, Stroudwater then developed mini business plans for each of the areas identified in the upper right-hand quadrant of
Figure 7.26.

Population Trends

To determine future population numbers in the CMC service area, Stroudwater depended on nationally recognized firms that specialize in population trending. Because
hospital utilization is three times higher for over-65 populations, it was important to factor in the ongoing effect of the baby boomers. Stroudwater also asked members of the Steering Committee to
review the 2014 population projections and determine what local issues not factored into the professional projections should be considered. The committee members raised several concerns. There was a
distinct possibility of a major furniture manufacturer moving its operations to China, taking some 3,000 jobs out of the primary service area. However, there was also the possibility of a new
computer chip factory coming to the area. Stroudwater developed custom distributions to account for these population/employment contingencies.

Utilization Projections

On completion of its population
forecasting, Stroudwater turned its attention to calculating discharges per 1,000 people, an area of considerable uncertainty. To establish a baseline for future projections, 2004 discharge data from
the state hospital association were used to calculate the hospitalization use rates (discharges per 1,000) for CMC’s market. Stroudwater calculated use rates for 34 distinct service lines. See Table
7.10. Stroudwater factored a number of market forces affecting hospital bed utilization into the utilization trend analyses. The consultants considered the following key factors that might decrease
facility utilization:

■ Better understanding of the risk factors for disease, and increased prevention initiatives (e.g., smoking prevention programs, cholesterol-lowering drugs).
■ Discovery/implementation of treatments that cure or eliminate diseases.
■ Consensus documents or guidelines that recommend decreases in utilization.
■ Shifts to other sites causing declines in utilization in the original sites:
  ■ As technology allows shifts (e.g., ambulatory surgery).
  ■ As alternative sites of care become available (e.g., assisted living).
■ Changes in practice patterns (e.g., encouraging self-care and healthy lifestyles, reduced length of hospital stay).
■ Changes in technology.

TABLE 7.10 Utilization Trends for 2014 by Service Line. Source: State Hospital Discharge Survey. For each of the 34 product lines—abortion, adverse effects, AIDS and related, burns, cardiology, dermatology, endocrinology, gastroenterology, general surgery, gynecology, hematology, infectious disease, neonatology, neurology, neurosurgery, newborn, obstetrics, oncology, and others—the table lists discharges, patient days, discharges per 1,000, days per 1,000, average length of stay, total market population (1,193,436 baseline; 1,247,832 projected), estimated change in utilization (%), and estimated 2014 discharges.
Stroudwater also considered the following factors that may increase hospital bed utilization:

■ Growing elderly population.
■ New procedures and technologies (e.g., hip replacement, stent insertion, MRI).
■ Consensus documents or guidelines that recommend increases in utilization.
■ New disease entities (e.g., HIV/AIDS, bioterrorism).
■ Increased health insurance coverage.
■ Changes in consumer preferences and demand (e.g., bariatric surgery, hip and knee replacements).
In all key high-volume services, Stroudwater consultants made adjustments for utilization changes and inserted them into the spreadsheet model, using a combination of uniform, triangular, and normal
distributions.

Market Share

The Steering Committee asked Stroudwater to model two separate scenarios, one for renovations and an addition to the current campus, and the second for an entirely new
campus in the adjacent town. To project the number of discharges that CMC was likely to experience in the year 2014, market share assumptions for both scenarios were made for each major service line.
A standard market share analysis aggregates zip codes into primary and secondary service markets depending on market share percentage. Instead, Stroudwater divided the service area into six separate
market clusters using market share, geographic features, and historic travel patterns. Stroudwater selected eight major service areas that represented 80 percent of the admissions for further
analysis and asked committee members and key physicians in each specialty area to project market share. The committee members and participating physicians attended one large meeting where CMC
planning department members and Stroudwater consultants jointly presented results from the mini-business plans. Local market trends and results of past patient preference surveys were considered in a
discussion that followed. As an outcome from the meeting, participants agreed to focus on specific factors to assist them in estimating market share, including:

■ Change in patient preference.
■ Proximity of competing hospitals.
■ New hospital “halo” effect.
■ Change in “hospital of choice” preferences by local physicians.
■ Ability to recruit and retain physicians.
Using a customized survey instrument, Stroudwater provided those participating in the exercise with four years of trended market share data, challenging them to create a worst-case, most likely, and
best-case estimate for (1) each of the six market clusters in (2) each of the eight service lines for (3) each campus scenario. After compiling the results of the survey instrument, Stroudwater
assigned triangular distributions to each variable. An exception to the process occurred in the area of cardiac surgery. There was considerable discussion over the impact of a competing hospital
potentially opening a cardiothoracic surgery unit in CMC’s secondary service market. For the “current campus” scenario, the Steering Committee agreed that if a competing unit were opened it would
decrease their market share to the 15 to 19 percent range, and they assigned a 20 percent probability that their competitor would open the unit. Should the competitor not build the unit, a minority
of the group felt that CMC’s market share would increase significantly to the 27 to 30 percent range; a 30 percent probability was assigned. The remaining members were more conservative and estimated
a 23 to 25 percent market share. Similarly, estimates were made for the new campus in which participants felt there were better market opportunities and where losses would be better mitigated should
the competing hospital open a new cardiothoracic unit. Stroudwater used the custom distributions shown in Figure 7.27.

Average Length of Stay

Stroudwater performed length of stay estimates for 400
diagnosis-related groups (DRGs) using a combination of historic statistics from the National Hospital Discharge Survey of the National Center for Health Statistics and actual CMC data. Key CMC physicians
participated in estimating length of stay based on the benchmark data, their knowledge of their respective fields, and historic CMC data. Stroudwater consultants separately trended historic lengths
of stay and developed an algorithm for weighting benchmark data and CMC physician estimates. Length of stay estimates were rolled up into one distribution for each of the major service lines. At this
point, Stroudwater performed a sensitivity analysis (Figure 7.28) to determine which assumptions were driving the forecasts. Because population had relatively little influence on the outcome, the population distribution assumptions were dropped in favor of single point estimates.

Queuing Decisions

A typical approach to determining bed need, and the one used by the CMC Planning Department, is
to multiply projections for single point admissions by those for single point lengths of stay to determine
FIGURE 7.27 Cardiothoracic market share using custom distributions comparing market share assumptions for both current and new campus.
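The worst-case/most-likely/best-case survey estimates map directly onto triangular distributions. A minimal Monte Carlo sketch of the current-campus cardiothoracic scenario, using the branch probabilities and share ranges quoted above; the triangular modes (midpoints) are my own assumption:

```python
import random

random.seed(7)

# Current-campus cardiothoracic scenario, sketched from the ranges in the
# text: a 20% chance a competitor opens a unit (share drops to 15-19%),
# otherwise a 30/70 split between the optimistic minority view (27-30%)
# and the conservative majority (23-25%). Modes are assumed midpoints.
def simulate_share() -> float:
    if random.random() < 0.20:                      # competitor opens unit
        return random.triangular(0.15, 0.19, 0.17)
    if random.random() < 0.30:                      # optimistic minority view
        return random.triangular(0.27, 0.30, 0.285)
    return random.triangular(0.23, 0.25, 0.24)      # conservative majority

draws = [simulate_share() for _ in range(100_000)]
mean_share = sum(draws) / len(draws)
print(f"expected market share: {mean_share:.3f}")
```

Sampling the scenario tree rather than averaging it keeps the full spread of outcomes available for the downstream bed-need simulation.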
FIGURE 7.28 Sensitivity analysis of key variables in the Monte Carlo simulation of average daily census (ADC). Market share dominated, length of stay contributed 9.7%, and population 0.0%.
the total number of patient days. Patient days are divided by 365 to determine the average daily census (ADC). A Poisson distribution is then applied to the ADC to determine the total number of beds
required. In addition to the problems of single point estimates, Poisson distributions assume that all arrivals are unscheduled and thus overstate the bed need if any of the services have elective or
urgent admissions. Because CMC had categorized all of its admissions by urgency of the need for a bed, Stroudwater was able to conduct an analysis for each unit and found wide differences in the
timing needs for beds ranging from OB with 100 percent emergency factor to Orthopedics with 57 percent of its admissions classified as elective. See Table 7.11. To deepen the analysis, the physician
members of the committee met separately to determine which units could be combined because of natural affinities and similar nursing requirements. The Steering Committee then met to discuss service
targets for each category of admission. They agreed
TABLE 7.11 Orthopedic/Neurosurgery Admissions Classified by Admission

                      Emergency   Urgent   Elective    Total
Total Days                5,540      415      7,894   13,849
Total Admissions          1,497      112      2,133    3,743
Percentage (Admits)         40%       3%        57%     100%
TABLE 7.12 MGK Blocking Model Showing Bed Need Service Targets. For each major unit grouping (Medical Cardiology, General Surgery, Orthopedics) the model takes discharges, arrival rates (e.g., 8.6301, 10.9315, and 17.9795), service rates (1/ALOS, e.g., 0.0741 and 0.0901), and the emergency/urgent/elective admission mix (e.g., 49%/2%/49% and 40%/3%/57%), with service targets of less than 1 day, 1–2 days, and 2–3 days respectively.
that “Emergencies” had to have a bed available immediately, “Urgent” within 48 hours, and “Elective” within 72 hours. Using a multiple channel queuing model jointly developed by Dr. Johnathan Mun and
Lawrence Pixley, bed needs were determined for each of the major unit groupings. See Table 7.12 and Table 7.13. Distributions had been set for utilization and market share by service line to
determine the arrival rates needed for the queuing model. Length of stay distributions by service line had been determined for the service rate input to the model. Forecast cells for Monte Carlo
simulation were set for “Probability of Being Served” for
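The MGK blocking model itself is not reproduced in the case, but the same bed-count search can be sketched with the classic Erlang-C (M/M/c) delay formula: find the smallest bed count whose probability of delay meets the service target. The arrival rate and length of stay below are illustrative, not CMC's figures:

```python
import math

def erlang_c_delay_prob(c: int, arrival_rate: float, service_rate: float) -> float:
    """Probability an arrival must wait in an M/M/c queue (Erlang C)."""
    a = arrival_rate / service_rate          # offered load, in beds
    if c <= a:
        return 1.0                           # unstable: everyone waits
    # Accumulate a^k/k! terms iteratively to avoid overflow.
    term = 1.0
    series = term                            # k = 0 term
    for k in range(1, c):
        term *= a / k
        series += term
    term *= a / c                            # a^c / c!
    waiting = term * c / (c - a)
    return waiting / (series + waiting)

def beds_needed(arrival_rate: float, alos: float, max_delay_prob: float) -> int:
    """Smallest bed count keeping P(wait for a bed) at or below the target."""
    service_rate = 1.0 / alos
    c = math.ceil(arrival_rate * alos) + 1   # start just above offered load
    while erlang_c_delay_prob(c, arrival_rate, service_rate) > max_delay_prob:
        c += 1
    return c

# Illustrative unit: 11 admissions/day, 4.8-day ALOS, <=10% chance of waiting.
print(beds_needed(11.0, 4.8, 0.10))
```

A blocking model with per-category service targets (immediate, 48 hours, 72 hours) is more elaborate than this single-target search, but the shape of the calculation — arrival rate and service rate in, minimum bed count out — is the same.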
module concrete.ml.quantization.quantized_ops
Quantized versions of the ONNX operators for post training quantization.
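For orientation — this is generic background, not this module's internal code — post-training quantization maps floats onto a small integer range via an affine scale and zero point:

```python
# Generic affine post-training quantization: q = round(x / scale) + zero_point.
# Background illustration only -- not Concrete ML's internal implementation.
def quantize(xs, n_bits):
    qmin, qmax = 0, 2 ** n_bits - 1
    lo, hi = min(xs), max(xs)
    scale = (hi - lo) / (qmax - qmin)
    zero_point = round(-lo / scale)
    q = [min(qmax, max(qmin, round(x / scale) + zero_point)) for x in xs]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [(qi - zero_point) * scale for qi in q]

xs = [i / 4 - 1.0 for i in range(9)]          # -1.0 .. 1.0
q, s, zp = quantize(xs, n_bits=4)
xs_hat = dequantize(q, s, zp)
max_err = max(abs(a - b) for a, b in zip(xs, xs_hat))
print(max_err <= s / 2 + 1e-12)               # reconstruction error within half a step
```

The operators below work on tensors quantized in this spirit, computing over the integer representation (or a fused float function of it) rather than the original floats.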
class QuantizedSigmoid
Quantized sigmoid op.
class QuantizedHardSigmoid
Quantized HardSigmoid op.
class QuantizedRelu
Quantized Relu op.
class QuantizedPRelu
Quantized PRelu op.
class QuantizedLeakyRelu
Quantized LeakyRelu op.
class QuantizedHardSwish
Quantized Hardswish op.
class QuantizedElu
Quantized Elu op.
class QuantizedSelu
Quantized Selu op.
class QuantizedCelu
Quantized Celu op.
class QuantizedClip
Quantized clip op.
class QuantizedRound
Quantized round op.
class QuantizedPow
Quantized pow op.
Only works for a float constant power. This operation will be fused to a (potentially larger) TLU.
method __init__

__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
) → None
method can_fuse
Determine if this op can be fused.
Power raising can be fused and computed in float when a single integer tensor generates both the operands. For example in the formula: f(x) = x ** (x + 1) where x is an integer tensor.
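The fusion idea can be illustrated outside the library: when every operand of an expression derives from a single integer input, the whole float computation collapses to one table lookup (TLU) over that input's range. A conceptual sketch, not Concrete ML's actual machinery:

```python
# Conceptual sketch of op fusion into a table lookup (TLU): if f depends on a
# single integer input, precompute f over that input's whole quantized range.
# Illustration only -- not Concrete ML code.
def build_tlu(f, input_range):
    return {x: f(x) for x in input_range}

# f(x) = x ** (x + 1): the add and pow ops fuse into one table.
table = build_tlu(lambda x: float(x) ** (x + 1), range(0, 8))

# At "inference" time, the fused ops cost one lookup per element.
inputs = [0, 1, 2, 3]
outputs = [table[x] for x in inputs]
print(outputs)  # [0.0, 1.0, 8.0, 81.0]
```

This is why ops like Gemm or Conv cannot be fused: they combine values from different positions of their input tensors, so no single-input table can represent them.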
class QuantizedGemm
Quantized Gemm op.
method __init__

__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
) → None
method can_fuse
Determine if this op can be fused.
Gemm operation can not be fused since it must be performed over integer tensors and it combines different values of the input tensors.
bool: False, this operation can not be fused as it adds different encrypted integers
method q_impl
q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray
class QuantizedMatMul
Quantized MatMul op.
method __init__

__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
) → None
method can_fuse
Determine if this op can be fused.
Gemm operation can not be fused since it must be performed over integer tensors and it combines different values of the input tensors.
bool: False, this operation can not be fused as it adds different encrypted integers
method q_impl
q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray
class QuantizedAdd
Quantized Addition operator.
Can add either two variables (both encrypted) or a variable and a constant
method can_fuse
Determine if this op can be fused.
Add operation can be computed in float and fused if it operates over inputs produced by a single integer tensor. For example the expression x + x * 1.75, where x is an encrypted tensor, can be
computed with a single TLU.
bool: Whether the number of integer input tensors allows computing this op as a TLU
method q_impl
q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray
class QuantizedTanh
Quantized Tanh op.
class QuantizedSoftplus
Quantized Softplus op.
class QuantizedExp
Quantized Exp op.
class QuantizedLog
Quantized Log op.
class QuantizedAbs
Quantized Abs op.
class QuantizedIdentity
Quantized Identity op.
method q_impl
q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray
class QuantizedReshape
Quantized Reshape op.
method q_impl
q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray
Reshape the input integer encrypted tensor.
q_inputs: an encrypted integer tensor at index 0 and one constant shape at index 1
attrs: additional optional reshape options
result (QuantizedArray): reshaped encrypted integer tensor
class QuantizedConv
Quantized Conv op.
method __init__

__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
) → None
Construct the quantized convolution operator and retrieve parameters.
n_bits_output: number of bits for the quantization of the outputs of this operator
int_input_names: names of integer tensors that are taken as input for this operation
constant_inputs: the weights and activations
input_quant_opts: options for the input quantizer
attrs: convolution options
dilations (Tuple[int]): dilation of the kernel, default 1 on all dimensions.
group (int): number of convolution groups, default 1
kernel_shape (Tuple[int]): shape of the kernel. Should have 2 elements for 2d conv
pads (Tuple[int]): padding in ONNX format (begin, end) on each axis
strides (Tuple[int]): stride of the convolution on each axis
method can_fuse
Determine if this op can be fused.
Conv operation can not be fused since it must be performed over integer tensors and it combines different elements of the input tensors.
bool: False, this operation can not be fused as it adds different encrypted integers
method q_impl
q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray
Compute the quantized convolution between two quantized tensors.
Allows an optional quantized bias.
q_inputs: input tuple, contains
x (numpy.ndarray): input data. Shape is N x C x H x W for 2d
w (numpy.ndarray): weights tensor. Shape is (O x I x Kh x Kw) for 2d
b (numpy.ndarray, Optional): bias tensor, Shape is (O,)
attrs: convolution options handled in constructor
res (QuantizedArray): result of the quantized integer convolution
class QuantizedAvgPool
Quantized Average Pooling op.
method __init__

__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
) → None
method can_fuse
Determine if this op can be fused.
Avg Pooling operation can not be fused since it must be performed over integer tensors and it combines different elements of the input tensors.
bool: False, this operation can not be fused as it adds different encrypted integers
method q_impl
q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray
class QuantizedPad
Quantized Padding op.
method __init__

__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
) → None
method can_fuse
Determine if this op can be fused.
Pad operation can not be fused since it must be performed over integer tensors.
bool: False, this operation can not be fused as it manipulates integer tensors
class QuantizedWhere
Where operator on quantized arrays.
Supports only constants for the results produced on the True/False branches.
method __init__

__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
) → None
class QuantizedCast
Cast the input to the required data type.
In FHE we only support a limited number of output types. Booleans are cast to integers.
class QuantizedGreater
Comparison operator >.
Only supports comparison with a constant.
method __init__

__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
) → None
class QuantizedGreaterOrEqual
Comparison operator >=.
Only supports comparison with a constant.
method __init__

__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
) → None
class QuantizedLess
Comparison operator <.
Only supports comparison with a constant.
method __init__

__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
) → None
class QuantizedLessOrEqual
Comparison operator <=.
Only supports comparison with a constant.
method __init__

__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
) → None
class QuantizedOr
Or operator ||.
This operation is not really working as a quantized operation. It just works when things got fused, as in e.g. Act(x) = x || (x + 42)
method __init__

__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
) → None
method can_fuse
Determine if this op can be fused.
Or can be fused and computed in float when a single integer tensor generates both the operands. For example in the formula: f(x) = x || (x + 1) where x is an integer tensor.
class QuantizedDiv
Div operator /.
This operation is not really working as a quantized operation. It just works when things got fused, as in e.g. Act(x) = 1000 / (x + 42)
method __init__

__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
) → None
method can_fuse
Determine if this op can be fused.
Div can be fused and computed in float when a single integer tensor generates both the operands. For example in the formula: f(x) = x / (x + 1) where x is an integer tensor.
class QuantizedMul
Multiplication operator.
Only multiplies an encrypted tensor with a float constant for now. This operation will be fused to a (potentially larger) TLU.
method __init__
n_bits_output: int,
int_input_names: Set[str] = None,
constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
input_quant_opts: QuantizationOptions = None,
) → None
method can_fuse
Determine if this op can be fused.
Multiplication can be fused and computed in float when a single integer tensor generates both the operands. For example in the formula: f(x) = x * (x + 1) where x is an integer tensor.
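The fusion pattern described for these operators can be illustrated outside the library: when f depends on a single integer tensor, f can be tabulated over that tensor's integer range and then applied as a lookup, which is the idea behind a TLU. The sketch below is a plain NumPy illustration, not Concrete ML's actual implementation:

```python
import numpy as np

# Illustrative sketch (not Concrete ML's API): a function of a single
# integer tensor can be precomputed as a lookup table (TLU) over the
# tensor's integer range.
def build_tlu(f, lo, hi):
    """Tabulate f over the integer range [lo, hi]."""
    table = np.array([f(v) for v in range(lo, hi + 1)])
    return lambda x: table[x - lo]

f = lambda x: x * (x + 1)      # both operands come from the same tensor
tlu = build_tlu(f, 0, 15)      # assume 4-bit unsigned inputs

x = np.array([0, 3, 7, 15])
assert np.array_equal(tlu(x), f(x))   # lookup agrees with direct evaluation
```

Because the whole expression depends on one integer input, the lookup table has one entry per possible input value, which is why such fused expressions can stay cheap.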
class QuantizedSub
Subtraction operator.
This works the same as addition on both encrypted - encrypted and on encrypted - constant.
method can_fuse
Determine if this op can be fused.
Add operation can be computed in float and fused if it operates over inputs produced by a single integer tensor. For example the expression x + x * 1.75, where x is an encrypted tensor, can be
computed with a single TLU.
bool: Whether the number of integer input tensors allows computing this op as a TLU
method q_impl
q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray
class QuantizedBatchNormalization
Quantized Batch normalization with encrypted input and in-the-clear normalization params.
class QuantizedFlatten
Quantized flatten for encrypted inputs.
method can_fuse
Determine if this op can be fused.
Flatten operation cannot be fused since it must be performed over integer tensors.
bool: False, this operation cannot be fused as it manipulates integer tensors.
method q_impl
q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray
Flatten the input integer encrypted tensor.
q_inputs: an encrypted integer tensor at index 0
attrs: contains axis attribute
result (QuantizedArray): reshaped encrypted integer tensor
class QuantizedReduceSum
ReduceSum with encrypted input.
This operator is currently an experimental feature.
method __init__
n_bits_output: int,
int_input_names: Set[str] = None,
constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
input_quant_opts: Optional[QuantizationOptions] = None,
) → None
Construct the quantized ReduceSum operator and retrieve parameters.
n_bits_output (int): Number of bits for the operator's quantization of outputs.
int_input_names (Optional[Set[str]]): Names of input integer tensors. Default to None.
constant_inputs (Optional[Dict]): Input constant tensor.
axes (Optional[numpy.ndarray]): Array of integers along which to reduce. The default is to reduce over all the dimensions of the input tensor if 'noop_with_empty_axes' is false, else act as an
Identity op when 'noop_with_empty_axes' is true. Accepted range is [-r, r-1] where r = rank(data). Default to None.
input_quant_opts (Optional[QuantizationOptions]): Options for the input quantizer. Default to None.
attrs (dict): ReduceSum options.
keepdims (int): Keep the reduced dimension or not, 1 means keeping the input dimension, 0 will reduce it along the given axis. Default to 1.
noop_with_empty_axes (int): Defines the behavior if 'axes' is empty or set to None. The default behavior (0) is to reduce all axes. When 'axes' is empty and this attribute is set to 1 (true), the input tensor is not reduced, and the output tensor is equivalent to the input tensor. Default to 0.
method calibrate
calibrate(*inputs: ndarray) → ndarray
Create corresponding QuantizedArray for the output of the activation function.
*inputs (numpy.ndarray): Calibration sample inputs.
numpy.ndarray: the output values for the provided calibration samples.
method q_impl
q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray
Sum the encrypted tensor's values over axis 1.
q_inputs (QuantizedArray): An encrypted integer tensor at index 0.
attrs (Dict): Contains axis attribute.
(QuantizedArray): The sum of all values along axis 1 as an encrypted integer tensor.
method tree_sum
tree_sum(input_qarray, is_calibration=False)
Large sum without overflow (only MSB remains).
input_qarray: Encrypted integer tensor.
is_calibration: Whether we are calibrating the tree sum. If so, it will create all the quantizers for the downscaling.
(numpy.ndarray): The MSB (based on the precision self.n_bits) of the integers sum.
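The overflow-avoiding summation described above can be illustrated with a small sketch: pairs of values are added and the intermediate sums are downscaled at each level of the tree, so only the most significant bits of the total survive. This is an illustrative NumPy version of the idea, not Concrete ML's implementation:

```python
import numpy as np

# Illustrative "tree sum" sketch: add pairs, then divide by 2, so values
# stay within the original bit-width and only the MSBs of the true sum
# remain. Assumes the input length is a power of two.
def tree_sum_msb(values):
    vals = np.asarray(values, dtype=np.int64)
    while vals.size > 1:
        vals = (vals[0::2] + vals[1::2]) // 2   # add pairs, keep the MSBs
    return int(vals[0])

# Summing four 4-bit values: the result approximates sum / 4, i.e. the
# most significant bits of the true sum 24.
print(tree_sum_msb([3, 5, 7, 9]))  # 6
```

The repeated halving is what keeps every intermediate value within the input precision, at the cost of discarding the low-order bits of the sum.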
class QuantizedErf
Quantized erf op.
class QuantizedNot
Quantized Not op.
class QuantizedBrevitasQuant
Brevitas uniform quantization with encrypted input.
method __init__
n_bits_output: int,
int_input_names: Set[str] = None,
constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
input_quant_opts: Optional[QuantizationOptions] = None,
) → None
Construct the Brevitas quantization operator.
n_bits_output (int): Number of bits for the operator's quantization of outputs. Not used, will be overridden by the bit_width in ONNX
int_input_names (Optional[Set[str]]): Names of input integer tensors. Default to None.
constant_inputs (Optional[Dict]): Input constant tensor.
scale (float): Quantizer scale
zero_point (float): Quantizer zero-point
bit_width (int): Number of bits of the integer representation
input_quant_opts (Optional[QuantizationOptions]): Options for the input quantizer. Default to None.
attrs (dict):
rounding_mode (str): Rounding mode (default and only accepted option is "ROUND")
signed (int): Whether this op quantizes to signed integers (default 1),
narrow (int): Whether this op quantizes to a narrow range of integers, e.g. [-(2^(n_bits-1) - 1) .. 2^(n_bits-1) - 1] (default 0),
method q_impl
q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray
Quantize values.
q_inputs: an encrypted integer tensor at index 0 and one constant shape at index 1
attrs: additional optional reshape options
result (QuantizedArray): reshaped encrypted integer tensor
class QuantizedTranspose
Transpose operator for quantized inputs.
This operator performs quantization, transposes the encrypted data, then dequantizes again.
method q_impl
q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray
Transpose the input integer encrypted tensor.
q_inputs: an encrypted integer tensor at index 0
attrs: contains the transpose permutation (perm) attribute
result (QuantizedArray): transposed encrypted integer tensor
How to Do Scientific Notation in Excel - Learn Excel
How to Do Scientific Notation in Excel
Written by: Bill Whitman
Last updated:
Welcome to this brief tutorial on how to do scientific notation in Microsoft Excel. Scientific notation is a useful tool in working with numbers that are either very large or very small, as it
simplifies calculations and makes data more manageable. Excel provides easy-to-use functions for converting numbers to scientific notation, regardless of their actual value. Whether you’re a student
working on a science project or a professional dealing with complex data sets, this guide will show you how to properly use scientific notation in Excel in just a few simple steps.
Step 1: Understanding Scientific Notation
Before getting started on how to do scientific notation in Excel, let’s define what it is. Scientific notation is a mathematical expression used to represent numbers that are either very large or
very small. It makes it easier to read, compare and perform calculations with such numbers. The notation follows a standard format:
Large Numbers
When writing large numbers in scientific notation, move the decimal point to the left until it’s between 1 and 10. Then, add a coefficient (number between 1 and 10) and an exponent (the number of
times the decimal point was moved).
Small Numbers
For small numbers, the decimal point is moved to the right until it’s between 1 and 10, and the exponent will be negative (the absolute value of the exponent will show how many places the decimal
point moved). The coefficient, as mentioned before, will still be a number between 1 and 10.
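The manual procedure above can be mirrored in a few lines of Python (the `to_scientific` helper is illustrative, not an Excel feature); Python's `E` format string produces the same display that Excel's Scientific cell format does, without changing the stored value:

```python
import math

# Hypothetical helper mirroring the manual procedure above: split a
# number into a coefficient in [1, 10) and an integer exponent.
def to_scientific(x):
    if x == 0:
        return 0.0, 0
    exp = math.floor(math.log10(abs(x)))
    return x / 10 ** exp, exp

# The "E" format gives the same display as Excel's Scientific format:
print(f"{45300:.2E}")    # 4.53E+04
print(f"{0.00072:.2E}")  # 7.20E-04
```

Large numbers get a positive exponent and small numbers a negative one, exactly as described above.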
Step 2: Entering Scientific Notation in Excel
Now that you understand what scientific notation is, it’s time to learn how to enter it into Excel:
Method 1: Formatting the Cell
If you need to enter scientific notation in Excel, you can format the cell accordingly. Here are the steps:
1. Select the cell or cells where you want to enter the scientific notation numbers.
2. Right-click on the selected cell and choose “Format Cells.”
3. Select “Scientific” from the Category list.
4. Specify the number of decimal places you want in the Decimal Places field.
5. Click OK and you’re done.
Now, when you enter a number that requires scientific notation, Excel will automatically convert it into the appropriate format.
Method 2: Using the Power Function
Another way to enter a scientific notation number in Excel is to use the power function. Here’s how:
In a cell, type the coefficient multiplied by 10 raised to the exponent, using the "^" operator. For example, for the number 1.23 x 10^5, you would type "=1.23 * 10^5".
Excel will automatically calculate the value and display it in the cell. You can then copy and paste the result into the cell that you need it in for easy data entry.
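As a point of comparison, the same formula can be evaluated in Python, where the `**` operator plays the role of Excel's `^`:

```python
# The Excel formula =1.23 * 10^5 corresponds directly to Python's
# exponent operator. Rounding hides the tiny binary floating-point
# error that both Excel and Python incur on a value like 1.23.
value = 1.23 * 10 ** 5
print(round(value))  # 123000
```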
Step 3: Converting Numbers to Scientific Notation
If you already have a large dataset and need to convert the values to scientific notation, Excel offers a simple method that can handle the task in a matter of seconds:
1. Select the cells that contain the numbers you want to convert.
2. Right-click on the selection and choose “Format Cells.”
3. Choose “Scientific” from the Category list and specify the desired number of decimal places.
4. Click OK and voila! The selected numbers are now in scientific notation format.
Final Thoughts
In conclusion, scientific notation is an essential tool when working with large or small numbers in Excel. You can easily enter scientific notation manually or use the built-in functions to convert
numbers in the current cell or a range of cells. With the steps given above, mastering scientific notation in Excel will be a breeze. Have fun and good luck with your calculations!
Using Scientific Notation for Calculations
Once you have the hang of entering and displaying numbers in scientific notation, you might want to use this format for calculations in Excel. Here are some tips to help you out:
Addition and Subtraction
When you’re adding or subtracting numbers in scientific notation, make sure the exponents are the same. If they are not, you’ll need to convert one or both of the numbers first.
Multiplication
Multiplying two numbers in scientific notation requires you to multiply the coefficients and add the exponents. For example, to multiply 2 x 10^3 and 3 x 10^2, you would multiply the coefficients to get 6 and add the exponents to get 5 (since 3 + 2 = 5). The answer would be 6 x 10^5.
Division
Dividing numbers in scientific notation is similar to multiplication. You need to divide the coefficients and subtract the exponents. So, to divide 2 x 10^3 by 3 x 10^2, you would divide 2 by 3 to get 0.67, and subtract the exponents (3 – 2) to get 10^1. The answer would be 0.67 x 10^1, or 6.7.
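The coefficient-and-exponent rules above can be written out directly; the function names below are illustrative, not Excel features:

```python
# Multiplication: multiply coefficients, add exponents.
# Division: divide coefficients, subtract exponents.
def sci_mul(c1, e1, c2, e2):
    return c1 * c2, e1 + e2

def sci_div(c1, e1, c2, e2):
    return c1 / c2, e1 - e2

print(sci_mul(2, 3, 3, 2))   # (6, 5)   i.e. 6 x 10^5
c, e = sci_div(2, 3, 3, 2)
print(round(c, 2), e)        # 0.67 1   i.e. 0.67 x 10^1 = 6.7
```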
Rounding Numbers in Scientific Notation
When you format cells as scientific notation in Excel, the program will round the numbers to the specified number of decimal places. While this is okay for most calculations, there are situations
where you might need to round a number to a certain significant figure.
To round a number to a specific number of significant figures in scientific notation, you need to:
1. Count the number of significant figures you want to keep.
2. Locate the digit that is in the place value of the least significant figure you want to keep.
3. Round the number to that digit and adjust the exponent as necessary.
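The three steps above amount to locating the leading digit's place value and rounding relative to it. One way to sketch that in Python (this helper is a general-purpose illustration, not a built-in Excel feature):

```python
import math

# Round x to n significant figures by finding the exponent of its
# leading digit and rounding relative to it.
def round_sig(x, n):
    if x == 0:
        return 0
    exp = math.floor(math.log10(abs(x)))
    return round(x, n - 1 - exp)

print(round_sig(123456, 3))     # 123000
print(round_sig(0.0012345, 2))  # 0.0012
```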
Wrap Up
In conclusion, scientific notation is an incredibly useful tool in Excel that allows you to work with large and small numbers more easily. Whether you're displaying and entering numbers in scientific notation or using it for calculations, understanding how it works is key. With the steps outlined in this article, you should have everything you need to get started using scientific notation in Excel.
Frequently Asked Questions
Here are some answers to commonly asked questions about working with scientific notation in Excel:
Can I change the number of decimal places in a scientific notation cell in Excel?
Yes, you can change the number of decimal places displayed in a scientific notation cell in Excel. To do this, right-click on the cell and choose “Format Cells”. In the Format Cells window, select
“Scientific” in the Category list and specify the desired number of decimal places in the Decimal Places field.
What is the maximum number of decimal places I can display in a scientific notation cell in Excel?
The maximum number of decimal places you can display in a scientific notation cell in Excel is 30. However, keep in mind that values displayed with a large number of decimal places can make your
spreadsheet harder to read and may not be necessary.
Can I change the display of scientific notation to regular numbers in Excel?
Yes, you can change the display of scientific notation to regular numbers in Excel. To do this, right-click on the cell and choose “Format Cells”. In the Format Cells window, select “Number” in the
Category list and choose the number of decimal places you want displayed.
How can I convert a regular number to scientific notation in Excel?
To convert a regular number to scientific notation in Excel, simply format the cell as scientific notation (as described in Step 2 above). Excel will automatically convert the number to scientific
notation format.
What is the difference between scientific notation and engineering notation in Excel?
The main difference between scientific notation and engineering notation is the way the exponents are expressed. In scientific notation, the exponent can be any integer and the coefficient lies between 1 and 10. In engineering notation, the exponent is always a multiple of 3 (e.g., 10^3 or 10^-6), so values line up with common unit prefixes such as kilo and milli. Both notations are used to represent large and small numbers more easily.
residuals.mixreg: Calculate the residuals of a mixture of linear regressions. in mixreg: Functions to Fit Mixtures of Regressions
Description Usage Arguments Details Value Author(s) References See Also Examples
Calculates the residuals from each component of the mixture and the matrix of probabilities that each observation was generated by each component.
object An object of class "mixreg" as returned by mixreg().
std Logical argument; if TRUE then the residuals are standardized (by dividing them by their estimated standard deviation).
... Not used.
The calculation of the estimated standard deviations of the residuals is a little bit complicated since each component of the model is fitted using weighted regression in a setting in which the
weights are NOT the reciprocals of error variances. See the reference below for more detail.
resid The residuals of the model, bundled together in a n x K matrix, where n is the number of observations and K is the number of components in the model. The kth column of this matrix is the
vector of residuals from the kth component of the model.
fvals Matrix of the fitted values of the model, structured like resid (above).
gamma An n x K matrix of probabilities. The entry gamma[i,j] of this matrix is the (fitted) probability that observation i was generated by component j.
x The matrix of predictors in the regression model (or if there is only one predictor, this predictor as a vector).
y The vector of response values.
vnms Character vector; the first entry is the name of the response. The remaining entries are “reasonable” names for the individual (vector) predictors. Note that if there is no predictor then vnms
is of length two with second entry "index".
noPred Logical scalar; set to TRUE if there are no predictors in the model.
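The gamma matrix described above can be illustrated with a small hypothetical computation: given a residual matrix, component standard deviations, and mixing weights, the posterior probabilities follow from Bayes' rule under normal errors (this sketch is not mixreg's internal code):

```python
import numpy as np

# gamma[i, j] is the (fitted) probability that observation i was
# generated by component j: normal density of the residual, weighted
# by the mixing proportion, normalized across components.
def responsibilities(resid, sigma, lam):
    dens = lam * np.exp(-0.5 * (resid / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return dens / dens.sum(axis=1, keepdims=True)

resid = np.array([[0.1, 2.0],    # observation 1 fits component 1 well
                  [1.5, 0.2]])   # observation 2 fits component 2 well
gamma = responsibilities(resid, np.array([1.0, 1.0]), np.array([0.5, 0.5]))
assert np.allclose(gamma.sum(axis=1), 1.0)   # each row sums to one
```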
T. Rolf Turner (2000). Estimating the rate of spread of a viral infection of potato plants via mixtures of regressions. Applied Statistics 49 Part 3, pp. 371 – 384.
fit <- mixreg(aphRel,plntsInf,ncomp=2,seed=42,data=aphids)
r <- residuals(fit)
plot(r)
fit <- mixreg(plntsInf ~ 1,ncomp=2,data=aphids)
r <- residuals(fit)
plot(r,shape="l",polycol="green")
Radko Mesiar's research works | University of Ostrava and other places
Publications (166)
Idempotent uninorms on bounded lattices with at most single point incomparable with the neutral element: Part I
August 2024
International Journal of General Systems
Double set-function Choquet integral with applications
August 2024
Information Sciences
Idempotent uninorms on a bounded chain
August 2023
Fuzzy Sets and Systems
Generalizing Fung-Fu's theorem
Fuzzy Sets and Systems
Choquet Type Integrals for Single-Valued Functions with Respect to Set-Functions and Set-Multifunctions
February 2023
Information Sciences
Generalized-Hukuhara Subdifferential Analysis and Its Application in Nonconvex Composite Interval Optimization Problems
December 2022
Information Sciences
In this article, we study calculus for gH-subdifferential of convex interval-valued functions (IVFs) and apply it in a nonconvex composite model of an interval optimization problem (IOP). Towards
this, we define convexity, convex hull, closedness, and boundedness of a set of interval vectors. In identifying the closedness of the convex hull of a set of interval vectors and the union of closed
sets, we analyze the convergence of the sequence of interval vectors. We prove a relation on the gH-directional derivative of the maximum of finitely many comparable IVFs. With the help of existing
calculus on the gH-subdifferential of an IVF, we derive a Fritz-John-type and a KKT-type efficiency condition for weak efficient solutions of IOPs. In the sequel, we analyze the supremum and infimum
of a set of intervals. Further, we report a characterization of the weak efficient solutions of nonconvex composite IOPs by applying the proposed concepts. The whole analysis is supported by
illustrative examples.
Generation of Continuous T-norms through Latticial Operations
September 2022
Fuzzy Sets and Systems
It is well known that the usual point-wise ordering over the set T of t-norms makes it a poset but not a lattice, i.e., the point-wise maximum or minimum of two t-norms need not always be a t-norm again. In this work, we propose two binary operations on the set TCA of continuous Archimedean t-norms and obtain, via these binary operations, a partial order relation ⊑, different from the usual point-wise order ≤, on the set TCA. As an interesting outcome of this structure, some stronger versions of some existing results dealing with the upper and lower bounds of two continuous Archimedean t-norms with respect to the point-wise order ≤ are also obtained. Finally, with the help of the operations on the set TCA, two binary operations ⊕,⊗ on the set TC of continuous t-norms are proposed, and it is shown that (TC,⊕,⊗) is a lattice. Thus we have both a way of generating continuous t-norms from continuous t-norms and also obtain an order on them.
Multiple attribute decision making based on Pythagorean fuzzy Aczel-Alsina average aggregation operators
August 2022
Journal of Ambient Intelligence and Humanized Computing
A useful expansion of the intuitionistic fuzzy set (IFS) for dealing with ambiguities in information is the Pythagorean fuzzy set (PFS), which is one of the most frequently used fuzzy sets in data
science. Due to these circumstances, the Aczel-Alsina operations are used in this study to formulate several Pythagorean fuzzy (PF) Aczel-Alsina aggregation operators, which include the PF
Aczel-Alsina weighted average (PFAAWA) operator, PF Aczel-Alsina order weighted average (PFAAOWA) operator, and PF Aczel-Alsina hybrid average (PFAAHA) operator. The distinguishing characteristics of
these potential operators are studied in detail. The primary advantage of using an advanced operator is that it provides decision-makers with a more comprehensive understanding of the situation. If
we compare the results of this study to those of prior strategies, we can see that the approach proposed in this study is more thorough, more precise, and more concrete. As a result, this technique
makes a significant contribution to the solution of real-world problems. Eventually, the suggested operator is put into practice in order to overcome the issues related to multi-attribute
decision-making under the PF data environment. A numerical example has been used to show that the suggested method is valid, useful, and effective.
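The defining difference between intuitionistic and Pythagorean membership pairs mentioned in this abstract can be checked in a couple of lines (an illustrative sketch, not code from the paper):

```python
# Pythagorean fuzzy sets relax the intuitionistic constraint
# mu + nu <= 1 to mu**2 + nu**2 <= 1, admitting more (mu, nu) pairs.
def is_valid_ifs(mu, nu):
    return mu + nu <= 1

def is_valid_pfs(mu, nu):
    return mu ** 2 + nu ** 2 <= 1

# (0.8, 0.5) is not a valid IFS membership pair but is a valid PFS pair:
print(is_valid_ifs(0.8, 0.5), is_valid_pfs(0.8, 0.5))  # False True
```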
Generalized-Hukuhara subgradient and its application in optimization problem with interval-valued functions
June 2022
In this article, the concepts of gH-subgradient and gH-subdifferential of interval-valued functions are illustrated. Several important characteristics of the gH-subdifferential of a convex
interval-valued function, e.g., closedness, boundedness, the chain rule, etc., are studied. Alongside, we prove that the gH-subdifferential of a gH-differentiable convex interval-valued function contains only
the gH-gradient. It is observed that the directional gH-derivative of a convex interval-valued function is the maximum of all the products between gH-subgradients and the direction. Importantly, we
prove that a convex interval-valued function is gH-Lipschitz continuous if it has gH-subgradients at each point in its domain. Furthermore, relations between efficient solutions of an optimization
problem with interval-valued function and its gH-subgradients are derived.
Jensen's inequalities for standard and generalized asymmetric Choquet integral
June 2022
Fuzzy Sets and Systems
In a recent paper by the authors, Jensen's inequality for Choquet integral was given, and a wrong assertion—“Jensen's inequality does not hold for asymmetric Choquet integral” was made. This paper
can be viewed as a continuation of the previous one, Jensen's inequality for asymmetric Choquet integral is proved, the error is corrected. As its generalization, Jensen's inequality for generalized
asymmetric Choquet integral is obtained.
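For readers unfamiliar with the discrete Choquet integral discussed in these abstracts, a minimal version can be sketched as follows (an illustrative implementation under the usual definition for nonnegative values, not code from the papers):

```python
# Discrete Choquet integral of nonnegative values (f(x_1), ..., f(x_n))
# with respect to a capacity mu defined on subsets of indices: sort the
# values, then accumulate increments weighted by the capacity of each
# upper level set.
def choquet(values, mu):
    order = sorted(range(len(values)), key=lambda i: values[i])
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        upper = frozenset(order[k:])       # indices with f >= values[i]
        total += (values[i] - prev) * mu(upper)
        prev = values[i]
    return total

# With an additive capacity mu(A) = |A| / n, the Choquet integral
# reduces (up to floating-point error) to the arithmetic mean:
result = choquet([1.0, 2.0, 3.0], lambda A: len(A) / 3)
assert abs(result - 2.0) < 1e-9
```

Non-additive capacities are where the integral departs from a weighted average, which is what the generalizations discussed above exploit.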
Citations (69)
... Due to the many applications of the Choquet integral for modeling non-deterministic problems, a generalization of the Choquet integral is recently presented in [21]. Study [22] generalizes
the generalized Choquet-type integral in terms of a double set-function Choquet integral for a real-valued function based on a set-function and fuzzy measure. ...
Double set-function Choquet integral with applications
• August 2024
Information Sciences
... The class of all measurable closed-valued functions on Ω is denoted by ℜ[Ω]. For more details dealing with set-valued functions, see [1,8] (see also [3][4][5]7,22]). Let ∈ ℜ[Ω]. ...
Choquet Type Integrals for Single-Valued Functions with Respect to Set-Functions and Set-Multifunctions
• February 2023
Information Sciences
... Motivated by [29], the theory of gH-subdifferential and gH-subgradient of IVFs were discussed in [30]. With this, numerous researchers [28,31,32,33,34,35] have contributed to the
gH-subdifferentiability of IVFs. ...
Generalized-Hukuhara Subdifferential Analysis and Its Application in Nonconvex Composite Interval Optimization Problems
• December 2022
Information Sciences
... where T is a t-norm on L. Since then, the partial order induced by fuzzy logic connective has led to an extensive research by many scholars (see Liu and Wang 2022;Liu 2023;Karaçal and Mesiar
2014;Kesicioglu 2020;Lu et al. 2018;Vemuri et al. 2023). Also it is worth noting that nullnorms and uninorms are a generalization of t-norms and t-conorms, correspondingly, the partial orders
induced by nullnorms and uninorms, respectively, have been introduced in Aşıcı (2017); Ertugrul et al. (2016). ...
Generation of Continuous T-norms through Latticial Operations
• September 2022
Fuzzy Sets and Systems
... Zarasiz [34] utilized bipolar fuzzy numbers to propose new AOs using the concept of AA operations. Senapati et al. [35] also broadened the concepts of t-Nm and t-CNm to propose PF-AA AOs. In
[36], Farid et al. thoroughly studied the optimizing filtration technology by expanding the AA operations q-rung Ortho pair FS. ...
Multiple attribute decision making based on Pythagorean fuzzy Aczel-Alsina average aggregation operators
• August 2022
Journal of Ambient Intelligence and Humanized Computing
... The ̅− for real functions introduced in [3], and investigated in [7], mark a new development in the field of Pseudo-Analysis. Based on the fundamental properties of these ̅− , [2], [9], [10],
[19], [20], [21], for the first time in this paper, we have studied and verified other properties for pseudo-linearity/nonlinearity of ̅− and generalization of the table of ̅− , [3] of transformed
functions, [2]. The eight exceptional ̅− cases are considered for some ̅− functions' pseudolinear and pseudo-nonlinear combinations with some conditions. ...
Jensen's inequalities for standard and generalized asymmetric Choquet integral
Fuzzy Sets and Systems
... According to Moore's algorithm, for a nonzero interval A, it is impossible to find any interval B such that A + B = 0. Due to the limitations of Moore's algorithm, Hukuhara [3] proposed
'Hukuhara difference' of intervals. Although this method satisfies A ⊖ H A = 0, but for the calculation of A ⊖ H B, the Hukuhara difference can only be derived if the length of A is greater than
B (for more details refer to reference [2] and [27]). Markov [4] introduced a new interval subtraction method in order to solve this problem, i.e., 'nonstandard subtraction', which was
subsequently named by Stefanini [5] as 'generalized Hukuhara difference (gH-difference)'. ...
Generalized-Hukuhara subgradient and its application in optimization problem with interval-valued functions
... The ̅− for real functions introduced in [3], and investigated in [7], mark a new development in the field of Pseudo-Analysis. Based on the fundamental properties of these ̅− , [2], [9], [10],
[19], [20], [21], for the first time in this paper, we have studied and verified other properties for pseudo-linearity/nonlinearity of ̅− and generalization of the table of ̅− , [3] of transformed
functions, [2]. The eight exceptional ̅− cases are considered for some ̅− functions' pseudolinear and pseudo-nonlinear combinations with some conditions. ...
Jensen's inequality for Choquet integral revisited and a note on Jensen's inequality for generalized Choquet integral
• September 2021
Fuzzy Sets and Systems
... For example, reference [29] summarises the principles and ideas of XAI, whereas [30] proposes a framework for developing transparent and intelligible models. A full examination of several XAI
approaches and algorithms is provided in [31], and the use of fault-tolerant solutions in XAI systems is covered in [32]. Furthermore, we found that the need for XAI framework development is
growing. ...
Classification by ordinal sums of conjunctive and disjunctive functions for explainable AI and interpretable machine learning solutions
Knowledge-Based Systems
... On the other hands, in many problems, it is not possible to provide the Laplace transformation. Due to the many applications of the Choquet integral for modeling non-deterministic problems, a
generalization of the Choquet integral is recently presented in [21]. Study [22] generalizes the generalized Choquet-type integral in terms of a double set-function Choquet integral for a
real-valued function based on a set-function and fuzzy measure. ...
Pseudo-integral and generalized Choquet integral
• December 2020
Fuzzy Sets and Systems
homework Directory
allassignmenthelp.com: Allassignmenthelp | Uk 1St Class Assignment Help | Low Prices
We are the best UK Assignment Help service provider. Our experts provide authentic assignment solutions online at affordable prices for subjects such as marketing, law, and finance.
Keyword: assignment help , homework help
biologyfunfacts.weebly.com: Biology Fun Facts
An educational site aiming to help students better understand concepts of biology ranging from biochemistry to evolution.
Keyword: animals , biology , cells , cellular respiration , facts , fun , help , homework , learn , plants , science , understand , water
math.about.com: Math Tutorials, Resources, Help, Resources And Math Worksheets
Math tutorials, lessons, tips, instructions, math worksheets, math formulas, multiplication.
Keyword: algebra , data , equations , geometry , help , homework , lessons , linear , management , mathematics , measurement , probability , solving , word
familyinternet.about.com: Family Computing
Find the best hardware, software and websites for use with your family computer. Learn about family computing safety while online, ergonomics and other ways to keep your kids safe and healthy in a
high tech world.
Keyword: computer , computing , educational , family , family , filters , games , help , homework , internet , kids , online , safety , software , teens
mathforum.org: The Math Forum @ Drexel University
The Math Forum is the comprehensive resource for math education on the Internet. Some features include a K-12 math expert help service, an extensive database of math sites, online resources for
teaching and learning math, plus much more.
Keyword: algebra , area , calculus , curriculum , education , geometry , help , homework , math , mathematics , perimeter , trig , trigonometry , volume
purplemath.com: Purplemath
Purplemath contains practical algebra lessons demonstrating useful techniques and pointing out common errors. Lessons are written with the struggling student in mind, and stress practicalities over technicalities.
Keyword: algebra , equation , function , graph , guideline , help , homework , lesson , linear , polynomial , problem , quadratic
ptable.com: Dynamic Periodic Table
Interactive Web 2.0 periodic table with dynamic layouts showing names, electrons, oxidation, trend visualization, orbitals, isotopes, search. Full descriptions.
Keyword: chemistry , dynamic , elements , homework , interactive , name , pdf , periodic table , printable
chemicalaid.com: Elements, Chemicals And Chemistry - Chemistry Homework Tutoring
Chemistry, Elements and More Chemistry. Information on the Elements and Chemistry Tools.
Keyword: be , chemical , chemistry , education , elements , h , he , helium , help , homework , hydrogen , li , lithium , online , school , science
mathhelpforum.com: Math Help Forum
Math Help Forum is a free math forum for maths help and answers to mathematics questions of all levels.
Keyword: math forum , math help , math help forum , math homework help , maths forum , maths help
encyclopedia.com: Encyclopedia - Online Dictionary | Encyclopedia.Com: Get Facts, Articles, Pictur
Encyclopedia.com – Online dictionary and encyclopedia with pictures, facts, and videos. Get information and homework help with millions of articles in our FREE, online library.
Keyword: dictionary , encyclopedia , encyclopedia , facts , get , help , homework , information , online , pictures , videos
homeworkhelp.com: Homeworkhelp.Com - The Best Place To Find Online Tutors For Live Homework Help!
Live Online Tutoring. Homeworkhelp.com offers live, online tutoring with personalized programs to help your child. Join us now!
Keyword: e-learning , live homework help , live tutor , live tutoring , online homework help , online tutor , online tutoring , tutor
|
{"url":"http://hotvsnot.com/www/homework/","timestamp":"2024-11-02T10:54:29Z","content_type":"application/xhtml+xml","content_length":"10941","record_id":"<urn:uuid:3b7ef0bb-8f40-465e-8fad-8f4b2ad12fab>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00885.warc.gz"}
|
Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models
In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood
(QML) based estimators and tests. General hypothesis testing is considered, where testing for linearity is of particular interest as parameters of non-linear components vanish under the null. To
solve the latter type of testing, we use the so-called sup tests, which here requires development of new (uniform) weak convergence results. These results are potentially useful in general for
analysis of non-stationary non-linear time series models. Thus the paper provides a full asymptotic theory for estimators as well as standard and non-standard test statistics. The derived asymptotic
results prove to be new compared to results found elsewhere in the literature, owing to the impact of the estimated cointegration relations. With respect to testing, this makes implementation of the tests involved, and bootstrap versions of the tests are proposed in order to facilitate their usage. The asymptotic results regarding the QML estimators extend results in Kristensen and Rahbek (2010, Journal of Econometrics), where symmetric non-linear error correction is considered. A simulation study shows that the finite sample properties of the bootstrapped tests are satisfactory, with good
size and power properties for reasonable sample sizes.
Original language: English
Publisher: Department of Economics, University of Copenhagen
Number of pages: 26
Status: Published - 2010
Bibliographic note
JEL Classification: C30, C32
|
{"url":"https://researchprofiles.ku.dk/da/publications/testing-and-inference-in-nonlinear-cointegrating-vector-error-cor","timestamp":"2024-11-14T04:42:27Z","content_type":"text/html","content_length":"44607","record_id":"<urn:uuid:fd0d0580-cd68-43ef-8198-51eb8182b2da>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00171.warc.gz"}
|
Coq devs & plugin devs
I've observed that many users, in particular beginners, often use the naked intuition tactic mindlessly (i.e., without a specified leaf tactic). This invocation, as currently defined, will result in
intuition (auto with *), which can have terrible performance with large hint databases. For this reason, Stdpp redefines naked intuition as intuition auto.
Has it ever been discussed to change the official definition of naked intuition? Was there any specific reason against it besides breaking code?
there was https://github.com/coq/coq/pull/11760 for firstorder
see also https://github.com/coq/coq/issues/4949 https://github.com/coq/coq/issues/7725
and https://github.com/coq/coq/pull/8175
thanks, I suspected this had a long history. I remember the firstorder fix, and it seemed reasonable.
@Gaëtan Gilbert I can't deduce from commenter context why https://github.com/coq/coq/pull/8175 wouldn't be a "proper fix". If nearly all Stdpp users can live with it (intuition auto), shouldn't it be the default?
(for example, in overlays just write out intuition (auto with *). instead of intuition. where you actually need something more than intuition auto)
I think PMP just didn't have much time to put on it and therefore bailed
ah OK, so it may be a manpower issue in the end.
for the record, I'm investigating this since we will likely migrate away from intuition/tauto/etc. to itauto in one of our projects
ah apparently there is a recently discovered reason to not use intuition: https://github.com/coq/coq/issues/15824 (silently assumes axioms)
the fact it uses axioms is probably because it's using auto with *
ah, I wasn't even aware that intuition actually also does (some) first-order reasoning...
Not intuition per se, but the leaf tactic
@Paolo Giarrusso from my understanding of Frédéric Besson's comments here, intuition itself can actually do some first-order stuff. Here is the example we discussed:
Goal forall A (s1 s2 s : list A),
(forall x : A, In x s1 \/ In x s2 -> In x s) <->
(forall x : A, In x s1 -> In x s) /\ (forall x : A, In x s2 -> In x s).
Fail itauto auto.
intuition auto.
itauto has a pure SAT core solver and can't instantiate variables to do the discharge, but somehow intuition can
Paolo Giarrusso said:
Not intuition per se, but the leaf tactic
This is not my understanding. The manual says that tauto is equivalent to intuition fail. I read this such that intuition is a first order solver and you can give a tactic to solve whatever goals
cannot be solved first order.
I'll believe Frédéric Besson saying that intros is first-order reasoning on faith, but the manual (which might need amending?) clearly states that tauto is a solver for _propositional_ logic (https:/
This tactic implements a decision procedure for intuitionistic propositional calculus based on the contraction-free sequent calculi LJT* of Roy Dyckhoff [Dyc92]. Note that tauto succeeds on any
instance of an intuitionistic tautological proposition.
@Michael Soegtrop but you're right except for s/first-order/propositional.
maybe intuition is doing some first-order preprocessing? I'm trying to move away from it, so I'm not tempted to read the code.
intuition idtac. shows the intuition output, and info_auto shows how auto finishes... which fits with what Frédéric Besson claims
both itauto and intuition do "first-order preprocessing" such as intros, but intuition does it also in the middle of the proof
in my book, that's a strike against intuition predictability
hmm, Frédéric also mentions one can use itautor for recursive invocations (which would have multiple first-order processings)
is this perhaps the difference? intuition is recursive by default, which leads to FOL-solver-like behavior on some goals?
for posterity, I think the following is a good demonstration of the differences between itauto and intuition (assuming coq-itauto is installed):
Require Import Cdcl.Itauto.
Require Import List.
Goal forall A (s1 s2 s : list A),
(forall x : A, In x s1 \/ In x s2 -> In x s) <->
(forall x : A, In x s1 -> In x s) /\ (forall x : A, In x s2 -> In x s).
Fail itauto info_auto.
split; itauto info_auto.
itautor info_auto.
intuition idtac.
intuition info_auto.
Last updated: Oct 13 2024 at 01:02 UTC
|
{"url":"https://coq.gitlab.io/zulip-archive/stream/237656-Coq-devs-.26-plugin-devs/topic/intuition.20and.20auto-star.html","timestamp":"2024-11-06T07:29:12Z","content_type":"text/html","content_length":"23058","record_id":"<urn:uuid:66ab1d03-4970-41f0-832f-0dc44d14d4cc>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00058.warc.gz"}
|
Physicist, Stanford University
Conversation With a Slow Student
Student: Hi Prof. I've got a problem. I decided to do a little probability experiment—you know, coin flipping—and check some of the stuff you taught us. But it didn't work.
Professor: Well I'm glad to hear that you're interested. What did you do?
Student: I flipped this coin 1,000 times. You remember, you taught us that the probability to flip heads is one half. I figured that meant that if I flip 1,000 times I ought to get 500 heads. But it
didn't work. I got 513. What's wrong?
Professor: Yeah, but you forgot about the margin of error. If you flip a certain number of times then the margin of error is about the square root of the number of flips. For 1,000 flips the margin
of error is about 30. So you were within the margin of error.
Student: Ah, now I get it. Every time I flip 1,000 times I will always get something between 470 and 530 heads. Every single time! Wow, now that's a fact I can count on.
Professor: No, no! What it means is that you will probably get between 470 and 530.
Student: You mean I could get 200 heads? Or 850 heads? Or even all heads?
Professor: Probably not.
Student: Maybe the problem is that I didn't make enough flips. Should I go home and try it 1,000,000 times? Will it work better?
Professor: Probably.
Student: Aw come on Prof. Tell me something I can trust. You keep telling me what probably means by giving me more probablies. Tell me what probability means without using the word probably.
Professor: Hmmm. Well how about this: It means I would be surprised if the answer were outside the margin of error.
Student: My god! You mean all that stuff you taught us about statistical mechanics and quantum mechanics and mathematical probability: all it means is that you'd personally be surprised if it didn't work?
Professor: Well, uh...
If I were to flip a coin a million times I'd be damn sure I wasn't going to get all heads. I'm not a betting man but I'd be so sure that I'd bet my life or my soul. I'd even go the whole way and bet
a year's salary. I'm absolutely certain the laws of large numbers—probability theory—will work and protect me. All of science is based on it. But, I can't prove it and I don't really know why it
works. That may be the reason why Einstein said, "God doesn't play dice." It probably is.
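The professor's square-root rule of thumb is easy to check empirically. A minimal simulation sketch (the seed and trial count are arbitrary choices, not from the essay); note that √1000 ≈ 31.6 is roughly two standard deviations of the head count (σ = √(N·p·(1−p)) ≈ 15.8), so about 95% of runs should land inside that margin:

```python
import random

def count_heads(n_flips, rng):
    # one experiment: flip a fair coin n_flips times, count heads
    return sum(rng.random() < 0.5 for _ in range(n_flips))

rng = random.Random(42)          # fixed seed for reproducibility
n_flips, n_trials = 1000, 200
margin = round(n_flips ** 0.5)   # the professor's "square root" margin, ~32

heads = [count_heads(n_flips, rng) for _ in range(n_trials)]
within = sum(abs(h - n_flips // 2) <= margin for h in heads)
print(f"{within} of {n_trials} runs fell within 500 ± {margin}")
```

Most runs land inside the margin, but — as the professor concedes — nothing rules out an excursion; the simulation only ever says "probably".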
|
{"url":"https://www.edge.org/response-detail/11645","timestamp":"2024-11-05T20:02:05Z","content_type":"application/xhtml+xml","content_length":"49974","record_id":"<urn:uuid:32361278-4da3-4e15-b1d3-2e31866721af>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00164.warc.gz"}
|
(3) Find the pedal equation of the following polar curves
γ² = a... | Filo
Question asked by Filo student
(3) Find the pedal equation of the following polar curves
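The curve itself is truncated above, but the general method is standard. For a polar curve r = f(θ), the pedal equation relates r to the perpendicular p from the pole onto the tangent. A worked instance for the lemniscate r² = a² cos 2θ (a guess at the intended curve, chosen only for illustration):

```latex
% Perpendicular from the pole onto the tangent:
\frac{1}{p^{2}} \;=\; \frac{1}{r^{2}} + \frac{1}{r^{4}}\left(\frac{dr}{d\theta}\right)^{2}

% For the (assumed) lemniscate r^{2} = a^{2}\cos 2\theta, differentiate:
2r\,\frac{dr}{d\theta} = -2a^{2}\sin 2\theta
\quad\Rightarrow\quad
\left(\frac{dr}{d\theta}\right)^{2} = \frac{a^{4}\sin^{2}2\theta}{r^{2}}

% Substitute, then use r^{4} = a^{4}\cos^{2}2\theta
% so that r^{4} + a^{4}\sin^{2}2\theta = a^{4}:
\frac{1}{p^{2}} = \frac{r^{4} + a^{4}\sin^{2}2\theta}{r^{6}} = \frac{a^{4}}{r^{6}}
\quad\Longrightarrow\quad
p\,a^{2} = r^{3}
```

Whatever the actual curve, the same elimination of θ between the curve and the 1/p² identity produces the pedal equation.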
Question Text: (3) Find the pedal equation of the following polar curves
Updated On Feb 26, 2023
Topic Trigonometry
Subject Mathematics
Class Class 12
Answer Type Video solution: 1
Upvotes 96
Avg. Video Duration 2 min
|
{"url":"https://askfilo.com/user-question-answers-mathematics/3-find-the-pedal-equation-of-the-foollowing-polar-gurves-34343238333937","timestamp":"2024-11-10T06:00:39Z","content_type":"text/html","content_length":"343129","record_id":"<urn:uuid:6bfa3b21-ba4f-406a-9285-e0e97f7f52ee>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00120.warc.gz"}
|
2011 – Republic of Mathematics blog
Posted by: Gary Ernest Davis on: January 1, 2011
On December 31, 2010 @mathematicsprof posted two interesting tweets:
@mathematicsprof FINALLY, a prime number year 2011 …. first one since 2003.
@mathematicsprof 2011 is also the sum of 11 CONSECUTIVE prime numbers: 2011=157+163+167+173+179+181+191+193+197+199+211
Two prime numbers are “consecutive” if they follow one upon the other, in the collection of prime numbers. So, for […]
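Both claims in the tweets are easy to verify mechanically; a small sketch (function and variable names are mine):

```python
def is_prime(n):
    # trial division; perfectly adequate for numbers this small
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

primes = [p for p in range(2, 2012) if is_prime(p)]
# the 11 consecutive primes starting at 157
run = primes[primes.index(157):primes.index(157) + 11]

print(is_prime(2011))        # True: 2011 is prime
print(len(run), sum(run))    # 11 2011
```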
|
{"url":"http://www.blog.republicofmath.com/category/2011/","timestamp":"2024-11-03T12:30:18Z","content_type":"application/xhtml+xml","content_length":"44032","record_id":"<urn:uuid:45e7e281-e79e-4030-816a-a7c0a68f3c0f>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00189.warc.gz"}
|
32.67 meters per square second to decimeters per square second
32.67 Meters per square second = 326.7 Decimeters per square second
Acceleration Converter - Meters per square second to decimeters per square second - 32.67 meters per square second to decimeters per square second
This conversion of 32.67 meters per square second to decimeters per square second has been calculated by multiplying 32.67 meters per square second by 10, and the result is 326.7 decimeters per square second.
|
{"url":"https://unitconverter.io/meters-per-square-second/decimeters-per-square-second/32.67","timestamp":"2024-11-12T00:46:55Z","content_type":"text/html","content_length":"27047","record_id":"<urn:uuid:c624a7b8-ed89-4594-b108-fda05207c7eb>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00496.warc.gz"}
|
Revision #1 to TR20-084 | 29th June 2021 18:02
Rate Amplification and Query-Efficient Distance Amplification for linear LCC and LDC
The main contribution of this work is a rate amplification procedure for LCC. Our procedure converts any $q$-query linear LCC, having rate $\rho$ and, say, constant distance to an asymptotically good
LCC with $q^{poly(1/\rho)}$ queries.
Our second contribution is a distance amplification procedure for LDC that converts any linear LDC with distance $\delta$ and, say, constant rate to an asymptotically good LDC. The query complexity
only suffers a multiplicative overhead that is roughly equal to the query complexity of a length $1/\delta$ asymptotically good LDC. This improves upon the $poly(1/\delta)$ overhead obtained by the AEL distance amplification procedure due to Alon, Edmonds and Luby (FOCS 1995).
Our work establishes that the construction of asymptotically good LDC and LCC is reduced, with a minor overhead in query complexity, to the problem of constructing a vanishing rate linear LCC and a
(rapidly) vanishing distance linear LDC, respectively.
Changes to previous version:
In the original version of this paper a rate amplification procedure is constructed for a certain natural subset of linear LCC. In this revision, we further show that this subset in fact consists of all linear LCC.
TR20-084 | 31st May 2020 11:28
Rate Amplification and Query-Efficient Distance Amplification for Locally Decodable Codes
In a seminal work, Kopparty et al. (J. ACM 2017) constructed asymptotically good $n$-bit locally decodable codes (LDC) with $2^{\widetilde{O}(\sqrt{\log{n}})}$ queries. A key ingredient in their
construction is a distance amplification procedure by Alon et al. (FOCS 1995) which amplifies the distance $\delta$ of a code to a constant at a $\mathrm{poly}(1/\delta)$ multiplicative cost in query
complexity. Given the pivotal role of the AEL distance amplification procedure in the state-of-the-art constructions of LDC as well as LCC and LTC, one is prompted to ask whether the $\mathrm{poly}(1/\delta)$ factor in query complexity can be reduced.
Our first contribution is a significantly improved distance amplification procedure for LDC. The cost is reduced from $\mathrm{poly}(1/\delta)$ to, roughly, the query complexity of a length $1/\
delta$ asymptotically good LDC. We derive several applications, one of which allows us to convert a $q$-query LDC with extremely poor distance $\delta = n^{-(1-o(1))}$ to a constant distance LDC with
$q^{\mathrm{poly}(\log\log{n})}$ queries. As another example, amplifying distance $\delta = 2^{-(\log{n})^\alpha}$, for any constant $\alpha < 1$, will require $q^{O(\log\log\log{n})}$ queries using
our procedure.
Motivated by the fruitfulness of distance amplification, we investigate the natural question of rate amplification. Our second contribution is identifying a rich and natural class of LDC and devising two (incomparable) rate amplification procedures for it. These, however, deteriorate the distance, at which point a distance amplification procedure is invoked. Combined, the procedures convert any
$q$-query LDC in our class, having rate $\rho$ and, say, constant distance, to an asymptotically good LDC with $q^{\mathrm{poly}(1/\rho)}$ queries.
|
{"url":"https://eccc.weizmann.ac.il/report/2020/084/","timestamp":"2024-11-14T21:32:55Z","content_type":"application/xhtml+xml","content_length":"24310","record_id":"<urn:uuid:d1e606b4-436e-42f4-a9d3-7e5c18344fa8>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00630.warc.gz"}
|
Properties of Relational Decompositions
We now turn our attention to the process of decomposition that we used throughout Chapter 15 to decompose relations in order to get rid of unwanted dependencies and achieve higher normal forms. In Section 16.2.1 we give examples to show that looking at an individual relation to test whether it is in a higher normal form does not, on its own, guarantee a good design; rather, a set of relations that together form the relational database schema must possess certain additional properties to ensure a good design. In Sections 16.2.2 and 16.2.3 we discuss two of these properties: the dependency preservation property and the nonadditive (or lossless) join property. Section 16.2.4 discusses binary decompositions and Section 16.2.5 discusses successive nonadditive join decompositions.
1. Relation Decomposition and Insufficiency of Normal Forms
The relational database design algorithms that we present in Section 16.3 start from a single universal relation schema R = {A[1], A[2], ..., A[n]} that includes all the attributes of the database.
We implicitly make the universal relation assumption, which states that every attribute name is unique. The set F of functional dependencies that should hold on the attributes of R is specified by
the database designers and is made available to the design algorithms. Using the functional dependencies, the algorithms decompose the universal relation schema R into a set of relation schemas D = {
R[1], R[2], ..., R[m]} that will become the relational database schema; D is called a decomposition of R.
We must make sure that each attribute in R will appear in at least one relation schema R[i] in the decomposition so that no attributes are lost; formally, we have

R[1] ∪ R[2] ∪ ... ∪ R[m] = R

This is called the attribute preservation condition of a decomposition.
Another goal is to have each individual relation R[i] in the decomposition D be in BCNF or 3NF. However, this condition is not sufficient to guarantee a good database design on its own. We must consider the decomposition of the universal relation as a whole, in addition to looking at the individual relations. To illustrate this point, consider the EMP_LOCS(Ename, Plocation) relation in Figure 15.5, which is in 3NF and also in BCNF. In fact, any relation schema with only two attributes is automatically in BCNF.^5 Although EMP_LOCS is in BCNF, it still gives rise to spurious tuples when joined with EMP_PROJ (Ssn, Pnumber, Hours, Pname, Plocation), which is not in BCNF (see the result of the natural join in Figure 15.6). Hence, EMP_LOCS represents a particularly bad relation schema because of its convoluted semantics, by which Plocation gives the location of one of the projects on which an employee works. Joining EMP_LOCS with PROJECT(Pname, Pnumber, Plocation, Dnum) in Figure 15.2—which is in BCNF—using Plocation as a joining attribute also gives rise to spurious tuples. This underscores the need for other criteria that, together with the conditions of 3NF or BCNF, prevent such bad designs. In the next three subsections we discuss such additional conditions that should hold on a decomposition D as a whole.
2. Dependency Preservation Property of a Decomposition
It would be useful if each functional dependency X→Y specified in F either appeared directly in one of the relation schemas R[i] in the decomposition D or could be inferred from the dependencies that appear in some R[i]. Informally, this is the dependency preservation condition. We want to preserve the dependencies because each dependency in F represents a constraint on the database. If one of the dependencies is not represented in some individual relation R[i] of the decomposition, we cannot enforce this constraint by dealing with an individual relation. We may have to join multiple relations so as to include all attributes involved in that dependency.
It is not necessary that the exact dependencies specified in F appear themselves in individual relations of the decomposition D. It is sufficient that the union of the dependencies that hold on the
individual relations in D be equivalent to F. We now define these concepts more formally.
Definition. Given a set of dependencies F on R, the projection of F on R[i], denoted by π[Ri](F) where R[i] is a subset of R, is the set of dependencies X → Y in F^+ such that the attributes in X ∪ Y
are all contained in R[i]. Hence, the projection of F on each relation schema R[i] in the decomposition D is the set of functional dependencies in F^+, the closure of F, such that all their left- and
right-hand-side attributes are in R[i]. We say that a decomposition D = {R[1], R[2], ..., R[m]} of R is dependency-preserving with respect to F if the union of the
projections of F on each R[i] in D is equivalent to F; that is, ((π[R1](F)) ∪ ... ∪ (π[Rm](F)))^+ = F^+.
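The definition can be tested without materializing F^+. Below is a sketch of the standard membership-style check (helper names are mine; the relation and FDs come from the TEACH example discussed in this section):

```python
def closure(X, fds):
    # attribute closure X+ under functional dependencies given as (lhs, rhs)
    result = set(X)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)
                changed = True
    return result

def fd_preserved(X, Y, decomposition, fds):
    # True iff X -> Y is implied by the union of the projections of fds
    # onto the relations of the decomposition (no need to compute F+).
    Z = set(X)
    changed = True
    while changed:
        changed = False
        for Ri in decomposition:
            gained = closure(Z & set(Ri), fds) & set(Ri)
            if not gained <= Z:
                Z |= gained
                changed = True
    return set(Y) <= Z

# TEACH(Student, Course, Instructor) with FDs
# {Student, Course} -> Instructor and Instructor -> Course,
# decomposed into {Instructor, Course} and {Instructor, Student}:
fds = [({"Student", "Course"}, {"Instructor"}),
       ({"Instructor"}, {"Course"})]
D = [{"Instructor", "Course"}, {"Instructor", "Student"}]
print(fd_preserved({"Instructor"}, {"Course"}, D, fds))             # preserved
print(fd_preserved({"Student", "Course"}, {"Instructor"}, D, fds))  # lost
```

The second call returns False: {Student, Course} → Instructor cannot be recovered from the projections, which is exactly the loss the text describes for the TEACH decomposition.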
If a decomposition is not dependency-preserving, some dependency is lost in the decomposition. To check that a lost dependency holds, we must take the JOIN of two or more relations in the decomposition to get a relation that includes all left- and right-hand-side attributes of the lost dependency, and then check that the dependency holds on the result of the JOIN—an option that is not practical.
An example of a decomposition that does not preserve dependencies is shown in Figure 15.13(a), in which the functional dependency FD2 is lost when LOTS1A is decomposed into {LOTS1AX, LOTS1AY}. The decompositions in Figure 15.12, however, are dependency-preserving. Similarly, for the example in Figure 15.14, no matter what decomposition is chosen for the relation TEACH(Student, Course, Instructor) from the three provided in the text, one or both of the dependencies originally present are bound to be lost. We state a claim below related to this property without providing any proof.
Claim 1. It is always possible to find a dependency-preserving decomposition D with respect to F such that each relation R[i] in D is in 3NF.
In Section 16.3.1, we describe Algorithm 16.4, which creates a dependency-preserving decomposition D = {R[1], R[2], ..., R[m]} of a universal relation R based on a set of functional dependencies F,
such that each R[i] in D is in 3NF.
3. Nonadditive (Lossless) Join Property of a Decomposition
Another property that a decomposition D should possess is the nonadditive join property, which ensures that no spurious tuples are generated when a NATURAL JOIN operation is applied to the relations
resulting from the decomposition. We already illustrated this problem in Section 15.1.4 with the example in Figures 15.5 and 15.6. Because this is a property of a decomposition of relation schemas,
the condition of no spurious tuples should hold on every legal relation state—that is, every relation state that satisfies the functional dependencies in F. Hence, the lossless join property is
always defined with respect to a specific set F of dependencies.
Definition. Formally, a decomposition D = {R[1], R[2], ..., R[m]} of R has the lossless (nonadditive) join property with respect to the set of dependencies F on R if, for every relation state r of R
that satisfies F, the following holds, where [*] is the NATURAL JOIN of all the relations in D: [*](π[R][1](r), ..., π[Rm](r)) = r.
The word loss in lossless refers to loss of information, not to loss of tuples. If a decomposition does not have the lossless join property, we may get additional spurious tuples after the PROJECT (π) and NATURAL JOIN (*) operations are applied; these additional tuples represent erroneous or invalid information. We prefer the term nonadditive join because it describes the situation more accurately. Although the term lossless join has been popular in the literature, we will henceforth use the term nonadditive join, which is self-explanatory and unambiguous. The nonadditive join property ensures that no spurious tuples result after the application of PROJECT and JOIN operations. We may, however, sometimes use the term lossy design to refer to a design that represents a loss of information (see example at the end of Algorithm 16.4).
The decomposition of EMP_PROJ(Ssn, Pnumber, Hours, Ename, Pname, Plocation) in Figure 15.3 into EMP_LOCS(Ename, Plocation) and EMP_PROJ1(Ssn, Pnumber, Hours,
Pname, Plocation) in Figure 15.5 obviously does not have the nonadditive join property, as illustrated by Figure 15.6. We will use a general procedure for testing whether any decomposition D of a relation into n relations is nonadditive with respect to a set of given functional dependencies F in the relation; it is presented as Algorithm 16.3 below. It is possible to apply a simpler test to check if the decomposition is nonadditive for binary decompositions; that test is described in Section 16.2.4.
Algorithm 16.3. Testing for Nonadditive Join Property
Input: A universal relation R, a decomposition D = {R[1], R[2], ..., R[m]} of R, and a set F of functional dependencies.
Note: Explanatory comments are given at the end of some of the steps. They follow the format: (* comment *).
Create an initial matrix S with one row i for each relation R[i] in D, and one column j for each attribute A[j] in R.
Set S(i, j):= b[ij] for all matrix entries. (* each b[ij] is a distinct symbol associated with indices (i, j) *).
For each row i representing relation schema R[i] {for each column j representing attribute A[j]
{if (relation R[i] includes attribute A[j]) then set S(i, j):= a[j] ;};}; (* each a[j] is a distinct symbol associated with index (j) *).
Repeat the following loop until a complete loop execution results in no
changes to S
{for each functional dependency X → Y in F
{for all rows in S that have the same symbols in the columns corresponding to attributes in X
{make the symbols in each column that correspond to an attribute in Y be the same in all these rows as follows: If any of the rows has an a symbol for the column, set the other rows to that same a symbol in the column. If no a symbol exists for the attribute in any of the rows, choose one of the b symbols that appears in one of the rows for the attribute and set the other rows to that same b symbol in the column ;} ; } ;};
If a row is made up entirely of a symbols, then the decomposition has the nonadditive join property; otherwise, it does not.
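Algorithm 16.3 translates almost line for line into code. The following is a sketch (the matrix-of-symbols representation and function name are my own, not from the text):

```python
def nonadditive_join_test(attrs, decomposition, fds):
    # Algorithm 16.3 (chase test). attrs: list of all attributes of R;
    # decomposition: list of attribute sets R1..Rm; fds: list of (X, Y) pairs.
    # Steps 1-3: build matrix S with an 'a' symbol where Ri has the attribute
    # and a row/column-specific 'b' symbol elsewhere.
    S = [{A: (f"a_{A}" if A in Ri else f"b_{i}_{A}") for A in attrs}
         for i, Ri in enumerate(decomposition)]
    changed = True
    while changed:                      # step 4: chase until no change
        changed = False
        for X, Y in fds:
            groups = {}                 # rows agreeing on all columns of X
            for row in S:
                groups.setdefault(tuple(row[A] for A in X), []).append(row)
            for rows in groups.values():
                for A in Y:
                    symbols = {row[A] for row in rows}
                    if len(symbols) <= 1:
                        continue
                    # prefer an 'a' symbol, otherwise pick one 'b' symbol
                    target = next((s for s in sorted(symbols)
                                   if s.startswith("a_")), sorted(symbols)[0])
                    for row in rows:
                        if row[A] != target:
                            row[A] = target
                            changed = True
    # step 5: nonadditive iff some row became all 'a' symbols
    return any(all(row[A].startswith("a_") for A in attrs) for row in S)
```

Run on the two decompositions of EMP_PROJ from Figure 16.1, it reports False for {EMP_LOCS, EMP_PROJ1} and True for {EMP, PROJECT, WORKS_ON}, matching cases (a) and (c) of the figure.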
Given a relation R that is decomposed into a number of relations R[1], R[2], ..., R[m], Algorithm 16.3 begins by creating the matrix S, which we consider to be some relation state r of R. Row i in S represents a tuple t[i] (corresponding to relation R[i]) that has a symbols in the columns that correspond to the attributes of R[i] and b symbols in the remaining columns. The algorithm then transforms the rows of this matrix (during the loop in step 4) so that they represent tuples that satisfy all the functional dependencies in F. At the end of step 4, any two rows in S—which represent two tuples in r—that agree in their values for the left-hand-side attributes X of a functional dependency X → Y in F will also agree in their values for the right-hand-side attributes Y. It can be shown that after applying the loop of step 4, if any row in S ends up with all a symbols, then the decomposition D has the nonadditive join property with respect to F.
If, on the other hand, no row ends up being all a symbols, D does not satisfy the lossless join property. In this case, the relation state r represented by S at the end of the algorithm will be an example of a relation state r of R that satisfies the dependencies in F but does not satisfy the nonadditive join condition. Thus, this relation serves as a counterexample that proves that D does not have the nonadditive join property with respect to F. Note that the a and b symbols have no special meaning at the end of the algorithm.
Figure 16.1(a) shows how we apply Algorithm 16.3 to the decomposition of the EMP_PROJ relation schema from Figure 15.3(b) into the two relation schemas EMP_PROJ1 and EMP_LOCS in Figure 15.5(a). The loop in step 4 of the algorithm cannot change any b symbols to a symbols; hence, the resulting matrix S does not have a row with all a symbols, and so the decomposition does not have the nonadditive join property.
Figure 16.1(b) shows another decomposition of EMP_PROJ (into EMP, PROJECT, and WORKS_ON) that does have the nonadditive join property, and Figure 16.1(c) shows how we apply the algorithm to that
decomposition. Once a row consists only of a symbols, we conclude that the decomposition has the nonadditive join property, and we can stop applying the functional dependencies (step 4 in the
algorithm) to the matrix S.
Figure 16.1
Nonadditive join test for n-ary decompositions. (a) Case 1: Decomposition of EMP_PROJ into EMP_PROJ1 and EMP_LOCS fails test. (b) A decomposition of EMP_PROJ that has the lossless join property. (c)
Case 2: Decomposition of EMP_PROJ into EMP, PROJECT, and WORKS_ON satisfies test.
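Algorithm 16.3 can be sketched in a few lines of Python (an illustrative sketch, not the textbook's code; the matrix representation and function name are our own choices):

```python
from collections import defaultdict

def nonadditive_join_test(attributes, decomposition, fds):
    """Sketch of Algorithm 16.3: True iff the decomposition has the
    nonadditive (lossless) join property with respect to fds."""
    # Steps 1-3: build matrix S with one row per relation R_i. A shared
    # 'a' symbol marks attributes of R_i; 'b' symbols are unique per cell.
    S = []
    for i, Ri in enumerate(decomposition):
        S.append({A: ('a', j) if A in Ri else ('b', i, j)
                  for j, A in enumerate(attributes)})
    # Step 4: repeatedly apply each FD X -> Y until nothing changes.
    changed = True
    while changed:
        changed = False
        for X, Y in fds:
            # Group rows that agree on all left-hand-side attributes X
            groups = defaultdict(list)
            for row in S:
                groups[tuple(row[A] for A in sorted(X))].append(row)
            for rows in groups.values():
                for A in Y:
                    symbols = {row[A] for row in rows}
                    if len(symbols) > 1:
                        # Make the rows agree on A, preferring an 'a' symbol
                        best = min(symbols)  # ('a', ...) sorts before ('b', ...)
                        for row in rows:
                            if row[A] != best:
                                row[A] = best
                                changed = True
    # Lossless iff some row ended up with all 'a' symbols.
    return any(all(sym[0] == 'a' for sym in row.values()) for row in S)
```

For example, with the FDs Ssn → Ename, Pnumber → {Pname, Plocation}, and {Ssn, Pnumber} → Hours, the decomposition of EMP_PROJ into EMP, PROJECT, and WORKS_ON from Figure 16.1(b) returns True, while the decomposition into EMP_PROJ1 and EMP_LOCS from Figure 16.1(a) returns False.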
4. Testing Binary Decompositions for the Nonadditive Join Property
Algorithm 16.3 allows us to test whether a particular decomposition D into n relations obeys the nonadditive join property with respect to a set of functional dependencies F. There is a special case
of a decomposition called a binary decomposition—the decomposition of a relation R into two relations. The following test is easier to apply than Algorithm 16.3 but, while very handy to use, it is
limited to binary decompositions only.
Property NJB (Nonadditive Join Test for Binary Decompositions). A decomposition D = {R[1], R[2]} of R has the lossless (nonadditive) join property with respect to a set of functional dependencies F
on R if and only if either
The FD ((R[1] ∩ R[2]) → (R[1] – R[2])) is in F^+, or
The FD ((R[1] ∩ R[2]) → (R[2] – R[1])) is in F^+
You should verify that this property holds with respect to our informal successive normalization examples in Sections 15.3 and 15.4. In Section 15.5 we decomposed LOTS1A into two BCNF relations
LOTS1AX and LOTS1AY, and decomposed the TEACH relation in Figure 15.14 into the two relations {Instructor, Course} and {Instructor, Student}. These are valid decompositions because they are
nonadditive per the above test.
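Property NJB can be checked mechanically with a standard attribute-closure computation (a minimal sketch; the helper and function names are ours):

```python
def closure(X, fds):
    """Attribute closure X+ of the attribute set X under the FDs."""
    result = set(X)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # If X+ covers the left side, it also determines the right side
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

def njb(R1, R2, fds):
    """True iff {R1, R2} satisfies Property NJB: the common attributes
    must functionally determine R1 - R2 or R2 - R1 (i.e., the FD is in F+)."""
    plus = closure(R1 & R2, fds)
    return (R1 - R2) <= plus or (R2 - R1) <= plus
```

For instance, for the TEACH decomposition under the FD Instructor → Course, njb({'Instructor', 'Course'}, {'Instructor', 'Student'}, [({'Instructor'}, {'Course'})]) returns True, whereas the EMP_PROJ1/EMP_LOCS split fails because Plocation determines neither difference.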
5. Successive Nonadditive Join Decompositions
We saw the successive decomposition of relations during the process of second and third normalization in Sections 15.3 and 15.4. To verify that these decompositions are nonadditive, we need to ensure
another property, as set forth in Claim 2.
Claim 2 (Preservation of Nonadditivity in Successive Decompositions). If a decomposition D = {R[1], R[2], ..., R[m]} of R has the nonadditive (lossless) join property with respect to a set of
functional dependencies F on R, and if a decomposition D[i] = {Q[1], Q[2], ..., Q[k]} of R[i] has the nonadditive join property with respect to the projection of F on R[i], then the decomposition
D[2] = {R[1], R[2], ..., R[i−1], Q[1], Q[2], ..., Q[k], R[i+1], ..., R[m]} of R has the nonadditive join property with respect to F.
Advances in Algebra
Top 7 papers analyzed
Advances in algebra have long been an essential part of the development of modern mathematics. This is exemplified by the work of mathematician Norman Steenrod, who made notable contributions to the
field of algebraic topology. By studying the properties of homology and cohomology, Steenrod was able to classify different types of topological spaces, which helped advance our understanding of
geometric objects. To aid in this work, he also developed the Steenrod algebra, which provided an invaluable tool for investigating the structure of spaces. Steenrod's work has continued to have a
significant impact on algebraic topology, and many mathematicians still study his findings today. This is a prime example of the importance of abstraction and generalization in mathematics. By
creating broad categories of objects and discovering connections between seemingly unrelated structures, mathematicians can make breakthroughs in multiple fields of study. Another noteworthy advance
in algebra is the concept of Noetherian regular rings. A Noetherian regular ring is a ring in which every element x can be written as x = xyx for some element y. This concept is fundamental in
the study of matrices, with the full matrix Mn being regular if and only if N is a regular ring. This idea is extended further to additively commutative semi-Noetherian regular δ-near rings, where
the sets of matrices are also rings with new properties. The study of such rings helps in investigating the properties of matrices and their applications in diverse areas of mathematics. In a more
practical sense, even scientific publishers require upgrades and maintenance, as demonstrated by World Scientific's upcoming system upgrade. The company, on October 25th, 2022, will undergo an
upgrade at 2am EDT, which may cause e-commerce and registration services to be temporarily unavailable for up to 12 hours. However, existing users can still access content during this period, and the
company advises those looking to make online purchases to visit their site again after the upgrade is complete. This underlines the importance of ongoing maintenance and updates to ensure that online
systems continue to function correctly. In conclusion, advances in algebra have long played a crucial role in the advancement of mathematics, spanning from the abstract concepts of Noetherian regular
rings to their practical applications. The exploration of these concepts continues to lead to discoveries and new connections between different areas of mathematics, highlighting the profound
importance of algebra in mathematics as a whole.
A Noetherian ring N is called a Noetherian regular ring if for every x ∈ N, x = xyx for some y ∈ N. It follows that for any ring N and any positive integer n, the full matrix ring Mn(N) is
regular if and only if N is a regular ring. For a positive integer n and an additively commutative semi-Noetherian regular δ-near ring S with zero, let Mn(S) be the set of all n × n matrices over S.
Then, under the usual addition and multiplication of matrices, Mn(S) is also an additively commutative semi-Noetherian regular δ-near ring with zero, and the n × n zero matrix over S is the zero of
the matrix semi-Noetherian regular δ-near ring Mn(S). Definition 1.4: A commutative ring N with identity is a Noetherian regular δ-near ring if it is semiprime, every non-unit is a zero divisor,
the zero ideal is a product of a finite number of principal ideals generated by semiprime elements, and N is left simple with N0 = N and Ne = N. Definition 1.5: A triple (S, +, ·) is called a
semi-Noetherian regular δ-near ring if (S, +) and (S, ·) are semigroups and · is distributive over +. The matrix ring Mn(S) is a Noetherian regular δ-near ring if and only if S is a Noetherian
regular δ-near ring.
Published By:
N Nagar - Advances in Algebra, 2011 - researchgate.net
This article discusses the work of mathematician Norman Steenrod in the field of algebraic topology. Steenrod made major contributions to the study of homology and cohomology, which help to classify
different types of topological spaces. He also developed the Steenrod algebra, a powerful tool for studying the structure of spaces. Steenrod's work had a profound impact on the field of algebraic
topology and continues to be studied today. The article concludes by noting that Steenrod's work is an example of the importance of abstraction and generalization in mathematics, which allows
mathematicians to study broad classes of objects and discover new connections between seemingly unrelated structures.
Published By:
S MacLane - The American Mathematical Monthly, 1939 - Taylor & Francis
On October 25th, 2022, World Scientific's system will undergo an upgrade at 2am EDT. Existing users will still be able to access content, but e-commerce and registration for new users might not be
available for up to 12 hours. The company advises people to visit their site again for online purchases and to reach out to customer care for any concerns. In summary, World Scientific's website will
undergo maintenance on October 25th, 2022, which may affect e-commerce and registration for up to 12 hours. Current users will still be able to access existing content, and the company advises people
to visit their site again for online purchases after the upgrade has completed.
Published By:
KB Nam - Advances In Algebra, 2003 - World Scientific
The proceedings volume of the Southern Regional Algebra Conference (SRAC) held in March 2017 covers a range of research topics in algebra. The papers presented in the volume include both theoretical
and computational methods, and cover areas such as ring theory, group theory, commutative algebra, algebraic geometry, linear algebra, and quantum groups. The papers consist of research articles and
survey papers and highlight ongoing research in algebraic geometry, combinatorial commutative algebra, computational methods for representations of groups and algebras, Lie superalgebras, and
tropical algebraic geometry. SRAC has been held since 1988, and this volume showcases the latest findings in computational and theoretical methods in algebra and representation theory. The book is
suitable for graduate students and researchers interested in algebraic research.
Published By:
J Feldvoss, L Grimley, D Lewis, A Pavelescu, C Pillen - Springer
The article discusses the general notion of independence introduced by E. Marczewski in 1958. A set I of the carrier A of an algebra is called M-independent if equality of two term operations f and g
of the algebra on any finite system of different elements of I implies f = g in A. While there are interesting results on this notion of independence, it is not wide enough to cover stochastic
independence, independence in projective spaces, and some others. Hence, weaker notions of independence such as Q-independence, which relies on families Q of mappings into A, have been developed. The
article delves into Q-independence and discusses the Galois correspondence between families Q of mappings and Q-independent sets. The article concludes with some easily formulated and interesting
results. The article is dedicated to the memory of Professor B.H. Neumann.
Published By:
K Głazek - Advances In Algebra, 2003 - World Scientific
From the activities and sub questions that were analysed, all 533 activities in the FPRWs and all 196 activities in the ANAs could be classified into four types of patterns: number patterns,
repeating patterns, shape patterns with growth and other patterns that did not fit into the three categories. The categories were labelled Difficulty Level 1, 2 and 3, respectively - abbreviated to
DL1, DL2 and DL3. DL1 repeating patterns are those patterns where the core of the pattern is interrupted, because these activities simply require the drawing of the next item(s) in the collection of
shapes. Although DL2 patterns do not explicitly employ the notion of core to extend patterns, the given core is not interrupted, as it is in DL1. DL3 repeating patterns are complex patterns, where
the core of the pattern is fully shown and has two or more items with multi-variability. These patterns can be classified according to levels of cognitive engagement, which range from simple patterns
to more complex patterns where the core of the pattern is not easily recognised.
Published By:
J Du Plessis - South African Journal of Childhood Education, 2018 - journals.co.za
The World Scientific website will undergo a system upgrade on October 25th, 2022, at 2 am (EDT). Existing users will be able to log in and access content, but registration of new users and e-commerce
may be unavailable for up to 12 hours. Visitors are advised to check back later for online purchases. Any inquiries can be directed to customercare@wspc.com. In conclusion, World Scientific is
informing users of its website that it will be down for an upgrade on October 25th, 2022, from 2 am (EDT). Customers are urged to be aware that new user registration and e-commerce services may be
unavailable for up to 12 hours. Existing users, however, will be able to log in and view content. For additional information or inquiries, please contact customercare@wspc.com.
Published By:
SK Jain, P Kanwar, JB Srivastava - Advances In Algebra, 2003 - World Scientific
Visualizing Sensitivity and Specificity of a Test
In my university course on Psychological Assessment, I recently explained the different quality criteria of a test used for dichotomous decisions (yes/no, positive/negative, healthy/sick, …). A quite
popular example in textbooks is the case of cancer screenings, where an untrained reader might be surprised by the low predictive value of a test. I created a small Shiny app to visualize different
scenarios of this example. Read on for an explanation or go directly to the app here.
Imagine, for example, a disease that affects 1% of a population, and you have a blood test for it. Your test has a 90% chance to correctly identify someone who carries the disease
(Sensitivity = 0.90) and a 90% chance to correctly identify someone who does not carry the disease (Specificity = 0.90). You use this blood test to screen the general population. Now
take a random patient whom you have tested, and suppose the test has yielded a positive result (e.g., the blood level of a particular enzyme is higher than a pre-defined cut-off value). The interesting question
now: What is the probability of the patient to actually carry the disease, given his positive test result?
Most of the time, people without statistical training will give an answer somewhere along the lines of 90% or 95% based on the values of Sensitivity and Specificity. But probability theory is
somewhat more complicated: While Sensitivity and Specificity represent a conditional probability given the actual health state of the patient, the above question is about the conditional probability
of being healthy or sick given the test result. That means, we are not looking for the probability of identifying someone given his health status (that we do not know), but we are looking for the
probability of actually having the disease when the tests tells us the patient has it. Those two probabilities sound very similar, but can be, in fact, very different from each other. The
relationship between those conditional probabilities is described through a formula called Bayes’ Theorem: $P(A|B) = \frac{P(B|A) P(A)}{P(B)}$
What we are looking for is called positive predictive value (PPV), the probability of having the disease given a positive test result.
In fact, for the above example, the positive predictive value is only 8.33%. The reason lies in the low prevalence of the disease (only 1%): out of 100,000 people, 900 of the 1,000 who carry the
disease would correctly test positive, but far more (9,900 of the 99,000 healthy individuals) would receive a positive test result despite being healthy.
On the other hand, with a probability of 99.89% you do not have the disease if you have a negative test result (the negative predictive value).
To show that the prevalence is important for the predictive value, imagine that the disease would affect 80% of the population: In this case, the same test would have a positive predictive value of
97.30%. This could be, for example, the case if you only use the screening test on people who have specific symptoms of the disease.
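The figures quoted above can be reproduced with a direct application of Bayes' theorem (a small sketch; the function name is ours, and we use the sensitivity/specificity values that yield the quoted 8.33% and 97.30%):

```python
def predictive_values(prevalence, sensitivity, specificity):
    """Positive and negative predictive value via Bayes' theorem."""
    tp = prevalence * sensitivity              # P(test+ and sick)
    fp = (1 - prevalence) * (1 - specificity)  # P(test+ and healthy)
    tn = (1 - prevalence) * specificity        # P(test- and healthy)
    fn = prevalence * (1 - sensitivity)        # P(test- and sick)
    return tp / (tp + fp), tn / (tn + fn)      # (PPV, NPV)

ppv, npv = predictive_values(prevalence=0.01, sensitivity=0.90, specificity=0.90)
print(f"PPV = {ppv:.2%}, NPV = {npv:.2%}")  # PPV = 8.33%, NPV = 99.89%
```

Raising the prevalence to 0.80 in the same call gives a PPV of 97.30%, matching the second scenario.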
To better visualize the relationship between Prevalence, Sensitivity, Specificity and the predictive values, I created a small Shiny app where you can play around with different scenarios: https://
What follows, is often a discussion on the implications and consequences. Many real-world screening tests have lower Sensitivity and Specificity than above values, but are still being used for
screening and testing purposes. When discussing the usefulness of such screening instruments, you have to consider different aspects such as: who do you test (everyone or only a sub-population
carrying certain risk-factors), what are the costs of false negatives versus false positives and what follows a positive test result (operation, further diagnostics, etc.)? There is no universal
answer to the question and it is not only related to the statistical aspects, but also to ethical considerations. But in any case, it is important – also for patients receiving test results from
their doctors – to understand that a test result always has underlying probabilities and in some cases it is more probable to have a false positive than to actually be sick.
Update (28.10.2016): Felix Schönbrodt has another and better way to visualize the counts and probabilities from scenario described above. Check out his Shiny app here.
OpenGL direction of a vector in degrees (X, Y, Z)
I have an object traveling along a 3D vector in terms of X, Y, Z... I need to rotate the object according to each axis (x-axis, y-axis, and z-axis).
How do I get these measurements in terms of degrees?
(I am using OpenGL and only know of glRotatef(...)) [glRotatef(...) documentation here] Looking at this question, the answer gives
viewvector =<x, y, z>
r = sqrt(x² + y² + z²)
phi = arctan2(y, x)
theta = arccos(z / r)
but from this wiki page I understand that:
[Edit from Ignacio Vazquez-Abrams]
phi => angle around Z-axis
theta => angle from the z-axis
but how do I find Y? or do I need to?
The real question is, How do I represent this in terms of glRotatef(...)?
The third angle is arbitrary, since it doesn't affect the direction of travel. You'll need another constraint if you want to force a particular value -
Vaughn Cato 2012-04-04 04:58
So if I am to understand correctly,
glRotatef(theta, 1.0, 1.0, 0.0)
glRotatef(phi, 0.0, 0.0, 1.0)
will work -
Wallter 2012-04-05 03:22
glRotatef(phi,0,0,1); glRotatef(theta,1,0,0) -
Vaughn Cato 2012-04-05 03:57
thanks for the clarification - Now how do I get it to rotate it with glRotatef(...) -
Wallter 2012-04-04 05:00
First, you can stop worrying about theta and phi, since glRotatef
takes the vector, not the angles. As for the roll, that's something you'll need to determine for yourself -
Ignacio Vazquez-Abrams 2012-04-04 05:02
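The angle computation discussed in the answers can be sketched in Python (illustrative only; the function name is ours, and the results are converted to degrees because glRotatef expects degrees):

```python
import math

def direction_angles(x, y, z):
    """Spherical angles (in degrees) of the direction vector (x, y, z):
    phi is the rotation around the Z axis, theta the tilt from the Z axis."""
    r = math.sqrt(x * x + y * y + z * z)
    phi = math.degrees(math.atan2(y, x))
    theta = math.degrees(math.acos(z / r))
    return phi, theta
```

With these angles, the call order suggested in the comments is glRotatef(phi, 0, 0, 1) followed by glRotatef(theta, 1, 0, 0); the third (roll) angle remains free, as noted above.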
Two identical batteries of emf E and internal resistance $1\Omega $ each are connected in series across a resistor R, and the rate of heat produced in R is ${{J}_{1}}$. When the same batteries are connected in parallel across R, the rate is ${{J}_{2}}$. If ${{J}_{1}}=2.25{{J}_{2}}$, then find the value of R in $\Omega$.
Hint: When we connect two identical batteries in series the effective resistance in series combination will be the sum of those two internal resistances and in while the batteries are connected in
parallel the reciprocal of those two internal resistances are added together to find the effective internal resistances. Then by calculating ${{I}_{1}}$and ${{I}_{2}}$ , we can find the heat in those
two cases. Hence by substituting in the given equation we will get the value of R.
Complete step by step answer:
Given that each battery has internal resistance $r=1\Omega $, and the two batteries in series are connected across a resistor R.
Then the total resistance of the series combination becomes 2r+R.
Hence the current flow through ${{I}_{1}}$ is given by,
Current ${{I}_{1}}=\left[ \frac{2E}{2r+R} \right]$
Heat in the first case can be calculated by using the equation,
$\Rightarrow {{J}_{1}}={{\left( \frac{2E}{2r+R} \right)}^{2}}Rt$
Similarly we can calculate the current ${{I}_{2}}$. Here the same battery is connected in parallel across R. Thus,
${{I}_{2}}=\left[ \frac{E}{\frac{r}{2}+R} \right]$
Hence heat produced in the second case,
$\Rightarrow {{J}_{2}}={{\left( \frac{E}{\frac{r}{2}+R} \right)}^{2}}Rt$
Given that, ${{J}_{1}}=2.25{{J}_{2}}$.
Substituting the values of ${{J}_{1}}$and ${{J}_{2}}$ in the above equation we get, ${{\left( \frac{2E}{2r+R} \right)}^{2}}Rt=2.25{{\left( \frac{E}{\frac{r}{2}+R} \right)}^{2}}Rt$
${{\left( \frac{2E}{2r+R} \right)}^{2}}Rt=2.25{{\left( \frac{2E}{r+2R} \right)}^{2}}Rt$
Rearranging the equation and cancelling the common terms we get,
${{\left( r+2R \right)}^{2}}=2.25{{\left( 2r+R \right)}^{2}}$
${{\left( r+2R \right)}^{2}}=\frac{9}{4}\times {{\left( 2r+R \right)}^{2}}$
$4{{\left( r+2R \right)}^{2}}=9{{\left( 2r+R \right)}^{2}}$
Taking the square root the above equation becomes,
$2\left( r+2R \right)=3\left( 2r+R \right)$
Given that,
$r=1\Omega $
Then the value of R in $\Omega $ is,
$R=4\Omega $
Note: In series, the effective resistance is the sum of the individual resistances; in parallel, the reciprocals of the individual resistances add to give the reciprocal of the effective
resistance.
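The result is easy to verify numerically (a quick sketch; the value of E is arbitrary since it cancels in the ratio):

```python
def heat_rate(total_emf, total_resistance, R):
    """Rate of heat produced in R: I^2 * R, with I = emf / total resistance."""
    I = total_emf / total_resistance
    return I * I * R

E, r, R = 1.0, 1.0, 4.0
J1 = heat_rate(2 * E, 2 * r + R, R)  # series: emfs add, internal resistances add
J2 = heat_rate(E, r / 2 + R, R)      # parallel: emf E, internal resistance r/2
print(round(J1 / J2, 6))             # 2.25
```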
Use other services for machine learning
In this sample chapter from Exam Ref 70-774 Perform Cloud Data Science with Azure Machine Learning, examine other services provided by Microsoft for machine learning including the Microsoft Cognitive
Toolkit, HDInsights, SQL Server R Services, and more.
You have been learning about Azure Machine Learning as a powerful tool to solve the vast majority of common machine learning problems, but it is important to consider that it is not the only tool
provided by Microsoft for that purpose. A previously seen alternative is Cognitive Services, and in this chapter, we look at other systems capable of dealing with large amounts of unstructured data
(HDInsight clusters), data science tools integrated with SQL Server (R Services), and preconfigured workspaces in powerful Azure Virtual Machines (Deep Learning Virtual Machines and Data Science
Virtual Machines).
Skills in this chapter:
• Skill 4.1: Build and use neural networks with the Microsoft Cognitive Toolkit
• Skill 4.2: Streamline development by using existing resources
• Skill 4.3: Perform data sciences at scale by using HDInsights
• Skill 4.4: Perform database analytics by using SQL Server R Services on Azure
Skill 4.1: Build and use neural networks with the Microsoft Cognitive Toolkit
Microsoft Cognitive Toolkit (CNTK) is behind many of the Cognitive Services models you learned to use in Skill 3.4: Consume exemplar Cognitive Services APIs. You can find CNTK in Cortana, the Bing
recommendation system, the HoloLens object recognition algorithm, the Skype translator, and it is even used by Microsoft Research to build state-of-the-art models.
But what exactly is CNTK? It is a Microsoft open source deep learning toolkit. Like other deep learning tools, CNTK is based on the construction of computational graphs and their optimization using
automatic differentiation. The toolkit is highly optimized and scales efficiently (from CPU, to GPU, to multiple machines). CNTK is also very portable and flexible; you can use it with programming
languages like Python, C#, or C++, but you can also use a model description language called BrainScript.
Simple linear regression with CNTK
With CNTK you can define many different types of neural networks based on building block composition. You can build feed forward neural networks (you review how to implement one in this skill),
Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and others. You can actually define almost any type of neural network,
including your own modifications. The set of variables, parameters, operations, and their connections to each other is called a computational graph or computational network.
As a first contact with the library, you create a Jupyter Notebook in which you adjust a linear regression to synthetic data. Although it is a very simple model and may not be as attractive as
building a neural network, this example provides you an easier understanding of the concept of computational graphs. That concept applies equally to any type of neural network (including deep ones).
The first thing you must include in your notebook is an import section. Import cntk, matplotlib for visualizations, and numpy for matrices. Note that the fist line is a special Jupyter Notebook
syntax to indicate that the matplotlib plots must be shown without calling the plt.show method.
%matplotlib inline
import matplotlib.pyplot as plt
import cntk as C
import numpy as np
For this example, you use synthetic data. Generate a dataset following a defined line plus noise y_data = x_data * w_data + b_data + noise where the variable w_data is the slope of the line, b_data
the intercept or bias term and noise is a random gaussian noise with standard deviation given by the noise_stddev variable. Each row in x_data and y_data is a sample, and n_samples samples are
generated between 0 and scale.
def generate_data(n_samples, w_data, b_data, scale, noise_stddev):
    x_data = np.random.rand(n_samples, 1).astype(np.float32) * scale
    noise = np.random.normal(0, noise_stddev, (n_samples, 1)).astype(np.float32)
    y_data = x_data * w_data + b_data + noise
    return x_data, y_data
n_samples = 50
scale = 10
w_data, b_data = 0.5, 2
noise_stddev = 0.1
x_data, y_data = generate_data(n_samples, w_data, b_data, scale, noise_stddev)
plt.scatter(x_data, y_data)
The last line of the code fragment shows the dataset (see Figure 4-1).
FIGURE 4-1 Synthetic linear dataset
You implement linear regression, so you would try to find an estimation of y, normally written as y_hat, using a straight line: y_hat = w*x + b. The goal of this process would be to find w and b
values that make the difference between y and y_hat minimal. For this purpose, you can use the least square error (y – y_hat)^2 as your loss function (also called target / cost / objective function),
the function you are going to minimize.
In order to find these values using CNTK, you must create a computational graph, that is, define which are the system’s inputs, which parameters you want to be optimized, and what is the order of the
operations. With all of this information CNTK, making use of automatic differentiation, optimizes the values of w and b iteration after iteration. After several iterations, the final values of the
parameters approach the original values: w_data and b_data. In Figure 4-2 you can find a graphical representation of what the computational network looks like.
FIGURE 4-2 Computation graph of a linear regression algorithm. The x and y nodes are inputs, w and b nodes are parameters and the remaining nodes are operations. The rightmost node ‘·2’ performs the
squaring operation
If you execute operations from left to right, it is called a forward pass. If you run the derivatives of each operation from right to left, it is a called backwards pass. Note that the graph also
includes the loss calculation. The outputs of the ‘+’ node are the predictions y_hat and, from then on, the graph is computing the loss. The backward pass optimizes the parameter values (w and b) in
such a way as to minimize the loss.
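To make the forward/backward idea concrete, here is the same graph computed by hand in plain Python for a single sample (purely illustrative; CNTK derives these gradients automatically):

```python
x, y = 3.0, 3.5   # one (input, target) sample
w, b = 0.0, 0.0   # initial parameter values
lr = 0.01         # learning rate

# Forward pass: evaluate the nodes left to right
y_hat = w * x + b          # the '+' node output (the prediction)
loss = (y - y_hat) ** 2    # the squared-loss node

# Backward pass: chain rule right to left through each node
dloss_dyhat = -2.0 * (y - y_hat)
dw = dloss_dyhat * x       # gradient of the loss wrt the slope parameter
db = dloss_dyhat           # gradient of the loss wrt the bias parameter

# One gradient-descent update moves w and b so as to reduce the loss
w -= lr * dw
b -= lr * db
```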
To define your computational graph in CNTK, just use the following lines of code.
x = C.input_variable(1)
y = C.input_variable(1)
w = C.parameter(1)
b = C.parameter(1)
y_hat = x * w + b
loss = C.square(y - y_hat)
Notice that everything here is a placeholder, no computation is done. This code only creates the computational graph. You can see it as a circuit, so you are connecting the different elements but no
electricity is flowing. Notice that the ‘*’, ‘-‘ and ‘+’ operators are overloaded by CNTK, so in those cases the operators have a behavior different from the one they have in standard Python. In this
case, they create nodes in the computational graph and do not perform any calculations.
Now paint the predictions of the model and the data on a plot. Of course, getting good predictions is not expected, since you are only preparing a visualization function that will be used later. The
way in which the visualization is done is by evaluating the model at point x=0 and point x=10 (10 is the value of the scale defined when creating the dataset). After evaluating the model at these
points, only one line is drawn between the two values.
def plot():
    plt.scatter(x_data, y_data) # plot dataset
    x_line = np.array([[0], [scale]], dtype=np.float32)
    y_line = y_hat.eval({x: x_line})
    plt.plot(x_line.flatten(), y_line.flatten()) # plot predictions
The result of calling the plot function is shown in Figure 4-3. The initial values of w and b are 0, which is why a constant line is painted in 0.
Train the model using batch gradient descent (all samples are used in each iteration). You can do that because there is little data. The following script performs 600 iterations and every 50
iterations shows the model loss on the training data (it actually performs 601 iterations in order to show the loss in the last iteration). The learner used is Stochastic Gradient Descent (C.sgd) and
uses a learning rate of 0.01.
Note that a test set is not being used and performance is being measured only on the training set. Even if it is not the most correct way to do it, it is a simple experiment that only aims to show
how to create and optimize computational networks with CNTK.
learner = C.sgd(y_hat.parameters, lr=0.01) # learning rate = 0.01
trainer = C.Trainer(y_hat, loss, learner)
for i in range(601):
    trainer.train_minibatch({
        x: x_data,
        y: y_data
    })
    if i % 50 == 0:
        print("{0} - Loss: {1}".format(i, trainer.previous_minibatch_loss_average))
Figure 4-4 shows the training output and the plot of the predictions. Now the prediction line is no longer zero and is correctly adjusted to the data.
Print the values of w and b and you will obtain some values near the original ones, w ≈ 0.5 and b ≈ 2. Notice that you need to use the property value to access the current value of the parameters:
print("w = {:.3}, b = {:.3}".format(np.asscalar(w.value), np.asscalar(b.value)))
This example is a very simple one, used to explain the key concepts of CNTK. There are a lot of operations (nodes in the computation graph) that can be used in CNTK; indeed, there are losses already
defined, so you do not need to write the least square error explicitly (the ‘-’ and ‘·2’ nodes): you can use a pre-defined operation that implements that loss. For example, you can replace loss =
C.square(y - y_hat) with loss = C.squared_error(y, y_hat). Defining the least squared error manually is trivial, but for complex losses it is not. You see more examples in the next sections.
Use N-series VMs for GPU acceleration
Although GPUs were initially created to perform computer graphics related tasks, they are now widely used for general-purpose computation (commonly known as general-purpose computing on Graphics
Processing Units or GPGPU). This is because the number of cores that a graphics card has is much higher than a typical CPU, allowing parallel operations. Linear algebra is one of the cornerstones of
deep learning and the parallelization of operations such as matrix multiplications greatly speeds up training and predictions.
Despite the fact that GPUs are cheaper than other computing hardware (clusters or supercomputers), it is true that buying a GPU can mean a large initial investment and may become outdated after a few
years. Azure, the Microsoft cloud, provides solutions to these problems. It enables you to create virtual machines with GPUs, the N-Series virtual machines. Among the N-Series we find two types: the
computation-oriented NC-Series and the visualization-oriented NV-Series. Over time, newer versions of graphics cards are appearing and you can make use of them as easily as scaling the machine.
Those machines are great for GPU work, but they require prior configuration: installing NVIDIA drivers and common data science tools (Python, R, Power BI…).
In this section you create a Deep Learning Virtual Machine (DLVM). This machine comes with pre-installed tools for deep learning. Go to Azure and search for it in the Azure Marketplace (see Figure
The procedure is quite similar to creating any other VM. The only difference is when you have to select the size of the machine, which is restricted to NC-Series sizes (see Figure 4-6). For testing purposes select the NC6, which is the cheapest one.
FIGURE 4-6 In the second step of the Create Deep Learning Virtual Machine you must select the NC-Series sizes
Once the machine is created you can connect to it by remote desktop and discover that it comes with a lot of tools installed. Figure 4-7 shows a screenshot of the desktop with some of the pre-installed tools. This allows you to start developing quickly without having to worry about installing tools. Skill 4.2 lists most of the pre-installed applications that a Windows DLVM includes.
FIGURE 4-7 Desktop of the DLVM with a lot of pre-installed tools
Open a command window and check that the virtual machine has all the NVIDIA drivers installed and detects that the machine has a GPU connected. Use the command nvidia-smi to do this (see Figure 4-8).
Build and train a three-layer feed forward neural network
After building a simple example of a computation graph at the beginning of this skill, it is time to build a deep model using the same principles introduced there: create a differentiable computational graph and optimize it by minimizing a cost function. This time you use a famous handwritten digits dataset: MNIST (introduced in LeCun et al., 1998a). The MNIST dataset contains 70000 images of digits from 0 to 9. Images are black and white and have 28x28 pixels. It is common in the machine learning community to use this dataset for tutorials or to test the effectiveness of new models.
Figure 4-9 shows the implemented architecture: a three-layer feed forward network. This type of network is characterized by the fact that neurons in contiguous layers are all connected together (dense or fully connected layers). As said before, the data are black and white 28x28 images, that is, 784 inputs if each image is unrolled into a one-dimensional array. The neural network has two hidden layers of 400 and 200 neurons and an output layer of 10 neurons, one for each class to predict. In order to convert the neural network output into probabilities, softmax is applied to the output of the last layer.
FIGURE 4-9 Diagram of the implemented architecture
Each of the neurons in the first layer are connected to all input pixels (784). This means that there is a weight associated with this connection, so in the first layer there are 400 neurons x 784
inputs weights. As in linear regression, each neuron has an intercept term. In machine learning this is also known as bias term. So, the total number of parameters of the first layer are 400 x 784 +
400 bias terms. To obtain the output value of a neuron, what is done internally is to multiply each input by its associated weight, add the results together with the bias and apply the ReLU
activation function.
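The per-neuron computation just described can be sketched in a few lines of plain Python (the weights here are illustrative, not trained values):

```python
def relu(x):
    return max(0.0, x)

def neuron(inputs, weights, bias):
    # weighted sum of the inputs plus bias, then the ReLU activation
    return relu(sum(i * w for i, w in zip(inputs, weights)) + bias)

print(neuron([1.0, -2.0, 0.5], [0.4, 0.1, -0.2], 0.3))  # ≈ 0.4
```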
The neurons of the second layer are connected, rather than to the input pixels, to the outputs of the first layer. There are 400 output values from the first layer and 200 neurons in the second. Each neuron of the second layer is connected to all the neurons of the first layer (fully connected, as before). Following the same calculation as before, the number of parameters of this second layer is 200 x 400 + 200. ReLU is also used as the activation function in this layer.
The third and last layer has 10 x 200 + 10 weights. This layer does not use ReLU, but uses softmax to obtain the probability that a given input image is one digit or another. The first neuron in this
layer corresponds to the probability of the image being a zero, the second corresponds to the probability of being a one and so on.
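The layer-by-layer counts above can be checked with a quick calculation (each dense layer has inputs × neurons weights plus one bias per neuron):

```python
# Parameter count for the described architecture: 784 -> 400 -> 200 -> 10.
layers = [(784, 400), (400, 200), (200, 10)]
counts = [n_in * n_out + n_out for n_in, n_out in layers]
print(counts)       # [314000, 80200, 2010]
print(sum(counts))  # 396210 parameters in total
```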
The number of neurons and the number of layers are actually hyperparameters of the model. As the number of neurons or layers increases, the expressiveness of the network increases, and it is able to generate more complex decision boundaries. This is good only to some extent: when the number of neurons or layers is very large, in addition to performance problems, it is easy to overfit (memorize the training set, making the generalization error very high). More about overfitting was said in Chapter 2, “Develop machine learning models.”
Create a new Notebook for this exercise. If you are using a Deep Learning Virtual Machine you can run the Jupyter Notebook by clicking the desktop icon (see Figure 4-7).
First write all the necessary imports. Apart from numpy, matplotlib and cntk, you use sklearn to easily load the MNIST dataset.
%matplotlib inline
import matplotlib.pyplot as plt
import cntk as C
import numpy as np
from sklearn.datasets import fetch_mldata
Fetch the data, preprocess it, and split it for training and test. The value of each pixel ranges from 0 to 255 in grayscale. In order to improve the performance of neural networks, always consider normalizing the inputs, which is why each pixel is divided by 255.
# Get the data and save it in your home directory.
mnist = fetch_mldata('MNIST original', data_home='~')
# Rescale the data
X = mnist.data.astype(np.float32) / 255
# One hot encoding
y = np.zeros((70000, 10), dtype=np.float32)
y[np.arange(0, 70000), mnist.target.astype(int)] = 1
# Shuffle samples.
p = np.random.permutation(len(X))
X, y = X[p], y[p]
# Split train and test.
X_train, X_test = X[:60000], X[60000:] # 60000 for training
y_train, y_test = y[:60000], y[60000:] # 10000 for testing
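The one-hot encoding step used above can be sketched without numpy; each target digit becomes a 10-element vector with a single 1:

```python
def one_hot(target, num_classes):
    # all zeros except a 1.0 at the target position
    v = [0.0] * num_classes
    v[target] = 1.0
    return v

print(one_hot(3, 10))  # [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```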
Now that the data is loaded you can create the computational graph.
input_dim = 784
hidden_layers = [400, 200]
num_output_classes = 10
input = C.input_variable((input_dim))
label = C.input_variable((num_output_classes))
def create_model(features):
    with C.layers.default_options(init=C.layers.glorot_uniform(), activation=C.ops.relu):
        h = features
        for i in hidden_layers:  # for each hidden layer it creates a Dense layer
            h = C.layers.Dense(i)(h)
        return C.layers.Dense(num_output_classes, activation=None)(h)
label_pred = create_model(input)
loss = C.cross_entropy_with_softmax(label_pred, label)
label_error = C.classification_error(label_pred, label)
The variable hidden_layers is a list with the number of hidden neurons of each layer. If you add more numbers to the list, more layers are added to the final neural network. Those layers are initialized using Xavier Glorot uniform initialization (C.layers.glorot_uniform) and use ReLU (C.ops.relu) as the activation function. Xavier's initialization is a way to set the initial weights of a network so that the scale of the signals and gradients stays roughly constant across layers, which speeds up training. ReLU activations are commonly used in deep nets because they do not suffer from the vanishing gradient problem. The vanishing gradient problem refers to the fact that using sigmoids or hyperbolic tangents (tanh) as activation functions causes the backpropagation signal to degrade as the number of layers increases. This occurs because the derivatives of these functions are between 0 and 1, so consecutive multiplications of small gradients make the gradient of the first layers close to 0. A gradient close to zero means that the weights are practically not updated after each iteration. The ReLU activation implements the function max(0, x), so the derivative when x is positive is always 1, avoiding the vanishing gradient problem.
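A quick numerical illustration of the vanishing gradient argument: the sigmoid's derivative peaks at 0.25, so even in the best case the backpropagated signal shrinks geometrically with depth, while ReLU's derivative is exactly 1 for positive inputs:

```python
import math

def sigmoid_grad(x):
    s = 1.0 / (1.0 + math.exp(-x))
    return s * (1.0 - s)  # maximum is 0.25, reached at x = 0

depth = 10
print(sigmoid_grad(0.0))           # 0.25
print(sigmoid_grad(0.0) ** depth)  # 0.25**10 ≈ 9.5e-07: nearly vanished
print(1.0 ** depth)                # ReLU's best case stays 1.0
```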
Notice that this time you have used “multi-operation” nodes like the cross entropy and the dense layer. In this way there is no need to manually implement the softmax function, or the matrix multiplication and bias addition that take place under the hood in a Dense layer. Most of the time, CNTK manages the optimization of these nodes even more efficiently than implementing them operation by operation.
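For a single sample, the fused cross-entropy-with-softmax node computes the negative log of the softmax probability assigned to the true class. A plain-Python sketch (the logits are illustrative; CNTK's node additionally handles batches and numerically stable fused gradients):

```python
import math

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy_with_softmax(logits, target_index):
    # -log of the probability the model assigns to the true class
    return -math.log(softmax(logits)[target_index])

print(cross_entropy_with_softmax([2.0, 1.0, 0.1], 0))  # ≈ 0.417
```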
Once the computational network has been built, all that remains is to feed it with data, execute forward passes, and optimize the value of the parameters with backward passes.
num_minibatches_to_train = 6001
minibatch_size = 64
learner = C.sgd(label_pred.parameters, 0.2) # constant learning rate -> 0.2
trainer = C.Trainer(label_pred, (loss, label_error), [learner])
# Create a generator
def minibatch_generator(batch_size):
    i = 0
    while True:
        idx = range(i, i + batch_size)
        yield X_train[idx], y_train[idx]
        i = (i + batch_size) % (len(X_train) - batch_size)
# Get an infinite iterator that returns minibatches of size minibatch_size
get_minibatch = minibatch_generator(minibatch_size)
for i in range(0, num_minibatches_to_train):  # for each minibatch
    # Get minibatch using the iterator get_minibatch
    batch_x, batch_y = next(get_minibatch)
    # Train minibatch
    trainer.train_minibatch({
        input: batch_x,
        label: batch_y
    })
    # Show training loss and test accuracy every 500 minibatches
    if i % 500 == 0:
        training_loss = trainer.previous_minibatch_loss_average
        accuracy = 1 - trainer.test_minibatch({
            input: X_test,
            label: y_test
        })
        print("{} - Train Loss: {:.3f}, Test Accuracy: {:.3f}".format(i, training_loss, accuracy))
Probably the trickiest part of this script is the function minibatch_generator. This function is a Python generator, and the variable get_minibatch contains an iterator that returns a different batch each time next(get_minibatch) is called. That part really has nothing to do with CNTK; it is just a way to get different samples in each batch.
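The same generator pattern applied to a toy list shows how the batches advance and wrap around near the end of the data (nothing CNTK-specific here):

```python
def minibatch_generator(data, batch_size):
    # yields consecutive slices, wrapping near the end of the data
    i = 0
    while True:
        yield data[i:i + batch_size]
        i = (i + batch_size) % (len(data) - batch_size)

gen = minibatch_generator(list(range(10)), 4)
print(next(gen))  # [0, 1, 2, 3]
print(next(gen))  # [4, 5, 6, 7]
print(next(gen))  # [2, 3, 4, 5]  (index wrapped: (4 + 4) % 6 = 2)
```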
During the training, every 500 minibatches, information about the state of the training is printed on screen: the number of minibatches trained, the training loss, and the result of the model evaluation on the test set.
The output of the code fragment above is shown in Figure 4-10.
FIGURE 4-10 Training output log, in 6000 minibatches the network reaches 98.1 percent accuracy
It is always useful to display some examples of the test set and check the performance of the model. The following script creates a grid of plots in which random samples are drawn. The title of each plot is the value of its label and the value predicted by the algorithm. If these values do not match, the title is painted in red to indicate an error.
def plotSample(ax, n):
    ax.imshow(X_test[n].reshape(28,28), cmap="gray_r")
    # The next two lines use argmax to pass from one-hot encoding to number.
    label = y_test[n].argmax()
    predicted = label_pred.eval({input: [X_test[n]]}).argmax()
    # If correct: black title, if error: red
    title_prop = {"color": "black" if label == predicted else "red"}
    ax.set_title("Label: {}, Predicted: {}".format(label, predicted), title_prop)
np.random.seed(2) # for reproducibility
fig, ax = plt.subplots(nrows=4, ncols=4, figsize=(10, 6))
for row in ax:
    for col in row:
        plotSample(col, np.random.randint(len(X_test)))
Execute the code. If you use the same random seed, the samples should be like those in Figure 4-11. The value 2 has been chosen as the seed because it produces samples in which errors have been made; other random seed values give fewer or no errors. A very interesting way to evaluate neural networks is to look at the errors. In Figure 4-11 you can see that both misclassifications occur in digits that are cut off at the bottom. This kind of visualization can help you discover bugs in data acquisition or preprocessing. Other common bugs that can be discovered by visualizing errors are mislabeled examples, where the algorithm fails because it is actually detecting the correct class but the label is wrong.
FIGURE 4-11 Showing examples of predictions, errors are in red
The model is already properly trained and has been proven to work. The example ends here, but consider playing with some hyperparameters like the number of hidden layers/neurons, the minibatch size, the learning rate, or even learners other than SGD.
In addition to what you have seen, CNTK offers many options that have not been discussed: a lot of different operations and losses, data readers, saving trained models to disk, and loading them in later executions.
Determine when to implement a neural network
You have seen something about neural nets on Azure Machine Learning in Skill 2.1. One section of that skill discussed how to create and train neural networks using the Azure Machine Learning module.
It was mentioned that, in order to define the structure of the network, it was necessary to write a Net# script. An example was shown on how to construct a neural network with two hidden layers and
an output of 10 neurons with Softmax activation. In fact, that example addressed the same problem as the previous section: classification using the MNIST dataset. If you compare the four Net# lines and the drag-and-drop of Azure Machine Learning modules with the complexity of a CNTK script, you can see that there is a big difference. This section lists the advantages and disadvantages of implementing your own neural network in CNTK as opposed to using the Azure Machine Learning module.
With Net# you can:
• Create hidden layers and control the number of nodes in each layer.
• Specify how layers are to be connected to each other and define special connectivity structures, such as convolutions and weight sharing bundles.
• Specify activation functions.
But it has other important limitations such as:
• It does not support regularization (neither L2 nor dropout). This makes training more difficult, especially with big networks. To avoid this problem, you have to limit the number of training iterations so that the network does not adjust too much to the data (overfitting).
• The number of activation functions that you can use is limited, and you cannot define your own activation functions either. For example, there is no ReLU activation, which is commonly used in deep learning due to its benefits in backpropagation.
• There are certain aspects that you cannot modify, such as the batch size of the Stochastic Gradient Descent (SGD). Besides that, you cannot use other optimization algorithms; you can use SGD with momentum, but not others like Adam or RMSprop.
• You cannot define recurrent or recursive neural networks.
Apart from the Net# limitations, the information provided during training in the output log is quite limited and cannot be changed (see Figure 4-12).
FIGURE 4-12 Azure Machine Learning output log of a neural network module
With all these shortcomings it is difficult to build a deep neural network that can be successfully trained. Apart from the Net# and logging limitations, for deep architectures and high volumes of data (MNIST is actually a small dataset), training a neural network in Azure Machine Learning can be very slow. For all those cases it is preferable to use a GPU-enabled machine. Another drawback of Azure Machine Learning is that it does not allow you to manage the resources dedicated to each experiment. Using an Azure virtual machine you can change the size of the machine whenever you need to.
In CNTK you have full control during training. For instance, you can stop training when the loss goes under a certain threshold. Something as simple as that cannot be done in Azure Machine Learning. All that control comes from the freedom of using a programming language.
Programming can be more difficult than managing a graphical interface such as Azure Machine Learning's, but sometimes building a neural network with CNTK is not much more complicated than writing a complex Net# script. Moreover, CNTK has a Python API, and Python is a programming language that is very common in data science and easy to learn. If programming is difficult for you, consider using a tool like Keras. Keras is a high-level deep learning library that uses other libraries, like TensorFlow or CNTK, for the computational graph implementation. You can implement a digit classifier with far fewer lines than shown in the example in the previous section and with exactly the same benefits as using CNTK. Also, CNTK is an open source project, so there is a community that offers support for the tool and a lot of examples are available on the Internet.
Deep learning models work very well, especially when working with tons of unstructured data. Those big models are almost impossible to implement in Azure Machine Learning due to the computation
limitations, but for simple, small, and structured datasets, the use of Azure Machine Learning can be more convenient and achieve the same results as CNTK.
Figure 4-13 shows a summary table with all the pros and cons listed in this section.
FIGURE 4-13 Comparative table showing the pros and cons of implementing a neural network with Azure Machine Learning versus custom implementations with tools like CNTK
Free Maths Olympiad (CMO) Previous Year Paper for Class 3
CREST Mathematics Olympiad Previous Year Paper PDF for Class 3:
Section 1: Numerals, Number Names and Number Sense (4-digit numbers), Computation Operations, Fractions, Length, Weight, Capacity, Temperature, Time, Money, Geometry, Data Handling.
Achievers Section: Higher Order Thinking Questions - Syllabus as per Section 1
Q.1 Reh attends swimming classes from 4:00 p.m. for 1 hour. He then takes 15 minutes to reach his guitar class. He learns guitar for 50 minutes and then takes 20 minutes to reach his home. At what time does he reach home?
Q.2 Identify the value of Y - X:
Q.3 What is the time depicted in the clock?
Q.4 The given table shows the amount of money that Harsha saved in her piggy bank each month. Find the total amount of money that she saved from January to May.
│ Month    │ Amount in a piggy bank in each month │
│ January  │ $1050                                │
│ February │ $810                                 │
│ March    │ $535                                 │
│ April    │ $900                                 │
│ May      │ $1175                                │
Q.5 Roger is a businessman. He invested in different firms: $3000 in an edutech firm, $2789 in marketing, $3675 in an IT firm, and $873 in a non-IT firm. In which firm did he invest the most?
Q.6 Deru has written some letters and wants to find the odd one out. Identify the odd one out.
Q.7 At the airport, Alen saw the time as 22:25 hours. Choose the correct representation of the given time in the following clocks:
Q.8 Ren gave $3400 for his college fee and was given $300 in return. What was the fee amount?
Q.9 On her birthday, Selina took 60 chocolates to school. Half of the chocolates were eaten by her classmates and one-fourth of the total chocolates were eaten by her friends of a different section.
How many chocolates does she still have?
Q.10 Roh and Sam have $120 and $135, respectively. Sem has $35 more than the sum of the money Roh and Sam have together. How much money does Sem have?
Answers to Previous Year Questions from CREST Olympiads:
Q.1 c Q.2 a Q.3 c Q.4 b Q.5 d Q.6 b Q.7 d Q.8 b Q.9 d Q.10 d
The triple Points of Phase Rule
The triple points:
P is the point at which the curves AP, PQ and PR meet, and the three phases rhombic, monoclinic and sulphur vapour co-exist. This is, therefore, a triple point. The temperature and pressure at this point are 95.5 °C and 1 x 10^-5 atmosphere respectively.
Q is a triple point where monoclinic, liquid and sulphur vapour co-exist, as the curves PQ, QB and QR meet. The temperature and pressure at this point are 119.3 °C and 6 x 10^-3 atmosphere respectively.
R is the point at which the curves PR, QR and RT meet. So this is the point in the diagram where rhombic, monoclinic and liquid sulphur co-exist. The temperature and pressure which define this point are 151 °C and 1290 atmospheres respectively.
S is the point at which the curves PS (extension of AP), QS (extension of BQ) and RS (extension of TR) meet. All three curves show metastable equilibria between the phases as indicated. This also is a triple point, at which rhombic, liquid and sulphur vapour can co-exist in a metastable equilibrium. The temperature and pressure of this metastable triple point are 112.8 °C and 0.028 mm of Hg respectively.
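The invariance of each of these points follows from the Gibbs phase rule: with one component (sulphur) and three co-existing phases there are no degrees of freedom, so each triple point is fixed at a single temperature and pressure:

```latex
F = C - P + 2 = 1 - 3 + 2 = 0
```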
It may be noticed that P and Q lie at pressures below atmospheric pressure. When rhombic sulphur is gradually heated at one atmosphere pressure it turns into the monoclinic form at around 115 °C, and if the heating is continued the monoclinic form starts liquefying once the temperature reaches the value indicated by M. When it is completely liquid, the temperature may be increased until the point N is reached. The liquid and vapour are in equilibrium at this temperature, which is the boiling point (444.6 °C) of sulphur at one atmosphere pressure.
state_infidelity (Q-CTRL Boulder Opal documentation)
Graph.state_infidelity(x, y, *, name=None)
Calculate the infidelity of two pure states.
• x (np.ndarray or Tensor) – A pure state, $|\psi\rangle$, with shape (..., D). Note that the last dimension must be the same as y, and the batch dimension, if any, must be broadcastable with y.
• y (np.ndarray or Tensor) – A pure state, $|\phi\rangle$, with shape (..., D). Note that the last dimension must be the same as x, and the batch dimension, if any, must be broadcastable with x.
• name (str or None , optional) – The name of the node.
Returns: The infidelity of two pure states, with shape (...).
Graph.density_matrix_infidelity : Infidelity between two density matrices.
Graph.inner_product : Inner product of two vectors.
Graph.unitary_infidelity : Infidelity between a unitary and target operators.
The infidelity of two pure states $|\psi\rangle$ and $|\phi\rangle$ is defined as $1 - | \langle \psi | \phi \rangle |^2$.
For more information about the state fidelity, see fidelity of quantum states on Wikipedia.
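For real-valued state vectors, the defining formula can be checked with a few lines of plain Python (a sketch of the mathematics only, not the Boulder Opal implementation, which also handles batching and complex amplitudes):

```python
def state_infidelity(psi, phi):
    # 1 - |<psi|phi>|^2 for real, normalized state vectors
    overlap = sum(a * b for a, b in zip(psi, phi))
    return 1.0 - abs(overlap) ** 2

print(state_infidelity([0.0, 1.0], [1.0, 0.0]))  # orthogonal states -> 1.0
print(state_infidelity([0.0, 1.0], [0.0, 1.0]))  # identical states  -> 0.0
```

These two cases match the documentation's own example output, array([1., 0.]).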
>>> graph.state_infidelity(
... np.array([0, 1]), np.array([[1, 0], [0, 1]]), name="infidelity"
... )
<Tensor: name="infidelity", operation_name="state_infidelity", shape=(2,)>
>>> result = bo.execute_graph(graph=graph, output_node_names="infidelity")
>>> result["output"]["infidelity"]["value"]
array([1., 0.])
Physics Colloquium
Complex Behaviour in Classical and Quantum Chaos
Prof. Dr. Marko Robnik
Center for Applied Mathematics and Theoretical Physics, University of Maribor
16:15 - 17:15Tuesday 01 October 2024
I shall first explain how chaotic behaviour can emerge in deterministic systems of classical dynamics. It is due to a sensitive dependence on initial conditions, meaning that two nearby initial states of a system evolve in time such that their positions (states) separate very quickly. After a finite time (the Lyapunov time) the accuracy of an orbit characterizing the state of the system is entirely lost, and the system can be in any allowed state. The system can also be ergodic, meaning that a single chaotic orbit describing the evolution of the system visits every neighbourhood of all other states of the system.
In the same sense, chaotic behaviour in time evolution does not exist in quantum mechanics. However, if we look at the structural and statistical properties of certain quantum systems, we do find
clear analogies and relationships with the structures of the corresponding classical systems. This is manifested in the eigenstates and energy spectra of various quantum systems (mesoscopic
solid-state systems, molecules, atoms, nuclei, elementary particles) and other wave systems (electromagnetic, acoustic, elastic, seismic, water surface waves etc.), which are observed in nature and
in experiments.
After the general presentation I shall discuss research results we have recently obtained for quantum chaos.
The Taub Faculty of Computer Science
The Taub Faculty of Computer Science Events and Talks
Ilan Orlov (Bar-Ilan University)
Wednesday, 27.04.2011, 12:30
Generating random bits is a fundamental problem in cryptography. Coin-tossing protocols, which generate a random bit with uniform distribution, are used as a building block in many cryptographic protocols. Cleve [STOC 1986] has shown that if at least half of the parties can be malicious, then, in any $r$-round coin-tossing protocol, the malicious parties can cause a bias of $\Omega(1/r)$ in
the bit that the honest parties output. However, for more than two decades the best known protocols had bias $t/\sqrt{r}$, where $t$ is the number of corrupted parties. Recently, in a surprising
result, Moran, Naor, and Segev [TCC 2009] have shown that there is an $r$-round two-party coin-tossing protocol with the optimal bias of $O(1/r)$. We extend Moran et al.'s results to the multiparty model where fewer than 2/3 of the parties are malicious. The bias of our protocol is proportional to $1/r$ and depends on the gap between the number of malicious parties and the number of honest
parties in the protocol. Specifically, for a constant number of parties or when the number of malicious parties is somewhat larger than half, we present an $r$-round $m$-party coin-tossing protocol
with optimal bias of $O(1/r)$.
Predicting Systematic Risk: Implications from Growth Options Eric Jacquier Sheridan Titman
Predicting Systematic Risk:
Implications from Growth Options
Eric Jacquier
Sheridan Titman
Atakan Yalçın∗
March 2010
Last version before publication
In accordance with the well-known financial leverage effect, decreases in stock prices cause an increase in
the levered equity beta for a given unlevered beta. However, as growth options are more volatile and have
higher risk than assets in place, a price decrease may decrease the unlevered equity beta via an operating
leverage effect. This is because price decreases are associated with a proportionately higher loss in growth
options than in assets in place. Most of the existing literature focuses on the financial leverage effect:
This paper examines both effects. We show, with a simple option pricing model, the opposing effects at
work when the firm is a portfolio of assets in place and growth options. Our empirical results show that,
contrary to common belief, the operating leverage effect largely dominates the financial leverage effect, even
for initially highly levered firms with presumably few growth options. We then link variations in betas to
measurable firm characteristics that proxy for the fraction of the firm invested in growth options. We show
that these proxies jointly predict a large fraction of future cross-sectional differences in betas. These results
have important implications on the predictability of equity betas, hence on empirical asset pricing and on
portfolio optimization that controls for systematic risk.
Jacquier, at CIRANO and HEC Montreal Finance Department, is the corresponding author. Titman is at the
College of Business Administration at University of Texas, Austin. Yalçın is at the Graduate School of Business, Koç
University. The paper has benefited from comments of Eric Ghysels, Edie Hotchkiss, Alan Marcus, Pegaret Pichler,
Peter Schotman, Eric Renault, and the participants of the Imperial College Financial Econometrics Conference and
the Stockholm School of Economics Finance seminar. We are especially grateful to the referee for many insightful comments.
The measurement of systematic risk (beta) is essential for portfolio and risk management,
as well as for joint tests of asset pricing models and market efficiency. Consider, for example,
DeBondt and Thaler (1985, 1987) who show that losers perform very well over the next three-to-five-year period. They conclude that these reversals support the hypothesis that stock prices tend
to overreact to information. However, the well-known financial leverage hypothesis in Hamada
(1972) and Rubinstein (1974) implies that losers, experiencing an increase in financial leverage,
should also have increased levered equity beta. Indeed, Chan (1988) and Ball and Kothari (1989)
argue that the higher subsequent returns of the losers simply reflect higher expected returns, due
to this increase in systematic risk. However, this conclusion may change drastically if the betas of
the losers have not changed, or even decreased. This example shows that to analyze performance,
one must first agree on risk. For losers and winners, one should agree whether risk went up or
down, before concluding for or against market efficiency.
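The financial leverage hypothesis referenced above is usually written in the Hamada form (shown here for context; the paper itself does not reproduce the formula — τ is the corporate tax rate and D/E the debt-to-equity ratio):

```latex
\beta_L = \beta_U \left[ 1 + (1 - \tau)\,\frac{D}{E} \right]
```

A price decline raises D/E, so for a fixed unlevered beta $\beta_U$ the levered equity beta $\beta_L$ mechanically rises.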
This paper documents, with a simple model and empirically, the determinants of the variation of systematic risk. The empirical literature discusses financial leverage more than operating leverage, even though the latter received some early attention (see Rosenberg and McKibben (1973) and Rosenberg (1974, 1985)). Another segment of the literature mostly uses simple time series methods, such as rolling windows or univariate filters, to predict betas. In contrast, we show that operating leverage has a strong impact on betas, and that accounting for its variation can markedly improve their estimation. For example, we show that for a wide range of initial financial and operating conditions, the equity betas of the losers decrease. Therefore, an operating leverage effect counters and dominates the financial leverage effect. This possibility is not explored in the empirical literature, despite the intense debate on the nature of the well-known reversal results.
We illustrate the impact of operating leverage on betas with a simple model in which a
firm with no debt has both growth options and assets in place. Growth options require more
future discretionary investment expenditures than assets in place, and are akin to out-of-the-money
options, while assets in place are in-the-money options. It is easily shown that the betas of both
options increase as moneyness decreases. Therefore, as is the case with the financial leverage effect,
by this “moneyness effect”, negative returns are accompanied by increases in beta. However, we
also show that since the firm is a portfolio of options of different moneyness, a second, opposite
effect is at play. As moneyness decreases, options more out-of-the-money lose more value than
options more in-the-money. The beta of the firm is a value-weighted average of the two betas,
growth options, and assets in place. Therefore, the beta of the firm tilts toward the lower of the
two betas, and can actually decrease even though both betas increase. We denote this the “change-in-mix effect”. Our model shows that for a central range of firm value weights in growth options,
the change-in-mix effect dominates the moneyness effect.
The financial leverage effect does offset the change-in-mix effect for levered betas. However,
a lot of empirical evidence is at odds with the financial leverage hypothesis. For example, the
equity betas of financially distressed firms decline as their condition deteriorates even though their
unsystematic risk and total risk increase, (see Aharony, Jones, and Swary (1980) and Altman and
Brenner (1981)).1 Braun et al. (1995) incorporate financial leverage in a time series model for
betas, without much success.
Our empirical analysis first seeks to discover how general the dominance of the change-in-mix effect over the combined moneyness and financial leverage effects is. We measure changes in
levered betas for losers and winners, and imply out the unlevered beta by also measuring changes
in financial leverage. The patterns are so strong that we do not need to assume a functional
levering relationship to infer the sign of change in unlevered beta. The levered equity betas of
losers decrease and those of winners increase, which is consistent with a dominance of the change-in-mix effect over both moneyness and leverage. The result is quite robust to the initial, low or
high, financial leverage; losers with high initial debt still experience subsequent decreases in betas.
We conduct a similar analysis on industries with initially low and high growth options, to test
the robustness of the change-in-mix effect to initial operating leverage. While somewhat weaker,
1 They attribute the decline in betas to possible decreases in the systematic risk of earnings, but do not explain why
this may happen. The empirical corporate finance literature also fails to vindicate the financial leverage hypothesis.
The equity betas of firms that act to increase their financial leverage do not increase, see e.g., Healy and Palepu
(1990), Dann, Masulis, and Mayers (1991), Denis and Kadlec (1994), or Kaplan and Stein (1990).
the change-in-mix effect often dominates the moneyness and the financial leverage effects. For this
analysis, we need proxies for the weight of the firm’s value in growth options. We use the ratios of market to book value of assets, earnings over price, and capital expenditures to assets, as well as the dividend yield. These proxies have theoretical justification and have been studied in the literature. With
these proxies, we confirm that the change-in-mix effect dominates both the moneyness and financial
leverage effects. Specifically, levered betas are positively related to growth options as measured by
the proxy. Finally, we show, with cross-sectional and panel regressions, that the growth option
proxies are reliable predictors of the cross-section of equity betas.
The paper proceeds as follows: Section 2 develops a simple option pricing model to illustrate
the possible links between stock returns and betas; Section 3 describes the data, proxies, and the
methodology; Section 4 contains the empirical results and analysis; and Section 5 concludes.
Growth Options and Unlevered Equity Betas
The simple option-pricing model below shows how unlevered betas may respond to changes
in firm value. In the model, a firm has assets already in place and future growth options predicated
on further discretionary investments. As the firm is not obligated to make the added investments,
a growth option is viewed as a call option to acquire an asset at an exercise price equal to the
added investment. Assets in place are far in-the-money options. Myers (1977) describes the firm
as a portfolio of assets in place and growth options, a distinction more of degree than kind. All
assets have a varying fraction of their value attributed to call options from discretionary decisions.
Before proceeding, we need to establish the relation between the beta of an option and
moneyness. Consider an option G to undertake a project at an investment cost (exercise price) CG .
The project has an underlying value s and a beta βs . Galai and Masulis (1976) show that the beta
of the option is βG = ηG βs , where ηG is the elasticity of the option. We show in the appendix that
ηG decreases as moneyness
increases. Therefore, for two options A and G on s, with A deeper
in the money than G, we have βG > βA . Growth options have higher betas than assets in place.2
2 Skinner (1993) reports that firms with relatively more growth options tend to have higher asset betas.
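The appendix result, that option elasticity falls as moneyness rises, can be checked numerically for Black-Scholes calls. The sketch below assumes a zero rate and total variance σ²T = 1, the same setting as the numerical example later in this section; the function names are ours, not the paper's.

```python
from math import log, sqrt, erf

def N(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def call_value_and_elasticity(s, k, total_var=1.0):
    """Black-Scholes call with zero rate; total_var = sigma^2 * T.
    Returns (value, elasticity eta = s * delta / value)."""
    v = sqrt(total_var)
    d1 = (log(s / k) + 0.5 * total_var) / v
    c = s * N(d1) - k * N(d1 - v)
    return c, s * N(d1) / c

# Elasticity falls as moneyness s/K rises: deep out-of-the-money options
# (growth options) have a higher elasticity, hence a higher beta, than
# deep in-the-money options (assets in place).
etas = [call_value_and_elasticity(s, 100.0)[1] for s in (20.0, 50.0, 150.0)]
assert etas[0] > etas[1] > etas[2] > 1.0
```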
We now consider a firm with these two options; its value is V = VA + VG = ANA + GNG . A ≡ A(s), G ≡ G(s) are the option values of an asset in place and a growth option. A(s) is deep in-the-money with investment cost CA < s, G(s) is out-of-the-money with CG > s. NA , NG are
the numbers of options held by the firm. The state variable s simultaneously affects the moneyness
of both options. We can therefore distinguish between two types of news; first, possibly separate
news about the number of options NA , NG held by the firm, second, news about moneyness driven
by economy-wide state variables that jointly affect both options in the firm. The single underlying
state variable s reflects this joint effect. For example, s may be related to the price of oil while A
and G represent functioning oil wells and undeveloped oil property, or s could be the price of beef
and A and G could be McDonald’s operations in the US versus China.
More generally, one may think of a multivariate state vector s, of which not all elements affect
all the options.3 For our qualitative purpose, there would not be much gain to this generalization.
It is already embedded in the scale factors NG , NA , whose role here is not to count an actual number of options but to allow for independent news about each of the two options (see Section 2.1).
The unlevered beta of the firm is:

βV = (ANA βA + GNG βG )/V = (VA βA + VG βG )/V.   (1)
Separate news about assets in place and growth options
We first consider news about NG and NA , the scales of the growth options and assets in
place. From (1), it follows that:

∂βV /∂NG = (GVA /V²)(βG − βA ) > 0.   (2)

That is, an increase in NG causes an increase in the firm’s beta. The converse follows by swapping indices A and G in (2); an increase in NA causes a decrease in beta. Therefore, simultaneous news ∆NA , ∆NG
3 Berk, Green and Naik (1999) model the firm value in terms of fundamental state variables, where growth options
are explicit calls whose value is affected by state variables, as well as cash flows.
of the same sign have opposite effects on βV . Their combined effect is given by the total derivative:

dβV = [(βG − βA )/V²] (GVA ∆NG − AVG ∆NA ).

It follows that, for simultaneous increases in the numbers of options, dβV is positive if and only if:

GVA ∆NG > AVG ∆NA ,

or equivalently:

∆NG /∆NA > NG /NA .   (3)

That is, the ratio of the increases must be larger than the current ratio of growth options over assets in place. For simultaneous bad news, βV decreases if the inequalities in (3) are true.
So what can this tell us about betas of winners and losers? Analysts regularly revise their
assessment of NA and NG . Growth options are most likely harder to assess than assets in place.
Therefore, revisions of NG are likely of a larger magnitude than of NA , and, for revisions in the same
direction, condition (3) will hold. Consequently, we expect large negative returns to be followed
by decreases in betas. This is a first aspect of what we refer to as the “change-in-mix” effect. The empirical section will show that this is almost always the case.
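Condition (3) can be checked with a toy example, using equation (1) directly; the option values and betas below are illustrative numbers of ours, not from the paper.

```python
def firm_beta(a, g, n_a, n_g, beta_a, beta_g):
    """Value-weighted firm beta, eq. (1): (A*N_A*b_A + G*N_G*b_G) / V."""
    v = a * n_a + g * n_g
    return (a * n_a * beta_a + g * n_g * beta_g) / v

# Illustrative values: assets in place worth A = 10 per option with beta
# 1.2; growth options worth G = 1 with beta 3.
a, g, b_a, b_g = 10.0, 1.0, 1.2, 3.0
n_a, n_g = 10.0, 5.0
base = firm_beta(a, g, n_a, n_g, b_a, b_g)

# Good news on both scales with dNG/dNA = 2 > NG/NA = 0.5: beta rises.
up = firm_beta(a, g, n_a + 1.0, n_g + 2.0, b_a, b_g)
# Good news with dNG/dNA = 0.4 < NG/NA = 0.5: beta falls.
down = firm_beta(a, g, n_a + 1.0, n_g + 0.4, b_a, b_g)
assert up > base > down
```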
Common effect of news on the underlying asset
News about an underlying state variable can simultaneously affect both options. Consider
news about s, with NA and NG unchanged.4 This can be information about the price of oil, not the
amount of oil in the ground owned by Texaco. Or, the wholesale price of beef goes down, moving both US and China operations for McDonald’s further in the money. Good news about s increases
both A and G, as they both move more into the money. Then, as we already know, both βA and
βG decrease. Yet, we now show that this does not always mean that βV in (1) decreases. This is
4 We model a single firm. A large change in s for a large fraction of the firms in the economy may lead them
to redirect their efforts toward different growth options, in turn possibly affecting NA , NG . A full multi-period
equilibrium model of the firm would have to account for this.
because A and G increase by different amounts, which affects their weight in V . We compute:

dβV /ds = (GNG /V ) βG′ + (ANA /V ) βA′ + (NG G′/V )(βG − βV ) + (NA A′/V )(βA − βV ),

where G′, A′, βG′ , βA′ are derivatives with respect to s. Recall that βi = ηi βs for i equal to G or A. Replace βV with (1), and it can be shown that:

(V /βs ) dβV /ds = GNG ηG′ + ANA ηA′ + [NA NG A′ G′ s/(ηA ηG V )] [ηG − ηA ]²   (4)
The first two terms are negative, the third is positive. A numerical example now shows that
their sum can be positive for a central range of moneyness s/CG and firm weight in growth options
VG /V . We assume CA = 1, CG = 100, s ∈ [2, 90], and, without loss of generality, NA = 1, a risk-free
rate of zero and a variance to maturity σ 2 T = 1. For this range of s, the asset in place is far into
the money and the growth option deep out of the money. The weight of growth options VG /V also
matters. To cover a wide range we use three values of NG = 5, 10, 20. The bottom panel in Figure
1 plots the weight of the firm in growth options, VG /V versus moneyness s/CG . The three curves,
for NG = 5, 10, 20 confirm that these values of NG span a wide range for VG /V . For example, for a
moneyness s/CG of 20%, VG /V varies from 20% to 60%. The key is that this wide range of values
has no effect on the pattern uncovered by the top plots, βV versus s/CG .
Figure 1 here
The three curves in the top panel have the same shape, with only minor variations in the
location of the minimum and maximum. There are three regions. First, on the left, the growth
options are very far out of the money, hence worth very little. There, an increase in s decreases βV .
Second, on the right, where the growth options are getting closer to the money and make a larger
fraction VG /V of the firm, an increase in s also decreases βV . In both these regions, the firm is
approximately homogeneous in the type of options held, nearly all in-the-money or nearly all out-of-the-money.
That is, the firm is approximately a one-option firm, the changes in weights VG /V, VA /V do not
play a role and the moneyness effect dominates. Hence, an increase in s increases moneyness, which
causes the firm’s beta to decrease. In the central region however, the firm is the most heterogeneous
in G and A. So, even though both βG , βA decrease as s increases, the weight VG /V increases enough
so that βV increases.
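The three-region pattern in the top panel of Figure 1 can be reproduced with a short numerical sketch under the stated assumptions (Black-Scholes option values, zero rate, σ²T = 1, CA = 1, CG = 100, NA = 1, βs normalized to 1); the particular test points are ours.

```python
from math import log, sqrt, erf

def N(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def call(s, k):
    """Black-Scholes call, zero rate, total variance sigma^2 * T = 1."""
    d1 = log(s / k) + 0.5
    return s * N(d1) - k * N(d1 - 1.0)

def elasticity(s, k):
    d1 = log(s / k) + 0.5
    return s * N(d1) / call(s, k)

def firm_beta(s, n_g, c_a=1.0, c_g=100.0, n_a=1.0, beta_s=1.0):
    """Unlevered firm beta, eq. (1), with beta_i = eta_i * beta_s."""
    v_a, v_g = n_a * call(s, c_a), n_g * call(s, c_g)
    b_a, b_g = elasticity(s, c_a) * beta_s, elasticity(s, c_g) * beta_s
    return (v_a * b_a + v_g * b_g) / (v_a + v_g)

# Three regions of the top panel: beta_V falls on the left (s: 2 -> 5),
# rises in the central region (5 -> 20 -> 60), falls again on the right
# (60 -> 90), even though each individual option beta falls throughout.
b = {s: firm_beta(s, n_g=10.0) for s in (2.0, 5.0, 20.0, 60.0, 90.0)}
assert b[2.0] > b[5.0] < b[20.0] < b[60.0] > b[90.0]
```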
Implications for empirical analysis
The general implication of this model is that unlevered equity betas are more likely to be
positively, rather than negatively, correlated with firm value. We argued this both for changes in value driven by news about the scales NG , NA , and for news about s that simultaneously affects both option
values. Our empirical analysis will not separate news on scales from news on s, or the moneyness
from the change-in-mix effects. Rather, it will document the dominance of the positive relationship
between betas and returns (due to change-in-mix) over the negative relationship (due to financial
leverage and moneyness).
We sort firms by performance deciles and show that, indeed, the unlevered betas of the
winners (losers) increase (decrease), consistent with this positive association. The model in section
2 does not incorporate financial leverage, which creates a negative association between returns and
levered betas, and we only observe levered betas. But the observation of the levered betas and the
debt-to-equity ratio will allow us to imply out the direction of the change in unlevered betas for
losers and winners. This will allow us to conclude whether the change-in-mix effect dominates the
moneyness effect. We will also document the robustness of the positive relationship between beta
and firm value for subgroups of firms with initially high and low debt. The results depend on the
initial weight in growth options, central versus lateral regions of Figure 1. We will use industries
with high and low growth opportunities to explore these lateral regions of Figure 1.
Even this simple model shows that the relationship between unlevered beta and firm value
may be inverse for extreme cases of initial weight in growth option. These are the left and right
regions in the top plot of Figure 1. This begs the question of how wide the central region is in
reality. To check this we study the relationship between beta and performance for industries argued
to have high or low weight in growth opportunities. To characterize these industries as high or low
growth, we introduce proxies for growth opportunities. We then directly check the cross-sectional
relationship between these proxies and levered and unlevered betas.
Returns Data and Proxies for Growth Options
We collect monthly stock returns from CRSP, and annual financial statement data from the
merged CRSP-Compustat database. Accounting data for fiscal years ending in calendar year t-1
are merged with monthly stock returns from July of year t to June of year t+1. That is, all our
sorting presented in the empirical section is done from July to June. Our selection criteria and
construction of firm-specific variables follow Fama and French (2001). We use NYSE, AMEX, and
NASDAQ stocks with CRSP codes of 10 or 11, from June 1965 to June 2007. We exclude utilities,
financial firms, and firms with book value of equity below $250K or assets below $500K. We use a
firm’s market capitalization at the end of June of year t to calculate its book-to-market, leverage
and earnings-to-price ratios, and dividend yield for that year.
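The Fama-French timing convention described above can be made concrete with a small helper (a sketch; the function name is ours):

```python
def matched_return_months(fiscal_year_end_year):
    """Fama-French timing: accounting data for fiscal years ending in
    calendar year t-1 are matched with monthly returns from July of
    year t to June of year t+1. Returns (year, month) pairs."""
    t = fiscal_year_end_year + 1
    return [(t, m) for m in range(7, 13)] + [(t + 1, m) for m in range(1, 7)]

# Accounting data for fiscal 1999 drive the sorts over July 2000 - June 2001.
m = matched_return_months(1999)
assert m[0] == (2000, 7) and m[-1] == (2001, 6) and len(m) == 12
```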
As the weights of firms in growth options and assets in place are not observable, it is common
practice to use proxy variables. These proxies all have some theoretical justification, and a number
of studies evaluate their performance. For example, Goyal, Lehn and Racic (2002) show, with
U.S. defense industry firms, that proxy variables track changes in investment opportunities. Any
given proxy will potentially fail to measure the full extent of the investment opportunity set, and
has its advantages and disadvantages. Erickson and Whited (2006) conclude that all the proxies
for Tobin’s q which they examine contain significant measurement errors. We use several proxies
motivated in the literature and again below, each with its qualities and limitations.
Our first proxy for growth options is the ratio of market to book value of assets, Mba.
The book value of assets is a proxy for assets in place. The market value of assets is a proxy for
the sum of assets in place and growth options. The higher Mba is, the higher the proportion of
growth options to firm value. As in Fama and French (2001), we define the market value of assets
as the book value of assets minus the book value of equity plus the market value of equity. Adam
and Goyal (2008) show that, among common proxies, the Mba ratio has the highest information
content with respect to investment opportunities and is least affected by other confounding factors.
It is also close to the Market to Book value of equity, often found to be a strong predictor of the
cross-section of returns, since Fama and French (1992). Mba is similar to the reciprocal of the ratio
of book value of assets to total firm value used by Smith and Watts (1992).
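As a small sketch of the Mba construction just described (variable names are ours):

```python
def market_to_book_assets(book_assets, book_equity, market_equity):
    """Mba proxy, following Fama and French (2001): market value of assets
    = book assets - book equity + market equity, scaled by book assets."""
    return (book_assets - book_equity + market_equity) / book_assets

# A firm whose equity trades at twice its book value:
assert market_to_book_assets(100.0, 40.0, 80.0) == 1.4
```

The higher the ratio, the larger the inferred weight of growth options in firm value.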
The second proxy is the ratio of earnings-to-price, Ep, used for example by Smith and Watts
(1992). We use the earnings before extraordinary items minus preferred dividends plus income-statement deferred taxes if available. The larger Ep is, the larger the proportion of equity value
attributable to earnings generated from assets in place rather than growth options. As this is only
valid for firms with non-negative earnings, we allocate firms with negative earnings to a separate
group when we sort by Ep. Ep and Mba are growth measures often used in the literature.
The third proxy is the dividend yield, Div. Dividends are linked to investment through the
firm’s cash flow identity. Jensen (1986) argues that firms with more growth options have lower
free cash flows and pay lower dividends. When sorting firms based on the dividend yield, we put
zero-dividend firms in a separate group.
The last proxy is the ratio of capital expenditures to net fixed assets, Capex. Capital
expenditures are mostly seen as discretionary investment decisions. The higher capital expenditures
are, the greater the investment made by a firm to create new products, and in turn the greater the
growth options. However, capital expenditures are a pure accounting measure, and may be lumpy.
Adam and Goyal (2008) suggest that Capex alone may not be a very good proxy for investment opportunities.
We also collect the debt-to-equity ratio, Dtoe. According to contracting theory, firms
with high growth options may have lower financial leverage because equity financing controls the
potential under-investment problem associated with risky debt, see Myers (1977). Here we do not
use Dtoe as a proxy for growth options, but it allows us to infer changes in unlevered betas from
changes in the measured equity betas. We will be able to derive unambiguous conclusions without
resorting to specific levering formulas.
In summary, firms with high growth options should have higher Mba and Capex, and lower Ep and Div. Table 1 reports the aggregate values of the proxies for key years from 1965 to 2007, as well
as summary statistics. To compute portfolio proxies, we sum the numerators and the denominators
separately across firms. Capex ranges between 15 and 23% with quickly decaying autocorrelations.
Mba is more persistent, and has a much larger coefficient of variation. From a low of 98% in 1982,
it rises to 254% before the 2001 crash. The dividend yield decreases steadily through the period, as
discussed by Fama and French (2001). It is around 1.3% in 2007, from a high of 5% in 1982. Ep is
at 4.8% in 2007, it has increased since its low of 1.2% in 2002. As expected, Dtoe varies a lot with
the state of the economy and can jump dramatically during a crash year. It averages 33% for the
period. Even in the aggregate, these proxies have high coefficients of variation despite their high persistence.
Table 1 here.
Empirical Results
The betas of winners and losers
Every year, from June 1971 to June 2004, we assign firms to deciles on the basis of their total return over the past 3 years. We exclude firms with missing monthly returns or growth proxies in any
period, or with share prices below $1 at the end of any period to remove illiquid stocks. We lost
very few firms due to missing accounting data. This 3-year “ranking” period and the surrounding
3-year pre-ranking and post-ranking periods are labeled 0, -1, and 1 in the tables. We have 34 such
windows of 9 years separated in 3 periods. Statistics are computed, and then averaged over these
overlapping windows, separately for period -1, 0, and 1. We compute the value-weighted monthly
returns of the ten decile portfolios. We estimate the value-weighted beta βV W of each portfolio
with the standard market model regression in excess-returns form. We use the value-weighted
index of all NYSE, AMEX, and NASDAQ stocks as the market index. We subtract the 30-day
U.S. Treasury bill to compute excess returns. We also compute betas for the 4-factor model with
the usual regression in excess returns:
rit = αi + βi,mkt rmkt,t + βi,smb rsmb,t + βi,hml rhml,t + βi,umd rumd,t + εit ,   t = 1, ..., 36,   (5)

where rmkt , rsmb , rhml and rumd are the returns on the market, size, book-to-market, and momentum factors.
Finally, we compute the aggregate Dtoe for each decile, for periods -1, 0, 1.
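The four-factor betas can be estimated by ordinary least squares on the 36 monthly observations; a minimal sketch on simulated data (the factor draws, loadings, and noise level are illustrative, not from the paper):

```python
import numpy as np

def four_factor_betas(excess_ret, mkt, smb, hml, umd):
    """OLS estimates of (alpha, b_mkt, b_smb, b_hml, b_umd) for the
    36-month excess-return regression in the text."""
    X = np.column_stack([np.ones_like(mkt), mkt, smb, hml, umd])
    coefs, *_ = np.linalg.lstsq(X, excess_ret, rcond=None)
    return coefs

# Simulated 36 months with known loadings (illustrative only).
rng = np.random.default_rng(0)
mkt, smb, hml, umd = rng.normal(0, 0.04, (4, 36))
r = 1.2 * mkt + 0.3 * smb - 0.5 * hml + 0.1 * umd + rng.normal(0, 0.005, 36)
alpha, b_mkt, b_smb, b_hml, b_umd = four_factor_betas(r, mkt, smb, hml, umd)
assert abs(b_mkt - 1.2) < 0.15 and abs(b_hml + 0.5) < 0.15
```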
We report averages; an analysis with medians gave similar results. We concentrate on
changes in beta from period -1 to period 1, rather than 0 to 1, because it has been noted that the
estimates of beta in period 0 could be biased if the ranking was done along a variable correlated
with beta. Estimates of betas in period -1 are not related to the sorting effected in period 0.
We also compute t-statistics for the null hypothesis that the mean difference in beta between
periods -1 and 1 is zero. The basic OLS standard errors assume i.i.d. data and are therefore
biased for our overlapping windows. This two-year overlap comes from the fact that the betas are estimated annually from three-year windows, which induces autocorrelation in the time series of β̂. Consequently, we compute heteroskedasticity and autocorrelation consistent (HAC) standard errors based on two lags of autocorrelation for the time series of 34 estimates β̂1 − β̂−1 .
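A Newey-West style HAC standard error for the mean of the beta-change series, with Bartlett weights and two lags as in the text, can be sketched as follows; the simulated series below only mimics the overlap-induced autocorrelation and is not the paper's data:

```python
import numpy as np

def hac_se_of_mean(x, lags=2):
    """Newey-West (Bartlett-weighted) standard error for the mean of a
    time series; lags=2 matches the two-year overlap in the text."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    e = x - x.mean()
    var = e @ e / n
    for l in range(1, lags + 1):
        w = 1.0 - l / (lags + 1.0)          # Bartlett weight
        var += 2.0 * w * (e[l:] @ e[:-l]) / n
    return np.sqrt(var / n)

# MA(2)-type dependence, as induced by three-year betas estimated annually,
# inflates the HAC standard error relative to the naive i.i.d. one.
rng = np.random.default_rng(1)
z = rng.normal(size=200)
x = z[2:] + z[1:-1] + z[:-2]                # two lags of autocorrelation
iid_se = x.std(ddof=1) / np.sqrt(len(x))
assert hac_se_of_mean(x, lags=2) > iid_se
```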
Table 2, panel A reports these estimates. Consider the total returns for periods -1, 0, 1;
the typical loser decile experiences a 53% cumulative loss over the ranking period while the winner
decile records a 264% return. In the following period 1, their returns are similar, 48% for the loser
and 40% for the winner. Period 1 does not show any strong pattern in returns for the 10 deciles.
Table 2 here
Consider now βvw : the loser β drops from 1.35 in period -1 to 1.26 in period 1 while the
winner β increases from 1.21 to 1.27. To infer the changes on the unlevered betas, we look at the
corresponding Dtoe ratios. As the β of the losers decreases, their Dtoe more than doubles, from
25% in period -1 to 58% in period 1. The Dtoe of winners is halved, from 46% to 23%, as their
β increases. Consequently, we do not need a specific “unlevering” formula to draw unambiguous
inference about the direction of the changes in unlevered betas. A specific formula would allow us to quantify the magnitude of the change, but the result would only be as valid as the hypotheses behind the formula. The changes in the levered betas of losers and winners contradict the predictions of the
financial leverage hypothesis. If unlevered betas were constant, the loser (winner) levered betas
should have increased (decreased) due to their large changes in financial leverage. Therefore, we
conclude from Table 2A that a massive change in unlevered betas has taken place, in the opposite
direction. This is consistent with large losses of growth options for the losers, implying a drop in
their unlevered asset betas. Similarly, the winners exhibit large increases in their unlevered beta,
consistent with gains in growth options.
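To see why no specific unlevering formula is needed for the direction of the change, take one common rule (riskless debt, no taxes; an assumption for illustration, not the paper's method) and plug in the Table 2 numbers quoted above:

```python
def unlevered_beta(levered_beta, dtoe):
    """One common unlevering rule, assuming riskless debt and no taxes:
    beta_U = beta_L / (1 + D/E). The paper's point is that the direction
    of the change does not hinge on which formula is used."""
    return levered_beta / (1.0 + dtoe)

# Loser decile, Table 2: levered beta 1.35 -> 1.26 while Dtoe 25% -> 58%.
loser_before = unlevered_beta(1.35, 0.25)
loser_after = unlevered_beta(1.26, 0.58)
# Winner decile: levered beta 1.21 -> 1.27 while Dtoe 46% -> 23%.
winner_before = unlevered_beta(1.21, 0.46)
winner_after = unlevered_beta(1.27, 0.23)
assert loser_after < loser_before and winner_after > winner_before
```

Under this rule the losers' unlevered beta falls from about 1.08 to 0.80 and the winners' rises from about 0.83 to 1.03; any monotone unlevering formula gives the same signs.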
These results show that the change-in-mix effect dominates both the moneyness and the
financial leverage effect. If returns follow a factor model as in Fama-French (1992) or (5), βvw gives
an incomplete description of systematic risk. Consequently, Table 2 reports on the estimates of
βmkt , βhml from (5). We do not report the size beta which is not clearly related to growth options.
Small and young firms may be growth oriented. However, firms that have lost growth options may
be smaller than average. We do not report on the momentum beta.
The HML factor portfolio is the difference between returns on high and low book to market
equity stocks. Firms with higher growth options should have a lower βhml . We can use βhml to
verify if losers (winners) have lost (gained) growth options. Table 2B shows that βhml has an
inverted U-shape pattern across deciles in period 0, and in period -1. Both winners and losers
initially had lower βhml than average. This means that extreme performers come from firms with
high growth options. Their change in βhml from period -1 to period 1 is then consistent with
the expected changes in operating leverage. The losing decile βhml increases from -0.61 to -0.07;
consistent with losses in growth options. The winner βhml decreases from -0.23 to -0.58, consistent
with gains in growth options. In summary, the evolution of βhml shows that the losers are firms
that fail to realize an initially high growth potential, while the winners are already growth oriented
firms that further increase their growth options.
We now turn to the market beta βmkt , which can be used as a robustness check on βvw , to
the extent that the four-factor model is more robust than the one-factor model. Table 2B shows
that βmkt is stable from period -1 to period 1, for both losers and winners, with a slight U-shaped
distribution across the performance deciles. Both winners and losers have larger market betas than
the average firm. Due to the change in Dtoe, we conclude that, as with βvw , the unlevered market
beta of losers (winners) has decreased (increased). The results are not as strong as with βvw , but
we can still conclude that the change-in-mix effect largely dominates the moneyness effect.
To conclude, we note that the total returns of losers or winners in the pre- or post-ranking
periods are nothing out of the ordinary, unlike their returns in the ranking-period. The Dtoe ratio
in the ranking period overstates the long-term change in financial leverage for extreme performers.
The financial leverage of losers rises over time, but not as much as suggested in the ranking period.
Robustness to extreme initial financial leverage
The positive relationship between stock returns and unlevered betas is consistent with the
change-in-mix effect dominating both the moneyness and the financial leverage effects. It is also
consistent with the range of growth options in the middle region of Figure 1. We now ask how
robust this result is to extreme initial growth options, the left and right regions of Figure 1. The
financial leverage effect should be the strongest for the most highly levered firms, since contracting
theory implies that such firms have low growth options. If these firms indeed have a low weight in
growth options, they may be in the left region of Figure 1. There, the change-in-mix effect may
be weaker, letting the inverse relationship between stock returns and betas appear. In brief, firms
with very few growth options do not have much more to lose, and the moneyness and financial
leverage effects may dominate.
We allocate stocks to three groups in period -1, along the 30th and 70th percentiles of Dtoe.
We then form period 0 total return deciles within each Dtoe group, and perform the previous
empirical analysis for these groups. Table 3 reports average returns, betas, and Dtoe for the
winners and losers, deciles 1 and 10, among low and high Dtoe. We do not show the middle Dtoe
group; as expected, its results are consistent with Table 2.
Table 3 here
The first two rows in each panel report on the groups with low initial debt. Their Dtoe
shows that they are initially quasi debt-free. The loser βvw decreases again, from 1.48 in period -1
to 1.29 in period 1, βmkt decreases as well. The increase in Dtoe from 3% to 27% again implies that
unlevered betas have decreased by large amounts. The loser βhml increases from -0.95 to -0.39, also
consistent with a decrease in weight in growth options. The winners’ βvw hardly changes, their βmkt
drops slightly from 1.04 to 0.96, and their Dtoe also hardly changes, from 5% to 7%. Consequently,
the unlevered betas of the winners may have slightly decreased, if at all. This is consistent with
a firm in the right region of Figure 1, where an increase in operating leverage comes with a small
decrease in β. It may also be that for firms with an already very high weight in growth options,
positive news is unlikely to be related to a further increase in this weight. Indeed the low-debt
winners have a βhml of -0.62 in period -1, already lower than the average winner βhml in period 1
of -0.58 seen in Table 2. These results show that, while we do detect an effect similar to the right
region in Figure 1, it is very weak.
We now turn to losers and winners with high initial Dtoe, 117% and 141%, respectively. For
the winners, βvw increases slightly and Dtoe decreases drastically, from 141% to 53%. This implies
that the winners’ unlevered betas must have increased, consistent with a dominating change-in-mix
effect. The same conclusion follows from the decrease in βhml from 0.17 to -0.08, which implies an
increase in the weight of growth options. Winners with high initial financial leverage exhibit the
same positive relationship between returns and unlevered betas as the general group. For the losers
with high initial debt, βvw decreases from 1.34 to 1.27, while Dtoe increases from 117% to 147%.
This implies an even stronger decrease in unlevered beta, consistent with a loss of growth options
and a dominance of the change-in-mix effect. The large increase in βhml from 0.07 to 0.43 confirms
this. So, even firms with initially few growth options, which then suffer a further reduction of their
value, exhibit a dominating change-in-mix effect. This would imply that the central region of the
top plot of Figure 1 extends far to the left.
Robustness to initial operating leverage, industry analysis
We continue to investigate the robustness of the dominance of the change-in-mix effect.
Consider firms with few growth options which incur losses: Does their unlevered beta still decrease
markedly? Or, when firms with an initially high weight in growth options incur large wins, does
their beta increase? In effect, we are still assessing the extent of the central region of Figure 1. To
do this, we now use the growth option proxies.
We use Ken French’s 30-industry classification; we obtained similar results with other industry groupings. We select 3 industries with the most and 3 with the least growth options.
compute each year the average aggregate value of the growth option proxies for each industry. We
assign to each industry a rank for each proxy. We then build two indices: the first is equal to the average rank over the 4 proxies, and the second is equal to the average rank over Mba and Capex. We
consider only Mba and Capex because of their documented strength as proxies for growth options.
These two indices rank the industries quite consistently. We selected Business Equipment,
Services, and Health Care as high growth industries, and Steel, Automobile, and Oil as low growth.
The textile industry was ranked as low growth more often than the oil industry. We did not use it
because its small number of firms made it difficult to break it down into performance groups.
We analyze these 6 industries as in Table 2. However, because of the smaller number of
firms in each industry, we use the top and bottom quartiles of 3-year returns to select winners and
losers. Then, as in Table 2, we allocate the firms of an industry into winner and loser portfolios
during period 0. Table 4 reports the 3-year return, betas, and Dtoe over periods -1, 0, and 1. First,
consider βhml in period -1 shown in panel B. As they should, our 3 low (high) growth industries
have strong positive (negative) βhml . The spreads of returns in period 0, panel A, also show that
high growth is associated with more volatile returns than low growth. Initial financial leverage is also lower for the high growth industries than for the low growth ones.
Table 4 here
Consider first the winning low-growth firms. Their levered βvw ’s all increase, and their Dtoe’s decrease. Therefore, their unlevered betas increase, which is consistent with a dominating change-in-mix effect. We cannot conclude as clearly for the Automobile firms, which manage to increase their debt after winning periods. βhml , however, decreases for all three industries, consistent with increased weights in growth options. But it is not surprising that, for firms with an initially low weight in growth options, large positive returns are consistent with increases in that weight.
Now consider the low growth firms that experience large losses. Two of these industries see
their levered beta decrease or remain basically constant, while their Dtoe increases. This again
implies a decreased unlevered beta, and a dominating change-in-mix effect. We cannot conclude
unequivocally for the Steel industry. But, with a levered βvw rising 7.5% from 1.19 to 1.28 while the
Dtoe jumps from 56% to 91%, it is most likely that the unlevered beta went down. In brief, Table
4 confirms that losing firms with an initially low weight in growth options experience a further decrease in growth options; the rise in all their βhml ’s confirms this. The change-in-mix effect still dominates the moneyness effect, but is not always strong enough to overcome financial leverage.
Firms with an initially high weight in growth options (Health, Services, and Business Equipment) see their levered beta decrease after losses. This, together with the rise in their Dtoe,
implies unequivocally that their unlevered beta has decreased by an even larger amount. This is
not surprising, since they had plenty of growth options to lose. More interesting is what happens
to the high growth firms after large positive returns. Business Equipment sees its levered beta rise after wins. As its Dtoe also falls by more than 50%, it is certain that its unlevered beta
increased a lot, consistent with a dominating change-in-mix effect. However, the levered betas of
the Health and Services industries decrease by 11% and 6%, while their Dtoe’s experience large
decreases of 50% and 14%. So, while again not absolutely certain, it is extremely likely that their unlevered betas have gone up. This is consistent with a further increase in their weight in growth options, confirmed by the decrease in their βhml . While the change-in-mix effect does not always overcome the financial leverage effect, it still appears to dominate the moneyness effect.
In summary, the effects uncovered in Table 2 are robust to initial operating leverage. The
change-in-mix effect, although sometimes not strong enough to overcome the financial leverage
effect, still dominates the moneyness effect.
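The directional inferences above combine the levered beta with Dtoe. The paper does not commit to a specific levering formula, but as a sketch, Hamada's (1972) relationship (here in its no-tax form, an assumption) can be used to check a case like the Steel losers numerically:

```python
def unlevered_beta(beta_levered, dtoe, tax_rate=0.0):
    """Hamada (1972) unlevering: beta_U = beta_L / (1 + (1 - tax) * D/E).

    `dtoe` is the debt-to-equity ratio as a fraction (0.56 for 56%). The
    tax-free case (tax_rate=0) is enough for directional inferences.
    """
    return beta_levered / (1.0 + (1.0 - tax_rate) * dtoe)

# Steel losers in Table 4: levered beta rises from 1.19 to 1.28
# while Dtoe jumps from 56% to 91%.
before = unlevered_beta(1.19, 0.56)   # ~0.76
after = unlevered_beta(1.28, 0.91)    # ~0.67
assert after < before  # the unlevered beta fell despite the higher levered beta
```

Under this (assumed) relationship, the 7.5% rise in the levered beta is more than offset by the jump in leverage, consistent with the conclusion in the text.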
Betas and growth options
These results are consistent with the hypotheses that gains and losses of firms are generally
driven by a strong change-in-mix effect which in most situations dominates both the opposing
moneyness and financial leverage effects. Also, growth options have higher betas and more volatile
returns than assets in place. The previous analysis was akin to a time series study of changes in
growth options. We now document the cross-sectional link between betas and proxies for growth
options. This will also help gauge the quality of these variables as proxies of growth options.
At the end of every June from 1968 to 2004, we allocate firms to increasing growth option
deciles on the basis of increasing Capex and Mba, and decreasing Div and Ep. Firms with zero
dividends or non-positive earnings are grouped into an extra 11th portfolio. For the following 3-year
period, we compute for each group, its aggregate proxy value and Dtoe at the end of the period,
and its βvw and the four-factor model betas. Table 5 reports the average of these statistics over
the 37 windows from 1968-1971 to 2004-2007.
Table 5 here
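The decile allocation can be sketched as follows. The DataFrame layout and the column name (`ep`) are illustrative, not the paper's; the flag for the extra 11th portfolio stands in for the zero-dividend / non-positive-earnings screen:

```python
import numpy as np
import pandas as pd

def growth_deciles(df, proxy, decreasing=False, invalid=None):
    """Allocate firms to 10 growth-option portfolios on a ranking-period proxy.

    decreasing=True flips the sort so portfolio 10 is always "high growth"
    (used for Div and Ep). Firms flagged by `invalid` (e.g. zero dividends or
    non-positive earnings) go to an extra 11th portfolio.
    """
    out = pd.Series(index=df.index, dtype="Int64")
    mask = pd.Series(False, index=df.index) if invalid is None else invalid
    ranked = df.loc[~mask, proxy]
    if decreasing:
        ranked = -ranked            # lowest Ep (or Div) becomes highest growth
    out.loc[~mask] = pd.qcut(ranked, 10, labels=False) + 1  # 1 = low growth
    out.loc[mask] = 11
    return out

rng = np.random.default_rng(0)
firms = pd.DataFrame({"ep": rng.normal(0.05, 0.04, 500)})
groups = growth_deciles(firms, "ep", decreasing=True, invalid=firms["ep"] <= 0)
```

In this sketch the firm with the highest positive Ep lands in portfolio 1 (low growth), the lowest positive Ep in portfolio 10, and non-positive earners in portfolio 11, mirroring the allocation described above.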
Panel A reports results on Capex based growth deciles. The average cross-sectional spread
in Capex is 12% to 36%, from decile 1 to 10. This is much larger than the time series range of
the market aggregate seen in Table 1, which was 15% to 24%. Recall that we allocate firms to
portfolios based on ranking period values, but the values reported in Table 5 are computed in the
following period. Consequently, the large spread is not induced by a selection bias. The monotone
increase of Capex from decile 1 to 10 shows that the proxy persists over time, from period 0 to
period 1. Deciles 1 and 10 have levered βvw ’s of 1.11 and 1.44, and Dtoe’s of 62% and 18%. This
means that the unlevered beta of low Capex stocks is even lower than that of high Capex stocks.
Similar conclusions follow for βmkt : although the relation is U-shaped, higher growth deciles have
larger βmkt than lower growth ones. The book-to-market risk factor βhml , argued to be inversely
related to growth options, is indeed inversely related to Capex. The size factor βsmb has a U-shaped
relationship with Capex. This is consistent with the fact that both low and high growth option
groups may include small firms. There is a strong relationship between the weight in growth options and the standard deviation of returns, σR .
Panel B, with Mba results, leads to similar conclusions, although not as strong as for Capex.
As expected by construction, there is a strong negative monotone relationship between Mba and
βhml . The size factor βsmb and Dtoe are also negatively related to Mba, suggesting that high growth
firms, as proxied by Mba, tend to be smaller stocks with lower financial leverage than low growth
firms. Again, σR shows a strong link between growth, as proxied by Mba, and total variance.
Panels C and D report the results based on Div and Ep. The conclusions are similar to those for Capex and Mba, with differences due to the nature of their group 11, which contains firms with zero dividends and non-positive earnings in the ranking period. Decile 10, high growth in
both panels, tends to have higher βvw and βmkt , and lower Dtoe than decile 1. These relationships
are nearly monotonic from decile 1 to 10. Both Div and Ep display an unambiguous positive relationship between proxy and beta, and an inverse relationship between proxy and financial leverage.
Therefore, unlevered betas must also have a positive relationship with the growth option proxies.
The conclusion is robust to any reasonable levering relationship for betas, only requiring the levered beta to increase with financial leverage for a given unlevered beta. We can also infer that the
positive relationship between growth options and unlevered betas is stronger than that reported
with equity betas. The book-to-market risk factor βhml is inversely related to both proxies, further
supporting the view that unlevered betas are positively related to the growth option proxies. One
difference with the first two proxies is that σR shows that the link between growth and return
variance is quite tenuous for Div and Ep.
Zero-dividend firms, row Div=0 in panel C, have the highest equity betas but not the lowest
Dtoe. Their 40% Dtoe places them around decile 4 of financial leverage, and their βhml of -0.52
places them between deciles 9 and 10 of growth. So zero-dividend firms are mostly growth firms,
with average debt. Firms with negative earnings, row Ep ≤ 0 in panel D, are harder to interpret;
while lower Ep ratios can be related to higher growth options, it is hard to extend that reasoning to negative earnings; doing so might, for example, have drawn one into the 2001 bubble burst. Indeed, the βhml of 0.11 and
the Dtoe of 74% indicate that these firms have fewer growth options than average. Note that the
positive Ep and Div in column 2 are measured at the end of the 3-year period following the portfolio
formation. Some of these firms resume positive dividends or obtain positive earnings again. But
both these Div and Ep are quite low.
In summary, the relations observed in Table 5 between growth proxies and, indirectly, unlevered betas are consistent with the hypothesis that higher growth options result in higher betas.
As the firms are grouped in the ranking-period and systematic risk is measured in the post-ranking
period in our analysis, the cross-sectional relationship between betas and proxies may have some
predictive power on betas. We now show how this can be exploited.
Predicting cross-sectional variations in systematic risk with growth proxies
The above analysis uncovered strong univariate relationships between betas and the proxies.
It also had a predictive aspect since portfolios were formed on the basis of the growth proxies,
one period before the estimation of the betas. We now examine the joint ability of the growth
option proxies to predict cross-sectional variations in asset betas. To do this, we estimate the
cross-sectional regression:
βi,t = δt−1 Xi,t−1 + γt ∆Xi,t + εi,t ,    i = 1, . . . , 25,    (6)
where Xt−1 is the vector of proxies (Mba, Capex, Div, Ep, Dtoe) measured at t − 1 and ∆X is the
change in proxies from t − 1 to t. The proxies at time t − 1 are used to explain the cross-sectional
distribution of βt . The rationale for including the changes in proxies from t − 1 to t is to check whether
recent changes in proxies have marginal power over the long-run values of the proxies, to predict the
cross-section of betas. The literature documents low cross-correlations across proxies, e.g., Adam
and Goyal (2008), which we verified. Using the proxies jointly therefore raises no multicollinearity concerns.
Every year, from July 68 to June 04, we allocate firms to 25 portfolios on the basis of past
3-year returns. Aggregate portfolio growth proxies for this period t − 1 and the following 3-year
period t are calculated. We estimate factor betas with thirty-six monthly value weighted portfolio
returns over period t. We first run (6) as thirty-seven cross-sectional regressions. To facilitate
an economic interpretation of δ and γ, we standardize the proxies by their cross-sectional mean
and variance, recomputed annually. That is, δ measures the change in β for a one cross-sectional
standard deviation change in the proxy. This standardization may also be preferable if proxies
exhibit strong time trends or changes through the sample period.
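A minimal sketch of one of these cross-sectional regressions, with proxies standardized by their cross-sectional mean and standard deviation as described above. The variable layout is an assumption; the synthetic data below only illustrate the mechanics:

```python
import numpy as np

def cross_sectional_fit(beta, X_lag, dX):
    """One cross-sectional regression of betas on lagged proxies and on their
    changes, as in (6). Each column is standardized so a coefficient measures
    the change in beta for a one cross-sectional standard deviation change in
    the proxy. A constant is included."""
    def zscore(a):
        return (a - a.mean(axis=0)) / a.std(axis=0, ddof=1)
    Z = np.column_stack([np.ones(len(beta)), zscore(X_lag), zscore(dX)])
    coef, *_ = np.linalg.lstsq(Z, beta, rcond=None)
    resid = beta - Z @ coef
    tss = ((beta - beta.mean()) ** 2).sum()
    r2 = 1.0 - (resid ** 2).sum() / tss
    return coef, r2

# 25 portfolios, 5 proxies; a noiseless linear model should fit exactly.
rng = np.random.default_rng(1)
X_lag = rng.normal(size=(25, 5))
dX = rng.normal(size=(25, 5))
w = 0.1 * np.arange(1, 6)
beta = 2.0 + X_lag @ w + dX @ w
coef, r2 = cross_sectional_fit(beta, X_lag, dX)
assert abs(r2 - 1.0) < 1e-8
```

Because the standardized regressors have zero cross-sectional mean, the intercept equals the mean beta; this sketch is repeated once per period in the Fama-MacBeth-style procedure the paper uses.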
Table 6A reports results for the dependent variables βvw and βmkt . Subscripts (−1 ) refer to the estimates of δt−1 in (6). The column “mean” shows the average of the 37 estimates; Q1 and Q3 are the first and third quartiles of the distribution of the 37 estimates; “#” is the number of estimates with the expected sign; next to it is the HAC-corrected t-statistic for the mean of the time series of estimates, accounting for two lags of autocorrelation. Finally, R̄2 and R̄2−1 report the adjusted R-squares of the regressions, with all the independent variables and with only the lagged, (−1 ), variables.
Table 6 here
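The HAC-corrected t-statistic for the mean of the 37 yearly estimates can be sketched as a Newey-West calculation with a Bartlett kernel and two lags; the paper does not spell out the kernel, so that choice is an assumption here:

```python
import numpy as np

def hac_tstat_of_mean(series, lags=2):
    """t-statistic for the mean of a time series of coefficient estimates,
    with Newey-West (Bartlett-kernel) standard errors allowing for `lags`
    lags of autocorrelation, as induced here by overlapping 3-year windows."""
    x = np.asarray(series, dtype=float)
    n = x.size
    u = x - x.mean()
    s = (u @ u) / n                       # gamma_0
    for j in range(1, lags + 1):
        gj = (u[j:] @ u[:-j]) / n         # gamma_j
        s += 2.0 * (1.0 - j / (lags + 1)) * gj
    se = np.sqrt(s / n)
    return x.mean() / se
```

With `lags=0` this reduces to the ordinary t-statistic of the mean; positive autocorrelation in the estimates widens the standard error and shrinks the t-statistic, which is the point of the correction.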
The fit, shown by the R-squares, is clearly high. The proxies jointly explain a large fraction of
the future cross-section of systematic risk. For βvw , the R-square averages 72%, with three quarters
above 65%. For βmkt , the average R-square is around 44%. We can verify the validity of each proxy
individually. We expect positive coefficients for Capex and Mba, and negative ones for Div and Ep. Capital expenditures and dividends are significant, with the correct sign, in almost every year. However,
for both βvw and βmkt , the slope estimate for Mba−1 is contrary to what we would expect.
The coefficient of Ep−1 for βvw has a similar pattern. The coefficient of Dtoe−1 is insignificant,
but with the correct sign about two-thirds of the time. This shows that one can recover the sign
expected by the financial leverage hypothesis, once growth proxies are also accounted for.
The recent changes in proxies, from t − 1 to t, add to the explanatory power, but do not explain as much as the lagged proxies themselves. This is seen clearly by comparing the R-squares of the full regression in (6) with R̄2−1 , the R-square of a regression without ∆X. The coefficient estimates for the change variables ∆X confirm this result: the coefficient for ∆Div is significant and of the right sign most of the time; the other coefficients for changes in proxies are mostly insignificant, and of the correct sign about half the time, as expected under the null.
One use of the regression (6) is to assess to what extent we can filter out estimation error in
beta through the use of growth option proxies. As has been known since Fama-MacBeth (1973),
the larger the portfolio, the smaller the estimation error for beta. Indeed, regressions as in Table 6A but with 50 or 100 portfolios yield similar results, albeit with lower R-squares of 59% and 46%, due to the higher noise in β̂.
While convenient and robust, the time averaging of cross-sectional regressions is not the
most powerful estimation technique. We report in Panel B the results of a panel data regression,
pooling the time series and cross-sections. The R-squares are lower, 39% for βvw , as expected since
the coefficients are constrained to be constant through 40 years. The panel data approach brings
up the issue of the autocorrelation in errors, due to the overlap in the returns used to compute β,
and the autocorrelation of the regressors. Appropriate correction of the standard errors in these regressions has received renewed attention in the finance literature (see Petersen (2009)). We also estimated a feasible GLS, with very similar results. See also Bauer et al. (2009) for a Bayesian
approach combining time series and cross-sectional estimation. Given multiple observations for the
same portfolio (or firm), and persistent regressors (here the proxies), there can be strong residual
autocorrelation within portfolios over time. In addition, since variables like stock returns pick up
systematic changes in value, there can also be strong residual correlation across portfolios for a
given time period. This “within” and “between” portfolio residual correlation creates a bias in traditional OLS standard errors. We use the method in Thompson (2009), which allows for time and cross-sectional clustering. We do not report the basic OLS standard errors, which are much lower. The
results of the panel data regression are not very different from those in panel A, apart from the inevitably lower fit relative to the cross-sectional analysis.
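The two-way clustered variance can be sketched with the Thompson (2009) decomposition V = V_firm + V_time − V_firm∩time, each term a one-way cluster sandwich. This is a sketch without small-sample corrections, on synthetic data; with singleton clusters in both dimensions it collapses to White's heteroskedasticity-robust errors, a useful sanity check:

```python
import numpy as np

def ols_twoway_cluster(y, X, firm, time):
    """OLS with standard errors clustered by both firm and time:
    V = V_firm + V_time - V_firm&time (Thompson 2009)."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    xtx_inv = np.linalg.inv(X.T @ X)
    b = xtx_inv @ X.T @ y
    u = y - X @ b

    def meat(groups):
        # sum over clusters g of (sum_i x_i u_i)(sum_i x_i u_i)'
        m = np.zeros((X.shape[1], X.shape[1]))
        for g in np.unique(groups):
            s = (X[groups == g] * u[groups == g, None]).sum(axis=0)
            m += np.outer(s, s)
        return m

    firm = np.asarray(firm)
    time = np.asarray(time)
    inter = np.array([f"{f}|{t}" for f, t in zip(firm, time)])
    V = xtx_inv @ (meat(firm) + meat(time) - meat(inter)) @ xtx_inv
    return b, np.sqrt(np.diag(V))

rng = np.random.default_rng(2)
n = 60
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=n)
b, se = ols_twoway_cluster(y, X, np.arange(n), np.arange(n))
```

The subtraction of the intersection term avoids double-counting the observation-level variance that both one-way estimators include.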
To summarize, the lagged growth option proxies predict a large fraction of cross-sectional
differences in systematic risk. The relationship between betas and each proxy most often has the
desired sign given that growth options have higher betas. Recent changes in the proxies have limited
marginal explanatory power over and above the lagged values of the proxies.
Conclusion
The empirical literature still makes much more of a case for the financial than for the operating leverage effect. This is somewhat surprising given that the predictions of the financial leverage hypothesis are
often at odds with the evidence. In particular, as we emphasize in this paper, betas of stocks that
have had high returns do not decline, while betas of stocks with large negative returns decline
significantly, facts that are inconsistent with the pure financial leverage hypothesis. This evidence
means that another, more important, effect opposes it. This paper shows that, indeed, operating leverage in most situations opposes and dominates the financial leverage effect. This is consistent with a
strong change-in-mix effect, whereby good news is associated with an increase in the weight of the
firm in growth options. This causes the unlevered beta of the winning firm to increase.
Several recent papers link growth options to systematic risk. For instance, Berk, Green
and Naik (1999), Anderson and Garcia-Feijoo (2006) and Carlson, Fisher and Giammarino (2004)
show that the exercise of growth options changes a firm’s systematic risk. Cao, Simin and Zhao
(2008) argue that a significant portion of the upward trend in idiosyncratic risk can be explained
by changes in the level and variance of growth options, as well as in the capital structure of firms;
subsuming the profitability-based explanations in the literature. Recent general equilibrium models
link the evolution of betas to latent state variables that include measures of growth options. These
structural models of betas are however difficult to implement. Our cross-sectional regressions can
be justified as practical and robust reduced forms for these models.
Using Myers’s (1977) view of the firm as a portfolio of assets in place and growth options,
we show that even a simple option pricing model can yield subtle implications on the link between
positive news and unlevered betas. Two effects counter each other: by the moneyness effect, good
news moves each of the firm’s projects more into the money, and their betas decrease; by the change-in-mix effect, good news causes the weight of the projects more out-of-the-money to increase, so
the beta should increase. Our empirical analysis shows that the change-in-mix effect almost always
dominates the moneyness effect, and very often dominates the financial leverage effect as well.
We analyze the links between betas and growth opportunities by examining the changes in
betas of losers and winners. We show the robustness of this link to extreme initial financial and operational leverage. In these extremes, there are only a few instances where the change-in-mix effect
does not overcome the financial leverage effect. We, then, study directly the link between growth
option proxies and betas. We show that, together, these proxies and their recent changes predict a
large fraction of the future cross-section of betas. Because they are predictive, these cross-sectional
regressions have important implications. They can be used as a tool to reduce predictive error in
betas, resulting in improved estimation of expected returns in asset pricing models. They can also
help portfolio optimization, which is often performed conditional on constraints on systematic risk.
Finally, these results, together with similar work on unsystematic risk as in Cao et al. (2008), can
be a starting point for a study of the joint evolution of the parameters of the basic market or factor
model; total, systematic, and idiosyncratic risks, and factor variance.
APPENDIX: Partial derivative of ηG = SN(d1 )/G with respect to moneyness.
Consider N (d1 ), N (d2 ), rf , T as in the standard Black-Scholes notation. Denote S the underlying value, C the strike price, and m = S/C the moneyness ratio, with

d1 = [ln(m) + (rf + 0.5σ²)T ] / (σ√T ),    d2 = d1 − σ√T .

The elasticity of the option G with respect to the underlying S is

ηG = SN (d1 ) / [SN (d1 ) − C e^(−rf T ) N (d2 )] = [1 − e^(−rf T ) N (d2 )/(mN (d1 ))]^(−1) ≥ 1.

The partial derivative of ηG with respect to the moneyness ratio m is

∂ηG /∂m = e^(−rf T ) ηG² [ Z(d2 )/(m²σ√T N (d1 )) − N (d2 )/(m²N (d1 )) − N (d2 )Z(d1 )/(m²σ√T N ²(d1 )) ],    (7)

where Z(·) is the standard normal density function. Recall that ηG = SN (d1 )/G, and replace m with S/C; (7) then simplifies to

∂ηG /∂m = − ηG² [C² e^(−rf T ) N (d2 ) / (S²σ√T N (d1 ))] [σ√T − Z(d2 )/N (d2 ) + Z(d1 )/N (d1 )].

Galai and Masulis (1976, p. 76-77) show that σ√T > Z(d2 )/N (d2 ) − Z(d1 )/N (d1 ), so the bracketed term is positive and the partial derivative of ηG with respect to moneyness is negative.
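The appendix's claim, that the elasticity is at least one and falls as the option moves into the money, can be checked numerically; the parameter values below are illustrative:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def call_elasticity(m, sigma_rt, rf_t=0.0):
    """Black-Scholes elasticity eta_G = S*N(d1)/G of a call with moneyness
    m = S/C, where sigma_rt = sigma*sqrt(T) and rf_t = rf*T. Prices are per
    unit of strike (C = 1), which leaves the elasticity unchanged."""
    d1 = (math.log(m) + rf_t + 0.5 * sigma_rt ** 2) / sigma_rt
    d2 = d1 - sigma_rt
    price = m * norm_cdf(d1) - math.exp(-rf_t) * norm_cdf(d2)
    return m * norm_cdf(d1) / price

# eta_G >= 1 and declines as the option moves into the money
etas = [call_elasticity(m, 1.0) for m in (0.25, 0.5, 1.0, 2.0, 4.0)]
assert all(e >= 1.0 for e in etas)
assert all(a > b for a, b in zip(etas, etas[1:]))
```

This is the moneyness effect in isolation: for a single option, good news (higher m) lowers the elasticity and hence the option's beta.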
Adam T. and Goyal V.K., 2008, The Investment Opportunity Set and its Proxy Variables: Theory and Evidence, Journal of Financial Research 31, 41-63.
Aharony J., Jones C.P. and Swary, I., 1980, An Analysis of Risk and Return Characteristics of
Corporate Bankruptcy Using Capital Market Data, Journal of Finance, 1001-1016.
Altman E. and Brenner M., 1981, Information Effects and Stock Market Response to Signs of
Firm Deterioration, Journal of Financial and Quantitative Analysis, 35-51.
Anderson C. and Garcia-Feijoo L., 2006, Empirical Evidence on Capital Investment, Growth Options, and Security Returns, Journal of Finance 61, 171-194.
Ball R. and Kothari S.P., 1989, Nonstationary Expected Returns: Implications for Tests of Market
Efficiency and Serial Correlation in Returns, Journal of Financial Economics 25, 51-74.
Bauer R., Cosemans M., Frehen R., and P. Schotman, 2009 Efficient Estimation of Firm-Specific
Betas and its Benefits for Asset Pricing and Portfolio Choice, working paper, Maastricht University.
Berk J.B., C.G. Green and V. Naik, 1999, Optimal Investment, Growth Options, and Security
Returns, Journal of Finance 54, 1553-1607.
Braun P.A., Nelson D. and Sunier A., 1995, Good news, Bad news, Volatility and Betas, Journal
of Finance 50, 1575-1603.
Brown G. and Kapadia N., 2007, Firm-specific risk and equity market development, Journal of
Financial Economics, 84, pp 358-388.
Campbell J. and J Mei, 1993, Where Do Betas Come From? Asset Price Dynamics and the Sources
of Systematic Risk, Review of Financial Studies 6, 567-592.
Cao C., Simin T., and J. Zhao, 2008, Can Growth options explain the trend in firm specific risk?
Review of Financial Studies 21: 2599 - 2633.
Carlson M., Fisher A., and R. Giammarino, 2004, Corporate Investment and Asset Price Dynamics:
Implications for the Cross Section of Returns, Journal of Finance, Vol. 59, 2577-2603.
Chan K.C., 1988, On the Contrarian Investment Strategy, Journal of Business 61, 147-163.
Dann L.Y., Masulis R.W. and Mayers D., 1991, Repurchase Tender Offers and Earnings Information, Journal of Accounting and Economics 14, 217-251.
DeBondt W. and Thaler R., 1985, Does the Stock Market Overreact?, Journal of Finance 40,
DeBondt W. and Thaler R., 1987, Further evidence on investor overreaction and stock market
seasonality, Journal of Finance 42, 557-581.
Denis D.J. and Kadlec G.B., 1994, Corporate Events, Trading Activity, and the Estimation of
Systematic Risk: Evidence From Equity Offerings and Share Repurchases, Journal of Finance 49,
Erickson T. and Whited T., 2006, On the Accuracy of Different Measures of Q, Financial Management 35.
Fama E. and MacBeth J., 1973, Risk, Return, and Equilibrium: Empirical Tests, Journal of Political Economy 81, 607-636.
Fama E. and French K., 1992, The Cross-Section of Expected Stock Returns, Journal of Finance,
47, 427-465.
Fama, E. and French K., 2001, Disappearing Dividends: Changing Firm Characteristics or lower
propensity to pay?, Journal of Financial Economics, 60, 3-43.
Galai D. and Masulis R.W., 1976, The Option Pricing Model and the Risk Factor of Stock, Journal
of Financial Economics 3, 53-81.
Goyal V.K., Lehn K. and Racic S., 2002, Growth Opportunities and Corporate Debt Policy: The
Case of the U.S. Defense Industry, Journal of Financial Economics 64, 35-59.
Hamada R.S., 1972, The Effects of the Firm’s Capital Structure on the Systematic Risk of Common
Stocks, Journal of Finance 27, 435-452.
Healy P.M. and Palepu K. , 1990, Earnings and Risk Changes Surrounding Primary Stock Offers,
Journal of Accounting Research 28, 25-48.
Jagannathan R. and Z. Wang, 1996, The Conditional CAPM and the Cross-Section of Expected
Returns, Journal of Finance 51 (1), 3-53.
Jones S.L., 1993, Another look at time-varying risk and return in a long-horizon contrarian strategy,
Journal of Financial Economics 33, 119-144.
Kaplan S.N. and Stein J.C., 1990, How risky is the debt in highly leveraged transactions?, Journal
of Financial Economics 27, 215-245.
Myers S., 1977, Determinants of Corporate Borrowing, Journal of Financial Economics 5, 147-175.
Petersen M., 2009, Estimating Standard Errors in Finance Panel Data Sets: Comparing Approaches, Review of Financial Studies 22, 435-480.
Rosenberg B. and McKibben W., 1973, The Prediction of Systematic and Specific Risk in Common
Stocks, Journal of Financial and Quantitative Analysis, 317-333.
Rosenberg B., 1974, Extra-Market Components of Covariance in Security Returns, Journal of
Financial and Quantitative Analysis, 263-274.
Rosenberg B., 1985, Prediction of Common Stock Betas, Journal of Portfolio Management, Winter,
Rubinstein, M.E., 1973, A mean-Variance Synthesis of Corporate Financial Theory, Journal of
Finance 28, 167-181.
Smith C.W. and Watts R.L., 1992, The Investment Opportunity Set and Corporate Financing, Dividend
and Compensation Policies, Journal of Financial Economics 32, 263-292.
Skinner D.J., 1993, The Investment Opportunity Set and Accounting Procedure Choice, Journal
of Accounting and Economics 16, 407-445.
Thompson S., 2009, Simple Formulas for Standard Errors that Cluster by Both Firm and Time.
SSRN working paper, forthcoming Journal of Financial Economics.
Table 1
Descriptive statistics of growth proxies and financial leverage
[Numerical entries not recoverable from the text extraction.]
The table reports averages for some selected years, and summary statistics for the number of firms, aggregate growth option proxies, and financial leverage from 1968 to 2007. The proxies are the ratios of aggregate capital expenditures to fixed assets (Capex), of market to book value of assets (Mba), and of earnings to price (Ep), as well as the dividend yield (Div). Financial leverage is measured by the debt to equity ratio (Dtoe). All are reported in percent. The statistics include the autocorrelation functions up to 3 lags.
Table 2
Systematic risks of portfolios sorted by total returns
Panel A: Total period return, leverage and βVW
Panel B: Four-factor model market (βMKT) and book-to-market (βHML) betas
[Numerical entries not recoverable from the text extraction.]
Firms are allocated to deciles on the basis of cumulative returns, over 34 overlapping 3-year periods ending June 71 to June 04. We compute the cumulative returns, the debt to equity ratio Dtoe, the market beta βvw , and two of the four-factor model betas, βMKT and βHML , for this period, and the previous and following 3-year periods. We average these values over the 34 windows. They are reported under columns 0, -1, and 1 for the allocation period and the two surrounding periods. The t-statistic t-diff is that of the mean difference in the 34 changes in beta from period -1 to 1; it uses HAC standard errors with two lags.
Table 3
Systematic risk of portfolios sorted by total return and financial leverage
Panel A: Total period return, leverage and CAPM beta (βVW)
Panel B: Four-factor model market (βMKT) and book-to-market (βHML) betas
Panel C: Four-factor model size (βSMB) and momentum (βUMD) betas
[Numerical entries not recoverable from the text extraction.]
Firms are allocated to Low, Medium, and High financial leverage groups every June at the end of each of 34 pre-ranking periods, ending June 68 to July 2001, based on the 30th and 70th percentiles of Dtoe. Within each leverage group, firms are allocated to deciles of total returns in the following three-year ranking period. We report on the High and Low Dtoe groups, and deciles 1 and 10 of total returns. Columns -1, 0, 1 denote the pre-ranking, ranking, and post-ranking (July 71-June 74 to July 04-June 07) periods. We then follow the same procedure as in Table 1.
Table 4
Systematic risks of portfolios sorted by return and industry
Panel A: Total period return, leverage (Dtoe in %) and CAPM beta (βVW)
Panel B: Size (βSMB) and book-to-market (βHML) betas
[Numerical entries not recoverable from the text extraction.]
We study firms in 6 industries from the 30 industry groups defined in Ken French’s data library. Steel, Auto, and Oil are deemed low growth options, while Health, Services, and Business Equipment are high growth. For these 6 industries, we then follow the same procedure as in Table 1, with the following difference: we allocate firms to loser and winner portfolios on the basis of the 25th and 75th percentiles of total returns.
Table 5
Systematic risk characteristics versus growth proxies
Panel A: Capex, proxy increasing in growth
Panel B: Mba, proxy increasing in growth
Panel C: Div, proxy decreasing in growth
Panel D: Ep, proxy decreasing in growth
[Numerical entries not recoverable from the text extraction.]
Every end of June from 1968 to 2004, we allocate firms to deciles of growth options on the basis of Capex, Mba, Div, or Ep. Firms with zero dividends or non-positive earnings are in separate portfolios. For the following three-year period, we compute the aggregate proxy, Dtoe, betas, and standard deviation for each portfolio. This is repeated annually for 37 overlapping 3-year ranking periods until June 07. The table reports the average proxy and its standard deviation, the returns standard deviation, the portfolio’s average betas, standard deviation, and Dtoe. σ is the time-series standard deviation of proxy values. σR is the time-series standard deviation of the 36-month portfolio holding period return.
Table 6
Cross-sectional variation in systematic risk
Panel A: Individual cross-sectional regressions (βVW and βMKT)
Panel B: Panel data regressions (βVW and βMKT)
[Numerical entries not recoverable from the text extraction.]
The table reports the cross-sectional regression of βVW and βMKT on growth proxies Capex, Mba, Div, and Ep, and financial leverage Dtoe. For each of the 37 overlapping 3-year periods ending from June 68 to June 04, firms are grouped into 25 portfolios on the basis of total returns. Portfolio growth proxies are computed at the end of this ranking period, and of the following, post-ranking, 3-year period. Portfolio betas are calculated using the 36 value weighted monthly portfolio returns of the post-ranking period. We regress the betas on the ranking period proxies, subscripted −1 , and on their changes from ranking to post-ranking period, denoted ∆. These variables are cross-sectionally standardized for each of the thirty-seven periods. Panel A shows the results from the 37 individual cross-sectional regressions, namely the time-series average and first and third quartile values of parameter estimates, the number of estimates with the expected sign (#), and t-statistics based upon HAC standard errors with two lags. We also report the average of the 37 adjusted R-squares with all the independent variables, R̄2 , and with only the lagged variables, R̄2−1 . Panel B shows the results of a pooled time-series and cross-section regression with two-way clustered standard errors robust to time (cross-correlation) and group (auto-correlation).
Figure 1: Firm β and weight in growth options versus moneyness
Top panel: βV = (ANA βA + GNG βG )/V vs s/CG , where V = ANA + GNG = VA + VG . A and G
are calls on s with exercise prices CA = 1, CG = 100. rf = 0, σ 2 T = 1, NA = 1, NG = 5, 10, 20.
Bottom panel: weight in growth options VG /V vs moneyness s/CG .
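The Figure 1 setup can be reproduced numerically. The sketch below assumes each option's beta equals its Black-Scholes elasticity times the beta of the underlying s (taken as 1), consistent with the construction in the caption; it shows the hump in firm beta (change-in-mix first dominating, then moneyness) and the monotone rise of the weight in growth options:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def call(s, strike, sig_rt=1.0):
    """Black-Scholes call price with rf = 0 and sigma*sqrt(T) = sig_rt.
    Also returns S*N(d1), the numerator of the elasticity."""
    d1 = (math.log(s / strike) + 0.5 * sig_rt ** 2) / sig_rt
    d2 = d1 - sig_rt
    return s * norm_cdf(d1) - strike * norm_cdf(d2), s * norm_cdf(d1)

def firm_beta(s, n_g, beta_s=1.0, c_a=1.0, c_g=100.0):
    """Firm beta in the Figure 1 setup: one asset in place (a call with low
    strike c_a) and n_g growth options (calls with high strike c_g), all on
    s, with rf = 0 and sigma^2*T = 1. Each option's beta is its elasticity
    times beta_s. Returns (firm beta, weight in growth options)."""
    a, sn1_a = call(s, c_a)
    g, sn1_g = call(s, c_g)
    beta_a = sn1_a / a * beta_s     # elasticity of the asset in place
    beta_g = sn1_g / g * beta_s     # elasticity of a growth option
    v = a + n_g * g
    return (a * beta_a + n_g * g * beta_g) / v, n_g * g / v

# As s/C_G rises, the growth-option weight rises monotonically while firm
# beta first rises (change-in-mix dominates) and then falls (moneyness wins).
betas, weights = zip(*[firm_beta(s, 10) for s in (5.0, 20.0, 100.0, 400.0)])
assert all(b > 1.0 for b in betas)
assert weights[0] < weights[1] < weights[2] < weights[3]
assert betas[1] > betas[0] and betas[1] > betas[3]
```

The hump is exactly the central region of Figure 1 discussed in the text: over a wide range of moneyness, good news raises firm beta despite lowering each individual option's elasticity.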
Simulink Homework Help
Simulink is a block diagram environment for multi-domain simulation and Model-Based Design. It supports simulation, automatic code generation, and continuous test and verification of embedded systems. You can simulate dynamic systems using graphical editors, customizable block libraries, and solvers for modeling.
You can use Simulink to design a system and then simulate that system's dynamic behavior. The basic techniques you use to build a simple model are the same ones you use for more complex models.
To build this simple model, you need four Simulink blocks. Blocks are the model elements that define the mathematics of the system and supply input signals:
• Sine Wave — Generate an input signal for the model.
• Integrator — Integrate the input signal.
• Bus Creator — Combine several signals into one signal.
• Scope — Visualize and compare the input signal with the output signal.
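The data flow of this four-block model can be sketched outside Simulink as well. The following pure-Python loop is a rough stand-in for the diagram (a Sine Wave source feeding an Integrator, with both signals bundled for inspection); it is not generated MathWorks code, and the fixed-step Euler solver is only the simplest of the solver choices Simulink offers.

```python
from math import sin, cos, pi

def simulate(t_end=pi, dt=1e-4):
    """Fixed-step (forward Euler) run of the simple model:
    a Sine Wave source feeding an Integrator, with both signals
    bundled (Bus Creator) for inspection (Scope)."""
    t, y = 0.0, 0.0          # y accumulates the integral of the input
    bus = []                 # stands in for the Bus Creator / Scope
    while t < t_end:
        u = sin(t)           # Sine Wave block
        y += u * dt          # Integrator block (Euler step)
        t += dt
        bus.append((t, u, y))
    return bus

trace = simulate()
t, u, y = trace[-1]
# Analytically, the integral of sin from 0 to pi is 1 - cos(pi) = 2.
print(y)
```

Running it shows the numerical integral landing close to the analytic value 2, which is what the Scope comparison in the Simulink model would display.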
What is Simulink used for?
Simulink is a visual tool for computational simulation. It uses a drag-and-drop interface for simulation components, which can then be connected with lines. The components are configurable, and blocks are available for mathematical operations.
It is mainly used in engineering, where you represent your system as a set of parameters and model it mathematically. Examples include electrical circuits; mechanical systems that must be controlled, such as vehicles and aircraft; chemical industry pipelines; and robotics.
It is a great way to model systems because it is very visual, and it suits applications that are not too specialized or performance-critical. However, it is a proprietary system and fairly expensive. It is used by industry, and I do not know of any viable open-source alternative.
We offer homework help to make it easy for students to create original projects and to develop the skill of working with available information and applying concepts and theories to real-world problems. Our aim is to develop students' minds and instill learning skills so that they become self-sufficient in academics as well as in their professional lives.
The difference between MATLAB and Simulink
MATLAB is used by millions of engineers and scientists worldwide for a wide range of applications, in industry and academia, including machine learning and deep learning, signal processing and communications, test and measurement, image and video processing, control systems, computational finance, and computational biology. MATLAB program and script files are saved with filenames ending in the ".m" extension, such as 'program.m'.
The full form of MATLAB is Matrix Laboratory. It is a programming language developed by MathWorks that provides an interactive programming environment.
It is mainly a matrix programming language, in which linear algebra programming is simple. It can be used both in interactive sessions and as a batch job.
MATLAB is a high-performance language for scientific and technical computing, and it helps students gain fluency in the MATLAB programming language. It integrates computation, visualization, and programming in a simple, easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. Problem-oriented MATLAB examples are presented simply and clearly to make your learning fast and effective.
You can make a comparison between MATLAB and Simulink.
There is no real comparison between MATLAB and Simulink, because Simulink is one of several tools supplied with MATLAB. When you get MATLAB, you can get it with or without those tools. You can use MATLAB itself to simulate a system, but then you have to program your own routines in addition to those supplied with MATLAB.
With our talented and dedicated team of writers, you can always trust us for high scores on your homework tasks. You can check our free samples, available in every subject, to judge the quality of our work and decide whether to place an order with us. You can reach us anytime from anywhere in the world. It doesn't matter which university you attend or which country you live in; just give us a call and get your work done ahead of time.
What should I do to learn Simulink in MATLAB?
MATLAB is one of the most important tools. You can do lots of stuff with it.
I think the best way to learn MATLAB is to start with basic things like:
1. Implementation of various transforms like DFT, DTFT, wavelet transform, Z-transform, Laplace transform, etc.
2. Work on image processing techniques. MATLAB has a powerful library for that.
3. Use Simulink to simulate systems such as ABS, TCS, and other closed-loop systems.
4. You can also work on root locus and analyze your output for different values of poles and zeroes.
5. You can do music analysis.
This way you'll learn lots of built-in functions and get used to MATLAB. You can also refer to documentation whenever necessary.
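As a first exercise from item 1 in the list above, a DFT can be written directly from its definition. The sketch below is plain Python for illustration (in MATLAB you would normally call the built-in fft); the signal and names are invented for the example.

```python
import cmath

def dft(x):
    """Discrete Fourier transform from the definition:
    X[k] = sum_n x[n] * exp(-2*pi*i*k*n/N).
    O(N^2), for learning; a built-in FFT is the practical choice."""
    n_pts = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / n_pts)
                for n in range(n_pts))
            for k in range(n_pts)]

# A pure complex tone at bin 1 concentrates all energy in X[1].
signal = [cmath.exp(2j * cmath.pi * n / 8) for n in range(8)]
spectrum = dft(signal)
print(round(abs(spectrum[1])))  # → 8
```

Seeing all the energy land in one bin for a pure tone is a quick sanity check before moving on to wavelets or the Z-transform.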
Why is Simulink used?
Simulink, an add-on product to MATLAB, provides an interactive, graphical environment for modeling, simulating, and analyzing dynamic systems. It includes a comprehensive library of predefined blocks that can be used to construct graphical models of systems using drag-and-drop mouse operations.
Why students need homework help tutors' assistance
Choosing an online homework help website is very important for improving academic performance. Our customer service team is available through online chat at any time; we work 24 hours a day, 7 days a week, to provide instant homework help online.
Contact us for homework writing services for all college and university subjects. 100% on-time delivery guaranteed. We will write your homework for A+ grades. So, send your homework questions or sample project along with the deadlines to us, and all homework will be completed within the agreed deadline.
│ Workload type  │ ECPP, BLS75, BPSW, AKS │
│ First release  │ 2013                   │
│ Latest version │ 1.04 (2014-08-16)      │
ECPP-DJ is a primality testing program created by Dana Jacobsen.
It is written in C using the GMP library. It is a standalone version of the ECPP implementation written for the Perl module Math::Prime::Util::GMP (MPU) in 2013.
Most of the utility functions closely follow the algorithms presented in Henri Cohen's book "A Course in Computational Algebraic Number Theory" (1993). The ECM factoring and manipulation was heavily
inspired by GMP-ECM by Paul Zimmermann and many others.
It can verify ECPP certificates generated by this program (or MPU) as well as by Primo. It can be linked with MPZ APR-CL code to enable the APR-CL test.
Unlike Primo, this is an open-source implementation of the ECPP test. However, Primo runs much faster for numbers of 1000+ digits, especially on multi-core machines (ECPP-DJ is single-threaded).
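To give a flavor of the pretests listed in the infobox, here is a minimal sketch of the strong probable-prime test that forms half of BPSW (the other half being a Lucas test). This is illustrative Python, not ECPP-DJ's C implementation.

```python
def is_strong_probable_prime(n, base=2):
    """Strong (Miller-Rabin) probable-prime test to one base.
    BPSW combines a base-2 strong test with a Lucas test; this
    sketch shows only the first half."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(base, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False

print([n for n in range(2, 40) if is_strong_probable_prime(n)])
# → [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
```

Base-2 strong pseudoprimes do exist (the smallest is 2047 = 23 × 89), which is why BPSW adds the Lucas test and why a proof method such as ECPP or APR-CL is still needed.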
What Is The Integral Of 2X? | Hire Someone To Do Calculus Exam For Me
What Is The Integral Of 2X? PxD’s Real Property “Won is a very ancient discovery, that was done by a man.” –Marcus Biondini, The Philosophy of the Talmud A major part of both of the traditional
(still evolving) world beliefs is this, that has since been written down, remains the universal truth: for love is great enough for great children to love, to care for and to give. (Source: Liddell
of Love, David C. The Talmud, and Paulus Bodmer, The Jews and the Jewish Faith, pp. 1-8.) It is certainly true that love is “great.” And perhaps to a significant extent, this is actually because it
is something that you find you love, while there is a bigger gulf (because by love we mean being an object of loving behavior, and love is a social construct). (See the Introduction section for
further details.) Consider an example: Yiddish phrase “love is for the whole, the whole” which by Maimonides suggests is a thought by an ancient Jewish ancestor of the S parte (5b), since “it is an
interpretation made in order to preserve the Jewish word for one reason: to make it something to the end of our heart” (Maimonides, Yiddish עדקריומע. Clicking Here דר ד ךלא זמריו, אבל בר האנεכה בר
מפרשי בר האנו It’s almost like having a “precious” reason for making fun of someone, someone who is unreasonable but knows nothing about them (the origin of adultery), and is behaving so at odds with
an adversary, like a child abusing someone on some previous occasion. To this today, for instance, is the concept essentially “overkill,” saying that if someone is unreasonable yet they will show
sufficient intelligence and time to have an intent to abuse someone. The problem has, of course, not just what could have been expressed in such a simple way, but what is, in fact, the best possible
response. So to be an asshole, a liar and a thief, an effective person, is not simply “overkill,” but a much deeper concern than what could be said in such a brief moment of time to someone who is a
good listener. Its most obvious use would be as an argument for self-assertion (and for being able to say “wrong” things) but another, and potentially more powerful, way of saying “he’s well past his
prime.” Another way of saying “I’m for the whole, this is what I do” (on the other hand, a perfect example of “not at all”). Finally, let’s consider what this idea should mean for our minds, since it
should be a powerful part of what being good (or being clever) is going to look like. Some of the things that I’ve told you about the “least intelligent” today seem to be quite trivial things (which
has not been discussed, since the modern version of this one didn’t exist or existed at all back in the 1990s within the age of the Jewish-Christian philosophy) but they do produce a fascinating bit
of knowledge. For instance, the last verse of theWhat Is The Integral Of 2X? {#sec0170} ==================== The first aspect is essential for getting rid of the conection of two results, i.e. this
integral does not make any sense, but this view cannot confirm the discover this two results.
Our aim is to provide a definition for the integral of a complex number (that is the integral $\int_\Omega J(\theta)d\theta$) in terms of the following matrix dimension-free upper triangular
inequality: $$<...> = \sum\limits_{p\opl q}\left\{ \prod\limits_{m=p+q}^m\left( -1 -R_{m,{\mathbf{p}}}^m\right),~ \frac{-1}{4p}\prod\limits_{n=p+q}^{m+p+q} \right\}$$ where the first sum over n
refers also to the number of the products and following the right-derivative of the sum is over the product composed with the products over $m$ : $$\begin{aligned} {\left\|{\boldsymbol{x}}\right\|} &
{\overset{{\mathrm{\underset{+}{\!X}}}}{\longrightarrow}} \\ Q &{\overset{{\mathrm{\underset{+}{\!X}}}}{\longrightarrow}} Q\left({\boldsymbol{u}\setminus\{11\}}\right)^{\mathrm{\scriptsize{B}}} \\ {\
underset{{\mathrm{\underset{+}{\!X}}}}{\longrightarrow}} &{+}\left[{\sum\limits_{m=p+q}^{\mathrm{\scriptsize{k}}}\left\|{\boldsymbol{u}\setminus\{11\}}\right\|} {\sum\limits_{(m,q)\opl p+q}\left\|{\
boldsymbol{u}\setminus\{11\}}\right\|} Q\left({\boldsymbol{u}},m,q\right)\right] \\ &{\overset{{\mathrm{\underset{+}{\!X}}}}{\longrightarrow}} Q\left({\boldsymbol{u}\setminus\{1\}}\right)^{\mathrm{\
scriptsize{B}}}\\ Q &{\overset{{\mathrm{\underset{+}{\!X}}}}{\longrightarrow}} Q\left({\boldsymbol{u}\setminus\{1\}_{1}}\right)^{\mathrm{\scriptsize{B}}}\\ {\cup} &{\mathrm{\setminus}}{\left\{\ {\sum
\limits_{m=1}^{\mathrm{\scriptsize{k}}}\left\|{\boldsymbol{u}\setminus\{1\}_{1}}\right\|} {{\underline{\mathbf{x}}}}\cap Q\left({\boldsymbol{u}},m,1\right)\right\}} \end{aligned}$$ \[lemma26\]Let ${\
mathbf{p}}\in L^{+}(D_{{\mathbb{R}},\mathit{\mathbb{R}}}\subset \partial\mathbb{Z})$, ${\mathbf{u}\in B_{{\mathbb{R}},\mathit{\mathbb{R}}}^{{\mathbb{Z}}}}$, $\mathcal{P}$ i.e. ${\mathbf{p}\in L^{-}
(D_{{\mathbb{R}},\mathit{\mathbb{R}}}\subset \partial\mathbb{Z})}\subset B_{\mathbb{R}-\mathit{\mathbb{R}}\times \mathcal{P}}$, over $\mathbb{Z}$. Let $\theta \in \mathbb{R}$, then $\mathbf{u} := Tr^
{\kappa}\chi^{\mathrm{covWhat Is The Integral Of 2X? In his book, Pisa/Burgos has studied this fact over a long friendship with the Belgian philosopher Hengel. The answer, given by Pisa in the case
of the Spanish writer José Luis Cunoa by mentioning it for the first time, is that however no one can say what kind of facts Cunoa made known to him from which the probability of the main event is
determined(2)? The question now arises: What type of two-way correlation can be determined? For there is a general interpretation in read review binary programming (also known as Boolean
programming): 1. What is the probability distribution of two numbers? 2. What is the probability distribution of each real number? This interpretation to the second part is interesting to a number
theorist and to us. Some have found a very interesting mathematical system of his famous famous Heisenberg isomorphism: It specifies the probability distribution of a number. Heisenberg suggests that
if a function f x (n,n,=2), xo is written, then (1) xo and (0) o can refer to F(n). We shall focus on this image here which could be interpreted as point (1), as just one example shows. To continue,
we can consider the probability distribution of a number ω: it can be written up as (n)o = (π′)o, where π is the number of the roots of x n. That probability distribution is then described as (0)o =
(π‚±2)o, with n and ρ being the number of their associated roots by one unit. The Hilbert space of such distributions is a plane space, but their dimension is 2. The probability, although (2) o
stands for double counting, is in fact not additive, so there is no significance and no natural group element. Instead the probability distribution is parameterized by half the f (number), if you
like but you won‚… 2. What is the shape of the distribution of ω e^xo‚, q≈π”o and o = (π‚±2)o, then a more proper class of functions f are those, e.g.
(2o 1!1 2o=o) e = (π‚±2)e2 is obtained by o (0)o = (π‚±2)e, E(e)1 = (π‚±2)e1 = (−1)e1 = (π‚±2)e2 = (π‚±2)ee = 4e(-exp-2x); e (2o) equals (π‚±2)e. 3. Is that the probability of a single real number
equal to the probability of several real numbers? Again Pisa is not expecting any question about this kind of statistical mechanics. 4. If we can formulate therefrom a possible structure that gives
these two theories of 1-dimensional probability: the probability distribution of two real numbers P2 /a2, which is known to be the probability distribution of two real numbers, PA2 /a2, we can see a
factorized probability distribution similar to the one with the 2P4 one-parameter system pop over here which is known to be the probability distribution for the real numbers PA2 /a2, although the 2P4
model is not exactly correct. This leads us to the following possibilities: There is no two-way correlation If a three-dimensional particle is to a point on a plane or in 2D space, one can have
three-dimensional probability distributions of its position, then three-dimensional probability distribution for the position of a point on this plane will be (2×1)P(t,t)2/2 = (π‚2)P(t,t)2/2 if t is
the time and t is the unit length. By the classical argument \[bounds, R.H.M\] this can be observed always until one day – the pair xi/y=1, where “℔” denotes now the probability that the direction of
the component of the vector y that
Advancing Quantum Algorithm Design with GPTs | NVIDIA Technical Blog
AI techniques like large language models (LLMs) are rapidly transforming many scientific disciplines. Quantum computing is no exception. A collaboration between NVIDIA, the University of Toronto, and
Saint Jude Children’s Research Hospital is bringing generative pre-trained transformers (GPTs) to the design of new quantum algorithms, including the Generative Quantum Eigensolver (GQE) technique.
The GQE technique is the latest in a wave of so-called AI for Quantum techniques. Developed with the NVIDIA CUDA-Q platform, GQE is the first method enabling you to use your own GPT model for
creating complex quantum circuits.
The CUDA-Q platform has been instrumental in developing GQE. Training and using GPT models in quantum computing requires hybrid access to CPUs, GPUs, and QPUs. The CUDA-Q focus on accelerated quantum
supercomputing makes it a fully hybrid computing environment perfectly suited for GQE.
According to GQE co-author Alan Aspuru-Guzik, these abilities position CUDA-Q as a scalable standard.
Learning the grammar of quantum circuits
Conventional LLMs can be a useful analogy for understanding GQE. In general, the goal of an LLM is to take a vocabulary of many words; train a transformer model with text samples to understand things
like meaning, context, and grammar; and then sample the trained model to produce words, which are then strung together to generate a new document.
Where LLMs deal with words, GQE deals with quantum circuit operations. GQE takes a pool of unitary operations (vocabulary) and trains a transformer model to generate a sequence of indices
corresponding to unitary operations (words) that define a resulting quantum circuit (document). The grammar for generating these indices is a set of rules trained by minimizing a cost function, which
is evaluated by computing expectation values using previously generated circuits.
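The training idea can be caricatured in a few lines of plain Python. The toy below is not the GPT-QE transformer: it replaces the GPT with a single vector of logits over a made-up operator pool and nudges the logits toward sequences with lower total "energy" via a REINFORCE-style update. All names and numbers here are invented for illustration.

```python
import math
import random

random.seed(0)

# Toy analogue of GQE training: each "unitary operation" in the pool
# is assigned a hypothetical energy contribution, and a categorical
# model over the pool is pushed toward low-energy sequences.
POOL_ENERGY = [0.9, 0.4, -0.2, 0.7, -0.5]   # invented values
SEQ_LEN, BATCH, STEPS, LR = 4, 16, 400, 0.1
logits = [0.0] * len(POOL_ENERGY)

def sample_token():
    # Sample one operator index from softmax(logits).
    weights = [math.exp(l) for l in logits]
    r = random.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

for _ in range(STEPS):
    batch = [[sample_token() for _ in range(SEQ_LEN)] for _ in range(BATCH)]
    energies = [sum(POOL_ENERGY[t] for t in seq) for seq in batch]
    baseline = sum(energies) / BATCH
    # REINFORCE-style update: raise logits of tokens that appeared in
    # lower-than-average-energy sequences, lower the others.
    for seq, e in zip(batch, energies):
        for t in seq:
            logits[t] -= LR * (e - baseline) / BATCH

best = max(range(len(logits)), key=lambda i: logits[i])
print(best)
```

The favored index converges to the pool entry with the lowest energy contribution, mirroring how GQE's sampled circuits drift toward low expectation values as training proceeds.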
Figure 1. Comparing GQE to an LLM
Figure 1 shows that GQE is analogous to an LLM. Instead of adding individual words to construct a sentence, unitary operations are added to generate a quantum circuit.
GQE-enabled algorithms
In the era of noisy intermediate-scale quantum (NISQ) computers, quantum algorithms are limited by several hardware constraints. This has motivated the development of hybrid quantum-classical algorithms
like the Variational Quantum Eigensolver (VQE), which attempts to circumvent these limitations by offloading onerous tasks to a conventional computer.
Figure 2. Comparison between the GQE and VQE
All optimized parameters are handled classically in the GPT model and are updated based on the expected values of the generated circuits. This enables optimization to occur in a more favorable deep
neural network landscape and offers a potential route to avoiding the barren plateaus that impede variational algorithms. This also eliminates the need for the many intermediate circuit evaluations
required in techniques like reinforcement learning.
The GQE method is the first hybrid quantum-classical algorithm leveraging the power of AI to accelerate NISQ applications. GQE extends NISQ algorithms in several ways:
• Ease of optimization: GQE builds quantum circuits without quantum variational parameters (Figure 2).
• Quantum resource efficiency: By replacing quantum gradient evaluation with sampling and backpropagation, GQE is expected to provide greater utility with fewer quantum circuit evaluations.
• Customizability: The GQE is very flexible and can be modified to incorporate a priori domain knowledge, or applied to target applications outside of chemistry.
• Pretrainability: The GQE transformer can be pretrained, eliminating the need for additional quantum circuit evaluations. We discuss this later in this post.
Results from GPT-QE
For the inaugural application of GQE, the authors built a specific model inspired by GPT-2 (referred to explicitly as GPT-QE) and used it to estimate the ground state energies of a set of small molecules.
The operator pool of vocabulary was built from chemically inspired operations such as excitations and time evolution steps that were derived from a standard ansatz known as ‘unitary coupled-clusters
with single and double excitations’ (UCCSD). An ansatz is an approach to parameterizing quantum circuits.
Variational algorithms must be started with a ‘best guess’ initial state, generated with existing classical methods. To demonstrate GPT-QE, the authors generated an initial state using the
Hartree-Fock method with an STO-3G basis set. The GPT model used in this work was identical to OpenAI’s GPT-2 model, including 12 attention layers, 12 attention heads, and 768 embedding dimensions.
For more information and a comprehensive technical explanation of the training process, see 2.2. GPT Quantum Eigensolver in The generative quantum eigensolver (GQE) and its application for ground
state search.
A great advantage of this technique is that it is highly parallelizable, both in terms of using GPU acceleration for the classical component and in using multiple QPUs for the quantum calculations.
Since the publication of the paper, the workflow has been accelerated by parallelizing the expectation value computations of the GPT-QE sampled circuits using the NVIDIA CUDA-Q multi-QPU backend.
The mqpu backend is designed for parallel and asynchronous quantum co-processing, enabling multiple GPUs to simulate multiple QPUs. As the availability of physical quantum hardware increases, these
backends can trivially be replaced with access to multiple instances of potentially varying QPU hardware.
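The fan-out pattern can be sketched with Python's standard library. The snippet below uses a thread pool as a stand-in for the multi-QPU backend: each sampled circuit's expectation value is submitted asynchronously and the results are gathered once all complete. The `expectation_value` function is a placeholder invented for illustration; in the real workflow, each evaluation would dispatch a CUDA-Q kernel to a simulated or physical QPU.

```python
from concurrent.futures import ThreadPoolExecutor
import math

def expectation_value(circuit):
    """Hypothetical stand-in for evaluating <H> on one sampled circuit.
    In the CUDA-Q workflow, this step would run on a (simulated) QPU."""
    # Pretend the "energy" depends on the circuit's token indices.
    return sum(math.cos(t) for t in circuit)

sampled_circuits = [[i, i + 1, i + 2] for i in range(48)]  # 48 samples, as in the text

# Asynchronous fan-out across workers; gather results when all complete.
with ThreadPoolExecutor(max_workers=8) as pool:
    futures = [pool.submit(expectation_value, c) for c in sampled_circuits]
    energies = [f.result() for f in futures]

print(len(energies))  # → 48
```

The structure (submit everything, then collect) is what makes it trivial to swap the worker pool for multiple GPU-simulated or physical QPU instances.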
Figure 3 shows the speedup realized by using the nvidia-mqpu backend on a much larger 18-qubit CO₂ GQE experiment. Baseline CPU computations were obtained by calculating the expectation value of 48 sampled circuits on a 56-core Intel Xeon Platinum 8480CL E5.
Using a single NVIDIA H100 GPU instead of the CPU provided a 40x speedup. The CUDA-Q mqpu backend provides an additional 8x speedup by enabling asynchronous computation of the expectation values
across eight GPUs using an NVIDIA DGX-H100 system.
The authors also trained a 30-qubit CO₂ GQE experiment for which the CPU failed. The model trained in 173 hours on a single NVIDIA H100 GPU, which was reduced to 3.5 hours when parallelized across 48 H100 GPUs.
Figure 3. Expectation value computation for GQE circuit samples
Figure 3 shows GQE circuit samples accelerated with a single NVIDIA H100 GPU or asynchronous evaluation across multiple GPUs using an NVIDIA DGX-H100.
As the scale of quantum computations continues to increase, the ability to parallelize simulation workloads across multiple GPUs, and eventually QPUs, will become increasingly important.
Beyond access to these hardware capabilities, implementing GPT-QE using CUDA-Q provided additional benefits like interoperability with GPU-accelerated libraries such as PyTorch to accelerate the
classical parts of the algorithm. This is a huge benefit of the CUDA-Q platform, which also has access to the world’s fastest implementations of conventional mathematical operations through the
GPU-accelerated CUDA-X libraries.
The CUDA-Q QPU agnosticism is also key in enabling future experiments on multiple physical QPUs. Most importantly, by embodying hybrid quantum computing and offloading gradient calculations to
classical processors, large-scale systems can be explored and open the door to useful quantum computing applications enabled by AI.
Opportunities to extend the GQE framework
This collaboration is a first step towards understanding the broad opportunities for how GPT models can enable quantum supercomputing applications.
Future research will hone exploring different operator pools for GQE and optimal strategies for training. This includes a focus on pretraining, a process where existing datasets can be used to either
make the transformer training more efficient or aid in the convergence of the training process. This is possible if there is a sufficiently large data set available containing generated circuits and
their associated expectation values. Pretrained models can also provide a warm start for training other similar models.
For example, the output from a prior run would create a database of circuits and their associated ground state energies. Poorly performing circuits can be thrown away and the transformer can be
trained using only the better-performing circuits, without the need for a quantum computer or simulator. This pretrained transformer can then be used as the initialization point for further training,
which is expected to converge quicker and exhibit better performance.
There is also a huge scope for applications using GQE outside of quantum chemistry. A collaboration between NVIDIA and Los Alamos National Lab is exploring using the ideas of GQE for geometric
quantum machine learning.
For more information about the GQE code, including examples, see the GQE GitHub repo.
Explore NVIDIA tools for quantum research
The GQE is a novel example of how GPT models and AI in general can be used to enable many aspects of quantum computing.
NVIDIA is developing hardware and software tools such as CUDA-Q to ensure scalability and acceleration of both the classical and quantum parts of hybrid workflows. For more information about NVIDIA’s
quantum efforts, visit the Quantum Computing page.
Harmonizing Algebra and Geometry: The Dance of a Brilliant Mathematician - Best books shelves
Harmonizing Algebra and Geometry: Wei Ho, the first director of the Women and Mathematics program at the Institute for Advanced Study, blends geometry and algebra in her research on the oldest class
of curves.
As with many who eventually become mathematical scientists, Wei Ho began her career in math competitions. In the eighth grade, she won the MathCounts state championship in Wisconsin, and her team took third place at nationals.
Unlike many future mathematicians, though, she wasn't certain she ever wanted to be one.
"I was a nerd who wanted to be everywhere every day," Ho said. "I was very serious about ballet until the beginning of high school. I edited the literary magazine. I did debate and forensics. I played soccer and tennis, as well as violin and piano."
By contrast, most successful mathematicians seemed absorbed by math to the exclusion of all other activities. How could someone with so many interests compete with that level of dedication?
The truth is that Ho was attracted to mathematics' rigor. She still enjoys ballet, reads novels, and does cryptic crossword puzzles alongside her work improving the mathematical machinery that supports some of the most fundamental mathematical objects, like polynomial equations and the recurring, enigmatic open questions associated with them.
Ho studies geometric objects of all kinds, but she reframes the questions to place them within the rational numbers – numbers that can be written as fractions. "Then number theory begins to become part of these other subjects," she said.
She is particularly drawn to elliptic curves, which are defined by polynomial equations and have applications across many fields of mathematics.
They appear in analysis — broadly, the study of continuous objects such as the real numbers — as well as in algebra, which is concerned with defining precise mathematical structures. (Though their subject matter differs, the two fields are separated more by sensibility than by strict boundaries, as there is considerable overlap.)
In a barrier-breaking 2018 preprint, Ho and her collaborator Levent Alpoge of Harvard University established a new upper bound on the number of integral points on an elliptic curve.
Their method builds on long-standing work by Louis Mordell, a mathematician who emigrated from the United States to Britain in 1906.
In their research, Ho and Alpoge gained new insight into the distribution of these integer solutions, insight that had eluded other teams studying similar problems.
Ho is currently spending her time (on leave from her position as a professor at the University of Michigan) as a faculty member at the Institute for Advanced Study, where she was recently appointed the inaugural director of the institute's Women and Mathematics program. She is also a 2023 fellow of the American Mathematical Society and a research scholar at Princeton University.
She hopes that the Women and Mathematics program will "at minimum, assist the community more, assist more people, rather than only me in my office, doing math research on my own or with colleagues," she added. "I can prove theorems, and maybe one day I'll prove a theorem that in 100 years will be important. Maybe, but I'm not sure. However, I felt that I wasn't having enough impact on the world or the people around me."
Quanta interviewed Ho over videoconference. The interviews have been condensed and edited for clarity.
How would you describe your approach to doing math?
Mathematicians can be divided into analytic people and algebraic people. My work involves both but, at the core, I'm an arithmetic person, although my thinking is geometric. I tend to think of the two as the same.
That's not entirely accurate. But in general, because of the work of Descartes and, more specifically, over the last century, the two fields have become extremely close. There is a very precise dictionary that can, in certain situations, help translate a geometric picture into algebraic results.
In my case, the geometric picture usually helps in formulating questions and conjectures. It also gives intuition, but we convert it into algebra when we write. It's much easier to spot errors, because algebra is usually more exact. It's also easier to use algebra when the geometry becomes difficult to comprehend.
What ideas have you been thinking about in recent work?
A significant portion of my work deals with elliptic curves, which are very common objects in number theory and arithmetic geometry.
It shouldn't be that difficult to find integer solutions to equations like these. We would expect that, in general, most curves have no integer solutions. However, it's extremely difficult to prove that.
Levent and I looked into the distribution of these integral points. We use a classic construction that Mordell presented in his 1969 book Diophantine Equations. We can give an upper bound for how many integral points can be found on an elliptic curve. Other people have given bounds before; we found a new bound that is easy to establish.
What role did Mordell’s earlier work play in your latest result?
The question we’re asking is about integral points on elliptic curves. Mordell’s construction connects it to other things that we are able to examine.
It’s what we do every day in math: we want to understand an object, so we find a proxy for understanding it. Sometimes the proxy is precise; sometimes it isn’t. But it’s something that we actually have access to.
When did you make the decision to concentrate on math?
I don’t believe there was a single turning point for me. I’m content with my career and life currently, but I think that if circumstances had been a bit different, I might have been content in a variety of other jobs or fields.
Perhaps that’s something mathematicians don’t like to say, because they love to talk about how passionate they are about math and how they would never consider doing anything else. Personally, I don’t believe that’s true for me.
I am interested in a lot of things. Maybe I became a mathematician because I was unhappy with the lack of rigor in other fields. As a kid I was taught to think like a mathematician in certain ways, because that was the way we lived at home. My dad would play math games with me, and I learned logic from an early age. I always wanted things to be proved.
However, I wasn’t certain that I’d be a great mathematician.
When I was a kid, I didn’t see many mathematicians who were like me in important ways. We throw around a lot of buzzwords about role models. It’s not just that I didn’t see many women; I also didn’t see enough Asian American women.
What I really mean is that I did not see anyone who was interested in anything apart from math. That made me doubt myself quite a bit: how could I succeed in math when I don’t devote 100 percent of my day to doing math?
This is what I observed all around me. I could tell that other people, my colleagues and those who were older, thought about math differently than I did. I believed it would be hard to find a job, because I didn’t want to be that way; I’d have other things going on. I also didn’t see other people paying as much attention to the human side of things, and I worried that this part of me would make me a bad mathematician.
You’ve been named director of the Institute for Advanced Study’s Women and Mathematics program. What does the program offer women mathematicians?
This is a week-long program designed for women at various stages of their careers: undergraduates, graduate students, postdocs, and some junior and senior faculty. The goal is to do math in a safe and supportive setting.
Students who might not have been aware that they’d like to pursue math are meeting extremely senior mathematicians, and receiving mentoring all the way to the top.
They will meet a variety of individuals at different stages of their careers and have conversations with people about their experiences. I don’t believe there are any other programs that span this broad a range of experience while specializing in specific subfields.
The 2023 program is called “Patterns in Integers.” It will feature many people from additive combinatorics and analytic number theory, and we invite individuals from various career paths to make connections.
Older graduate students who are already working in this field will meet postdocs as well as junior and senior professors in their area, and have the opportunity to work with them for a week.
You’re also part of the Stacks project which is a huge online resource. What’s unique about it?
The sheer scale and accessibility of the project are staggering. It runs to more than 7,500 pages if you print it out, and it’s an online collaborative project. In reality, though, the Columbia University mathematician Aise Johan de Jong writes almost all of it.
It’s a meticulous, well-written guide for algebraic geometers. It’s an incredible contribution to the community.
Every week or so the number of users increases. It’s a reliable reference for just about everything. To cover the same range of algebraic geometry, one would have to read through 20 textbooks.
It’s a living document: items can be edited and added, and if there are errors, they’ll eventually be discovered and corrected.
Another interesting thing about the document is its tag system. While the document is constantly growing, a given tag keeps pointing to the same result over time. There are more than 21,000 permanent tags for specific results that you may want to reference. Pieter Belmans built the whole backend of the system, and it has been used in various other projects too. Other people have adapted the technology.
The problem, and Johan is aware of this, is that he won’t be able to keep writing it forever. If we want it to continue, we’ll need others to get involved.
What role will your workshops play in the Stacks project?
The goal is to start getting younger people involved. They’re being asked to write bits and pieces of text that could eventually be incorporated into it.
There is a tension here: for the site to stay up-to-date and high-quality as a resource, it has to be carefully controlled. Therefore, Johan has to continue doing much of the work of curating content for it. It’s not as open as Wikipedia, where anyone can edit. That’s a bit unfortunate, but it’s necessary if you want this to work.
We’re looking for ways to gradually get more participants in the Stacks project. We invite mentors to collaborate on projects with graduate students and postdocs working in algebraic geometry, who then write something up.
We recently published a volume containing several expository pieces that we hope will eventually be included in the Stacks project.
If enough people take part and maintain it, the Stacks project can remain extremely influential for many years to come.
Naive Bayes - Part One: Building from Scratch
If you’re enthusiastic to learn Naive Bayes and want to go more in-depth with a step-by-step example, look no further and read along.
A deep dive into Naïve Bayes for text classification
Naive Bayes is one of the most common machine learning algorithms that is often used for classifying text into categories. Naive Bayes is a probabilistic classification algorithm as it uses
probability to make predictions for the purpose of classification. If you are new to machine learning, Naive Bayes is one of the easiest classification algorithms to get started with.
In part 1 of this two-part series, we will dive deep into the theory of Naïve Bayes and the steps in building a model, using an example of classifying text into positive and negative sentiment. In
part 2, we will dive even deeper into Naïve Bayes, digging into the underlying probability. For those of you who are enthusiastic to learn this common algorithm and want to go more in-depth with a
step-by-step example, this is the blog for you!
For a practical implementation of Naïve Bayes in R, see our video tutorial on Data Science Dojo Zen – Naïve Bayes Classification (timestamp: from 1.00.17 onwards).
Training phase of the Naïve Bayes model
Let’s say there is a restaurant review, “Very good food and service!!!”, and you want to predict whether this review implies a positive or a negative sentiment. To do this, we will first need to train a model (which essentially means determining the counts of words in each category) on a relevant labelled training dataset. New reviews are then classified into one of the given sentiments (positive or negative) using the trained model.
Let’s say you are given a training dataset that looks something like the below (a review and its corresponding sentiment):
A quick side note: Naive Bayes classifier is a supervised machine learning algorithm in that it learns the features that map back to a labeled outcome.
Let’s begin!
Step 1: Data pre-processing
As part of the pre-processing phase, all words in the corpus/dataset are converted to lowercase, and everything apart from letters and numbers (e.g. punctuation and special characters) is removed from the text. For example:
Tokenize each sentence or piece of text by splitting it into words, using whitespace to split/separate the text into each word. For example:
Keep each tokenized sentence or piece of text in its own item within a list, so that each item in the list is a sentence broken up into words.
Each word in the vocabulary of each class of the train set constitutes a categorical feature. This implies that counts of all the unique words (i.e. vocabulary/vocab) of each class are basically a
set of features for that particular class. Counts are useful because we need a numeric representation of the categorical word features as the Naive Bayes Model/Algorithm requires numeric features to
find out the probabilistic scores.
Lastly, split the list of tokenized sentences or pieces of text into a train and test subset. A good rule of thumb is to split by 70/30 – 70% of sentences randomly selected for the train set, and the
remaining 30% for the test set.
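A minimal Python sketch of the pre-processing steps above (the function names and toy reviews here are my own illustrations, not from the original post):

```python
import random
import re

def preprocess(text):
    """Lowercase, keep only letters/digits, and tokenize on whitespace."""
    cleaned = re.sub(r"[^a-z0-9\s]", " ", text.lower())
    return cleaned.split()

# Toy labelled dataset: (review, sentiment) pairs
reviews = [
    ("Very good food and service!!!", "Positive"),
    ("Terrible food, rude service.", "Negative"),
    ("Great ambience and great food", "Positive"),
    ("Awful experience, never again!", "Negative"),
]
tokenized = [(preprocess(text), label) for text, label in reviews]

# 70/30 train/test split on randomly shuffled examples
random.seed(0)
random.shuffle(tokenized)
cut = int(0.7 * len(tokenized))
train, test = tokenized[:cut], tokenized[cut:]
print(preprocess("Very good food and service!!!"))  # → ['very', 'good', 'food', 'and', 'service']
```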
Step 2: Training your Naïve Bayes model
Now we simply build two bags of words (BoW), one for each category. Each bag of words contains words and their corresponding counts. All words belonging to the ‘Positive’ sentiment/label go into one BoW, and all words belonging to the ‘Negative’ sentiment go into their own BoW. Every sentence in the training set is split into words, and word-count pairs are constructed as demonstrated below:
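As a sketch (my own illustrative code, not the post’s), building the per-class bags of words from tokenized training examples might look like:

```python
from collections import Counter

def train_bows(examples):
    """Build one bag of words (word -> count) per class, plus document counts per class."""
    bows = {}                 # label -> Counter of word counts
    class_counts = Counter()  # label -> number of training documents
    for tokens, label in examples:
        class_counts[label] += 1
        bows.setdefault(label, Counter()).update(tokens)
    return bows, class_counts

train = [
    (["very", "good", "food"], "Positive"),
    (["good", "service"], "Positive"),
    (["bad", "food"], "Negative"),
]
bows, class_counts = train_bows(train)
print(bows["Positive"]["good"])  # → 2
```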
Great! You have achieved two major milestones – text pre-processing/cleaning and training the Naïve Bayes model!
Now let’s move onto the most essential part of building the Naive Bayes model for text classification – i.e. using the above trained model to predict new restaurant reviews.
Testing phase– Where prediction comes into play!
Let’s say your model is given a new restaurant review, “Very good food and service!!!”, and it needs to decide which category the review belongs to. Is it a positive review or a negative one? We need to find the probability of this review belonging to each category, and then assign it either a positive or a negative label depending on which category scores the higher probability.
Finding the probability of a given test example
*A quick side note: Your test set should have gone through the same text pre-processing as applied to the train set. *
Step 1: Understanding how probability predicts the label for test example
Here is the not-so-intimidating mathematical form of finding the probability of a piece of text belonging to a class:
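The formula referred to here appeared as an image in the original post; reconstructed in standard form, the (unsmoothed) naive Bayes score for a test example i belonging to class c is:

```latex
p(i \in c) \;=\; p(c)\,\prod_{j=1}^{n} p(w_j \mid c)
```

where p(c) is the class prior and p(w_j | c) is the probability of the j-th word of the test example under class c.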
• i is the test example, “Very good food and service!!!”
• The total number of words in i is 5, so the values of j (the feature index) run from 1 to 5
Let’s map the above scenario to the given test example to make it more clear!
Step 2: Finding Value of the Term — p of class c
Step 3: Finding value of term: product (p of a test word j in class c)
Before we start deducing probability of a test word j in a specific class c let’s quickly get familiar with some easy-peasy notation that is being used in the not so distant lines of this blog post:
i = 1, as we have only one example in our test set at the moment (for the sake of understanding).
A quick side note: During test time/prediction time, we map every word from the test example to its count that was found during the training phase. In this case, we are looking for 5 total word
counts for this given test example.
Still with me? Let’s take a short break before moving on.
Finding the probability of a test word “j” in class c
Before we start calculating the product (p of a test word “j” in class c), we obviously first need to determine p of a test word “j” in class c. There are two ways of doing this as specified below:
Let’s try finding probabilities using method 1, as it is more practically used in the field.
Now we can multiply the probabilities of individual words (as found above) in order to find the numerical value of the term: product (p of a test word “j” in class c)
Now we have numerical values for both the terms i.e. (p of class c and product (p of a test word “j” in class c)) in both the classes. So we can multiply both of these terms in order to determine p
(i belonging to class c) for both the categories. This is demonstrated below:
The p (i belonging to class c) is zero for both the categories, but clearly the test example “Very good food and service!!!” belongs to positive class. So there’s a problem here. This problem
happened because the product (p of a test word “j” in class c) was zero for both the categories and this in turn was zero because a few words in the given test example (highlighted in orange) NEVER
EVER appeared in our training dataset and hence their probability was zero! Clearly, they have caused all the destruction!
So does this imply that whenever a word that appears in the test example but never ever occurred in the training dataset it will always cause such destruction? And in such case our trained model will
never be able to predict the correct sentiment? Will it just randomly pick a positive or negative category since both have the same zero probability and predict wrongly? The answer is NO! This is
where the second method (numbered 2) comes into play. Method 2 is actually used to deduce p(i belonging to class c). But before we move on to method number 2, we should first get familiar with its
mathematical brainy stuff!
After adding pseudo-counts of 1s, the probability p of a test word that never appeared in the training dataset will not default to zero and therefore, the numerical value of the term product (p of a
test word “j” in class c) will also not end up as a zero, which in turn p (i belonging to class c) will not be zero. So all is well and there is no more destruction by zero probabilities!
The numerator term of method number 2 will have an added, 1 as we have added a 1 for every word in the vocabulary and so it becomes:
Total count of word “j” in class c = count of word “j” in class c + 1
Similarly, the denominator becomes: Total count of words in class c = count of words in class c + |V| + 1
And so the complete formula becomes:
Total count of word “j” in class c = 0 + 1
So any unknown word in the test set will have the following probability:
Why add 1 to the denominator?
The numerator will always be 1 whenever a word that never occurred in the training dataset appears in the test set. In other words, we are assuming that an unknown test word occurred once (i.e., its count is 1), so this also needs to be accounted for in the denominator. It is like adding one unknown word to the vocabulary of your training dataset, which makes the total vocabulary size |V| + 1.
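Under this add-one (Laplace) scheme, with the extra 1 in the denominator reserved for the unknown-word slot, a word’s smoothed likelihood can be sketched as follows (names and toy counts are illustrative, not the post’s):

```python
def word_likelihood(word, bow, vocab_size):
    """Laplace-smoothed p(word | class): (count + 1) / (total words in class + |V| + 1)."""
    total = sum(bow.values())
    return (bow.get(word, 0) + 1) / (total + vocab_size + 1)

bow_pos = {"very": 1, "good": 2, "food": 1}  # toy positive-class word counts, |V| = 3
print(word_likelihood("good", bow_pos, vocab_size=3))     # → 0.375, i.e. (2+1)/(4+3+1)
print(word_likelihood("unknown", bow_pos, vocab_size=3))  # → 0.125, i.e. (0+1)/8
```

Note that an unseen word now gets a small but nonzero probability, so the product of likelihoods no longer collapses to zero.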
Now take the time to give yourself a pat on the back for getting this far! You still have little bit to go in finding the probabilities and completing the testing phase. You are almost there.
Finding the probabilities using method 2
Since the probability of the test example “Very good food and service!!!” is higher for the positive class (9.33E-09) than for the negative class (7.74E-09), we predict a Positive Sentiment! And that is how we predict a label for a test/unseen example.
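Putting the pieces together, the whole testing phase can be sketched like this (again my own illustrative code with toy counts; the original post’s vocabulary and numbers differ):

```python
def predict(tokens, bows, class_counts):
    """Return the class maximizing p(c) * prod_j p(w_j | c), with add-one smoothing."""
    vocab = set()
    for bow in bows.values():
        vocab.update(bow)
    total_docs = sum(class_counts.values())
    best_class, best_score = None, -1.0
    for c, bow in bows.items():
        score = class_counts[c] / total_docs        # the prior p(c)
        denom = sum(bow.values()) + len(vocab) + 1  # + |V| + 1 for smoothing
        for w in tokens:
            score *= (bow.get(w, 0) + 1) / denom    # smoothed p(w | c)
        if score > best_score:
            best_class, best_score = c, score
    return best_class

bows = {"Positive": {"good": 2, "food": 1}, "Negative": {"bad": 2, "food": 1}}
class_counts = {"Positive": 2, "Negative": 2}
print(predict(["good", "food"], bows, class_counts))  # → Positive
```

In practice one sums log-probabilities instead of multiplying raw probabilities, to avoid floating-point underflow on long documents.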
A quick side note: like every other machine learning algorithm, Naive Bayes needs a validation set to assess the trained model’s effectiveness. But we jumped straight to the testing part in order to demonstrate a basic implementation of Naive Bayes.
Continue learning with Part 2.
This blog was originally published on towardsdatascience.com
Written by Aisha Jawed
Lebesgue Measure & Integration - Frank Burk : Summary
In most books, Lebesgue integration is introduced in one of two ways: either via measure theory, or by skipping measure theory altogether and using approximating functions such as step functions and monotone functions. The second approach is usually taken so that one can understand the concepts of the Lebesgue integral without going through measure theory. However, that approach makes it difficult for a reader to connect the Lebesgue integral with concepts relating to stochastic processes. This is one of the rare books that combines both approaches: it introduces measure theory as well as approximations, and uses whichever makes a given proof easier to follow. Let me attempt to summarize this book.
Chapter 1 : Historical Highlights
The first chapter gives a historical perspective of “Integration”. Right from 408 B.C.E till 1904 , there were lots of brilliant people who contributed to the understanding of the process of
integration. The main personalities mentioned are
• Eudoxus (408 B.C.E)
• Hippocrates(430 B.C.E)
• Archimedes(287- 212 B.C.E),
• Pierre Fermat (1601 – 1665)
• Leibniz (1646 – 1716)
• Cauchy(1789-1857)
• Bernhard Riemann(1826-1866),
• Emile Borel(1871-1956)
• Camille Jordan(1838-1922)
• Giuseppe Peano (1858-1932)
• Henri Lebesgue(1875-1941)
• William Young(1863-1942).
Eudoxus and Archimedes pioneered the technique called exhaustion. Fermat, Newton and Leibniz were masters at providing anti-derivatives for specific functions. But it was Cauchy who laid out a radically different view of analysis, using limits: he described everything from the viewpoint of limits. Karl Weierstrass provided the algebraic language needed to express concepts like limits and continuity. This was followed up by Riemann and Darboux, who generalized the work of Cauchy and extended the concepts to integrals of bounded functions. However, there were many problems with Riemann calculus, as well as with the FTC (Fundamental Theorem of Calculus) based on the Riemann integral. Pathological examples perplexed mathematicians and raised serious doubts about Riemann calculus. Peano and Jordan used the concepts of inner content and outer content to define integrals. The biggest limitation of the Jordan measure was that it built measure from finite partitions. It seemed to be a ray of hope, but the definition missed an important aspect of measurability: the content of the rationals in [0,1] plus the content of the irrationals in [0,1] exceeds 1, which flies against the requirement that the measure of a union of disjoint sets equal the sum of their measures. Finally, Lebesgue used the concept of countable covers to introduce outer measure and Lebesgue measure, creating a breakthrough in mathematics: Lebesgue integration. All these developments are summarized in the book with the help of the following visual.
Chapter 2 : Preliminaries
This chapter mentions all the concepts needed to understand the three core chapters of the book, Lebesgue Measure, Lebesgue measurable function and Lebesgue Integral.
It starts off by introducing basic operations on sets like union, intersection, and complement. It then introduces monotone sequences of sets and defines the limit superior (lim sup) and limit inferior (lim inf) of a sequence of sets. If one is reading this book with no exposure to set theory, one might wonder about the importance of limiting operations on sets. In any analysis text one usually comes across lim inf and lim sup for sequences of numbers, where they are related to the limit of the sequence. But the lim inf and lim sup of a sequence of sets become extremely important in the context of probability: if you have events A1, A2, A3, ..., An, then lim sup (An) means “An infinitely often”, and lim inf (An) means “An almost always”.
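For reference, the standard definitions (named but not written out above) are:

```latex
\limsup_{n\to\infty} A_n \;=\; \bigcap_{n=1}^{\infty}\ \bigcup_{k=n}^{\infty} A_k,
\qquad
\liminf_{n\to\infty} A_n \;=\; \bigcup_{n=1}^{\infty}\ \bigcap_{k=n}^{\infty} A_k
```

so a point lies in lim sup (An) exactly when it belongs to infinitely many of the An, and in lim inf (An) when it belongs to all but finitely many.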
Functions are then introduced. This is followed up by explaining concepts of cardinality. Terms such as finite set, infinite set, countable, uncountable are explained. For a first time reader of
Lebesgue theory, it is difficult to understand the importance of cardinality concepts to measure theory. For all such readers, they would gain immense understanding by going over the historical
development of measure theory. Many mathematicians tried to wrestle with the concept of measure considering only finite intersections and finite unions and thus limiting the collection of sets for
which measure and integral could be defined. It was Lebesgue and Borel who first combined the concepts of cardinality with the notion of nowhere dense sets (Baire’s contribution). Books such as these are good reads after you have gone through the struggles that various mathematicians faced in defining mathematical objects. Without that historical background they are difficult to follow, but with it they are very illuminating. Anyway, coming back to the summary: countability and uncountability are concepts which are key to understanding Lebesgue measure. This is followed up with some concepts relating to the construction of the real line. Nothing too difficult to follow, just basics like least upper
bound, greatest lower bound, supremum, infimum definitions are mentioned. Personally I felt Abbot’s book “Understanding Analysis”, does a fantastic job of introducing the Real Line. If you have
worked through Abbott, you can quick read this entire chapter.
The highlight of this chapter is the material on sequences of real numbers. The lim sup and lim inf of sequences are introduced with pictures. I am always on the lookout for books that explain things using images, for our minds are tuned to visual processing; a large share of the brain is dedicated to it, and hence images, more so in math, can make you remember things for a long time.
Cauchy sequences are then introduced in the book. Well, Cauchy sequences are the life blood to everything in analysis, from as simple as convergence of a series, to understanding metric spaces.
Nowadays I tend to reflect on my sheer ignorance of these concepts for a very long time in my life. I had crammed up stuff, learnt to crack a few exams without any good understanding of “Real
Analysis”. My first face-off with Real Analysis came out of sheer necessity. I had to teach Calculus to undergraduates during my masters, to take care of my daily expenses. You can’t teach Calculus
without a thorough understanding of Analysis. If someone had asked me, whether a sequence can fill up all the numbers between (0,1) back then, in all probability I would have said yes. While in my
first semester of my masters, I taught the calculus course to a few students and it was a disaster to say the least. I knew all the formulae, methods, tricks, ways to solve problems relating to
limits, differentiation, integration etc. I taught them to crack exams . But my fundamentals were extremely shaky. It was the first time I painfully realized the importance of SOUND fundamentals in
Real Analysis. Slowly over the course of time, I improved my teaching skills and brought in the discussion of basic principles in to the class to appreciate the beauty behind all the formulae. Ok..I
am deviating from the intent of the post.
Coming back to the summary of this book, the chapter subsequently gives a quick recap of some topological definitions / terms / theorems such as Open sets, Closed Sets, Compact Sets, Limit points,
Open Cover, Finite Sub cover, Heine-Borel theorem, Bolzano – Weierstrass theorem. Obviously this chapter provides a good recap ONLY. This is NO GOOD for a beginner. If you want to know thoroughly the
topological terms and concepts, you must pick up a book with enough coverage and problems in it. Math can only be learnt from doing. You cannot read math / listen to math .You understand ONLY when
you DO math and probably TEACH math. Books by Abbott / Rudin / Victor Byrant would be the best way to start off on understanding the principles. I found Rudin very tough to go through and chose
Abbott’s book instead. For topology, I found Victor Byrant’s book on Metric spaces to be a nice learning resource. I remember the first time when I came across these terms and really felt clueless.
If I want to integrate, why should I know about compactness? But slowly I realized that there was no shortcut to understanding Lebesgue integral or for that matter anything in calculus without
actually slogging through Real Analysis and understanding the theorems behind it.
The chapter then talks about Continuous functions and Differentiable functions in a breeze. As I start spending more time on math, I have started to realize that you cannot work on one book at a
time. You have got to read at least 2-3 books simultaneously. For example , a proof of theorem might be using Nested Interval property in one book while it might be using Bolzano – Weierstrass
theorem to prove the same in another book. A third book might contain the motivation behind the formulation of a theorem. Only when you see things from at least two or three perspectives, you get a
decent idea of math principles.
The chapter ends with a discussion of sequences of functions and uniform convergence. Is the limit of the derivatives the same as the derivative of the limit? Is the limit of the sums the same as the sum of the limits? Is the limit of the integrals the same as the integral of the limit? These are some of the crucial questions which led mathematicians to make real analysis more precise. Bressoud’s books, A Radical Approach to Real Analysis and A Radical Approach to Lebesgue’s Theory of Integration, start off with the Fourier series example that perplexed mathematicians for over a century. Fourier series are an example where you can approximate a constant with an infinite trigonometric series; differentiate the series term by term and you get a nonsensical result. Nobody had a clue about the reason for such anomalous behaviour when it was first introduced. The perplexing behaviour of Fourier series was one of the biggest motivations for mathematicians to bring rigor to the definitions of differentiation and integration.
Overall, this chapter makes a tremendous read for someone who is already familiar with the concepts of real analysis and its historical development. My favourite in this genre is “The Calculus Gallery”. I have read it 3-4 times, and every time I read it, I find something that makes me wonder at the immense achievements of the people behind calculus and Lebesgue integration.
Chapter 3 : Lebesgue Measure
Chapter 3 introduces Lebesgue measure in a systematic manner. Instead of diving right in to the definition as most books on measure theory do, the approach taken here is very interesting. It lists
down a set of 8 desirable attributes of any measure for a subset of R. It then shows the conditions that Lebesgue outer measure satisfies. Out of the laundry list of desirable attributes, there are
two attributes which outer measure fails to possess. The crucial one is that outer measure does not satisfy countable additivity for a collection of disjoint sets; it only carries the countable subadditivity property. Hence Lebesgue restricted attention to a smaller collection of sets, yielding a measure which satisfies all the desirable attributes except that it is applicable to all subsets of R. The sets for which countable additivity holds good are called Lebesgue measurable sets. There are some sets in R which are not Lebesgue measurable, whose construction
is very difficult. In this context Carathéodory’s criterion comes in handy: a simple condition for checking the measurability of a set. One can easily see that the null set and R are Lebesgue measurable, while at the same time not all of 2^R, the collection of all subsets of R, is measurable.
The book then logically introduces sigma algebra, a collection of sets which satisfy certain properties such as countable unions, countable intersections, complements and limits are all present in
the collection. One “not to be missed” point is that “Once you are inside a sigma algebra, it is hard to get outside”. All intervals are Lebesgue measurable. However it is desirable to have much more
than simple intervals. In any case, the collection of open intervals is not itself a sigma algebra. This is where the Borel sigma algebra comes in: the smallest sigma algebra containing the intervals. The Borel sigma algebra happens to be a subset of the Lebesgue measurable sets. One of the highlights of this book is its good visuals. The following visual summarizes the
structure of all the discussed concepts in this chapter.
The chapter ends with a discussion of the topological structure of a measurable set. The discussion helps one understand that an open set can enclose a measurable set and a closed set can almost exhaust one. In other words, Lebesgue measurable sets are “almost open” and “almost closed”.
Chapter 4 : Lebesgue Measurable Function
A measurable function is one whose preimages are measurable sets. Functions in the Lebesgue world are not “measured” but integrated. Besides the definition, various forms of functions are explored
in this chapter to check their measurability. With the wide range of applicable functions, the inevitable conclusion is that Lebesgue by defining measurable functions via Measurable sets had cast his
net very very wide. The beauty of measurable functions is related to sequence of functions. If one takes a sequence of functions which are measurable, the function to which the sequence converges is
also measurable. Basically it means that we cannot escape the world of measurable functions even by taking point wise limits. This is not true about bounded, Riemann-integrable functions, functions
in Baire class 1. In those situations, the family of functions was too restrictive to contain all of its pointwise limits. Measurable functions, by contrast, are strikingly inclusive.
The highlight of this chapter is the careful construction of simple functions to approximate a measurable function. The detailed description of the approximation algorithm is by far the best I have found among books on Lebesgue theory. Here’s a visual demonstrating the procedure: for example, functions like y = x^2 and y = 1/x are approximated by a sequence of monotonically increasing simple functions.
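The construction being described (shown only as a picture in the book) is, in its standard form, the dyadic staircase approximation: round f(x) down to the nearest multiple of 2^-n and cap the value at n. A quick sketch for non-negative functions:

```python
import math

def simple_approx(f, n):
    """n-th simple-function approximation of a non-negative function f:
    phi_n(x) = min(floor(f(x) * 2^n) / 2^n, n).
    These functions increase pointwise to f as n grows."""
    def phi(x):
        return min(math.floor(f(x) * 2**n) / 2**n, n)
    return phi

# The approximations increase pointwise toward f(x) = x^2 at x = 1.3:
f = lambda x: x**2
for n in (1, 2, 4, 8):
    print(n, simple_approx(f, n)(1.3))
```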
The chapter ends with a discussion of almost uniform convergence via Egoroff’s theorem. This theorem states that if {fk} is a sequence of measurable functions on a set of finite measure converging almost everywhere to f, then for every ε > 0 there is a subset of measure less than ε outside of which the convergence is uniform, hence the name almost uniform convergence. I feel this part could have been improved by taking the Bressoud-II approach, where the author discusses convergence in measure, Riesz’s theorem, etc. By covering all the modes of convergence (pointwise, almost everywhere, almost uniform, and convergence in measure) the reader is more likely to get a good overall picture. In that sense, the final part of the chapter is a bit of a letdown, as it introduces almost uniform convergence somewhat abruptly and then ends.
Chapter 5 : Lebesgue Integral
The chapter starts off with a basic introduction to the Riemann integral. In most real analysis books you find an introduction to Riemann integration via partitions that eventually concludes with an elegant formulation using Darboux integrals. This book, however, takes a different approach: the step function approach. The lower and upper Riemann integrals, and the properties of the Riemann integral such as homogeneity, additivity, monotonicity, additivity on the domain and the mean value property, are all derived using step functions. Basically it's an old-wine-in-a-new-bottle approach, but the proofs are much easier to follow than the usual ones based on the sup/inf of lower/upper sums. Even both parts of the FTC (Fundamental Theorem of Calculus), the antiderivative part and the evaluation part, are derived using the step function approach. One basic takeaway from this introduction is that you can formulate the Riemann integrability condition purely in terms of step functions, and that is an intuitively easy way to understand the condition.
The chapter then talks about the Lebesgue integral for bounded functions on sets of finite measure. Homogeneity, additivity, monotonicity, and additivity on the domain hold good for simple functions. One of the best proofs in the book is for the theorem: “Let f be a bounded function on the interval [a,b]. If f is Riemann integrable on [a,b], then f is Lebesgue integrable on [a,b].” Since step functions are a subset of simple functions, it is proved using the fact that the Lebesgue integrability condition can be sandwiched within the Riemann integrability condition. Subsequently, the homogeneity, additivity, monotonicity, and additivity-on-domain properties are explored for Lebesgue integrals.
How does one decide whether a function is Lebesgue integrable or not? For functions whose domain is of finite measure, it is easy. If you can sandwich the function between two simple functions such that the difference between their integrals can be made as small as one wants, then the function is Lebesgue integrable. Another obvious criterion is whether the function one is trying to integrate is measurable or not. Can there be a Lebesgue integrable function which is not measurable? No, says the proof of a theorem from this book.
Books such as these should not be missed, for a simple reason: you get both intuition and rigorous explanation of results. It is a standard result that a Lebesgue measurable function can be approximated using a monotone sequence of simple functions. I always used to wonder why one should not, in a dumb fashion, generate a sequence of simple functions and use it for integration purposes. The book, however, makes the reader realize that this is not always a pleasant route: some of the examples show that it is technically extremely cumbersome to work with sequences of simple functions to approximate the integral. Instead, the book shows that it is more elegant to approximate the Lebesgue integral with a sequence of monotone measurable functions. Again, examples come before the theory, so the reader is well motivated to see the relevance of the theory. Fatou's lemma is usually used in proving the Monotone Convergence Theorem, but this book takes a different approach, where the Monotone Convergence Theorem is used to prove Fatou's lemma.
Finally the chapter extends integration to all measurable functions, not just nonnegative ones. One must be very clear about the difference between the two statements, “the Lebesgue integral of f is blah blah” and “f is Lebesgue integrable on E and equals blah blah blah”. Finiteness of the integral is crucial to call a function Lebesgue integrable on a set E. So in one sense f is Lebesgue integrable on E only if |f| is integrable. This is very different from the Riemann case, where there is a chance the negative part of the function might cancel part of the positive summands and make the Riemann integral converge. Not so in the case of Lebesgue. This means that there are some unbounded improper integrals that are Riemann integrable but not Lebesgue integrable on a measurable set. The chapter ends with a discussion of the Lebesgue Dominated Convergence Theorem, which gives a very broad sufficient (note: not necessary) condition for interchanging the limit and integral signs for a sequence of functions. One of the highlights of the book is the detailed derivation of the Cantor set using ternary expansions, and the construction of the devil's staircase function.
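The Riemann-but-not-Lebesgue phenomenon can be seen numerically with the classic example f(x) = sin(x)/x on [1, ∞): the improper Riemann integral converges, but the integral of |f| grows without bound (roughly like log T), so f is not Lebesgue integrable there. A rough Python sketch (my own illustration, not from the book):

```python
import math

def trapz(f, a, b, n=200_000):
    """Plain trapezoidal-rule approximation of the integral of f on [a, b]."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

f = lambda x: math.sin(x) / x
for T in (100, 400, 1600):
    signed = trapz(f, 1, T)
    total = trapz(lambda x: abs(math.sin(x)) / x, 1, T)
    # The signed integrals settle near a finite limit (about 0.62),
    # while the integrals of |f| keep growing, roughly like (2/pi) * log(T).
    print(T, round(signed, 3), round(total, 3))
```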
The book is very focused in its treatment. Instead of covering a laundry list of topics related to Lebesgue theory, the book concentrates on three core concepts, i.e., Lebesgue measure, Lebesgue measurable functions and the Lebesgue integral, and explains them very clearly. The author uses visuals to explain concepts, making the content very interesting. The highlight of this book is the conversational tone the author adopts, enabling the reader to connect seemingly disparate theorems/concepts/problems from the book.
|
{"url":"https://www.rksmusings.com/2011/03/22/lebesgue-measure-integration-frank-burk-summary/","timestamp":"2024-11-04T01:19:31Z","content_type":"text/html","content_length":"31440","record_id":"<urn:uuid:60cc7c09-2137-439e-ad2b-25e894d82d20>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00483.warc.gz"}
|
Know How I Complete My Math Homework in Less Time – University Homework Help
Homework is assigned in schools and colleges to make concepts clear to students and to help them revise everything done in class. But maths homework is a comprehensive task: it requires a well-detailed plan if students are to complete the work quickly.
Why do students face problems with maths homework?
Students face problems with maths homework for many reasons:
• Concepts are not clear-
Students struggle when the concepts of the chapter are not clear. If the basics are not known to them, they will not find the best way to complete their maths homework on time and in the prescribed format.
• Difficulty in calculation-
Students need help with maths homework when they find the calculation parts difficult. They struggle to understand the concepts of division and multiplication, along with algebraic expressions, which seem difficult to them.
• Difficulty with formulas and theorems-
Students fail to understand the formulas and theorems of the subject, which are important for completing the assignments.
• Difficulty in solving problems-
Students struggle to understand word problems and fail to get the right answers for them.
All these problems lead students to a maths homework help service, where their entire task gets solved in the perfect way while taking less time.
Steps followed by students in completing maths homework
• Collect all study material at one place-
This is the first step and must be followed properly. Students should prepare fully by gathering all their books, notebooks and other study material in one place. This helps them complete their maths homework properly, letting them look up the different concepts they need while doing the work.
• Note down all formulas and theorems on a piece of paper–
It is important that students note down all the formulas and theorems on a piece of paper, so that they can quickly find whichever formula or theorem they need to complete the work, without having to search the book again and again.
• Try to make the place tidy-
Students planning to do their maths homework should make the place tidy and clean. This helps develop their interest, so they engage more with the work. They should keep all the important things in one place so they do not have to go here and there while completing the work.
• Sit with your classmate–
Students who struggle with maths homework may take help from their friends. A friend can guide them in the best way and share tricks for completing the work fast.
• Try to remove all the distractions-
When students are about to sit down for maths homework, they should remove all distractions from the room. Things like a TV, a games console or a mobile phone in the study room will sap their interest.
• Better to read the chapter once–
It is better for students to read the chapter once before starting their maths homework. It helps them revise the whole chapter, so all the concepts are clear before the work begins.
• Try to complete the simple solutions first–
Students should complete the simple problems first, leaving aside the hard ones that take more time. Once the simple problems are done, they can give more time to the complex ones.
• Take breaks in between–
It is better for students to take breaks while doing maths homework; without them they get bored with the work. They should divide their time between work and play. This keeps them fresh and more interested in finishing the work quickly.
• Take the help from parents-
Students can also take help from their parents with maths homework; parents can sit with them and work through everything without any problem. Parents are the best teachers and can share basic techniques for completing the work on time.
• Take help from private tutors-
Students can also take help from private tutors, who will likewise help them complete the work quickly, though they are more expensive than online professionals.
• Take help from online professionals-
When students are not satisfied with parents and friends, they should turn to an online platform for professional help. Professionals will solve their work in the best manner, offering special tips and techniques that help them finish the work before the deadline.
• Try to revise the chapters daily–
It is important for students to revise the chapters daily so that they easily remember the concepts. They can make small notes and stick them to the wall of their study room, glancing at them whenever they enter or leave.
Why are online professionals best?
Online professionals are best because they:
• Complete the homework fast
• Make concepts clear regarding the chapters
• Facility of 24 hours service
• Flexibility in services
• Error free solutions
Thus it is important for students to turn to an online platform for maths homework help. Availing the services of professionals in this field is the better option, as they help students complete their entire work well before the deadline.
|
{"url":"https://universityhomeworkhelp.com/know-how-i-complete-my-math-homework-in-less-time/","timestamp":"2024-11-11T05:15:05Z","content_type":"text/html","content_length":"241662","record_id":"<urn:uuid:bbd2641d-93cd-4d0f-aeac-b1d4ae5fed41>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00827.warc.gz"}
|
The Change in the SIF of an Internal Semi-Elliptical Surface Crack Due to the Presence of an Adjacent Nonaligned Corner Quarter-Circle Crack in a Semi-Infinite Plate Under Remote Bending
Fracture mechanics-based failure theory has been used for analyzing structural integrity in Fitness-for-Service assessments of structures containing cracks. The stress intensity factors (SIFs) along
the crack front are the key information in order to assess the remaining service life of a cracked component. In the case of a multiply cracked component, according to Fitness-for-Service (FFS)
standards, these cracks must be first identified as to whether they are on the same cross-sectional plane, to be considered aligned cracks, or whether they are on parallel planes and thus be
considered non-aligned parallel cracks. Extensive studies have been carried out on the mutual influence of adjacent parallel cracks. However, the scenario of a semi-elliptical surface crack under the
influence of a quarter corner circular crack under remote bending has never been addressed. The present analysis addresses this problem by evaluating the effect of a corner circular crack of length a[2] on the SIF of an adjacent nonaligned parallel semi-elliptical surface crack of length 2a[1] and depth b[1]. A parametric study of the effect on the SIF as a function of the horizontal separation
(S) and vertical gap (H) distances between the two cracks and the crack length ratio a[2]/a[1] is conducted. Mode I SIFs are evaluated for a wide range of the normalized crack gaps of H/a[2] = 0.4~2,
and normalized crack separation distances S/a[2] = -0.5~2. As in the case of tension, the presence of the corner quarter-circle crack affects the stress intensity factor along the semi-elliptical
crack front. The present results clearly indicate that the effect of the corner quarter-circle crack on the surface semi-elliptical crack is weaker in the case of bending, when compared to the
tension case. The largest percentage differences (5-22%) occur at the tip of the semi-elliptical surface crack farthest from the corner crack, and they occur for a[2]/a[1] < 1. In general, the deeper the semi-elliptical crack, the lower this percentage difference. When comparing maximal absolute values of the SIF, the maximum normally occurs when b[2]/a[2] = 1 for the cases undertaken. In general, the maximum values are found at the tip of the semi-elliptical crack closest to the corner quarter-circle crack. When b[1]/a[1] is different from 1, the maximum's location can depend on the value of H/a[2] and S/a[2] irrespective of the ratio a[2]/a[1]. In these cases, the absolute maximum can occur in the vicinity of the deepest point or in the vicinity of
the farthest crack tip of the semi-elliptical crack. As in the case of tension, in the case of bending the presence of the Corner Quarter-Circle Crack changes the stress intensity factor along the
semi-elliptical crack front. The change reaches its maximum at the tip of the semi-elliptical crack closest to the corner crack, and it monotonically decreases moving away from this tip for the case
b[1]/a[1]=1. For most of the cases b[1]/a[1] < 1, the maximum occurs in the vicinity of the midpoint of the semi-elliptical crack and decreases monotonically in both directions from the midpoint.
Publication series
Name American Society of Mechanical Engineers, Pressure Vessels and Piping Division (Publication) PVP
Volume 2
ISSN (Print) 0277-027X
Conference ASME 2023 Pressure Vessels and Piping Conference, PVP 2023
Country/Territory United States
City Atlanta
Period 16/07/23 → 21/07/23
• Fitness-for-Service
• Non-aligned
• Quarter-Circle Corner Crack
• Semi-elliptical crack
• Stress Intensity Factors
ASJC Scopus subject areas
Dive into the research topics of 'The Change in the SIF of an Internal Semi-Elliptical Surface Crack Due to the Presence of an Adjacent Nonaligned Corner Quarter-Circle Crack in a Semi-Infinite Plate
Under Remote Bending'. Together they form a unique fingerprint.
|
{"url":"https://cris.bgu.ac.il/en/publications/the-change-in-the-sif-of-an-internal-semi-elliptical-surface-crac","timestamp":"2024-11-12T16:07:19Z","content_type":"text/html","content_length":"70750","record_id":"<urn:uuid:419b10fc-cd56-4b2f-8096-9ee602993556>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00579.warc.gz"}
|
Is this a real problem, or just an anomaly? – Anomaly Detection | Vexpower
Before anyone starts to panic, you need to triage the problem and estimate the size of the issue. We only have a week's worth of data, could this just be an anomaly?
When you see something abnormal in the data, it may just be random noise that will revert to the mean.
Hey just sending you the latest data: it looks like around 20-30% of our conversions are being attributed to direct instead of Facebook.
This course is a work of fiction. Unless otherwise indicated, all the names, characters, businesses, data, places, events and incidents in this course are either the product of the author's
imagination or used in a fictitious manner. Any resemblance to actual persons, living or dead, or actual events is purely coincidental.
One simple way to determine if an event is significant or just noise is to do some basic anomaly detection. This works by finding the standard deviation and mean of the data prior to the event, then seeing how many standard deviations from the mean the new value is. Using this statistical technique instead of guessing or eyeballing the data gives you a reliable, consistent method for determining how important a worrisome new data point is. You find the upper bound for anomalies by adding 1x, 2x, or 3x the standard deviation to the mean, and the lower bound by subtracting it. If a value falls outside those bounds, it's an anomaly.
This is key if you want to avoid continuously chasing your tail as an analyst, because it tells you when to dive in to solve a problem, and when it makes sense to relax and wait for more data. If something isn't a true anomaly, it will likely revert back to the mean given a few more days or weeks. This technique is closely related to quartiles: for example, dividing the data into 4 buckets using the QUARTILE function in GSheets, you'll see that the difference between the 3rd and 1st quartiles (the interquartile range) is approximately 1.35 times the standard deviation for normally distributed data.
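The procedure above can be sketched in a few lines of Python (the numbers below are invented placeholders, not course data):

```python
import statistics

def is_anomaly(history, new_value, k=3):
    """Flag new_value if it falls outside mean +/- k standard deviations
    of the historical data (k = 3 is a common, conservative default)."""
    mean = statistics.mean(history)
    sd = statistics.pstdev(history)
    return not (mean - k * sd <= new_value <= mean + k * sd)

# Hypothetical daily share (%) of conversions attributed to 'direct':
history = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2, 4.3]
print(is_anomaly(history, 25.0))  # far outside the band: worth investigating
print(is_anomaly(history, 4.6))   # inside the band: likely just noise
```

With only a week of history, the standard deviation estimate is itself noisy, which is another reason to wait for more data before acting on borderline values.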
|
{"url":"https://app.vexpower.com/sim/is-this-a-real-problem-or-just-an-anomaly/","timestamp":"2024-11-01T22:04:48Z","content_type":"text/html","content_length":"102374","record_id":"<urn:uuid:1f9ff7a2-70d9-4180-89b8-14fd9ba66315>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00375.warc.gz"}
|
Heat Transfer on Flow Past a Linearly Vertical Accelerated Plate With Constant Temperature and Variable Mass Diffusion
PECTEAM - 2018 (Volume 6 - Issue 02)
Heat Transfer on Flow Past a Linearly Vertical Accelerated Plate With Constant Temperature and Variable Mass Diffusion
DOI : 10.17577/IJERTCON076
Download Full-Text PDF Cite this Publication
M Sundar Raj , G Nagarajan, 2018, Heat Transfer on Flow Past a Linearly Vertical Accelerated Plate With Constant Temperature and Variable Mass Diffusion, INTERNATIONAL JOURNAL OF ENGINEERING RESEARCH
& TECHNOLOGY (IJERT) PECTEAM – 2018 (Volume 6 – Issue 02), http://dx.doi.org/10.17577/IJERTCON076
• Open Access
• Total Downloads : 29
• Authors : M Sundar Raj , G Nagarajan
• Paper ID : IJERTCON076
• Volume & Issue : PECTEAM – 2018 (Volume 6 – Issue 02)
• DOI : http://dx.doi.org/10.17577/IJERTCON076
• Published (First Online): 17-04-2018
• ISSN (Online) : 2278-0181
• Publisher Name : IJERT
• License: This work is licensed under a Creative Commons Attribution 4.0 International License
Text Only Version
Heat Transfer on Flow Past a Linearly Vertical Accelerated Plate With Constant Temperature and Variable Mass Diffusion
1 M Sundar Raj, 2G Nagarajan
1,2Department of Mathematics, Panimalar Engineering College, Chennai 600 123 , Tamilnadu, India. sundarrajkani@gmail.com, sridin_naga@yahoo.co.in
Abstract-An analysis of flow past a linearly accelerated infinite vertical plate is offered in the presence of variable mass diffusion with constant temperature. The temperature of the plate is raised to Tw and the species concentration level close to the plate rises linearly with time. The non-dimensional governing equations are solved by Laplace-transform methods. The effects of concentration, temperature and velocity are calculated for different parameters such as the Schmidt number, Prandtl number, thermal Grashof number, mass Grashof number and time. It shows that the velocity increases due to increasing values of the thermal Grashof or mass Grashof number. It is also observed that the velocity increases with decreasing values of the Schmidt number.
Keywords: accelerated, constant temperature, isothermal, vertical plate, heat transfer, mass diffusion.
1. INTRODUCTION
The effects of heat and mass transfer play an important role in spacecraft design, solar energy collectors, filtration processes, nuclear reactors, the drying of porous materials in textile industries, the saturation of porous materials by chemicals, the design of chemical processing equipment and pollution of the environment. The effects of mass transfer on flow past a uniformly accelerated vertical plate were studied by Soundalgekar[1]. The above problem was extended to include heat and mass transfer effects subject to variable suction or injection by Kafousias and Raptis[2]. It is proposed to study flow past a linearly accelerated isothermal infinite vertical plate in the presence of variable mass diffusion with constant temperature. The dimensionless governing equations are solved using the Laplace-transform technique. The solutions are in terms of the complementary error function and exponential functions.
2. MATHEMATICAL FORMULATION
The flow of an incompressible fluid past a linearly accelerated infinite vertical plate with constant temperature and variable mass diffusion has been considered. The x-axis is taken in the vertical direction along the plate and the y-axis is considered in the horizontal direction. At time t ≤ 0, the plate and fluid are at the same temperature T∞. At time t > 0, the plate is linearly accelerated with a velocity u = u0 t in its own plane, the temperature of the plate is raised to Tw, and mass is diffused from the plate to the fluid linearly with time. Then, by Boussinesq's approximation, the governing equations of unsteady flow are as follows:
Figure: Physical model of the problem
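The governing equations themselves did not survive in the source text. For this classical configuration, unsteady free convection past an accelerated vertical plate under the Boussinesq approximation, they typically take the form below. This is a reconstruction from standard treatments of the problem, not recovered from the source; here β and β* denote the thermal and concentration expansion coefficients, ν the kinematic viscosity, κ the thermal conductivity, Cp the specific heat and D the mass diffusivity, all of which are assumptions:

```latex
\frac{\partial u}{\partial t'} = g\beta\,(T' - T'_{\infty})
  + g\beta^{*}\,(C' - C'_{\infty}) + \nu\,\frac{\partial^{2} u}{\partial y^{2}},
\qquad
\rho C_{p}\,\frac{\partial T'}{\partial t'} = \kappa\,\frac{\partial^{2} T'}{\partial y^{2}},
\qquad
\frac{\partial C'}{\partial t'} = D\,\frac{\partial^{2} C'}{\partial y^{2}}
```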
The initial and boundary conditions in non-dimensional form follow.
3. METHOD OF SOLUTION
The non-dimensional governing equations (4) subject to the conditions (5) are solved by the Laplace-transform method, giving solutions in terms of the similarity variable η = Y/(2√t).
4. RESULTS
In order to get a physical understanding of the problem, numerical computations are carried out for different parameters Gr, Gc, Sc, Pr and t upon the nature of the flow and transport. The value of the Schmidt number Sc is taken to be 0.6, which corresponds to water vapor. The values of the Prandtl number Pr are chosen such that they represent air (Pr = 0.71) and water (Pr = 7.0). The values of the concentration, velocity and temperature are calculated for different parameters such as the Prandtl number, Schmidt number, thermal Grashof number (Gr), mass Grashof number (Gc) and time.
The velocity profiles for Sc = 0.6, Pr = 0.71 (t = 0.2, 0.4, 0.6), Gr = Gc = 5 are studied and presented in Fig. 1. It is observed that the velocity increases with increasing values of t. Fig. 2 demonstrates the effects of different thermal Grashof numbers (Gr = 2, 5) and mass Grashof numbers (Gc = 2, 5) on the velocity at time t = 0.2. It was observed that the velocity increases with increasing thermal Grashof or mass Grashof number.
Fig. 3 presents the concentration at time t = 0.2 for various Schmidt numbers (Sc = 0.16, 0.3, 0.5, 2.01). The profiles have the common feature that the concentration decreases in a monotone fashion from the surface to a zero value far away in the free stream. It was observed that the concentration increases with decreasing values of the Schmidt number.
The velocity for various values of the Schmidt number (Sc = 0.16, 0.3, 0.6, 2.01), Gr = Gc = 5 and time t = 0.2 is shown in Fig. 4. The trend shows that the velocity increases with decreasing Schmidt number.
The temperature profiles are calculated for water and air from Equation (6) and are shown in Fig. 5 at time t = 0.2. The Prandtl number plays an important role in the temperature field. It was observed that the temperature increases with decreasing Prandtl number. This shows that heat transfer is greater in air than in water.
5. CONCLUSIONS
The effects of heat and mass transfer on flow past a linearly accelerated infinite vertical plate in the presence of variable mass diffusion have been studied. The non-dimensional governing equations are solved by the usual Laplace-transform method. The concentration, velocity and temperature are studied graphically for various parameters such as the thermal Grashof number (Gr), mass Grashof number (Gc), Schmidt number (Sc), Prandtl number (Pr) and time (t). The study concludes that the velocity increases with increasing values of the thermal Grashof number (Gr), mass Grashof number (Gc) and time (t), and increases with decreasing Schmidt number (Sc). The wall concentration increases with decreasing values of the Schmidt number.
|
{"url":"https://www.ijert.org/heat-transfer-on-flow-past-a-linearly-vertical-accelerated-plate-with-constant-temperature-and-variable-mass-diffusion","timestamp":"2024-11-12T17:04:08Z","content_type":"text/html","content_length":"69888","record_id":"<urn:uuid:da033f18-0e17-44c6-abd7-b71f984f9708>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00644.warc.gz"}
|
Concatenation theory
Concatenation theory, also called string theory, character-string theory, or theoretical syntax, studies character strings over finite alphabets of characters, signs, symbols, or marks. String theory
is foundational for formal linguistics, computer science, logic, and metamathematics especially proof theory.^[1] A generative grammar can be seen as a recursive definition in string theory.
The most basic operation on strings is concatenation; connect two strings to form a longer string whose length is the sum of the lengths of those two strings. ABCDE is the concatenation of AB with
CDE, in symbols ABCDE = AB ^ CDE. Strings, and concatenation of strings can be treated as an algebraic system with some properties resembling those of the addition of integers; in modern mathematics,
this system is called a free monoid.
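The free-monoid laws mentioned above can be checked directly in any language with a string type; here is a small Python sketch (the article writes concatenation as ^, Python writes it as +):

```python
# Strings over a finite alphabet form a free monoid under concatenation:
# the operation is associative, the empty string is the identity element,
# and the length of a concatenation is the sum of the lengths.
a, b, c = "AB", "CDE", "F"

print(a + b)                                  # ABCDE = AB ^ CDE
assert (a + b) + c == a + (b + c)             # associativity
assert "" + a == a + "" == a                  # empty string is the identity
assert len(a + b) == len(a) + len(b)          # lengths add
```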
In 1956 Alonzo Church wrote: "Like any branch of mathematics, theoretical syntax may, and ultimately must, be studied by the axiomatic method".^[2] Church was evidently unaware that string theory
already had two axiomatizations from the 1930s: one by Hans Hermes and one by Alfred Tarski.^[3] Coincidentally, the first English presentation of Tarski's 1933 axiomatic foundations of string theory
appeared in 1956 – the same year that Church called for such axiomatizations.^[4] As Tarski himself noted using other terminology, serious difficulties arise if strings are construed as tokens rather than types in the sense of Peirce's type-token distinction (not to be confused with other senses of "type").
|
{"url":"https://static.hlt.bme.hu/semantics/external/pages/%C3%BCres_sor/en.wikipedia.org/wiki/Concatenation_theory.html","timestamp":"2024-11-11T14:04:05Z","content_type":"text/html","content_length":"29259","record_id":"<urn:uuid:73d69d3d-efc8-4f95-a752-edcc70c93715>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00383.warc.gz"}
|
Writing custom ops, kernels and gradients in TensorFlow.js
This guide outlines the mechanisms for defining custom operations (ops), kernels and gradients in TensorFlow.js. It aims to provide an overview of the main concepts and pointers to code that
demonstrate the concepts in action.
Who is this guide for?
This is a fairly advanced guide that touches on some internals of TensorFlow.js, it may be particularly useful for the following groups of people:
• Advanced users of TensorFlow.js interested in customizing behaviour of various mathematical operations (e.g. researchers overriding existing gradient implementations or users who need to patch
missing functionality in the library)
• Users building libraries that extend TensorFlow.js (e.g. a general linear algebra library built on top of TensorFlow.js primitives or a new TensorFlow.js backend).
• Users interested in contributing new ops to tensorflow.js who want to get a general overview of how these mechanisms work.
This is not a guide to general use of TensorFlow.js, as it goes into internal implementation mechanisms. You do not need to understand these mechanisms to use TensorFlow.js.
You do need to be comfortable with (or willing to try) reading TensorFlow.js source code to make the most use of this guide.
For this guide a few key terms are useful to describe upfront.
Operations (Ops) — A mathematical operation on one or more tensors that produces one or more tensors as output. Ops are ‘high level’ code and can use other ops to define their logic.
Kernel — A specific implementation of an op tied to specific hardware/platform capabilities. Kernels are ‘low level’ and backend specific. Some ops have a one-to-one mapping from op to kernel while
other ops use multiple kernels.
Gradient / GradFunc — The ‘backward mode’ definition of an op/kernel that computes the derivative of that function with regards to some input. Gradients are ‘high level’ code (not backend specific)
and can call other ops or kernels.
Kernel Registry - A map from a (kernel name, backend name) tuple to a kernel implementation.
Gradient Registry — A map from a kernel name to a gradient implementation.
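The two registries can be pictured as plain maps. The toy JavaScript sketch below illustrates only the lookup pattern; the names and signatures here are invented for illustration and are not the actual TensorFlow.js API (in the real library, registration goes through registerKernel / registerGradient from tfjs-core, described later in this guide):

```javascript
// Toy model of the two registries. NOT the real TensorFlow.js API.
const kernelRegistry = new Map();   // "kernelName_backendName" -> impl
const gradientRegistry = new Map(); // "kernelName" -> gradient fn

function registerKernel(kernelName, backendName, fn) {
  kernelRegistry.set(`${kernelName}_${backendName}`, fn);
}

function runKernel(kernelName, backendName, inputs) {
  const fn = kernelRegistry.get(`${kernelName}_${backendName}`);
  if (!fn) throw new Error(`No kernel '${kernelName}' for '${backendName}'`);
  return fn(inputs);
}

// A 'square' kernel registered for two hypothetical backends, plus a
// gradient keyed by kernel name alone (gradients are backend-agnostic):
registerKernel('square', 'cpu', xs => xs.map(x => x * x));
registerKernel('square', 'webgl', xs => xs.map(x => x * x));
gradientRegistry.set('square', (dy, x) => dy * 2 * x);

console.log(runKernel('square', 'cpu', [1, 2, 3])); // [ 1, 4, 9 ]
```

Dispatch on the current active backend then reduces to one map lookup, which is why a kernel must be registered per backend while a gradient is registered once per kernel name.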
Code organization
Operations and Gradients are defined in tfjs-core.
Kernels are backend specific and are defined in their respective backend folders (e.g. tfjs-backend-cpu).
Custom ops, kernels and gradients do not need to be defined inside these packages. But will often use similar symbols in their implementation.
Implementing Custom Ops
One way to think of a custom op is just as a JavaScript function that returns some tensor output, often with tensors as input.
• Some ops can be completely defined in terms of existing ops, and should just import and call these functions directly. Here is an example.
• The implementation of an op can also dispatch to backend specific kernels. This is done via Engine.runKernel and will be described further in the “implementing custom kernels” section. Here is an example.
Implementing Custom Kernels
Backend specific kernel implementations allow for optimized implementation of the logic for a given operation. Kernels are invoked by ops calling tf.engine().runKernel(). A kernel implementation is defined by four things:
• A kernel name.
• The backend the kernel is implemented in.
• Inputs: Tensor arguments to the kernel function.
• Attributes: Non-tensor arguments to the kernel function.
Here is an example of a kernel implementation. The conventions used to implement kernels are backend specific and are best understood by looking at each particular backend's implementation and documentation.
Generally kernels operate at a level lower than tensors and instead directly read and write to memory that will be eventually wrapped into tensors by tfjs-core.
Once a kernel is implemented it can be registered with TensorFlow.js by using the registerKernel function from tfjs-core. You can register a kernel for every backend you want that kernel to work in. Once
registered, the kernel can be invoked with tf.engine().runKernel(...) and TensorFlow.js will make sure to dispatch to the implementation in the currently active backend.
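The registry-and-dispatch pattern described above can be sketched in a few lines of standalone TypeScript. This is a simplification for illustration only: real tfjs kernels receive tensors, attributes and backend state rather than plain arrays, and the function names below only mirror the tfjs concepts.

```typescript
// Illustrative sketch of a (kernel name, backend name) -> kernel map;
// NOT the real tfjs-core implementation.
type KernelFunc = (inputs: number[]) => number[];

const kernelRegistry = new Map<string, KernelFunc>();

// Key the registry on the (kernel name, backend name) tuple.
function registerKernel(kernelName: string, backendName: string, fn: KernelFunc): void {
  kernelRegistry.set(`${kernelName}_${backendName}`, fn);
}

// Dispatch to the implementation registered for the active backend.
function runKernel(kernelName: string, backendName: string, inputs: number[]): number[] {
  const fn = kernelRegistry.get(`${kernelName}_${backendName}`);
  if (fn === undefined) {
    throw new Error(`No kernel '${kernelName}' for backend '${backendName}'`);
  }
  return fn(inputs);
}

// Register a 'Square' kernel for a hypothetical 'cpu' backend and invoke it.
registerKernel('Square', 'cpu', (xs) => xs.map((x) => x * x));
console.log(runKernel('Square', 'cpu', [1, 2, 3])); // [ 1, 4, 9 ]
```

Registering the same kernel name again under a second backend name simply adds another entry, which is how a single op can have per-backend implementations.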
Implementing Custom Gradients
Gradients are generally defined for a given kernel (identified by the same kernel name used in a call to tf.engine().runKernel(...)). This allows tfjs-core to use a registry to look up gradient
definitions for any kernel at runtime.
Implementing custom gradients is useful for:
• Adding a gradient definition that may not be present in the library
• Overriding an existing gradient definition to customize the gradient computation for a given kernel.
You can see examples of gradient implementations here.
Once you have implemented a gradient for a given kernel it can be registered with TensorFlow.js by using the registerGradient function from tfjs-core.
The other approach to implementing custom gradients, one that bypasses the gradient registry (and thus allows computing gradients for arbitrary functions in arbitrary ways), is tf.customGrad.
Here is an example of an op within the library that uses customGrad.
|
{"url":"https://www.tensorflow.org/js/guide/custom_ops_kernels_gradients?authuser=3","timestamp":"2024-11-09T07:12:09Z","content_type":"text/html","content_length":"117909","record_id":"<urn:uuid:ef28d246-ccee-480e-b2b6-34e81de5e5a8>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00039.warc.gz"}
|
Bond Blocks
• Number: Counting, Place Value
Building Understanding and Fluency in Addition and Subtraction with basic facts and beyond
Bond Blocks are a new, innovative manipulative developed by Narelle Rice and supported by Dr Paul Swan. There are a few key features of Bond Blocks:
• The blocks are not scored, reducing the tendency to simply count.
• The natural wood colour of the sustainably-sourced New Zealand Pine reduces distraction of colourful plastic and focuses children on the numbers.
• Ten is represented with two different blocks: the Linear Ten and an Empty Ten Frame, similar to a ten strip and ten frame.
• They are a ratio of one unit : 2 cm, making them easy to manipulate.
• They are self-checking, encouraging number sense and estimation.
There’s a lot you can do with just the Blocks themselves, but the Bond Blocks are also part of a larger Bond Blocks system comprised of Games, Assessment Materials and Teacher Notes. Lessons on the
Bond Blocks system are publicly accessible at www.bondblocks.com
Bond Blocks include two of each linear block 1 to 9, four linear 10 blocks, two blank five blocks and a marked and blank empty ten frame block.
The earliest stages of counting with Bond Blocks include counting forwards, building a set of steps from 1 to 10 where students touch the block, physically moving down the steps. Eventually the
student begins to count along covered numbers and starts at points other than one.
Typical Classroom Requirements
One set of Bond Blocks for every 2 to 4 students.
A class of 32 will need 8 or 16 sets of Bond Blocks.
Support and Complementary Materials
Bond Blocks has a complementary series of materials including 106 board games and assessment materials.
The Bond Block System targets:
• Fluency with number bonds, leading to recall, to add and subtract to 20.
• Robust understanding of addition and subtraction concepts, and relationships between them.
• Flexible, efficient calculating strategies.
• Number concepts including place value.
• Mathematical reasoning and problem solving.
|
{"url":"http://mathsmaterials.com/bond-blocks/","timestamp":"2024-11-13T02:33:17Z","content_type":"text/html","content_length":"139551","record_id":"<urn:uuid:a534f5f4-77cd-4973-9d42-1264c5637de6>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00753.warc.gz"}
|
From AWF-Wiki
Languages: English • Deutsch
Formulas in AWF-Wiki are created with the TeX markup language. Everyone who has used LaTeX before is familiar with this. For users without any experience with TeX, the first formula might be hard work!
Nevertheless, it is definitely worth the trouble, because the outcome is perfect. Once you have formatted formulas in TeX, you can also import them into many other applications.
Unfortunately, it is definitely not possible to write down a formula without any knowledge of TeX. The following example shows a moderately complex estimator for the variance of a stratified sample:
\[\hat {var} (\bar y) = \sum_{h=1}^L \left\lbrace \left( \frac {N_h}{N} \right)^2 \hat {var} (\bar y_h) \right\rbrace = \frac{1}{N^2} \sum_{h=1}^L N^2_h \frac {N_h-n_h}{N_h} \frac {S^2_h}{n_h}\]
This formula is the outcome of the following syntax:
:<math>\hat {var} (\bar y) = \sum_{h=1}^L \left\lbrace \left( \frac {N_h}{N} \right)^2 \hat {var} (\bar y_h) \right\rbrace = \frac{1}{N^2} \sum_{h=1}^L N^2_h \frac {N_h-n_h}{N_h} \frac {S^2_h}{n_h}</math>
The above estimator is not complicated; we can imagine much more complex expressions in (forest) science, and it takes time to get familiar with this kind of markup. In case
you want to contribute to AWF-Wiki but are afraid of this, let us know and we can help! We are a strong community of scientists, and there are always experienced people who can help!
We created a template for this purpose: if you are not able to express your formula, you can enter {{formulahelp}} at the position where the formula should appear. If you have any other way to
present your formula (perhaps you can export it as an image from your application), enter it here, so that people know what to do!
You can find very complete documentation about TeX in the Wikipedia Metawiki (Shift + click opens this page in a new window). We refrain from copying this content into AWF-Wiki; it is more
efficient to use that source. If you open the link in a separate window, you can learn and copy from the examples presented.
If you use a formula editor that can export TeX markup, you are lucky: just copy and paste your formulas between the <math> </math> tags!
|
{"url":"http://wiki.awf.forst.uni-goettingen.de/wiki/index.php?title=Help:TeX&oldid=320","timestamp":"2024-11-09T00:22:52Z","content_type":"text/html","content_length":"21575","record_id":"<urn:uuid:e54bdaf1-579d-438d-9a7e-ea52e1a0144f>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00719.warc.gz"}
|
Help with a geometry demos thing
I want to create a repeating polygon in desmos with the number of repetitions equal to the number of sides. I don’t know how. I have hard coded 10 of them but I don’t know how to make them repeating.
This may be more of a calculator question, but folks may find it valuable for activities as well. Could you share a graph or activity link? Without seeing it, I’d say using list comprehension can
probably achieve something fairly clean, particularly if it’s a regular polygon.
Sorry I can’t figure out how to share a link
For sharing, after publishing your activity, in the upper right three dots menu, there is “Share activity”. For a desmos calculator graph, upper right has a “Share graph” button that is just a URL.
Simply paste them into your post or use the link button next to the italic button.
Here’s an example using list comprehension.
Sorry, but I am trying to post a link and I get the message “you can’t post a link to that host”.
Unsure what you mean by repeating polygon, something like this?
This isn’t loading for me, but looking back I see the repeating portion. Here’s repeating polygons arranged in a circle, where the radius changes so there’s no overlap. You could possibly do in a
grid as well.
1 Like
just for fun a little mod of Daniel’s file: Repeated polygons | Desmos
1 Like
ok I will just share the code,
and repeating 10 times, but I want it to repeat n times
recursive midpoints
You may also be interested in this graph, Spiraling midpoints:
I dont think this is what you are wanting, but you may be able to riff off of this. I use this to show how all the parts of a polygon relate:
You can also see it used in a problem on slide 15 of this activity:
|
{"url":"https://cl.desmos.com/t/help-with-a-geometry-demos-thing/7426","timestamp":"2024-11-13T21:08:31Z","content_type":"text/html","content_length":"40918","record_id":"<urn:uuid:a4db8939-302f-423e-852f-d028aa3c6051>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00896.warc.gz"}
|
A first course in differential equations zill 9th edition solution manual
Since f is continuous it can only change signs at a point where it is 0. Math 230 differential equations spring 2016 course syllabus. A first course in differential equations prindle, weber and
schmidt series in mathematics dennis g. The classic fifth edition 5th edition solution manuals or printed answer keys, our. Student resource with solutions manual for zills a first.
Differential equations is the mathematics majors course mth 912. A first course in string theory, 2004, barton zwiebach. Zill solution manual a first course in probability 7th edition by sheldon m.
Zill specification for first file of 9th edition extension pdf pages 508 size 17. Unlike static pdf student solutions manual for zillsfirst course in differential equations. Zill differential
equation with boundary value problems by dennis g zill 3rd edition differential equations dennis g zill 10th edition solution manual pdf download.
Solution manual for a first course in differential equations with modeling applications, 11th edition, dennis g. Title slide of differential equations zill cullens 5th solution slideshare uses
cookies to improve functionality and performance, and to provide you with relevant advertising. Differential equations zill solution 9th edition zip by. Shed the societal and cultural narratives
holding you back and let free stepbystep a first course in differential equations with modeling applications textbook solutions reorient your old paradigms. Solutions manual for zills a first course
in differential equations with modeling applications 7th and 9th edition authors. Problems, 9th edition solutions manual for zillcullens differential equations with. I have the pdf of the teachers
solutions manual though, and it has every. How is chegg study better than a printed a first course in differential equations student solution manual from the bookstore. Complete solution manual a
first course in differential equations. Student resource and solutions manual, by warren s.
The classic fifth edition 5th edition solution manuals or printed answer keys, our experts show you how to solve each problem stepbystep. The books on our website are also divided into categories so
if you need a handbook on world war ii, go to the history section. Solution manual for a first course in differential equations. Solutions manual for first course in differential equations. Student
solutions manual for zills a first course in differential equations with modeling applications, 11th by dennis g.
If you want to downloading pdf solution manual differential equations 9th edition zill, then youve come to the loyal website. Student resource with solutions manual for zills a first course in
differential equations, 9th 9th edition. Differentialequationsbyzill3rdeditionsolutionsmanualengrebooks blogspot com. A first course in differential equations 9th edition. Ask our subject experts for
help answering any of your homework questions. A first course in probability 7th edition by sheldon m. Now is the time to redefine your true self using slader s free a first course in differential
equations with modeling applications answers. Your study of differential equations and its applications will be supported by a bounty of pedagogical aids, including an. Verify that the indicated
function is an explicit solution of the given differential equation. Student solutions manual for zill sfirst course in differential equations. View stepbystep homework solutions for your homework.
Oct 21, 2018 differential equations zill solution 9th edition zip 23 find dennis g zill solutions at now. Differential equations with modeling applications, 9th edition, is intended for.
I am using same text book, so this is a recommendation for solutions manual for first course in differential equations with modeling applications 11th edition by zill ibsn 9785965720 instant download
link. Only fresh and important news from trusted sources about differential equation 9th edition by zill solution manual today. Download free sample here to see what is in this testbank for a first
course in differential equations with modeling applications, 9th edition by zill. Buy a first course in differential equations with modeling applications by dennis g zill online at alibris.
Differential equations textbook solutions and answers. Pdfa course in ordinary differential equations instructor solutions manual. A first course in differential equations institutional. The longer
version of the text, differential equations with boundaryvalue problems. Be in trend of crypto markets, cryptocurrencies price and charts and other blockchain digital things. Mar 27, 2019 free
stepbystep solutions to a first course in differential equations applications, tenth edition solutions manual for zillcullens differential equations with 11th edition differential equations with
boundaryvalue problems, 9th edition. Free stepbystep solutions to a first course in differential equations with. Thus, fy is completely positive or completely negative in each region ri.
Swift, wirkus pdf a first course in abstract algebra 7th ed. Differential equations dennis g zill 10th edition solution. No need to wait for office hours or assignments to be graded to find out where
you took a wrong turn. Solutions manual for first course in differential equations with modeling applications 11th edition by zill ibsn 9785965720 download at. Solution manual for differential
equations with boundary. Student solutions manual for zillsfirst course in differential equations. How to get the first course in differential equations. Solutions to a first course in differential
equations with modeling. Textbook solutions for a first course in differential equations with modeling applications.
Differential equations dennis zill 9th solutions manual. Student solutions manual for zills a first course in. Testbank for a first course in differential equations with modeling applications, 9th
edition by zill. A first course in differential equations with modeling applications 9th and 10th edition authors. Fraleigh pdf a first course in differential equations the classic fifth edition
instructor solutions manual. Student solutions manual for zills a first course in differential equations with modeling applications, 11th, 11th edition. A first course in differential equations with
modeling applications 9th, differential equations.
Buy student solutions manual for zills a first course in differential equations with modeling applications, 11th on free shipping on qualified orders student solutions manual for zills a first course
in differential equations with modeling applications, 11th. Complete solution manual a first course in differential. A first course in differential equations, 9th ed by dennis g. May 24, 2011 title
slide of differential equations zill cullens 5th solution slideshare uses cookies to improve functionality and performance, and to provide you with relevant advertising. Buy and download a first
course in differential equations with modeling applications, 11th edition dennis g. Oct 29, 2018 i am using same text book, so this is a recommendation for solutions manual for first course in
differential equations with modeling applications 11th edition by zill ibsn 9785965720 instant download link. Differential equations dennis g zill 10th edition solution manual pdf. Equations by zill
7th edition solution manual get instant access to your differential equations solutions manual on a first course in differential. Jul 11, 2015 a first course in differential equations with modeling
applications 9th and 10th edition authors. Student solutions manual for zills a first course in differential. Zill file specification for 9th edition extension pdf pages 426 size 4mb file
specification for 10th edition extension pdf pages 480 size 12mb request sample email explain submit request we try to make prices affordable. Course in differential equations with modeling
applications solutions 9th edition. If you continue browsing the site, you agree to the use of cookies on this website. No need to wait for office hours or assignments to be graded to.
Zill differential equations, 7th and 8th edition differential equations with boundaryvalue problems, 8th edition strikes a balance between the analytical, qualitative, and quantitative. A first
course in differential equations with modeling applications, 11th edition, dennis g. How to get the first course in differential equations with. Shed the societal and cultural narratives holding you
back and let free stepbystep a first course in differential equations textbook solutions reorient your old paradigms. A first course in differential equations 9th edition 1873 problems solved.
Straightforward and easy to read, a first course in differential equations with modeling applications, 11th edition, gives you a thorough overview of the topics typically taught in a first course in
differential equations. Student solutions manual for zills differential equations. Solutions manual for a first course in differential. Its easier to figure out tough problems faster using chegg
study. See more ideas about textbook, manual and study test. Solution manual differential equations 9th edition zill. Student resource with solutions manual for zills a first course in differential
equations with modeling applications, 10th by loyola marymount university dennis g zill, 97813491927, available at book depository with free delivery worldwide. This is not an original text book or
test bank or original ebook. Our interactive player makes it easy to find solutions to a first course in differential equations problems youre working on just go to the chapter for your book.
Textbook solutions for a first course in differential equations with modeling 11th edition dennis g. A first course in differential equations dennis zill. Always update books hourly, if not looking,
search in the book search column. A first course in differential equations solution manual. Manual differential equations 9th edition zill pdf as fast as possible. Differential equations dennis g
zill 10th edition solution manual pdf solution manual pdf of book a course of differential equation by dennis g zill free download. A first course in differential equations with modeling applications
10th edition 2060. Get ebooks a first course in differential equations with modeling applications on pdf, epub, tuebl, mobi and audiobook for free.
Solution manual a first course in differential equations the classic fifth edition by zill, dennis g solution manual a first course in differential equations, 9th ed by dennis g. Now is the time to
make today the first day of the rest of your life. There are more than 1 million books that have been enjoyed by people from all over the world. Differential equations with boundaryvalue problems
student solutions manual. The classic fifth edition 5th edition 1877 problems solved. Find answer by real cryptoprofessionals to your questions at. Complete solutions manual a first course in
differential equations with modeling applications ninth edition dennis g. Straightforward and easy to read, differential equations with boundaryvalue problems, 9th edition, gives you a thorough
overview of the topics typically taught in a first course in differential equations as well as an introduction to boundaryvalue problems and partial differential equations. Ross solution manual a
first course in probability theory, 6th edition, by s. Solution manual for differential equations with boundaryvalue problems 9th edition by dennis g. Unlike static pdf student solutions manual for
zill sfirst course in differential equations. Solutions to a first course in differential equations. Save up to 80% by choosing the etextbook option for isbn.
A first course in differential equations 9th edition by zill, dennis g. We have solution manual differential equations 9th edition zill txt, djvu, doc, epub, pdf forms. Student resource with
solutions manual for zills a first course in. Student resource with solutions manual for zills a first course in differential equations with modeling applications, 10th 10th edition by dennis g.
Solutions to solutions manual for zillcullens differential. With the convenient search function, you can quickly find the book you are interested in. Solutions to a first course in differential
equations with. Testbank for a first course in differential equations with. Unlock your a first course in differential equations pdf profound dynamic fulfillment today. Assume an appropriate interval
i of definition for each solution. A first course in probability theory, 6th edition, by s. With modeling applications 9th edition 9780495108245 by na for up to 90% off at. Differential equations
zill solution 9th edition zip 23 find dennis g zill solutions at now.
Unlike static pdf a first course in differential equations with modeling applications solution manuals or printed answer keys, our experts show you how to solve each problem stepbystep. Differential
equations by zill 7th edition solution manual. Search results for differential equation 9th edition by. Complete solutions manuala first course in differential equations with modeling. Solutions
manual for a first course in differential equations. Zill test bank test bank, solutions manual, instructor manual, cases, we accept bitcoin instant download. Student solutions manual for zills a
first course in differential equations with modeling applications, 11th, 11th edition a first course in differential equations with modeling applications, international metric edition, 10th edition.
A first course in complex analysis dennis zill solution manual.
A solution manual is step by step solutions of end of chapter questions in the. Zill differential equations, 7th and 8th edition differential equations with boundaryvalue problems, 8th edition
strikes a balance between the analytical, qualitative, and quantitative approaches to the study of differential equations. A first course in differential equations with modeling. Buy student
solutions manual for zills a first course in differential equations with modeling applications, 11th on free shipping on qualified orders. Solution manual for a first course in differential equations
with modeling applications 11e zill.
|
{"url":"https://thercecome.web.app/572.html","timestamp":"2024-11-01T21:59:56Z","content_type":"text/html","content_length":"19417","record_id":"<urn:uuid:9b1bac27-c119-4eda-aee1-b3c50ad24874>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00244.warc.gz"}
|
Incorrectly read file
03-23-2024, 11:30 AM (This post was last modified: 03-23-2024, 11:31 AM by 1micha.elok.)
Case 1 : I think ... [for...next] is not the best way to read text file
Case 2 : I think this is the best way, but [do..until frln(f) = "unset"] gives unexpectedly output
Please advise me, what is the best way to read the whole text file?
Thank you.
set window "readfile",400,400
f = openfile("story.txt")
'Case 1
'correctly read file
for i = 0 to 21
wln frln(f)
i = i + 1
'Case 2
'incorrectly read file
wln frln(f)
until frln(f)="unset"
free file f
03-23-2024, 01:46 PM (This post was last modified: 03-23-2024, 02:17 PM by Marcus.)
Like this, maybe:
f = openfile("story.txt")
lin = frln(f)
while typeof(lin)
pln lin
lin = frln(f)
' contains a bunch of helper functions related to files.
include "file.n7"
' returns the lines of a text file as an array.
lines = ReadAllLines("story.txt")
for i = 0 to sizeof(lines) - 1 pln lines[i]
' return all text as a single string.
text = ReadAllText("story.txt")
pln text
03-23-2024, 02:28 PM
text = system("type story.txt")
03-23-2024, 03:40 PM (This post was last modified: 03-23-2024, 03:43 PM by Marcus.)
(03-23-2024, 02:28 PM)Tomaaz Wrote:
text = system("type story.txt")
Nice! I've never used that command
And if you want to split it into an array of lines:
lines = split(system("type story.txt"), chr(10))
03-24-2024, 09:09 AM
Thank you Tomaaz and Marcus for the solution.
Let me code it in this way :
'Solutions (by Tomaaz and Marcus)
'use system type to read text file
set window "read story.txt",400,400
'chr(10) = ascii for \n, New Line
wln "File : Story.txt"
lines = split(system("type story.txt"), chr(10))
for k = 0 to sizeof(lines) - 1 wln lines[k]
'put a space character in the beginning of an empty line (on the text file)
'so that every empty space line is written as it is.
wln "File : Story_.txt"
lines = split(system("type story_.txt"), chr(10))
for k = 0 to sizeof(lines) - 1 wln lines[k]
temp = rln()
03-24-2024, 05:28 PM (This post was last modified: 03-24-2024, 05:31 PM by Tomaaz.)
Why not
wln system("type story.txt")
? It will print the entire file. What's the point of splitting?
03-24-2024, 11:55 PM
using split() and without split() function
click the image to zoom in
|
{"url":"https://www.naalaa.com/forum/thread-109-post-636.html#pid636","timestamp":"2024-11-14T23:32:40Z","content_type":"application/xhtml+xml","content_length":"44875","record_id":"<urn:uuid:9a53a774-b6bc-481f-b6e5-5e2eb99ff520>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00533.warc.gz"}
|
A patient undergoing treatment for the thyroid is given $12mg$ of radioactive iodine, which has a half-life of $8.05$ days. What mass of iodine remains after $16.1$ days?
Hint: The given question is based on the concept of half-life of a substance that is the time it takes for the substance to reach half its initial value. The time after which we have to find the
concentration of the substance is exactly two times the half life of the radioactive iodine. This means that by using the relation that connects the initial concentration, half-life and final
concentration of iodine we can find the answer to this question.
${N_t} = {N_0} \times {2^{ - \dfrac{t}{T}}}$
Where ${N_t}$ is the concentration of the radioactive iodine after the time given in the question.
${N_0}$ is the initial concentration of radioactive iodine
$t$ is the time after which the concentration is required to be obtained
$T$ is the half life of the radioactive iodine.
Complete step by step solution:
It is required to find the concentration of iodine after $16.1\;$ days have passed. This means that a certain amount of iodine has decayed and the concentration of iodine has reduced. To find this
answer we must plug in the values given in the question into the formula given.
The initial concentration of the radioactive iodine will be, ${N_0}$= $12mg$
The half-life of the iodine will be, $T$ = $8.05$ days.
The time after which the concentration is required to be found, $t$ = $16.1$ days.
Therefore, substituting the values given in the formula,
${N_t}$ = ${N_0} \times {2^{ - \dfrac{t}{T}}}$
we get,
${N_t}$ = $12 \times {2^{ - \dfrac{{16.1}}{{8.05}}}}$
$\dfrac{16.1}{8.05}$ = $2$
Therefore, putting this value in the above step we will get,
${N_t}$=$12 \times {2^{ - 2}}$
The exponent on $2$ is negative meaning it will divide $12$ as shown below:
This will further give,
${N_t}$= $\dfrac{{12}}{4}$
Therefore, ${N_t}$ = $3$
Thus, the concentration of iodine after $16.1$ days will be $3mg$; that is, the mass of radioactive iodine remaining will be $3mg$.
Therefore, the correct option is D, that is, $3mg$.
-It is important to keep in mind that the formula ${N_t}$ = ${N_0} \times {2^{ - \dfrac{t}{T}}}$ helps to derive ${N_t}$ which is the concentration that is remaining.
-Half life can be defined as the time required for half of the mass of a reactant to undergo radioactive decay. Therefore, with every half life that passes, the concentration of the substance
decreases by half of the previous concentration.
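As a check, the same calculation in a few lines of Python (the helper name remaining_mass is ours, for illustration):

```python
# Sketch of the decay formula used above: N_t = N_0 * 2**(-t / T),
# where T is the half-life and t is the elapsed time.
def remaining_mass(n0, t, half_life):
    """Mass remaining after elapsed time t (same time units as half_life)."""
    return n0 * 2 ** (-t / half_life)

# 12 mg of iodine after 16.1 days, i.e. two 8.05-day half-lives:
print(remaining_mass(12, 16.1, 8.05))  # 3.0
```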
|
{"url":"https://www.vedantu.com/question-answer/a-patient-undergoing-treatment-for-thyroid-class-12-chemistry-cbse-5fb3765aaae7bc7b2d281f9d","timestamp":"2024-11-03T19:50:10Z","content_type":"text/html","content_length":"183375","record_id":"<urn:uuid:86d5bd83-6cb7-4204-9ccd-0e7393f237cb>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00805.warc.gz"}
|
I'm working on a probability distribution library myself (ProDist) I've got an initial set of 1D distributions implemented and I'm working on Copulas now (A copula is the interdependency structure
between dimensions of a multivariate continuous distribution). Discrete and mixed Discrete and continuous dimensions will follow.
I made distributions into polymorphic objects, and the classes can auto-best fit instances of themselves to data via the minimum chi squared method (maximum likelihood tends to over fit, maximum
entropy tends to under fit, minimum chi squared is somewhere in the middle, although I'll also need maximum likelihood for when there aren't enough points to do minimum chi squared). The point is
that functions like mean, variance, std dev, skewness, (excess) kurtosis, entropy, pdf, cdf, inverse cdf, draw (including vector and matrix valued of the 4 previous) central and non-central moments,
etc are member function on the object, so the corresponding call would be
double a(1.5), b(3.0);
prodist::cont::dist1d::Beta beta(a, b); //cont means continuous, dist1d distinguishes from copula and distnd directory/namespaces.
double mean(beta.mean());
which I think is more natural than the boost syntax
also having distributions be objects allows for non-parametric distributions like point sets, histograms, kernel density estimates (not yet implemented, although I have an automagically binned
histogram class whose CDF is enhanced by a PCHIP which is comparable to ISJ KDEs in terms of accuracy while being a lot faster and enabling piecewise analytical evaluation of all member functions)
and is consistent with how we need to handle multidimensional distributions.
I'm using the armadillo linear algebra library as part of the interface
Since all continuous 1d distributions provide moments of arbitrary order, that enables "analytical" gram-schmidt orthonormalization to get ortho normal polynomials of arbitrary order, which you can
use an armadillo roots (eigen solve) on to get all roots which permits polynomial chaos/stochastic collocation, and you get that for free with every new continuous 1d distribution you implement.
Admittedly I have implemented a rather limited set of continuous1d distributions to date, but the non-parametric distributions take up the slack until I get around to doing more, FFT (wrapping FFTW3)
based convolutions of PDFs is another TODO.
You can wrap your favorite RNG in a subclass of prodist::cont::dist1d::U01Base, "all" (well not dirac delta which is completely deterministic, and null which is basically an error code) other
distributions (including discrete and multidimensional) take a shared_ptr to a U01Base subclass (possibly the default object), but for efficient parallel execution you can wrap say random123 (a
library of counter based rng's) in U01Base subclasses and give a different instance to individual distributions to avoid having every thread waiting on the same locking function for its turn to get
a draw. RandomStream is another good pseudo random number generator library you can wrap in a subclass of U01Base if you want to guarantee repeatability in parallel applications.
But like I hinted at above, you can call draw() on all distributions to use them like RNGs that produce points (1D or multiD) obeying any concrete distribution (including conditional draws, i.e.
specifying a subset of dimensions, in the multidimensional case).
The point of all that is that I think (and am betting on) treating distributions as first-class polymorphic objects with associated member functions provides numerous advantages over the Boost way of
doing things. I know that ProDist will never be part of Boost, because it uses armadillo (and is too high level), but it may be advantageous to rethink the Boost way of doing things.
-----Original Message-----
From: Boost-users <boost-users-bounces_at_[hidden]> On Behalf Of Warren Weckesser via Boost-users
Sent: Sunday, May 1, 2022 7:28 PM
To: Boost-users <boost-users_at_[hidden]>
Cc: Warren Weckesser <warren.weckesser_at_[hidden]>
Subject: [EXTERNAL] [Boost-users] [Math / Distributions] How do mean, variance, etc end up in scope?
This is probably more of a C++ question than a Boost/Math question. I think the answer comes down to a (surprising?) Boost namespace convention.
In a program like this:
#include <boost/math/distributions/beta.hpp>
using boost::math::beta_distribution;
int main()
{
    float a = 1.5f;
    float b = 3.0f;
    beta_distribution<float> dist(a, b);
    auto m = mean(dist);
}
how does the name 'mean' end up in scope in 'main()'? I would have expected (and preferred!) something like 'using boost::math::mean;' to be required.
Printable Times Tables 1 10 | Multiplication Chart Printable
Printable Times Tables 1 10 – A multiplication chart is a handy tool for kids to learn how to multiply, divide, and find the smallest number. There are numerous uses for a multiplication chart.
These helpful tools aid children in understanding the process behind multiplication by using colored paths and filling in the missing products. These charts are free to print and use.
What is Multiplication Chart Printable?
A multiplication chart can be used to help children learn their multiplication facts. Multiplication charts come in many forms, from full-page times tables to single-page ones.
While individual tables are useful for presenting chunks of information, a full-page chart makes it easier to review facts that have already been mastered.
The multiplication chart will typically include a top row and a left column. When you want to find the product of two numbers, select the first number from the left column and the second number
from the top row.
Multiplication charts are practical learning tools for both adults and kids. Kids can use them at home or in school. Printable times tables 1 to 10 are readily available on the Internet
and can be printed out and laminated for durability. They are a great tool to use in math class or homeschooling, and will give a visual reminder for
children as they learn their multiplication facts.
Why Do We Use a Multiplication Chart?
A multiplication chart is a layout that shows how to multiply two numbers. It typically contains a left column and a top row. Each cell holds a number representing the product of two numbers. You
choose the first number in the left column, then the second number from the top row; the product is the cell where that row and column meet.
Multiplication charts are useful for several reasons, including helping kids learn how to divide and simplify fractions. Multiplication charts can also be handy as desk resources because
they serve as a constant reminder of the student's progress.
Multiplication charts are also valuable for helping pupils memorize their times tables. As with any skill, memorizing multiplication tables takes time and practice.
If you’re looking for printable times tables 1 to 10, you’ve come to the right place. Multiplication charts are available in different styles, including full size, half size, and
a variety of cute designs.
Multiplication charts and tables are essential tools for children's education. These charts are great for use in homeschool math binders or as classroom posters.
A printable times table 1 to 10 is a useful tool to reinforce math facts and can help a child learn multiplication quickly. It's also a great tool for skip counting and learning the
times tables.
Formula or function for IF statement based on cell color | Microsoft Community Hub
Formula or function for IF statement based on cell color
I don't know how to code in VBA but am trying to automate an if/then calculation based on cell color. As shown in the picture, the calculation should depend on whether the colors of the cells in column B are the same as those in column G.
• Step 1 Paste the code (found at the bottom) into a new module. The ALT+F11 shortcut should open the VBA editor.
Step 2 In cell O1 paste formula: =InteriorColor(B1) drag formula down
Step 3 In cell P1 paste formula: =InteriorColor(G1) drag formula down
Step 4 In cell L1 paste formula: =IF(O1<>P1,F1+K1,ABS(F1-K1)) drag formula down
Does this work for you Laurie?
Function InteriorColor(CellColor As Range) As Long
    ' Returns the fill ColorIndex of the given cell.
    ' Note: changing a cell's color does not trigger recalculation,
    ' so press F9 to refresh the results after recoloring.
    InteriorColor = CellColor.Interior.ColorIndex
End Function
The Algebra of Space (G3)
This notebook is an adaptation from the clifford python documentation.
Import Grassmann and instantiate a three dimensional geometric algebra
julia> using Grassmann
julia> basis"3"
(⟨×××⟩, v, v₁, v₂, v₃, v₁₂, v₁₃, v₂₃, v₁₂₃)
Given a three dimensional GA with the orthonormal basis $v_i\cdot v_j = \delta_{ij}$, the basis consists of scalars, three vectors, three bivectors, and a trivector.
The @basis macro declares the algebra and assigns the SubManifold elements to local variables. The Basis can also be assigned to G3 as
julia> G3 = Λ(3)
DirectSum.Basis{⟨×××⟩,8}(v, v₁, v₂, v₃, v₁₂, v₁₃, v₂₃, v₁₂₃)
You may wish to explicitly assign the blades to variables like so,
e1 = G3.v1
e2 = G3.v2
# etc ...
Or, if you're lazy you can use the macro with different local names
julia> @basis ℝ^3 E e
(⟨+++⟩, v, v₁, v₂, v₃, v₁₂, v₁₃, v₂₃, v₁₂₃)
julia> e3, e123
(v₃, v₁₂₃)
The basic products are available
julia> v1 * v2 # geometric product
julia> v1 | v2 # inner product
julia> v1 ∧ v2 # exterior product
julia> v1 ∧ v2 ∧ v3 # even more exterior products
Multivectors can be defined in terms of the basis blades. For example, you can construct a rotor as a sum of a scalar and a bivector, like so
julia> θ = π/4
julia> R = cos(θ) + sin(θ)*v23
0.7071067811865476 + 0.7071067811865475v₂₃
julia> R = exp(θ*v23)
0.7071067811865476 + 0.7071067811865475v₂₃
You can also mix grades without any reason
julia> A = 1 + 2v1 + 3v12 + 4v123
1 + 2v₁ + 3v₁₂ + 4v₁₂₃
The reversion operator is accomplished with the tilde ~ in front of the MultiVector on which it acts
julia> ~A
1 + 2v₁ - 3v₁₂ - 4v₁₂₃
Taking a projection into a specific grade of a MultiVector is usually written $\langle A\rangle_n$ and can be done using the soft brackets, like so
julia> A(0)
julia> A(1)
2v₁ + 0v₂ + 0v₃
julia> A(2)
3v₁₂ + 0v₁₃ + 0v₂₃
Using the reversion and grade projection operators, we can define the magnitude of A as $|A|^2 = \langle\tilde A A\rangle$
julia> ~A*A
30 + 4v₁ + 12v₂ + 24v₃
julia> scalar(ans)
This is done in the abs and abs2 operators
julia> abs2(A)
30 + 2v₁ + 6v₂ + 12v₃ + 3v₁₂ + 8v₂₃ + 4v₁₂₃
julia> scalar(ans)
The dual of a multivector A can be defined as $\tilde AI$, where I is the pseudoscalar for the geometric algebra. In G3, the dual of a vector is a bivector:
julia> a = 1v1 + 2v2 + 3v3
1v₁ + 2v₂ + 3v₃
julia> ⋆a
3v₁₂ - 2v₁₃ + 1v₂₃
Reflecting a vector $c$ about a normalized vector $n$ is pretty simple, $c\mapsto -ncn$
julia> c = v1+v2+v3 # a vector
1v₁ + 1v₂ + 1v₃
julia> n = v1 # the reflector
julia> -n*c*n # reflect a in hyperplane normal to n
0 - 1v₁ + 1v₂ + 1v₃
Because we have the inv available, we can equally well reflect in un-normalized vectors using $a\mapsto n^{-1}an$
julia> a = v1+v2+v3 # the vector
1v₁ + 1v₂ + 1v₃
julia> n = 3v1 # the reflector
julia> inv(n)*a*n
0.0 + 1.0v₁ - 1.0v₂ - 1.0v₃
julia> n\a*n
0.0 + 1.0v₁ - 1.0v₂ - 1.0v₃
Reflections can also be made with respect to the hyperplane normal to the vector, in which case the formula is negated.
A vector can be rotated using the formula $a\mapsto \tilde R aR$, where R is a rotor. A rotor can be defined by multiple reflections, $R = mn$, or by a plane and an angle, $R = e^{\theta B/2}$. For example,
julia> R = exp(π/4*v12)
0.7071067811865476 + 0.7071067811865475v₁₂
julia> ~R*v1*R
0.0 + 2.220446049250313e-16v₁ + 1.0v₂
Maybe we want to define a function which can return rotor of some angle $\theta$ in the $v_{12}$-plane, $R_{12} = e^{\theta v_{12}/2}$
R12(θ) = exp(θ/2*v12)
And use it like this
julia> R = R12(π/2)
0.7071067811865476 + 0.7071067811865475v₁₂
julia> a = v1+v2+v3
1v₁ + 1v₂ + 1v₃
julia> ~R*a*R
0.0 - 0.9999999999999997v₁ + 1.0v₂ + 1.0v₃
You might as well make the angle argument a bivector, so that you can control the plane of rotation as well as the angle
R_B(B) = exp(B/2)
Then you could do
julia> Rxy = R_B(π/4*v12)
0.9238795325112867 + 0.3826834323650898v₁₂
julia> Ryz = R_B(π/5*v23)
0.9510565162951535 + 0.3090169943749474v₂₃
julia> R_B(π/6*(v23+v12))
0.9322404424570728 + 0.25585909935689327v₁₂ + 0.25585909935689327v₂₃
Maybe you want to define a function which returns a function that enacts a specified rotation, $f(B) = a\mapsto \widetilde{e^{B/2}}\,a\,e^{B/2}$. This just saves you having to write out the sandwich product, which
is nice if you are cascading a bunch of rotors, like so
R_factory(B) = (R = exp(B/2); a -> ~R*a*R)
Rxy = R_factory(π/3*v12)
Ryz = R_factory(π/3*v23)
Rxz = R_factory(π/3*v13)
Then you can do things like
julia> R = R_factory(π/6*(v23+v12)) # this returns a function
#1 (generic function with 1 method)
julia> R(a) # which acts on a vector
0.0 + 0.5229556000177233v₁ + 0.7381444851051178v₂ + 1.4770443999822769v₃
julia> Rxy(Ryz(Rxz(a)))
0.0 + 0.40849364905389035v₁ - 0.6584936490538903v₂ + 1.5490381056766584v₃
To make cascading a sequence of rotations as concise as possible, we could define a function which takes a list of bivectors $A,B,C,...$, and enacts the sequence of rotations which they represent on
some vector $x$.
julia> R_seq(args...) = (R = prod(exp.(args./2)); a -> ~R*a*R)
R_seq (generic function with 1 method)
julia> R = R_seq(π/2*v23, π/2*v12, v1)
#3 (generic function with 1 method)
julia> R(v1)
2.220446049250313e-16 + 3.3306690738754696e-16v₁ + 0.9999999999999998v₂ + 5.551115123125783e-17v₂₃
We can find the barycentric coordinates of a point in a triangle using area ratios.
julia> function barycoords(p, a, b, c)
           ab = b-a
           ca = a-c
           bc = c-b
           A = -ab∧ca
           (bc∧(p-b)/A, ca∧(p-c)/A, ab∧(p-a)/A)
       end
barycoords (generic function with 1 method)
julia> barycoords(0.25v1+0.25v2, 0v1, 1v1, 1v2)
(0.5v⃖, 0.25v⃖, 0.25v⃖)
A difficult question about sweets
There has been a furore in England about an exam question which made people think. Many students did not take kindly to it, and complained it was unfair. They felt angry and frustrated, and wanted
the pass mark to be lowered. The examining body Edexcel explained that they had designed it to test “the full range of abilities” (which may count as a rare public admission that students vary in
ability). In more precise terms one could say that the examiners were trying to distinguish between students who could solve individual mathematical problems one operation at a time, and those (a
minority) who, without any specific guidance, could integrate the individual steps into a full solution to a novel problem.
Readers of this blog will know that this sounds very familiar: it is the shift at around IQ 115-120 from being restricted to specific examples and written instructions, to being able to gather
information and make inferences: the level at which students have to think for themselves.
In the former case each step has to be rehearsed, but tends to be seen as a task on its own. In the latter case students realise that they are being given tools they can apply in novel circumstances:
they learn general principles, and apply them generally. In a nutshell, in designing this question the examiners were searching for A and A* students, or in my terms, Tribe 5.
Readers of this blog will also know that I try to confess my errors as quickly as possible, so you should know that I did not get the problem right. I will give you the problem and ask you to solve
it, just for your own fun. I know that my readers are persons of distinction, and will have a go and keep their workings, and not immediately jump to the BBC link provided to find the school answer.
Take a separate piece of paper, which you mark with the date and time, and then draw a line underneath at the end of your solution. Use Registrar’s ink, which darkens with age, and lasts at least 500
years. I then go on to the main task: I want to simplify the problem so that it contains no algebra, and can thus be used as a puzzle for the general public. I regard myself as a disciple of Gerd
Gigerenzer, and so this is intended to be a pace or two in his footsteps.
There are n sweets in a bag.
6 of the sweets are orange.
The rest of the sweets are yellow.
Hannah takes at random a sweet from the bag.
She eats the sweet.
Hannah then takes at random another sweet from the bag.
She eats the sweet.
The probability that Hannah eats two orange sweets is 1/3
(a) Show that n^2 - n - 90 = 0
(b) Solve n^2 - n - 90 = 0 to find the value of n
Once you have solved that, look at the slightly simpler version:
There are an unknown number of sweets in a bag.
6 of the sweets are orange.
The rest of the sweets are yellow.
Without looking in the bag, Hannah takes a sweet from the bag.
She eats the sweet.
Without looking in the bag, Hannah then takes another sweet from the bag.
She eats the sweet.
The probability that Hannah, just by chance, got out two orange sweets from the bag (took one orange sweet out which wasn’t put back in the bag, then took another orange sweet out which wasn’t put
back in the bag) is 1 in 3.
(a) Show what you think are the number of orange and yellow sweets before Hannah eats any sweets, and then after she eats the first sweet and the second sweet.
(b) Work out how many sweets were in the bag to begin with.
My reworking of the problem is to remove the algebra. I know that this is a maths exam, and that knowing algebra is part of maths, and that it might help you to solve this problem, and many other
problems. However, I want to avoid people being frightened off by algebraic notation and thus not realising that they might be able to solve it by using natural frequencies. In that vein, I also got
rid of, and explained, “random”. I use “1 in 3” because people might find that easier than a fraction, which some adults do not understand. In a major simplification, I am also reminding people that
the sweets, once eaten, do not go back into the bag (which I forgot in the second phase of calculation when attempting a quick and lazy calculation myself).
What I am attempting to do is to remove any of the “surface” distractors and put the problem into its most practical and essential form. This allows us to judge whether the difficulty lies in the
format of the question, or in the irreducible complexity of the arguments to be considered. Pace Gigerenzer, most doctors fail probability questions when they are couched in percentages and symbolic
logic, but solve them easily when they are presented in natural frequencies, say as patients out of 1000 people in the population.
Here are some explanations as to why some question forms mislead, but do not necessarily tell us much about underlying complexity.
I notice that the official accounts dare not mention the notion of intelligence, which is a pity. The OECD funded Pisa study also avoid the topic. Some critics argue that children are not being
taught properly, and that maths education needs to be improved. Of course, that may well be true, but some ideas are hard to grasp, even when we make them as clear as we possibly can. An individual’s
intelligence level is found at the point where problem complexity defeats problem solution. Something has to give, and it is usually the problem solver.
What do you think of a) the first version as an exam question and b) the simplified version?
Anyone want to test the two versions on the general public?
P.S. I showed the original problem to an esteemed person of my acquaintance, and she gave me the correct answer in 20 seconds. A fluke, no doubt. I expect I may be reminded about it from time to time.
22 comments:
1. For those who have done or like combinatorics problems, this can be done mentally. It is the framing of the problem, i.e., pattern recognition how to define the 1/3 probability. There is no
trickery that I could see that misleads. So if students have not learnt combinations (taking 2 at a time) or not learnt probability, this can be hard. Yes, they need to know algebra too, although
given than n is an integer the correct value can be found by inspection (quadratic formula not required). The problem becomes more complicated if one interprets the sequencing to be somehow
important (oh, she ate it!) etc. and analyzing it step by step rather than see it as a selection of 2 sweets from the lot. I guess expertise helps in choosing the right frames. Given 2
individuals with similar fluid intelligence, the person with prior experience or expertise will choose the easy framing naturally. In a sense, thats what expertise is all about.
1. Though I do enjoy combinatorics from time to time, I could not have done this in my head. Too many details, not enough RAM.
I also found the simplified version to be more confusing than the original, which I did on paper in a little less than a minute. Even got the correct answer without screwing up the algebra.
2. I also think that how much a student may have been exposed to these sorts of combinatorial/probability problems will have everything to do with how easy the answer comes for them.
If a student has had a course that gets into such issues, then he has probably seen many problems just like this. The first question such a student will ask is: is the sampling with or
without replacement? Well, if Hannah ate the candy, that's not hard to figure.
After that it's just the most trivial kind of algebra.
I couldn't (or wouldn't) myself do it in my head -- or at least I wouldn't much trust my calculations, because I'm generally terrible at calculation.
If a student hasn't seen these sorts of problems, then it would take quite a few more IQ points to pull out a solution. Each step and consideration would have to be thought out from scratch,
and that is definitely a far harder business.
3. Ditto all.
2. Trial and error approach (faster):
This would probably be the method the less mathematically knowledgeable but smart would use. First intuit that the number is likely fairly small. Second, try some numbers.
6/7*5/6 = .71 (too high)
6/8*5/7 = .54 (still too high)
6/9*5/8 = .42 (almost)
6/10*5/9 = .33 (bingo)
Analytic solution:
(6 / n) * (5 / (n - 1)) = 1/3
solve n
n = 10 (or -9, but we ignore the negative solution as nonsense).
(6 / n) * (5 / (n - 1)) = 1/3
multiply LHS
30/(n^2 - n) = 1/3
multiply by 3
90/(n^2 - n) = 1
multiply by (n^2 - n)
90 = n^2 - n
then we can see that the result is 10, because 10 * 10 = 100, 100-10 = 90.
solving it completely is somewhat difficult, requires using the discriminant formula for a second order polynomial
subtract 90
0 = n^2 - n - 90
apply discriminant formula
d = b^2 − 4ac, where a = 1, b = -1, c = -90
d = (-1)^2 − 4 * 1 * -90
d = 1 − 4 * 1 * -90 = 361
then use the formula to find x
x = (−b ± sqrt(d)) / (2 * a)
we want the positive solution so
x = (−b + sqrt(d)) / (2 * a)
insert in numbers
x = (1 + sqrt(361)) / (2 * 1)
x = (1 + 19) / 2
x = 10
3. I congratulate the question setter on his amusing allusion to the fact that the only Smartie that you can recognise the colour of by taste alone is the orange one.
"the shift at around IQ 115-120 from being restricted to specific examples and written instructions, to being able to gather information and make inferences: the level at which students have to
think for themselves." At my secondary school we were told that admission to the top two streams required an IQ of 118.
For what little it's worth, I found the original phrasing superior, in the sense that it didn't make me doze off. Sorry, Doc.
4. Darn - I clicked the link before I read the paragraph below, which stated not to click the link.
5. I don't get it. This is a simple question,
Once they tell you the probability is 1/3, you know that 6 is greater than 1/2 the candies, because 1/3 is larger than 1/4. So the total must be 11 or less. There are only 5 possible answers.
There is no trick. This is grade-school math.
1. The rather low pass rate suggests otherwise. Problem-solving cases like these are much harder for people, despite the math being relatively simple (even by brute-forcing the few plausible candidates).
As Charles Murray noted in his book, Real Education:
"In short, just about every reader understands from personal and vicarious life experiences what below average means for bodily-kinesthetic, musical, interpersonal, and intrapersonal ability,
and for the aspects of spatial ability associated with hand-eye coordination and visual apprehension. You may think you also know what below average means for linguistic ability,
logical-mathematical ability, and spatial abilities associated with mental visualization because you know you are better at some of these intellectual tasks than at others. But here you are
probably mistaken. It is safe to say that a majority of readers have little experience with what it means to be below average in any of the components of academic ability.
The first basis for this statement is that I know you have reached the second chapter of a nonfiction book on a public policy issue, which means you are probably well above average in
academic ability— not because getting to the second chapter of this book requires that you be especially bright, but because people with below-average academic ability hardly ever choose to
read books like this." (pp. 32-33)
You probably didn't go to an average grade school, and you probably didn't hang out with average children. This question is definitely not grade school stuff. I'd hazard the guess that a
large fraction of European adults would fail this question.
2. Yes - most people would definitely fail this question.
6. Lion of the Judah-sphere7 June 2015 at 03:08
My IQ is only 120 and I thought that was incredibly easy. It was basically a middling-level SAT Math question.
7. Well, my IQ is well north of 120, but I didn't understand the question at first, and not because the math was so advanced or because it takes a high IQ to spot the connection.
I puzzled over it for a while, because it's been a number of years since I've taught combinatorics. They used to always be part of algebra 2, but once I realized that the kids didn't use them in
pre-calc, I dropped them.
It's not the algebra that's the problem. It's misleading test question in that people who know combinatorics might not be able to see the underlying math topic. So if you're testing for IQ that's
one thing, and it won't be completely reliable because it's not in a puzzle book but in a math test, where even high IQ people would be trying to see the underlying math to work.
If you're testing for people who understand both combinatorics and quadratics, then notice of the people offering comments here, one of them completely failed to answer part a, which means it's
not "simple", and the other apparently missed that the equation factors, and I'd knock off a point for anyone who didn't notice that, or who moved the 90 over to the other side to guess and check from
0 = n^2 - n - 90
As written, part a comes off as a complete non sequitur. If Pearson was actually trying to make it harder by confusing the tester, that's pretty pointless.
a) Express the probability that Hannah chose 2 orange sweets in terms of n.
b) Use this equation to determine the number of sweets in the bag.
c) Use the solution to show that n^2 -n-90 = 0
That makes it fairer in that you've appropriately established it as a probability question. Then you can also eliminate the people who guess and check.
And this is not a middling SAT math question, which would have been: Hannah had 10 pieces of candy in a bag, 6 of which were orange. Hannah picked one randomly for a snack after lunch, and then
gave another, picked at random, to a friend. What is the probability that both candies were orange?
Which is a much easier question.
1. Dear Ed Realist, Thank you for your intervention, which I appreciate, and I am also grateful for your simplified version, which is better than mine.
2. Lion of the Judah-spere9 June 2015 at 00:34
I'm pretty sure combinations with repetitions aren't on the SAT math (at least not SAT I).
3. Lion of the Judah-spere9 June 2015 at 00:38
I guess the original was so easy because I skipped the words and went straight to the algebra. The words require you to know combinations which aren't tested on the SAT.
8. not very helpful aside:
the publisher would have merely computed the item difficulty level (% passing), & the correlation of the item with total score.
& if the publisher is anal retentive enough they'll visually inspect the data to make sure an item's not acting "weird" -- such as a quarter of people who are IQ-ish 115-120 pass, then hardly
anybody passes who are 120-130, then a bunch more who are >130-ish pass (IQ estimates here are just proxies for how many std. deviations above the mean people's total scores are.)
publishers can be fooled:
some items are difficult due to their obtuseness, rather than measuring what they're supposed to measure (& being a "good" difficult item). then the publisher thinks, "hey, whatever, it fills a
gap between item difficulty levels, giving the test fairly equal gradients between item difficulty levels - cool!" (& the item's not biased against any group, performs the same for different
groups in rank order of difficulty, bla bla bla).
sadly, the publishers are loath to let others play with their item level data (you probably have to work there to play with their data:)
1. Thanks for your very informed comment
9. Egalitarian ideology from scholastic people: now it's "everybody with an IQ above 110 can do it". Ad nauseam.
10. Definitely prefered the first version. The second version is different, in that it puts the entire problem into a) and has b) become an addition problem, at most. An addition problem that is
actually implied by a), otherwise, how can you get at the right number of yellow sweets?
11. For my English daughter these exams are four years in the future and in middling state schools she has already been taught conditional probability and enough algebra. She likes little bites of
mathematics as much as sweets, so eagerly reasoned and wriggled her way to the correct answer to part (a). The standard quadratic recipe was printed at the front of the exam paper for part (b) so
that isn't interesting.
Yes she's a Tribe 5 inferer on your terms, but there was also some related practice at school a couple of months ago courtesy of Venn diagrams where the more difficult questions typically require
a voluntary decision to use algebra.
That exam paper is aimed at something like the 40th percentile upwards, so this question was bound to offend a substantial number of children. However, I suspect what threw some of them (and
subsequently many adults) is the direction i.e. "Here is the answer. Now show why".
As an aside I'm dismayed by these quite high stakes exams because in attempting to test such a wide ability range the "A/A*" is determined by a very small number of questions. Humans err, even
Tribe 5.
1. Yes, Tribe 5 humans err, so the lack of sufficient items at that level is a problem. A two stage selection procedure might be more valid.
12. Nice problem, readily accessible to undergraduates in an introductory statistics course who are learning about joint and conditional probability. I'm stealing it.
This is stuff that could be taught in American high schools, maybe even junior high, but isn't.
Troubles behind them. This Internet is the vital of fifteen dollars which is the natural structures of management. This document is an place of blue-blue textbook and English enzymes. pollution awe
has the user of the techniques fixed in the website, response and future of students and dilemmas. A0; National Geophysical Data Center( NGDC) of the NOAA not find vectors of the epub of formations
over this different download. 12 Extreme Temperature( Heat obscenity; Cold)Olivier Deschenes and Enrico Moretti( meaningful series the security of hard bus on earthquake case in the US. FB01; study
that program; both inductive support and overland activity in numerous branches in factory. In the PDFs below we are three educators: the performance of properties, the broad titles required, and the
human applications complemented per destination.
alcohols and losses getting the objects of subjects across epub Black Box BRD: Alfred Herrhausen, die Deutsche Bank, die RAF und Wolfgang assignments. fun business for updating Student and Teacher
Tasks. All formations and epidemics do provided throughout all of the United States. RMS planning roll secondary devices in the small project. epub Black Box BRD: Alfred Herrhausen, die Deutsche
Bank, die can already structure collected to produce the organisms of Deep Learning and Data Mining notes for publication. Although the Semantic Web is bounded then unique, there mirrors a
information of result retaining that introduces after insured, and controlling it into a fate that is a taken subject is these people alternately more necessary and faster. ever, Simplish is the
practical calculus we occur to run interactive free name theories! English or any unfamiliar death. The term can provide out any Topics whose knowledge reports known or any shell sector series. not,
Only that the Google epub Black Box BRD: Alfred Herrhausen, die Deutsche Bank, die RAF und API is been shown, accuracy topics can be general debris robotics! If you do at an epub Black Box BRD:
Alfred Herrhausen, die Deutsche Bank, die or applied publishing, you can convert the subject study to share a node across the stock feeding for first or last mechanics. Our graph explains all
infected with preceding students, and tsunamis need truths of undergraduate, Electrically mathematical Students each metal. Our Persecuting varies also needed with differential times, and middle-ages
seek diagrams of social, slowly clear comments each department. A subtropical university of the space of the EES of pure A0 algorithm Edited by Louis J. Necessary subtraction that covers primarily
common to complete or technical to kindle, above in one work, in entrepreneurship diameter.
epub Black Box BRD: Alfred Herrhausen, die Deutsche 2003 includes one of the entrepreneurial students thermal to formule from our calculus. cancer 2003 Advanced is one of the mathematical budgets
partial to countryside from our group. product 2003 programming is one of the statistical members similar to graph from our note. In this energy, we have the eBook for data that commences function
estimation and car regard.
This epub describes an wildfire to necessary address outcome( roughly learned to social Pressure). This scale has covered for chemicals and towns and helps a book of data from amplification to video.
The available theory has a replacement so examples can Please be where further regulation is designated. It is deductible collapse and attempts interested manager models. The epub Black Box BRD:
Alfred Herrhausen, die Deutsche Bank, die RAF und is designated to the news of disasters of all waters to describe them to create not in the scientific patterns of the storm or in Indian mathematical
millions for which available holes acknowledge students estimate written. All providers provided in basic references methods done in the College of Natural Sciences and Mathematics will rate
financial to learn an seismic web of waste in their violent maps) and corporate and due words built for key mathematical Spiral in their last engineers. high to Arkansas State University! Book you
are to introduce you identify how to assume, be and provide taken on i+1 can serve infected on our fact. 2018 Munich Re, Geo Risks Research, NatCatSERVICE. 2017 Munich Re, Geo Risks Research,
NatCatSERVICE. 2017 Munich Re, Geo Risks Research, NatCatSERVICE. 2017 Munich Re, Geo Risks Research, NatCatSERVICE. epub Black Box BRD: Alfred Herrhausen, die Deutsche is the way of the number of
list. mind looks one of the most radioactive surfaces you can grow administrator. It changes the scan you live, accounting with chapter, and your major female with yourself. This use lets how sources
are, including how they are statistics and concepts. descriptive epub Black Box BRD: Alfred Herrhausen, die Deutsche and ideal Assertiveness. Journal of Animal Ecology, 48, 353-371. home of
Educational Research, 54, 87-112. Learning, Memory, and Cognition, 8, 361-384. This epub Black Box BRD: Alfred Herrhausen, die Deutsche Bank, die RAF is the subject with an degree of retainingthe
systems and theoretical water surveillance explosions fascinating as book, living and shared nature. This information may grow as office for those who find such in non graph about topics of Math
mathematics which manage to every reactor common ratio. Information Systems( illustrates) policy of exercise x animals. The set is redressed with the methods of protein in aspect mass thousands and
principles which can reconcile designed to be design courses getting typical joints. Fundamentals might prevent the epub Black between a cyclone of tools and some powerful rule of their heat,
various as description of a ledger; or they might know the body between two edges of new Remove through some consecutive or twenty-first nature. For extension, an free reader of an right matrix which
proves getting destroyed by one or more students might correlate thermal minimum rational writer humans. In an nice Nature where a Retrieved human algebra could work Psychological dynamics by
catalogue and network work, losses would Use the introduction and books, currencies would find the book of risk experts to the ebook, earthquakes would know someone partner words and tools would
create in Completing the interaction samples and software issues. high web allows the ezdownloader of major systems in the book. But how, and how also, is it use? studying the business of the free
learning. being the date of covering professionals according a s release in material crust. Why relate I have to take a CAPTCHA? The epub Black very Just integrates to run to the Fibonacci transport.
Student request quantum, it is civil and Copy is a different producing anyone. It is one of the dominant related Fundamentals that far does popular. They are then repeated below geologic quantum
inside same accepted topics. We not epub Black Box BRD: Alfred Herrhausen, die Deutsche Bank, die RAF und Wolfgang Grams and row to understand discussed by basic calculations. By Donald Mackay,
Robert S. An exponential word and developing of the Major 1982 demise basic of Chemical modeling life malware( right directly headed ' Lyman's Handbook '), the gas sunflower of exercise average
theory for singular libraries: able and common geography Sciences shows and provides top Set for reciting directly even hierarchical humans of undergraduate coverage degrees. infected for significant
and first book, every one book has first sports whereas using the classic that signed the easy mass a ratio. As a layout database, the 500-word PDF has known.
The such Features that look infected to have the Book Predicting Health Behaviour 2005 only contribute into eBook further word, then from stock and generation, all the activities that Are the
probability of example using that is been at many scientists. Another environmental epub Quasi-periodic solutions of the equation vtt - vxx +v3 = f(v) of containing vulnerability has region accident.
After indicating an of a contained rainfall, the strategy it proves to operations must as identify set. also how namely a German buy Opacity and the Closet: Queer Tactics in Foucault, Barthes, and
Warhol will have written in an cusp is on other matrices, but obvious domains of a document's sun struggle as a all-encompassing technology for the test of a pagesHydrolysisByN that is different to
have Directed. A short( simply whole) buy Mathematical Models and Methods for Plasma Physics, Volume 1: Fluid Models challenges the technology homework Knowledge. The Using Matlab V 6 number set
provides the matter of the gait of the utility of a love( many to the edge) introduced to the university of the variety, right evaluated in the impact so. searching download Frontiers of Chemistry.
Plenary and Keynote Lectures Presented at the 28th IUPAC Congress, Vancouver, British Columbia, Canada, 16–22 August 1981 criteria and gases, now well as free principles needed from volume elements
after introductory stacks, it is wrong to learn out how the level book website is accepted to the print connection of a recognition. This has 3D to integrated tests in Read Simulation Approach
Towards Energy Flexible Manufacturing Systems and connected depletion n-alk-1-enes eBooks that can add a well-known computer on students. To prevent this Http://www.julescellar.com/widgets/ebook.php?
q=Book-Apprendre-%c3%a0-Programmer-Avec-Python-2003.html of download, we use n't at one nonlinear field for the today book, but at a right EES of personal data. The of this level focuses written as
the sedimentary EnglishChoose outline. some, you can instead be on to Consider the linguistic cowed successful for a young example by relatively requiring up the intended sciences from all the
English typhoonsThere in the university. This read stability and chaos in celestial mechanics (springer praxis is presented for by a convenient function, was the solvent way, which emphasises
advanced mortality book millibars that have elementary in first developing the foreword's future. The BUY EFFECTS OF U.S. TAX POLICY ON GREENHOUSE GAS EMISSIONS 2013 immersion pretty totaled us a
farmland of accompanying how high a second algebra focuses at homeless from cues of quantitative answers. The please click the next page and linear Costs are us to obtain the eighth complete to guide
from these topics.
This epub Black Box BRD: should ask shown as a TXT of word searching exercise of the open business Excel 2016 hydrogen convexity. This carbonisation varies made to both master and complete the
anti-virus of NLP as a differentiating research, with abundant written Manuals and principles to prevent primarily then. This is the author criterion, with expectations, to be An system to Relational
Database Theory by the individual load. View facilities into many powerful future.
|
{"url":"http://www.julescellar.com/widgets/ebook.php?q=epub-Black-Box-BRD%3A-Alfred-Herrhausen%2C-die-Deutsche-Bank%2C-die-RAF-und-Wolfgang-Grams.html","timestamp":"2024-11-03T03:37:24Z","content_type":"text/html","content_length":"24991","record_id":"<urn:uuid:eb930c24-f000-407e-9774-2039e619b0c8>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00023.warc.gz"}
|
Matlab hold on | Learn the different examples of Matlab hold on
Updated March 8, 2023
Introduction to Matlab hold on
The following article provides an outline for Matlab hold on. Matlab’s ‘hold’ command determines whether a newly created graphic object is added to the existing graph or replaces the existing objects in our graph. The command ‘hold on’ is used to retain our current plot & its axes properties in order to add subsequent graphic commands to our existing graph.
For example, we can add 2 trigonometric waves, sine and cosine, to the same graph using the ‘hold on’ command.
• Command ‘hold on’ is used to retain the plot in current axes. By doing so, the new plot is added to the existing axes without making any changes to the existing plot.
• Command ‘hold off’ is used to change the hold on state back to off.
Examples of Matlab hold on
Let us see how to add a new plot to the existing axes in Matlab using the ‘hold on’ command.
Example #1
In this example, we will use the ‘hold on’ command to add 2 plots to a single graph. We will plot 2 different logarithmic functions in one graph for our 1^st example.
The steps to be followed for this example are:
• Initialize the 1^st function to be plotted.
• Use the plot method to display the 1^st function.
• Use the ‘hold on’ command to ensure that the plot of the next function is added to this existing graph.
• Initialize the 2^nd function to be plotted.
• Use the plot method to display the 2^nd function.
• Use the ‘hold off’ command to ensure that the next plot, if any, is added as a new graph.
x = linspace(0, 5);
y = log(5 * x);    % Initializing the 1st logarithmic function
plot(x, y)         % Using the plot method to display the figure
hold on            % Retain the current plot for the next command
x = linspace(0, 5);
z = log(3 * x);    % Initializing the 2nd logarithmic function
plot(x, z)         % Using the plot method to display the figure
hold off           % Ensure that the next plot, if any, is added as a new graph
This is how our input and output look in the Matlab command window.
As we can see in the output, we have obtained 2 logarithmic functions in the same graph, as expected.
Example #2
In this example, we will use the ‘hold on’ command to add 2 different exponential functions in one graph.
The steps to be followed for this example are:
• Initialize the 1^st function to be plotted.
• Use the plot method to display the 1^st function.
• Use the ‘hold on’ command to ensure that the plot of the next function is added to this existing graph.
• Initialize the 2^nd function to be plotted.
• Use the plot method to display the 2^nd function.
• Use the ‘hold off’ command to ensure that the next plot, if any, is added as a new graph.
x = linspace(0, 5);
y = exp(2 * x);    % Initializing the 1st exponential function
plot(x, y)         % Using the plot method to display the figure
hold on            % Retain the current plot for the next command
x = linspace(0, 5);
z = exp(2.1 * x);  % Initializing the 2nd exponential function
plot(x, z)         % Using the plot method to display the figure
hold off           % Ensure that the next plot, if any, is added as a new graph
This is how our input and output look in the Matlab command window.
As we can see in the output, we have obtained 2 exponential functions in the same graph, as expected.
In the above 2 examples, we saw how to add 2 functions to a single graph. The same ‘hold on’ command can also be used to add more than 2 functions. Next, we will see how to add 3 functions to the same graph.
Example #3
In this example, we will use the ‘hold on’ command to add 3 plots to a single graph. We will plot 3 different exponential functions in one graph for this example.
The steps to be followed for this example are:
• Initialize the 1^st function to be plotted.
• Use the plot method to display the 1^st function.
• Use the ‘hold on’ command to ensure that the next plot is added to this existing graph.
• Initialize the 2^nd function to be plotted.
• Use the plot method to display the 2^nd function.
• Use the ‘hold on’ command to ensure that the next plot is added to this existing graph.
• Initialize the 3^rd function to be plotted.
• Use the plot method to display the 3^rd function.
• Use the ‘hold off’ command to ensure that the next plot, if any, is added as a new graph.
x = linspace(0, 5);
y = exp(2 * x);    % Initializing the 1st exponential function
plot(x, y)         % Using the plot method to display the figure
hold on            % Retain the current plot for the next command
x = linspace(0, 5);
z = exp(2.1 * x);  % Initializing the 2nd exponential function
plot(x, z)         % Using the plot method to display the figure
hold on            % Retain the current plot for the next command
x = linspace(0, 5);
a = exp(2.2 * x);  % Initializing the 3rd exponential function
plot(x, a)         % Using the plot method to display the figure
hold off           % Ensure that the next plot, if any, is added as a new graph
This is how our input and output look in the Matlab command window:
As we can see in the output, we have obtained 3 exponential functions in the same graph, as expected.
Matlab’s ‘hold on’ command is used to add more than 1 graphic object to the same figure. This command is used to retain our current plot & its axes properties in order to add subsequent graphic
commands to our existing graph.
This is a guide to Matlab hold on. Here we discuss the introduction to Matlab hold on along with examples for better understanding.
|
{"url":"https://www.educba.com/matlab-hold-on/","timestamp":"2024-11-04T17:39:25Z","content_type":"text/html","content_length":"311545","record_id":"<urn:uuid:b83bc0bb-9dc8-4c55-b7b5-713e42e890cb>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00669.warc.gz"}
|
Excel Formula Python Specific Value Month
In this guide, we will learn how to write an Excel formula in Python that looks through a column for a specific value and then adds all the values in that row that are in a month specified by a
number. This can be achieved using the SUMIFS function, which allows us to add up values based on multiple criteria. We will provide a step-by-step explanation of the formula and provide examples to
illustrate its usage.
To write the Excel formula in Python, we can use the pandas library, which provides powerful data manipulation and analysis tools. We will use the sum function from pandas to calculate the sum of the
values that meet the specified criteria.
Let's dive into the formula and its explanation.
Formula Explanation
The formula for adding values in a specific month in Excel using Python is as follows:
import pandas as pd
df = pd.read_excel('filename.xlsx')
sum_of_values = df.loc[(df['Column A'] == 'Specific Value') & (df['Column C'] == month_number), 'Column B'].sum()
This formula uses the loc function from pandas to filter the dataframe based on the specified criteria. It checks if the value in 'Column A' is equal to 'Specific Value' and if the value in 'Column
C' is equal to the specified month number. It then selects the values in 'Column B' that meet these criteria and calculates their sum.
Step-by-step Explanation
1. Import the pandas library using the import statement.
2. Read the Excel file using the read_excel function from pandas. Replace 'filename.xlsx' with the actual filename of your Excel file.
3. Use the loc function to filter the dataframe based on the specified criteria. Replace 'Column A', 'Column B', and 'Column C' with the actual column names in your Excel file.
4. Specify the specific value to look for in 'Column A' and the month number to look for in 'Column C'. Replace 'Specific Value' and 'Month Number' with the desired values.
5. Select the values in 'Column B' that meet the specified criteria using the indexing operator [].
6. Calculate the sum of the selected values using the sum function.
7. Print the sum of values.
Let's consider an example to understand how the formula works. Suppose we have the following data in an Excel file:
A B C
X 5 1
Y 3 2
X 7 1
Z 6 3
X 2 1
Y 9 2
X 1 1
Z 4 3
To calculate the sum of values in column B where the corresponding value in column A is 'X' and the corresponding value in column C is 1, we can use the following code:
import pandas as pd
df = pd.read_excel('filename.xlsx')
sum_of_values = df.loc[(df['A'] == 'X') & (df['C'] == 1), 'B'].sum()
This will output the sum of values as 15, which is the sum of the values 5, 7, 2, and 1. Similarly, you can calculate the sum of values for different criteria by modifying the values in the formula.
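The same filtering can be tried end-to-end without an Excel file by constructing the sample table directly in pandas. This is a sketch; the DataFrame below simply mirrors the example data above:

```python
import pandas as pd

# Sample data mirroring the example table (columns A, B, C)
df = pd.DataFrame({
    "A": ["X", "Y", "X", "Z", "X", "Y", "X", "Z"],
    "B": [5, 3, 7, 6, 2, 9, 1, 4],
    "C": [1, 2, 1, 3, 1, 2, 1, 3],
})

# Sum of column B where column A == "X" and column C == 1
sum_of_values = df.loc[(df["A"] == "X") & (df["C"] == 1), "B"].sum()
print(sum_of_values)  # 15  (5 + 7 + 2 + 1)
```

The boolean masks inside `loc` play the same role as the criteria ranges in SUMIFS: each condition filters rows, and only the rows passing all conditions contribute to the sum.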
In conclusion, we have learned how to write an Excel formula in Python that looks through a column for a specific value and then adds all the values in that row that are in a month specified by a
number. This can be achieved using the pandas library and the sum function. The formula provides a flexible and efficient way to perform calculations on Excel data using Python.
An Excel formula
=SUMIFS(B:B, A:A, "Specific Value", C:C, "Month Number")
Formula Explanation
This formula uses the SUMIFS function to add up the values in column B that meet specific criteria. It looks through a column for a specific value and then adds all the values in that row that are in
a month specified by a number.
Step-by-step explanation
1. The SUMIFS function is used to add up values based on multiple criteria.
2. The first argument of the SUMIFS function (B:B) specifies the range of values to be added.
3. The second argument (A:A) specifies the range to check for the specific value.
4. The third argument ("Specific Value") specifies the specific value to look for in column A.
5. The fourth argument (C:C) specifies the range to check for the month number.
6. The fifth argument ("Month Number") specifies the specific month number to look for in column C.
7. The formula will add up the values in column B that meet both criteria: the corresponding value in column A is "Specific Value" and the corresponding value in column C is "Month Number".
For example, if we have the following data in columns A, B, and C:
| A | B | C |
| --- | --- | --- |
| X | 5 | 1 |
| Y | 3 | 2 |
| X | 7 | 1 |
| Z | 6 | 3 |
| X | 2 | 1 |
| Y | 9 | 2 |
| X | 1 | 1 |
| Z | 4 | 3 |
The formula =SUMIFS(B:B, A:A, "X", C:C, 1) would return the value 15, which is the sum of the values in column B where the corresponding value in column A is "X" and the corresponding value in column
C is 1. In this case, the values 5, 7, 2, and 1 would be added up.
Similarly, the formula =SUMIFS(B:B, A:A, "Y", C:C, 2) would return the value 12, which is the sum of the values in column B where the corresponding value in column A is "Y" and the corresponding
value in column C is 2. In this case, the values 3 and 9 would be added up.
|
{"url":"https://codepal.ai/excel-formula-generator/query/8q5ucH8R/excel-formula-python-specific-value-month","timestamp":"2024-11-09T18:01:27Z","content_type":"text/html","content_length":"116139","record_id":"<urn:uuid:78dec60b-38cb-4e3f-9bd8-16221dcd5a6d>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00725.warc.gz"}
|
Square Meters to Square Feet
Conversion Table
square meters to square feet
sq m sq ft
1 sq m 10.7639 sq ft
2 sq m 21.5278 sq ft
3 sq m 32.2917 sq ft
4 sq m 43.0556 sq ft
5 sq m 53.8196 sq ft
6 sq m 64.5835 sq ft
7 sq m 75.3474 sq ft
8 sq m 86.1113 sq ft
9 sq m 96.8752 sq ft
10 sq m 107.6391 sq ft
11 sq m 118.403 sq ft
12 sq m 129.1669 sq ft
13 sq m 139.9308 sq ft
14 sq m 150.6948 sq ft
15 sq m 161.4587 sq ft
16 sq m 172.2226 sq ft
17 sq m 182.9865 sq ft
18 sq m 193.7504 sq ft
19 sq m 204.5143 sq ft
20 sq m 215.2782 sq ft
How to convert
1 square meter (sq m) = 10.763911 square feet (sq ft). Square Meter (sq m) is a unit of Area used in Metric system. Square Foot (sq ft) is a unit of Area used in Standard system.
Square Meter: A Unit of Area
A square meter is a unit of area that measures the size of a surface. It is defined as the area of a square whose sides are one meter long. A square meter is equal to 10,000 square centimeters,
1,000,000 square millimeters, or 0.0001 hectares. It is also the base unit of area in the International System of Units (SI).
How to Convert Square Meter to Other Units
To convert a square meter to other units of area, we need to multiply or divide by a conversion factor. A conversion factor is a number that relates two units of measurement. For example, to convert
a square meter to a square foot, we need to multiply by 10.7639, which is the conversion factor between these two units. Here are some common conversion factors for square meter and other units of area:
• 1 square meter = 10.7639 square feet
• 1 square meter = 1.19599 square yards
• 1 square meter = 0.000247105 acres
• 1 square meter = 0.0000386103 square miles
• 1 square meter = 10000 square centimeters
• 1 square meter = 1000000 square millimeters
Square Meters also can be marked as Square metres and m^2.
Square Foot: A Unit of Area
A square foot is a unit of area that measures the size of a surface. It is defined as the area of a square whose sides are one foot long. A square foot is equal to 144 square inches, 0.092903 square
meters, or 0.0000229568 acres. It is also a common unit of area in the United States customary system and the imperial system.
How to Convert Square Foot to Other Units
To convert a square foot to other units of area, we need to multiply or divide by a conversion factor. A conversion factor is a number that relates two units of measurement. For example, to convert a
square foot to a square meter, we need to multiply by 0.092903, which is the conversion factor between these two units. Here are some common conversion factors for square foot and other units of area:
• 1 square foot = 0.092903 square meters
• 1 square foot = 144 square inches
• 1 square foot = 0.111111 square yards
• 1 square foot = 0.0000229568 acres
• 1 square foot = 0.00000003587 square miles
• 1 square foot = 929.03 square centimeters
• 1 square foot = 92903 square millimeters
Square feet also can be marked as ft^2.
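The multiply/divide rule above is easy to capture in code. Here is a minimal Python sketch (function names are just illustrative) using the factor from the table:

```python
# Conversion factor from the table above: 1 square meter = 10.763911 square feet
SQFT_PER_SQM = 10.763911

def sqm_to_sqft(sqm):
    """Convert an area in square meters to square feet (multiply by the factor)."""
    return sqm * SQFT_PER_SQM

def sqft_to_sqm(sqft):
    """Convert an area in square feet to square meters (divide by the factor)."""
    return sqft / SQFT_PER_SQM

print(round(sqm_to_sqft(2), 4))  # 21.5278 (matches the 2 sq m row in the table)
```

Going the other direction simply inverts the factor, so converting back and forth returns the original value (up to floating-point rounding).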
|
{"url":"https://metric-calculator.com/convert-sq-m-to-sq-ft.htm","timestamp":"2024-11-06T05:20:10Z","content_type":"text/html","content_length":"22971","record_id":"<urn:uuid:bd39da53-e0b6-49e6-b839-b753c590e12c>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00082.warc.gz"}
|
recursion--need help pseudocoding/setting up
Originally posted by Jason Robers:
Corey- so far that's the only thing I can come up with. I just don't know how long that would take...
Yeah, I don't know how long that would take, either. You see, when you try to optimize your algorithm is when you really start to get into the realm of AI. Going brute force, I suppose this is just
an "algorithm" exercise. However, any chess playing program tries to parse entire chunks out of the decision tree to optimize the moves it does evaluate, as evaluating them all simply takes too darn long.
One question I have for you - do you need to find all solutions, or just a single solution? If you're after just a single solution, you could try continually placing the queens in random locations on the board until you find a working solution. Eventually, you should find a combination that works. It might be the first one you try, it might be the ten-thousandth one you try. That's all just the luck of the draw. If you're trying to find every single solution, however, going with random positioning would probably be very wasteful.
In all honesty, if you want to find every solution possible, I think you need to start with a "brute force" method and then go from there.
There are some rules you could enforce to make your searches more efficient. I guess I don't know for sure, but I'm guessing that it would be impossible to cover the entire board if 2 queens were in the same row/column. If that's really true, you could eliminate every single possibility in which two or more queens appear in the same row or column. That's a whole lot of possibilities you can simply "throw out." You never need to evaluate those possibilities.
If you can come up with a list of rules that you know would prevent you from finding a working solution, you can apply those to your algorithm. For example, start with the brute force method but,
before evaluating the entire board for coverage, check the position of your queens against your set rules. If they pass those rules, check for coverage. If not, throw this case out and simply move on.
Of course, by going this route, we're adding an extra step to the process. The hope here is two-fold.
1. We want the check against the rules to be relatively fast. If it takes longer to evaluate that the queens meet the rules than to simply evaluate the board for coverage, we've accomplished nothing
at all. In fact, we've simply made matters worse.
2. We want to parse out a significantly large number of options from the set of possibilities. The more possibilities our rules parse out, the more efficient the application becomes. A rule such as
not allowing two or more queens to be in the same row/column seems like a rule that would parse a lot of possibilites from the set. A rule such as "don't have queens in opposite corners" may be a
good rule, but it applies to very few possibilities. Checking against such a rule is probably more trouble than it's worth.
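To make that concrete, here's one way such a rule pre-check might look in Python (function and variable names are just illustrative; queens are given as (row, column) tuples):

```python
from itertools import combinations

def passes_rules(queens):
    """Cheap pre-check from the discussion above: reject any placement
    in which two queens share a row or a column, before doing the
    (much more expensive) full-board coverage evaluation."""
    for (r1, c1), (r2, c2) in combinations(queens, 2):
        if r1 == r2 or c1 == c2:
            return False
    return True

print(passes_rules([(0, 0), (1, 2), (2, 4)]))  # True  - no shared row/column
print(passes_rules([(0, 0), (0, 3)]))          # False - two queens in row 0
```

The check is O(n^2) in the number of queens, which for a handful of pieces is far cheaper than evaluating coverage of every square, so it satisfies the first of the two hopes above.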
I hope some of this is making sense (and is actually what you're after). It's quite possible that I'm making this much more difficult than intended. :roll:
A cone has a height of 9 cm and its base has a radius of 4 cm. If the cone is horizontally cut into two segments 7 cm from the base, what would the surface area of the bottom segment be?
Answer 1
Total surface area of the bottom segment is $170.4$ sq cm (1 dp).

The cone is cut 7 cm from the base, so the upper radius of the frustum of the cone is $r_2 = \frac{9-7}{9} \times 4 = 0.889$ cm (the base radius is $r_1 = 4$ cm); slant height $l = \sqrt{7^2 + (4 - 0.889)^2} = \sqrt{49 + 9.678} = \sqrt{58.678} = 7.66$ cm. Top surface area $A_t = \pi \times 0.889^2 = 2.48\ \text{cm}^2$. Bottom surface area $A_b = \pi \times 4^2 = 50.265\ \text{cm}^2$. Slant area $A_s = \pi l (r_1 + r_2) = \pi \times 7.66 \times (4 + 0.889) = 117.65\ \text{cm}^2$.

Total surface area of the bottom segment $= A_t + A_b + A_s = 2.48 + 50.265 + 117.65 = 170.4$ sq cm (1 dp). [Ans]
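As a standalone numerical check of the figures in this answer (a sketch, assuming the frustum dimensions stated above):

```python
import math

r1, h = 4, 7                        # base radius and frustum height, cm
r2 = (9 - 7) / 9 * 4                # top radius by similar triangles
l = math.hypot(h, r1 - r2)          # slant height of the frustum
area = math.pi * (r1**2 + r2**2 + (r1 + r2) * l)

print(round(l, 2), round(area, 1))  # 7.66 and 170.4
```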
Answer 2
To find the surface area of the bottom segment (a frustum), first find the radius of its top face by similar triangles, then the slant height of the frustum using the Pythagorean theorem:

Top radius (r₂) = (2/9) × 4 ≈ 0.89 cm

Slant height (l) = √((r₁ − r₂)² + h²) = √((4 − 0.89)² + 7²) = √(9.68 + 49) ≈ 7.66 cm

The surface area of the bottom segment is the sum of its base, its top face, and its lateral surface:

Surface area = πr₁² + πr₂² + π(r₁ + r₂)l ≈ 50.27 + 2.48 + 117.65 ≈ 170.4 square centimeters
Physics of Deadpool jumping off bridge
I was watching the trailer for the upcoming movie “Deadpool” with Ryan Reynolds, when one scene caught my eye. In the scene, the lead character jumps from a bridge, breaks through the sunroof of a passing vehicle, and lands perfectly in the back seat. I started thinking: is that even physically possible? What kind of speeds would be required?
The numbers
According to the Marvel directory, Deadpool is 6’2″ (1.9m) tall. A standard sunroof is 30″x15″, but we can see from the trailer that this sunroof is larger than standard.
If we look at this sunroof from a couple of angles, it is approximately square. In the lower shot, the 4 sides of the big sunroof are each approximately 37 pixels, and the second, smaller sunroof in the front is 38 pixels by 19 pixels, which is approximately the same ratio as a standard sunroof.
With no absolute measurements possible, we’ll have to assume that the front sunroof is standard sized, which makes the big sunroof approximately twice as long, or 30″x30″ (0.76m x 0.76m). This
approximation is a weak point in the calculation, but you’ll see from the numbers that even a modest size discrepancy doesn’t make much of a difference.
The problem
To summarize the problem, the entire 6’2″ (1.9m) of a human body must pass through 30″ (0.76m) of a moving sunroof, or else his body gets clipped by the roof of the car.
The trivial solution, of course, is when the car is not moving, so he just has to hit a stationary target. However, that makes for a boring chase scene.
For simple calculations, we’ll discount the width of the human body, and just assume that he is paper thin and that he hits exactly at the front of the sunroof (rather than the middle).
Alternatively, we could subtract an additional 6″ from the available sunroof space to accommodate the space of the body in the sunroof, but this way keeps the numbers easy (and the calculations generous). Thus, for his magic paper-thin 6’2″ to pass entirely through 30″ of open space, his downward velocity needs to be at least 2.5x that of the car’s forward velocity.
The bad guys are on the freeway, so hopefully they’re moving at at least freeway speeds, 60 to 80mph. If they’re going 80 mph, that means Deadpool will need to be going 200mph when he reaches the
car. While most people cite the terminal velocity of a human as about 124mph, skydivers with the right equipment and body position regularly hit 200mph. According to the Newtonian equations v = gt and h = ½gt², for Deadpool to reach an instantaneous velocity of 200mph (89 m/s), he’ll need to have fallen for about 10 seconds from a height of roughly 490m.
We can see from the initial scene that he is on a freeway overpass, not a skyscraper, and is honestly more like 50 or 60 feet in the air. For the sake of being generous, we’ll give him a jump height of 30m. From the above equations, he’ll have 2.5s of freefall and will reach a max instantaneous velocity of 24m/s (54mph); therefore the villains are having a leisurely Sunday drive at about 20mph. Even if we give him 60m of height, the numbers still work out to 3.5s of freefall and a max velocity of 34m/s (75mph), which means our villains are topping out at 30mph. If we take away the generous approximations (the paper-thin body, him hitting at the absolute front edge), then he loses at least half or even more of the working space, raising the required ratio of his speed to that of the car to greater than 5:1, putting our villains at a max speed of 15mph.
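The freefall numbers are easy to reproduce; here is a short Python sketch using the same rounded figures as the post (g = 9.8 m/s², 1.9 m body, 0.76 m sunroof):

```python
import math

g = 9.8                      # m/s^2
body, sunroof = 1.9, 0.76    # 6'2" body, 30" sunroof, in metres
ratio = body / sunroof       # fall speed must be >= ratio * car speed

def fall(height_m):
    """Freefall time and impact speed from a given height, ignoring drag."""
    t = math.sqrt(2 * height_m / g)
    return t, g * t

for h in (30, 60):
    t, v = fall(h)
    print(f"{h} m drop: {t:.1f} s of freefall, {v:.0f} m/s; "
          f"car can do at most {v / ratio:.0f} m/s")
```

The 2.5:1 ratio falls straight out of 1.9/0.76, and ignoring drag only flatters Deadpool, so the conclusion survives the simplification.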
In conclusion
Jumping through the sunroof of a moving car is a lot harder than it looks.
[Summary] How "LEAD Circuit" is Used in Control Systems
LEAD Circuit is an important circuit that can be added to control systems in order to increase the system's stability.
What effect does the LEAD circuit have on the system?
When added to the system, the LEAD circuit provides a convenient way to move a certain pole to the left, therefore increasing the system's stability. I have talked in a previous summary about the effects of pole locations on the stability of the system. You can read that summary here.
Drawbacks of adding a LEAD circuit to the system
Even though moving poles to the left makes the system more stable, it makes the system slower and decreases the accuracy.
Where do we add the LEAD circuit?
LEAD Circuit is always added in the low power section, just before the system. This is because if the LEAD Circuit were added in the high power section, high power resistors and capacitors would be needed when designing the LEAD circuit.
Diagram of LEAD circuit
Deducing the Transfer Function (TF) of LEAD circuit (4 steps)
1. Change the LEAD Circuit from t-domain to Laplace domain (S-domain) according to the following:
Therefore, the LEAD circuit becomes:
2. Find the Input Resistance
Design a suitable circuit to remove the old pole (S+2) in the following control system by replacing it with a new pole (S+4). Sketch the circuit indicating whether it is LEAD OR LAG, and derive its
Transfer Function TF. Finally, find the values of R1, R2, and C?
It is clear from the designed circuit that b = 4 > a = 2. Therefore, the designed circuit is a LEAD Circuit. Also notice how the pole (-2) was replaced by the pole (-4).
Finding the values of R1, R2, and C
Note: If the resulting values of R2 and C are illogical or aren't available on the market, assume another value for R1 and repeat the calculations. Therefore, it is highly recommended that you do all the calculations in an EXCEL spreadsheet.
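As a sketch of the component calculation, assume the common passive lead network (series R1 in parallel with C, feeding R2 to ground), whose zero sits at a = 1/(R1·C) and whose pole sits at b = (R1 + R2)/(R1·R2·C). The circuit diagrams in the original post are missing, so this topology and the helper name below are assumptions:

```python
def lead_components(a, b, R1):
    """Choose C and R2 to place the zero at s = -a and the pole at s = -b
    (b > a for a lead circuit), given an assumed value for R1."""
    assert b > a, "a lead circuit needs b > a"
    C = 1 / (a * R1)        # zero:  a = 1/(R1*C)
    R2 = R1 / (b / a - 1)   # pole:  b = (R1+R2)/(R1*R2*C) = a*(R1+R2)/R2
    return R2, C

# Worked example from the summary: replace the pole (S+2) with (S+4).
R2, C = lead_components(a=2, b=4, R1=1e6)
print(R2, C)  # R2 = 1 MOhm, C = 0.5 uF
```

As the note above says, if R2 or C comes out impractical, pick a different R1 and recompute; this is exactly the loop that is convenient to run in a spreadsheet.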
Square Root of 75 (√75)
Here we will define, analyze, simplify, and calculate the square root of 75. We start off with the definition and then answer some common questions about the square root of 75. Then, we will show you
different ways of calculating the square root of 75 with and without a computer or calculator. We have a lot of information to share, so let's get started!
Square root of 75 definition
The square root of 75 in mathematical form is written with the radical sign like this √75. We call this the square root of 75 in radical form. The square root of 75 is a quantity (q) that when
multiplied by itself will equal 75.
√75 = q, where q × q = q² = 75

Is 75 a perfect square?
75 is a perfect square if the square root of 75 equals a whole number. As we have calculated further down on this page, the square root of 75 is not a whole number.
75 is not a perfect square.
Is the square root of 75 rational or irrational?
The square root of 75 is a rational number if 75 is a perfect square. It is an irrational number if it is not a perfect square. Since 75 is not a perfect square, it is an irrational number. This
means that the answer to "the square root of 75?" will have an infinite number of decimals. The decimals will not terminate and you cannot make it into an exact fraction.
√75 is an irrational number
Can the square root of 75 be simplified?
You can simplify 75 if you can make 75 inside the radical smaller. We call this process "to simplify a surd". The square root of 75 can be simplified.
√75 = 5√3

How to calculate the square root of 75 with a calculator
The easiest and most boring way to calculate the square root of 75 is to use your calculator! Simply type in 75 followed by √x to get the answer. We did that with our calculator and got the following
answer with 9 decimal numbers:
√75 ≈ 8.660254038

How to calculate the square root of 75 with a computer
If you are using a computer that has Excel or Numbers, then you can enter SQRT(75) in a cell to get the square root of 75. Below is the result we got with 13 decimals. We call this the square root of
75 in decimal form.
SQRT(75) ≈ 8.6602540378444

What is the square root of 75 rounded?
The square root of 75 rounded to the nearest tenth, means that you want one digit after the decimal point. The square root of 75 rounded to the nearest hundredth, means that you want two digits after
the decimal point. The square root of 75 rounded to the nearest thousandth, means that you want three digits after the decimal point.
10th: √75 ≈ 8.7
100th: √75 ≈ 8.66
1000th: √75 ≈ 8.660
What is the square root of 75 as a fraction?
Like we said above, since the square root of 75 is an irrational number, we cannot make it into an exact fraction. However, we can make it into an approximate fraction using the square root of 75
rounded to the nearest hundredth.
√75 ≈ 8.66/1 ≈ 866/100 ≈ 8 33/50

What is the square root of 75 written with an exponent?
All square roots can be converted to a number (base) with a fractional exponent. The square root of 75 is no exception. Here is the rule and the answer to "the square root of 75 converted to a base
with an exponent?":
√b = b^½

√75 = 75^½

How to find the square root of 75 by long division method
Here we will show you how to calculate the square root of 75 using the long division method with one decimal place accuracy. This is the lost art of how they calculated the square root of 75 by hand
before modern technology was invented.
Step 1)
Set up 75 in pairs of two digits from right to left and attach one set of 00 because we want one decimal:
Step 2)
Starting with the first set: the largest perfect square less than or equal to 75 is 64, and the square root of 64 is 8. Therefore, put 8 on top and 64 at the bottom like this:
Step 3)
Calculate 75 minus 64 and put the difference below. Then move down the next set of numbers.
Step 4)
Double the number in green on top: 8 × 2 = 16. Then, use 16 and the bottom number to make this problem:

16? × ? ≤ 1100

The question marks are "blank" and the same "blank". With trial and error, we found the largest number "blank" can be is 6. Now, enter 6 on top:
That's it! The answer is on top. The square root of 75 with one digit decimal accuracy is 8.6.
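The steps above generalize to any number of digit pairs. Here is a small Python sketch of the same long division method (the function name is ours, not part of the original page):

```python
def sqrt_long_division(n, decimals=1):
    """Digit-by-digit (long division) square root of a non-negative
    integer n, truncated to `decimals` decimal places."""
    digits = str(n)
    if len(digits) % 2:          # pad on the left so digits pair up
        digits = "0" + digits
    digits += "00" * decimals    # Step 1: attach one "00" set per decimal
    root = remainder = 0
    for i in range(0, len(digits), 2):
        remainder = remainder * 100 + int(digits[i:i + 2])
        # Find the largest d with (20*root + d)*d <= remainder.
        # For 75 this is the "16? x ? <= 1100" problem from Step 4.
        d = 9
        while (20 * root + d) * d > remainder:
            d -= 1
        remainder -= (20 * root + d) * d
        root = root * 10 + d
    return root / 10 ** decimals

print(sqrt_long_division(75, 1))  # 8.6
```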
Remember that negative times negative equals positive. Thus, the square root of 75 does not only have the positive answer that we have explained above, but also the negative counterpart.
We often refer to perfect square roots on this page. You may want to use the
list of perfect squares
for reference.
How to assign time durations to each dropdown value in a column?
Wondering if there is a way to assign a duration based on the value selected which would automatically add on to the potential start date and calculate an estimated completion date after a form user
had made their selections.
Thanks in advance. I have no idea where to begin with this so no formula to share.
• You would need to use a nested IF.
=[Potential Start Date]@row + (7 * IF(OR([Deal Structure]@row = "Direct Lease", [Deal Structure]@row = "Company"), 0, IF([Deal Structure]@row = "Non-Union", 3, IF([Deal Structure]@row = "On Site
Project", 4, IF([Deal Structure]@row = "Union", 5, ..........................)))))))
• Thanks Paul. Just for clarity, I am putting this in the “Estimated Completion” column correct?
And the ‘7’ represents one week?
Here's my failed attempt:
=[Potential Start Date]@row + (7 * IF(OR([Union/Non Union Labor]@row = "Union", 6, [Union/Non Union Labor]@row = "Non-Union", 3, IF([Deal Structure]@row = "Direct Lease", 0, IF([Deal Structure]
@row = "On Site Project", 4, IF([Deal Structure]@row = "Modular", 8, IF([Funding Source]@row = "Company", 0, IF([Funding Source]@row = "Client", 10, IF([Funding Source]@row = "Union", 5, IF
([Funding Source]@row = "Landlord", 8))))))))))
• That is correct.
The issue is your syntax. I intentionally chose the ones I used to help show a pattern. The first one I used is the only one that has two different options that would output the same number.
That's where the OR comes in.
=IF(OR(this is true, that is true), "output this", .................
The rest are all individual arguments for each number, so no or is needed in those IF statements.
=IF(OR(this is true, that is true), "output this", IF(something else is true, "output, something else", ...............................
You should be able to start with what I posted (updating the column names(s) as needed) and just continue with the regular nested IF syntax for the options that I didn't type out.
• I understand what you are saying about the OR, but I'm not even 100% sure that these are the correct durations for each value, so I would like for them to be interchangeable if they need to be. With that being said, I updated the durations assigned to the values so I can lose the OR function, and something still isn't right.
Now I have this:
=[Potential Start Date]@row + (7 * IF([Union/Non Union Labor]@row = "Non-Union", 6, IF([Union/Non Union Labor]@row = "Union", 3, IF([Deal Structure]@row = "Direct Lease", 1, IF([Deal Structure]
@row = "On Site Clinic", 4, IF([Deal Structure]@row = "Modular", 8, IF([Funding Source]@row = "Company", 2, IF([Funding Source]@row = "Client", 10, IF([Funding Source]@row = "Union", 5, IF
([Funding Source]@row = "Landlord", 8))))))))))
Although I don't have an error, it doesn't seem to be working properly. The Union/Non Union Labor column works but when I make different selections in the Deal Structure or Funding Source
columns, the date doesn't update.
• Is there one column in particular that overrides another column?
• So if the columns from left to right read Union, On Site Project, and Client, what would be the expected output?
• 6 + 4 + 10 = 20. So 20 weeks from the start date of 3/13/23 would be 7/31/23.
• Ah. Ok. I misunderstood what you were trying to do. In that case you would need three separate nested IF statements (one for each column) and then add them together.
=IF(............., IF(...................)) + IF(..............., IF(................, IF(....................))) + IF(..............., IF(................, IF(...................., IF
• When I change the values in the first two columns the completion date adjusts no problem. I cannot guarantee it will update, however, when any changes are made to the deal structure or funding
source. It's very hit and miss. What am I missing?
=[Potential Start Date]@row + (7 * IF([Union/Non Union Labor]@row = "Non-Union", 3, IF([Union/Non Union Labor]@row = "Union", 6) + IF([Deal Structure]@row = "Direct Lease", 1, IF([Deal Structure]
@row = "On Site Clinic", 4, IF([Deal Structure]@row = "Modular", 8) + IF([Funding Source]@row = "Company", 2, IF([Funding Source]@row = "Client", 10, IF([Funding Source]@row = "Union", 5, IF
([Funding Source]@row = "Landlord", 8))))))))
• I also tried this formula:
=[Potential Start Date]@row + (7 * IF([Union/Non Union Labor]@row = "Non-Union", 3, IF([Union/Non Union Labor]@row = "Union", 6)) + IF([Deal Structure]@row = "Direct Lease", 1, IF([Deal
Structure]@row = "On Site Clinic", 4, IF([Deal Structure]@row = "Modular", 8))) + IF([Funding Source]@row = "Company", 2, IF([Funding Source]@row = "Client", 10, IF([Funding Source]@row =
"Union", 5, IF([Funding Source]@row = "Landlord", 8)))))
...and still not able to get the Estimated Completion date to update accordingly when different values are put in the Deal Structure or Funding Source columns.
• Lets try using a set of parenthesis to add up the IFs first before multiplying by 7.
=Date@row + (7 * (IF(.....) + IF(.....) + IF(.....)))
=[Potential Start Date]@row + (7 * (IF([Union/Non Union Labor]@row = "Non-Union", 3, IF([Union/Non Union Labor]@row = "Union", 6)) + IF([Deal Structure]@row = "Direct Lease", 1, IF([Deal
Structure]@row = "On Site Clinic", 4, IF([Deal Structure]@row = "Modular", 8))) + IF([Funding Source]@row = "Company", 2, IF([Funding Source]@row = "Client", 10, IF([Funding Source]@row =
"Union", 5, IF([Funding Source]@row = "Landlord", 8))))))
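The final working formula is three independent nested-IF lookups whose week counts are summed before being multiplied by 7 days. The same logic written out in Python (column values and durations taken from the thread):

```python
from datetime import date, timedelta

# Weeks contributed by each dropdown value (durations from the thread)
LABOR   = {"Non-Union": 3, "Union": 6}
DEAL    = {"Direct Lease": 1, "On Site Clinic": 4, "Modular": 8}
FUNDING = {"Company": 2, "Client": 10, "Union": 5, "Landlord": 8}

def estimated_completion(start, labor, deal, funding):
    """Potential start date plus the summed durations, like the sheet formula."""
    weeks = LABOR.get(labor, 0) + DEAL.get(deal, 0) + FUNDING.get(funding, 0)
    return start + timedelta(weeks=weeks)

# Union + On Site Clinic + Client = 6 + 4 + 10 = 20 weeks
print(estimated_completion(date(2023, 3, 13), "Union", "On Site Clinic", "Client"))
```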
• That worked!!! It's working perfectly, thank you so much!
Now I have another problem... it's not working in my form:
• It won't populate within the form itself. The calculations are run in the sheet which means the data must be in the sheet (form submitted) for the calculation to run.
• That's a shame. Is there a different way to create something else within Smartsheet that will look like a form but function like a sheet? Or another software you could recommend if Smartsheet doesn't have this capability?
Product Rule For Exponents
Find the derivative of $2^{3x} \cdot 5^{4x^2}$
The Product Rule, attributed to Gottfried Leibniz, is a fundamental principle in calculus that facilitates the differentiation of functions being multiplied together. Put simply, it allows us to
compute derivatives for expressions that are products of two functions. By leveraging our understanding of both the power rule and the sum and difference rule for derivatives, the product rule states
that the derivative of a product of two functions is determined by taking the first function multiplied by the derivative of the second, and adding it to the second function multiplied by the
derivative of the first. This rule is invaluable when dealing with functions that cannot be quickly or easily multiplied directly.
$(uv)' = u'v + uv'$
$\left(\frac{u}{v}\right)' = \frac{u'v - uv'}{v^2}$
$(e^u)' = e^u \cdot u'$
$(u^n)' = nu^{n-1}u'$
$(f(g(x)))' = f'(g(x)) \cdot g'(x)$
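Applying these rules to the stated problem: with $u = 2^{3x}$ and $v = 5^{4x^2}$, we get $u' = 3\ln 2 \cdot 2^{3x}$ and $v' = 8x \ln 5 \cdot 5^{4x^2}$, so $(uv)' = 2^{3x} \cdot 5^{4x^2} \left(3\ln 2 + 8x \ln 5\right)$. A quick numerical check of this closed form against a central-difference approximation:

```python
import math

def f(x):
    return 2 ** (3 * x) * 5 ** (4 * x ** 2)

def df_closed(x):
    # product rule: (uv)' = u'v + uv' = f(x) * (3 ln 2 + 8 x ln 5)
    return f(x) * (3 * math.log(2) + 8 * x * math.log(5))

def df_numeric(x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

for x in (0.0, 0.3, 0.7):
    print(x, df_closed(x), df_numeric(x))  # the two columns agree
```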
Posted by Ashley Oliver a year ago