LtXmlLib20::CBigInteger Class
A class to hold arbitrarily long integer numbers. The class is designed primarily as a container for such numbers; the additional functionality (addition/subtraction/division/multiplication) is
provided on an unsupported basis. It is recommended that such numbers be extracted from this class as a LONGLONG (int64) or string, and used in that form.
Members Description
Abs Changes the current object to contain the absolute value.
CBigInteger Constructor
GetAbs Returns a copy containing the absolute value.
GetNegative Returns a copy containing -1 * the value.
int() Converts the Big Int to an int; throws if out of range.
LONGLONG() Converts the Big Int to a LONGLONG (int64); throws if out of range.
Negate Multiplies the current object by -1.
operator== Compares this object's data with another object's.
operator!= Compares this object's data with another object's.
operator< Compares this object's data with another object's.
operator> Compares this object's data with another object's.
operator<= Compares this object's data with another object's.
operator>= Compares this object's data with another object's.
operator+ Adds a Big Int to the current value.
operator- Subtracts a Big Int from the current value.
operator* Multiplies a Big Int with the current value.
operator/ Divides the current value by a Big Int.
operator% Divides the current value by a Big Int and returns the remainder.
operator+= In-place Add operator
operator-= In-place Subtraction operator
operator*= In-place multiplication operator
operator/= In-place division operator
operator%= In-place modulus operator
Parse Parses the value contained in a string into the class.
Pow10 Gets a BigInteger object that is 10^x.
SquareRoot Returns the square root of the current value.
ToString Converts the current value to a string.
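The narrowing conversions in the table above — int() and LONGLONG() throw when the value does not fit — can be illustrated with a small Python sketch. Python integers are arbitrary-precision, so the range checks are written out explicitly; the names and bounds below are stand-ins that mirror the documented behaviour, not the LtXmlLib20 API itself.

```python
# Illustrative sketch of CBigInteger-style narrowing conversions using
# Python's arbitrary-precision ints. These helpers are stand-ins; they
# are not the LtXmlLib20 API.
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1
INT64_MIN, INT64_MAX = -2**63, 2**63 - 1

def to_int32(big):
    """Mirror of int(): throws if the value is out of int range."""
    if not (INT32_MIN <= big <= INT32_MAX):
        raise OverflowError("value out of int32 range")
    return big

def to_int64(big):
    """Mirror of LONGLONG(): throws if the value is out of int64 range."""
    if not (INT64_MIN <= big <= INT64_MAX):
        raise OverflowError("value out of int64 range")
    return big

big = int("123456789012345678901234567890")  # Parse-like: build from a string
print(to_int64(10**18))                      # 1000000000000000000 fits in int64
try:
    to_int64(big)                            # ~1.2e29 does not fit
except OverflowError as e:
    print("overflow:", e)
```

This also shows why the class recommends extracting values as a string or int64: once a value exceeds the 64-bit range, only the string form preserves it losslessly.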
Model: 19BII, differences from -19B
Name: HP Business Consultant II
Code-Name: ?
Logic: RPN, algebraic w/no precedence
Firsts: first dual logic machine (with 17BII)
last Clamshell
Date: 1990-1
Price: $175
Date: <2003-01-01
Price: ?
Production-Run: ?
Named-Registers: 4 line history, line 2 is LAST or X, Y, Z, T,
last X; 0-9
Machine-State: prefix key state
RPN / algebraic
pending operations
printer status
display contrast
display mode
radix mark
deg / rad
currency conversion factors
registers and history
statistics lists
CFLO lists
current application and invocation history
time and time format
Shift-Keys: gold, white, above
(The = key has a lower label of ENTER.)
[] [] [] [] [] []
[] [] [] MODES PRINTER MAIN
CLEAR DATA E Rv x<>y CLEAR
[] [] [] [] 1/x
[] [] [] [] ^
MATH [] [] [] \v/x
[] MEM SHOW LAST x
( start nesting or Rv
) end nesting or x<>y
= complete operation or ENTER
INPUT use number in line 1 as response or ENTER
LAST use number in history line 2 or Last X
Rv roll down
x<>y exchange x and y
^ move to previous item or roll up
MODES: D/R BEEP PRNTR INTL DEMO MORE
MODES: ALG RPN MORE
There is an oddity in the RPN stack of this machine (and the 17BII): the
stack ranges in size from 1 to 4 entries, depending upon how
much data has been entered. For example, if you do:
- switch to RPN mode
- 1
- Enter
- 2
- Rv
- Rv
You will see "2" in the display, not zero.
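The variable-depth behaviour can be sketched in a few lines of Python. This is an illustrative model of the stack rule described above, not HP's firmware: in "growing" mode the stack starts empty and gains a level with each value entered (up to four); in classic mode it is always four levels deep.

```python
def simulate(depth_grows):
    # Returns the X register after the key sequence: 1, ENTER, 2, Rv, Rv.
    stack = [] if depth_grows else [0, 0, 0, 0]  # index 0 = X register
    def push(v):
        if depth_grows:
            stack.insert(0, v)   # stack grows with each entry...
            del stack[4:]        # ...but never beyond four levels
        else:
            stack.insert(0, v)
            stack.pop()          # classic RPN: T register lost on stack lift
    push(1)                      # "1" ENTER
    push(2)                      # "2" (stack lift)
    for _ in range(2):           # Rv, Rv: roll X out to the top level
        stack.append(stack.pop(0))
    return stack[0]

print(simulate(depth_grows=True))   # 2  (19BII behaviour: only two entries exist)
print(simulate(depth_grows=False))  # 0  (classic four-level RPN)
```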
(from Wlodek Mier-Jedrzejowicz)
Early models of the HP19BII have a bug which is activated when the
HP19BII is in RPN mode and the [+/-] and [x<>y] keys are pressed one
after the other. A harmless way to see the bug is as follows:
1. Put a positive number on the stack, for example press the 9 key.
2. Change the sign by pressing the [+/-] key next to [INPUT].
3. Exchange x and y by pressing the key with x<>y above it.
4. Now type a digit, for example 8; you will see a minus in front of it.
(This is the bug; after +/- to change a positive number to negative,
immediately followed by x<>y and a number, the number has a spurious
minus sign placed in front of it.)
5. Press [INPUT] and you will see the number is really positive.
6. Type another number; it will again have a minus sign.
7. To deactivate the bug safely press CLEAR (the yellow key followed by
the backarrow key).
The bug is dangerous, if you get to step 4 and see a number which is
negative though it should not be, you might be tempted to press the
backarrow key to remove the number and the minus sign. Since the minus
sign is not really there, if you try to delete it, you confuse the
calculator - it stops for a moment, then clears the stack after you
press the next key. If the next key you press is [ON], it sometimes
displays the screen to choose a language, and when you choose a
language, you get MEMORY LOST! The bug can also lock up your keyboard
- in that case you should reset the calculator by pressing the ON key
and the third key from the left in the top row both at the same time.
If you ever see the bug, press CLEAR at once to kill it!
There are some variations on the bug. If you press the backarrow key
until only the minus sign is in the display, then press [INPUT] or try
to use this minus sign, nothing will work except that the [-] key will
put a second minus in the display. If you do steps 1 through 4, then
press [+/-], x<>y, and a digit key again, you will see two minus
signs, and you can repeat this process adding one more minus sign each
time. If you press CLEAR now, then type a number, press [+/-],
[INPUT], x<>y, [+/-], x<>y and then a digit key, you will see a zero
(if the display mode is ALL) or a fraction mark (if the display mode
is FIX) in front of the digit instead of a minus.
The same bug exists in early HP17BII models, though it behaves a little
differently. It was removed early in 1991.
From: "Christoph Giesselink" <cgiess@swol.de>
Newsgroups: comp.sys.hp48
Subject: Re: SysRPL on HP19B2 ???
Date: Mon, 10 Feb 2003 23:42:54 +0100
Message-ID: <b299vb$jka$07$1@news.t-online.com>
The HP19BII has no RPL programming features accessible by the user.
Internally many parts are written in RPL. The HP19BII hardware is very
similar to the HP28S, only the 8 KB RAM chip uses a different memory location.
I added the MMU layout of the HP19BII and HP28S.
0 = address line must be low
1 = address line must be high
X = don't care
The first definition is for the master controller, the second for the slave controller.
@ HP19BII
Y Model (Tycoon II)
000XXXXXXXXXXXXXXXXX 64 ROM
XXXXXXXXXXXXXXXXXXXX 0 R Memory Controller Chip 1 (disabled)
110001XXXXXXXXXXXXXX 8 W Memory Controller Chip 2 (RAM)
1111111111XXXXXXXXXX Display/Timer
1111111111110000XXXX Control Register
001XXXXXXXXXXXXXXXXX 64 ROM
XXXXXXXXXXXXXXXXXXXX 0 R Memory Controller Chip 1 (disabled)
XXXXXXXXXXXXXXXXXXXX 0 R Memory Controller Chip 2 (disabled)
1111111110XXXXXXXXXX Display/Timer
1111111110110000XXXX Control Register
@ HP28S
O Model (Orlando)
000XXXXXXXXXXXXXXXXX 64 ROM
10XXXXXXXXXXXXXXXXXX 0 R Memory Controller Chip 1 (unused)
110XXXXXXXXXXXXXXXXX 32 W Memory Controller Chip 2 (RAM)
1111111111XXXXXXXXXX Display/Timer
1111111111110000XXXX Control Register
001XXXXXXXXXXXXXXXXX 64 ROM
XXXXXXXXXXXXXXXXXXXX 0 R Memory Controller Chip 1 (disabled)
XXXXXXXXXXXXXXXXXXXX 0 R Memory Controller Chip 2 (disabled)
1111111110XXXXXXXXXX Display/Timer
1111111110110000XXXX Control Register
"Matti Overmark" <matti.overmark@www.com> schrieb im Newsbeitrag
> Hi,
> Please don't swallow your morning coffee the wrong way.
> I'm just wondering if there are any hidden features of the HP19BII?
> It's almost (well I am indeed pushing it now, ... ) a HP28, must have
> a Saturn processor etcetera.
> Someone must have found out how to "dissolve" it?
> Just my monday-morning thoughts after grabbing the mentioned calc
> yesterday.
> Best regards, Matti
> +0 deg C in the North of Sweden
A Practical Guide to Pricing Weather Derivatives
Weather conditions play a fundamental role in the dynamics of the world economy, with estimates suggesting that nearly 30% of economic activity and some 70% of businesses are susceptible to
weather-related fluctuations. This substantial impact underlines the critical need to hedge weather risk. Among the available hedging strategies, weather derivatives emerge as a powerful tool,
enabling market participants to mitigate the adverse effects of weather variability on their operations and financial performance.
Market Participants and Purposes
The history behind the development of weather derivatives relates to the deregulation of energy markets in the late 1990s. Previously, energy companies operated under monopolistic conditions, playing
multiple roles such as production and distribution, and could easily shift the burden of unexpected weather-related costs to consumers. However, deregulation led these companies into a competitive
landscape, which significantly increased the demand for ways to hedge against weather-related risks. In seeking solutions, energy companies turned to the capital markets. This search culminated in
the creation of weather derivatives in 1997, an innovative financial tool designed to mitigate the financial uncertainties caused by unpredictable weather patterns.
It is relevant to mention that weather derivatives have not emerged to replace traditional ways of insurance. Weather derivatives are designed to manage the financial impact of predictable, moderate
weather events, such as mild winters or cool summers, as opposed to insurance, which typically covers less frequent, high-impact events like hurricanes or blizzards. The payout mechanism of weather
derivatives is directly linked to the magnitude of the weather event, allowing for proportional compensation, whereas insurance payouts are generally fixed amounts contingent upon proving a financial
loss. This fundamental difference streamlines the claims process for weather derivatives, reducing the need for extensive documentation and legal proceedings. Additionally, weather derivatives are
tradeable securities, offering companies the flexibility to buy or sell coverage based on changing risk assessments or weather forecasts. This contrasts with traditional insurance policies, which are
usually non-transferable contracts between the insurer and the insured.
In general, there are two standard ways to trade weather derivatives. The first is the primary market, where weather hedges are provided for end-users that face weather risk in their business. In
the US, most such derivatives are traded on the CME, where both options and futures are offered for a wide range of US and European cities. Besides the primary market, weather derivatives can also
be traded on secondary markets: counterparties such as insurance companies, commercial banks or energy merchants may be willing to assume the weather risk exposure of an energy company.
Categories of Weather Derivatives
Much of the weather market is dominated by temperature derivatives, which aim to protect the holder from unexpected temperature prints. The underlying used to trade these contracts is either Heating
Degree Days (HDD) or Cooling Degree Days (CDD). For a specific day n, with T_n the daily average temperature and T_ref the reference temperature (typically 18 °C, or 65 °F), these can be defined as:

HDD_n = max(T_ref − T_n, 0)
CDD_n = max(T_n − T_ref, 0)

The daily values are accumulated over the contract period into an index H = Σ_n HDD_n (or CDD_n). The payoff of a call option is therefore equal to

α · max(H − K, 0),

where K is the strike level in degree days and α is the tick size, i.e., the monetary value per degree day.
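The degree-day indices and the call payoff can be computed with a short sketch. The reference temperature of 18 °C matches the convention used later in the article; the strike and tick size below are illustrative values, not market quotes.

```python
# Minimal sketch of cumulative degree-day indices and a degree-day call payoff.
T_REF = 18.0  # reference temperature in Celsius

def hdd(temps):
    """Cumulative Heating Degree Days over a list of daily average temperatures."""
    return sum(max(T_REF - t, 0.0) for t in temps)

def cdd(temps):
    """Cumulative Cooling Degree Days."""
    return sum(max(t - T_REF, 0.0) for t in temps)

def call_payoff(index, strike, tick):
    """Payoff alpha * max(H - K, 0) of a degree-day call option."""
    return tick * max(index - strike, 0.0)

daily_avg = [12.0, 15.5, 19.0, 10.0, 18.0]
h = hdd(daily_avg)                                 # 6 + 2.5 + 0 + 8 + 0 = 16.5
print(h)                                           # 16.5
print(call_payoff(h, strike=10.0, tick=2500.0))    # 2500 * 6.5 = 16250.0
```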
Figure 1: Different option types, MPRA Paper 35037, BSIC
Other types of weather derivatives include rainfall contracts, measured in inches, and offered for trading by CME on a monthly or seasonal basis from March to October; snowfall contracts, also
measured in inches and offered for trading by CME on a monthly or seasonal basis from November through April; and wind derivatives, which depend on the daily average wind speed measured by a
predefined meteorological station over a specified period. However, interest for these products is purely demand driven and is generally initiated by a client wishing to protect against unfavourable
weather conditions. The main reason is that precipitation and wind speed are harder to model than temperature, in addition to the fact that these contracts are offered only during specific seasons. Two other
issues that represent a drawback for demand are geographical and basis risk: the first one results from the distance between the company that wishes to hedge and the station at which the weather
measurement takes place, while the latter arises from the relationship between the hedged volume and the underlying weather index – the payoff of the contract is based on the weather index and it is
unlikely to compensate for all the financial losses.
A Practical Pricing Process: Temperature Options – Sydney
In this section, we aim to provide a practical guide to pricing temperature options using Sydney weather data, consisting of the highest and lowest temperatures recorded each day for the past 165 years.
As can be observed from the graphs, there are trend and seasonality components in the data that must be filtered out before we start pricing the options. We must undertake a few steps in modelling
Daily Average Temperature (DAT): de-trending and removing the seasonality from the raw data, using the filtered time series to model temperature, selecting the method with which we will model the
temperature volatility, and finally using these models to price an option. We dive into each of these steps by providing the mathematical intuition behind it and going over our results.
1. Modelling Seasonality
The first step in pricing temperature options is to decompose the time series into several components, each representing an underlying pattern category. This can be represented as:

T_t = m_t + s_t + R_t,

where m_t is the trend-cycle component, s_t the seasonal component and R_t the residual. One of the most widely used methods is classical decomposition, in which the trend-cycle component m_t is
estimated with a moving average, subtracted from the series, and the de-trended values are averaged over each season to obtain s_t.
It is clear that the residual component exhibits substantial autocorrelation, so we use a more suitable approach based on Fourier series. The model for the deterministic seasonal
mean temperature is:

s̄(t) = a + b·t + α · sin(ω·t + φ)

To observe the trend and overall peaks, the DAT series must first be denoised, as the data presents a lot of variation in noise. Afterwards, by plotting a rolling mean over annual periods, a linear
long-term trend is observed, which motivates the a + b·t term above.
The speed of the seasonal process is assumed to be ω = 2π/365.25, i.e., one full cycle per year.
The deterministic seasonal mean temperature model is then fitted to the data by least squares, expanding the phase-shifted sine into separate sine and cosine regressors.
The first issue that appears when fitting the model to our data is a strong partial autocorrelation at the first time lag when analysing the residuals. Moreover, by plotting the residuals'
distribution against the quantiles of a normal distribution, we notice an upside deviation from the normal on the right tail. This indicates that we should fit an
autoregressive model with one time lag, AR(1), to the error term, which will help us in the next step.
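The seasonality fit described above can be sketched with ordinary least squares on a linear trend plus Fourier regressors. The data below are synthetic, and the single-harmonic form and ω = 2π/365.25 are the assumptions stated in the text; with real DAT data, `t` would be day ordinals.

```python
# Least-squares fit of s(t) = a + b*t + alpha*sin(OMEGA*t) + beta*cos(OMEGA*t),
# the phase-shifted-sine model expanded into separate sin/cos regressors.
import numpy as np

OMEGA = 2 * np.pi / 365.25  # one full seasonal cycle per year

def fit_seasonal_mean(t, temps):
    """Return (a, b, alpha, beta) estimated by ordinary least squares."""
    X = np.column_stack([np.ones_like(t), t,
                         np.sin(OMEGA * t), np.cos(OMEGA * t)])
    coef, *_ = np.linalg.lstsq(X, temps, rcond=None)
    return coef

def seasonal_mean(t, coef):
    a, b, alpha, beta = coef
    return a + b * t + alpha * np.sin(OMEGA * t) + beta * np.cos(OMEGA * t)

# Synthetic check: recover known coefficients from noiseless data.
t = np.arange(0, 5 * 365, dtype=float)
true = np.array([17.0, 1e-4, 4.0, -2.5])
coef = fit_seasonal_mean(t, seasonal_mean(t, true))
print(np.round(coef, 4))   # ~ [17, 0.0001, 4, -2.5]
```

With real data, the residuals temps − seasonal_mean(t, coef) are what exhibit the AR(1) structure noted above.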
2. Modelling Temperature
To model the variation of the estimated temperature model, we need to fit a mean-reverting stochastic process, the most widely used in the literature being a continuous Ornstein-Uhlenbeck (OU)
process. The use of a mean-reverting process is justified by the cyclical nature of temperature. An essential condition that must hold in order to capture the mean-reverting dynamics of temperature is

dT_t = κ(s̄_t − T_t) dt + σ_t dW_t,

where κ is the mean-reversion parameter, s̄_t the deterministic seasonal mean and W_t a standard Brownian motion. To solve this stochastic differential equation (SDE), the Ito-Doeblin formula is required; that is, for an Ito process x and a twice-differentiable function f:

df(t, x_t) = (∂f/∂t) dt + (∂f/∂x) dx_t + (1/2) (∂²f/∂x²) (dx_t)².

The best function to use is an exponential of the mean-reversion parameter with respect to time, multiplied by the Ito process, f(t, x) = e^{κt} x, which gives

d(e^{κt} T_t) = κ e^{κt} s̄_t dt + e^{κt} σ_t dW_t.

Taking the integral over a given unit of time [s, t] yields

T_t = T_s e^{−κ(t−s)} + κ ∫_s^t s̄_u e^{−κ(t−u)} du + ∫_s^t σ_u e^{−κ(t−u)} dW_u,

whose expectation is not equal to the long-run average. This happens because the mean process that the equation is reverting to, s̄_t, is itself time-dependent; to correct for this, the drift is adjusted to

dT_t = [ds̄_t/dt + κ(s̄_t − T_t)] dt + σ_t dW_t,

so that the expectation of T_t converges to the seasonal mean s̄_t.
3. Estimating the Speed of the Mean-Reversion Process and Volatility
To estimate the mean-reversion parameter κ, we use an Euler discretization of the SDE over a time interval Δt of one day:

T_{t+1} − s̄_{t+1} = (1 − κΔt)(T_t − s̄_t) + σ_t ε_t,

where we denoted by ε_t a standard normal innovation. Regressing the de-seasonalized residuals on their first lag gives the AR(1) coefficient φ = 1 − κΔt, from which κ = (1 − φ)/Δt.
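This estimation step can be sketched as follows. It is a pure-Python OLS on synthetic AR(1) data; the daily time step and the regression through the origin are simplifying assumptions of the sketch.

```python
# Estimate the mean-reversion speed kappa from de-seasonalized residuals via
# the AR(1) regression implied by the Euler discretization:
#   phi = 1 - kappa*dt  =>  kappa = (1 - phi) / dt.
import random

def estimate_kappa(residuals, dt=1.0):
    x = residuals[:-1]   # X_t
    y = residuals[1:]    # X_{t+1}
    # OLS slope through the origin (residuals have zero mean by construction)
    phi = sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)
    return (1.0 - phi) / dt

# Synthetic AR(1) path with a known kappa:
random.seed(42)
kappa_true, sigma = 0.25, 1.0
x, path = 0.0, []
for _ in range(20000):
    x = (1.0 - kappa_true) * x + sigma * random.gauss(0.0, 1.0)
    path.append(x)
print(estimate_kappa(path))   # should land near kappa_true = 0.25
```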
Since the standard deviation of temperature is non-constant over time, we would also need to find an equation that would best approximate the volatility component in the Ornstein-Uhlenbeck (OU)
continuous process.
The method we found most suitable for this job is a B-spline approximation (because of its smoothing properties), a piecewise-polynomial curve fit that estimates values between known data points in
each range. This approach requires the following parameters: the knots (joints of the polynomial segments), the spline coefficients and the degree of the spline. We then plot B-spline curve
approximations against the standard deviation of the temperature for different numbers of knots, measuring the residual sum of squares (RSS) for each one. This helps with the choice of knots for our
final model.
We observe that the RSS does not change substantially when including more than 10 knots, while adding further knots makes the model exhibit substantial overfitting. Since we want to present a robust
pricing model, we chose to use 10 knots for our final approximation.
A more systematic and reliable way to choose the number of knots could involve optimizing the trade-off between the RSS and some overfitting measure, which is not within the scope of this article.
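The B-spline fit can be sketched with SciPy's least-squares spline. The standard-deviation data below are synthetic, and the evenly spaced interior-knot placement is an assumption of this sketch; with real data, `daily_std` would be the empirical standard deviation of temperature for each day of the year.

```python
# Least-squares cubic B-spline approximation of the daily temperature
# standard deviation, with 10 interior knots as selected above.
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def fit_vol_spline(day_of_year, daily_std, n_knots=10, degree=3):
    # Interior knots spaced evenly inside the data range (endpoints excluded).
    knots = np.linspace(day_of_year[0], day_of_year[-1], n_knots + 2)[1:-1]
    return LSQUnivariateSpline(day_of_year, daily_std, knots, k=degree)

days = np.arange(365, dtype=float)
true_std = 2.0 + 0.8 * np.sin(2 * np.pi * days / 365.0)
rng = np.random.default_rng(0)
noisy_std = true_std + rng.normal(0.0, 0.05, size=days.size)

spline = fit_vol_spline(days, noisy_std)
rss = float(np.sum((spline(days) - noisy_std) ** 2))
print(rss)   # residual sum of squares of the smoothed fit
```

Sweeping `n_knots` and recording `rss` reproduces the knot-selection plot described in the text.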
4. Monte Carlo simulations of temperature data
To simulate temperature data, we first define our approximation using an Euler scheme:

T_{t+Δt} = T_t + [s̄'(t) + κ(s̄(t) − T_t)] Δt + σ_t √Δt · ε_t,

where s̄ is the seasonal mean, κ the mean-reversion speed, σ_t the fitted volatility and ε_t a standard normal draw.
We then define a Monte Carlo function that takes as inputs the trading dates for which temperature is to be forecasted, the number of simulations, the parameters of the OU process obtained above, the
fitted volatility model, and the first ordinal of the date range.
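A minimal version of such a Monte Carlo function might look as follows. The seasonal mean, its derivative, κ and the (here constant) volatility function are simple stand-ins for the fitted models described above, not the article's estimates.

```python
# Euler-scheme Monte Carlo simulation of the corrected OU temperature process:
#   T_{t+1} = T_t + [sbar'(t) + kappa*(sbar(t) - T_t)] + sigma(t)*eps,  dt = 1 day.
import math, random

def simulate_paths(n_days, n_sims, kappa, s_bar, s_bar_dot, sigma, t0=0, seed=0):
    rng = random.Random(seed)
    paths = []
    for _ in range(n_sims):
        temp = s_bar(t0)                    # start each path on the seasonal mean
        path = []
        for d in range(1, n_days + 1):
            t = t0 + d
            drift = s_bar_dot(t) + kappa * (s_bar(t) - temp)
            temp = temp + drift + sigma(t) * rng.gauss(0.0, 1.0)
            path.append(temp)
        paths.append(path)
    return paths

omega = 2 * math.pi / 365.25
s_bar = lambda t: 17.0 + 4.0 * math.sin(omega * t)       # stand-in seasonal mean
s_bar_dot = lambda t: 4.0 * omega * math.cos(omega * t)  # its derivative
sigma = lambda t: 2.0                                    # stand-in for the spline

paths = simulate_paths(n_days=92, n_sims=500, kappa=0.25,
                       s_bar=s_bar, s_bar_dot=s_bar_dot, sigma=sigma)
print(len(paths), len(paths[0]))   # 500 92
```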
5. Risk Neutral Pricing of Temperature Options
For our risk neutral pricing, we are going to use the already mentioned Ornstein-Uhlenbeck (OU) model for the variation of the estimated temperature, and we will use the B-spline interpolation to
describe the non-constant volatility term of the OU model. In this section, we will compare two pricing methods, a Black-Scholes approximation and a Monte Carlo approximation when pricing a winter
Sydney call option. It is important to clarify that Australia's winter period is from June to August. We use the 10-year Australian bond yield as a proxy for the risk-free rate.
In general, using the Black-Scholes approach makes sense because during winter periods the average temperature rarely exceeds the reference temperature (18 degrees C), at least not in
European countries and in the US. However, Sydney's temperature presents a lot of upper outliers. Specifically, when simulating the Sydney temperature data over the upcoming winter months using the
Monte Carlo model from the previous section, the probability that the average temperature is higher than the reference point varies between 3.3% and 7.6%. This influences the price of the option as
the Black-Scholes approach incorporates a term related to the cumulative distribution function of the temperature. The price of a winter call option expiring on
the 31st of August, estimated by this risk-neutral pricing model, is $99,162.
On another note, the Monte Carlo valuation computes the discounted expectation of the payoff under the risk-neutral probability measure, by simulating the temperatures and then calculating the number
of resulting heating degree days in the valuation period. We will use the fundamental theorem of asset pricing, which states that the arbitrage-free price equals the discounted expectation of the payoff under the risk-neutral measure Q:

V_0 = e^{−r·τ} · E^Q[X],

where X is the option payoff at expiry, r the risk-free rate and τ the time to expiry.
The price of a winter call option expiring on the 31st of August, using the Monte Carlo approximation, is equal to $106,958.
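The Monte Carlo valuation step, discounting the average call payoff on cumulative HDDs across simulated paths, can be sketched as follows. The rate, tick size, strike and the two hand-made "paths" are illustrative, not the article's fitted values.

```python
# Monte Carlo price of an HDD call: discounted mean of tick * max(H - K, 0)
# over simulated temperature paths, with H the cumulative HDD of each path.
import math

def mc_hdd_call_price(sim_paths, strike, tick, r, tau_years, t_ref=18.0):
    payoffs = []
    for path in sim_paths:
        h = sum(max(t_ref - temp, 0.0) for temp in path)   # cumulative HDD
        payoffs.append(tick * max(h - strike, 0.0))
    return math.exp(-r * tau_years) * sum(payoffs) / len(payoffs)

# Degenerate check with two hand-made "paths": HDD indices 600 and 400,
# strike 450, tick 1 -> payoffs 150 and 0 -> mean 75, then discounted.
cold = [12.0] * 100    # 100 days at 12 C -> HDD = 600
mild = [14.0] * 100    # 100 days at 14 C -> HDD = 400
price = mc_hdd_call_price([cold, mild], strike=450.0, tick=1.0,
                          r=0.04, tau_years=0.25)
print(round(price, 2))   # 75 * exp(-0.01) = 74.25
```

In practice, `sim_paths` would be the output of the Monte Carlo simulator from the previous section.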
To conclude, we provide a plot of call option prices for different strikes to showcase that the Black-Scholes method is quite similar to the Monte Carlo valuation. However, the latter is more
suitable for pricing Australian temperature options, and the choice between the two should be rigorously researched when pricing these derivatives in other geographical areas.
[1] Weather Derivatives Tutorial, QuantPy
[2] Financial Management of Weather Risk, MPRA Paper No. 35037
[3] On Modelling and Pricing Weather Derivatives, Fat Tails Financial Analysis AB, Department of Mathematics, KTH
[4] Weather Derivatives: Pricing and Risk Management Applications, Institute of Actuaries of Australia
[5] Weather Derivatives and the Market Price of Risk, Journal of Stochastic Analysis
The Math That’s Too Difficult for Physics | Quanta Magazine
It’s one thing to smash protons together. It’s another to make scientific sense of the debris that’s left behind.
This is the situation at CERN, the laboratory that houses the Large Hadron Collider, the largest and most powerful particle accelerator in the world. In order to understand all the data produced by
the collisions there, experimental physicists and theoretical physicists engage in a continual back and forth. Experimentalists come up with increasingly intricate experimental goals, such as
measuring the precise properties of the Higgs boson. Ambitious goals tend to require elaborate theoretical calculations, which the theorists are responsible for. The experimental physicists’ “wish
list is always too full of many complicated processes,” said Pierpaolo Mastrolia, a theoretical physicist at the University of Padua in Italy. “Therefore we identify some processes that can be
computed in a reasonable amount of time.”
By “processes,” Mastrolia is referring to the chain of events that unfolds after particles collide. For example, a pair of gluons might combine through a series of intermediate steps — particles
morphing into other particles — to form a Higgs boson, which then decays into still more particles. In general, physicists prefer to study processes involving larger numbers of particles, since the
added complexity assists in searches for physical effects that aren’t described by today’s best theories. But each additional particle requires more math.
To do this math, physicists use a tool called a Feynman diagram, which is essentially an accounting device that has the look of a stick-figure drawing: Particles are represented by lines that collide
at vertices to produce new particles. Physicists then take the integral of every possible path an experiment could follow from beginning to end and add those integrals together. As the number of
possible paths goes up, the number of integrals that theorists must compute — and the difficulty of calculating each individual integral — rises precipitously.
When deciding on the kinds of collisions they want to study, physicists have two main choices to make. First, they decide on the number of particles they want to consider in the initial state (coming
in) and the final state (going out). In most experiments, it’s two incoming particles and anywhere from one to a dozen outgoing particles (referred to as “legs” of the Feynman diagram). Then they
decide on the number of “loops” they’ll take into account. Loops represent all the intermediate collisions that could take place between the initial and final states. Adding more loops increases the
precision of the measurement. They also significantly add to the burden of calculating Feynman diagrams. Generally speaking, there’s a trade-off between loops and legs: If you want to take into
account more loops, you need to consider fewer legs. If you want to consider more legs, you’re limited to just a few loops.
“If you go to two loops, the largest number [of legs] going out is two. People are pushing toward three particles going out at two loops — that’s the boundary that’s really beyond the state of the
art,” said Gavin Salam, a theoretical physicist at CERN.
Physicists already have the tools to calculate probabilities for tree-level (zero loop) and one-loop diagrams featuring any number of particles going in and out. But accounting for more loops than
that is still a major challenge and could ultimately be a limiting factor in the discoveries that can be achieved at the LHC.
“Once we discover a particle and want to determine its properties, its spin, mass, angular momentum or couplings with other particles, then higher-order calculations” with loops become necessary,
said Mastrolia.
And that’s why many are excited about the emerging connections between Feynman diagrams and number theory that I describe in the recent article “Strange Numbers Found in Particle Collisions.” If
mathematicians and physicists can identify patterns in the values generated from diagrams of two or more loops, their calculations would become much simpler — and experimentalists would have the
mathematics they need to study the kinds of collisions they’re most interested in.
Question ID - 100628 | SaraNextGen Top Answer
Two horizontal discs of different radii are free to rotate about their central vertical axes. One is given some angular velocity; the other is stationary. Their rims are now brought in contact. There
is friction between the rims. Then
a) The force of friction between the rims will disappear when the discs rotate with equal angular speeds
b) The force of friction between the rims will disappear when they have equal linear velocities
c) The angular momentum of the system will be conserved
d) The rotational kinetic energy of the system will not be conserved
Manual - BotRate Community is your partner for financial analysis and asset-management automation.
On the Statistics tab, you can view various statistics
• according to your current account
• for trading reports from files
• for any account from TopRate
The "What if...?" mode is also available, in which you can test trade filtering hypotheses.
View of the Statistics tab
Parametric statistics are displayed in three tabs on the left side of the application:
Graphic statistics are displayed in the central part and are presented in the form of 11 graphs and charts:
Trade distribution
Parametric statistics
• Growth – TWR (time-weighted return) for all time, by balance (otherwise by equity, if available)
• Growth for today – TWR for the current day
• Growth for this month – TWR since the first of the current month
• 30 Day Growth – TWR over the last 30 days
• Average monthly growth is the arithmetic mean of all monthly TWRs
• Max. drawdown – maximum account drawdown by balance (otherwise by equity, if available)
• Absolute growth - the ratio of the amount of profit to the amount of balance operations
• Recovery factor – the ratio of the current TWR to the maximum TWR drawdown (according to the balance)
• Profit factor – ratio of profit to loss
• Sharpe Ratio – Calculated Sharpe Ratio
• Mat. expectation – the ratio of the amount of profit to the number of trades
• Volatility - calculated volatility ratio
• Trading activity – the ratio of the time of having open positions to the entire lifetime of the account
• Algo trading – the ratio of the number of positions opened by Expert Advisors to the total number of positions
• Last trade – the time of the most recent trade on the account
• Positions profitable – number of positions closed with profit
• Unprofitable positions – the number of positions closed with a loss
• Long positions – number of BUY positions
• Short positions – number of SELL positions
• Average holding time is the arithmetic mean of the time from opening to closing positions
• Series of profitable positions - the largest number of consecutive profitable positions
• Series of unprofitable positions – the largest number of consecutive unprofitable positions
• Trading days – the number of days in which trading operations were performed. And also the ratio of this amount to the entire lifetime of the account.
• Trading days profitable – the number of days that closed with a profit
• Unprofitable trading days – the number of days that closed with a loss
• Series of profitable days – the largest number of consecutive days that closed with a profit
• Series of unprofitable days – the largest number of consecutive days that closed with a loss
• Trading weeks – the number of weeks in which trading operations were performed.
• Positions per week – the ratio of the total number of positions to the number of trading weeks.
• Broker name – the name of the broker where this account is located
• Leverage – leverage of this account
• Type – broker account type
• Time zone – which time zone the broker operates in
• Start – the time of the first known trade on this account
• Age – time since Start
• Deposits – amount and number of deposits
• Withdrawals – amount and number of withdrawals from the account
• Total result - the sum of profit and loss for all positions in the money
• Total profit – the sum of only profits for all positions in money and points
• Total loss – the sum of only the loss on all positions in money and points
• Balance – current account balance
• Maximum balance – time and value of the account balance when it was at its maximum
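As a sketch, a few of the metrics defined above can be computed from a list of closed-trade results (profits positive, losses negative) and a balance curve. The formulas follow the definitions in this manual; the data and function names are illustrative.

```python
# Compute Profit factor, Mat. expectation and Max. drawdown as defined above.
def profit_factor(trades):
    """Profit factor - ratio of gross profit to gross loss."""
    gross_profit = sum(t for t in trades if t > 0)
    gross_loss = -sum(t for t in trades if t < 0)
    return gross_profit / gross_loss if gross_loss else float("inf")

def expectation(trades):
    """Mat. expectation - ratio of the amount of profit to the number of trades."""
    return sum(trades) / len(trades)

def max_drawdown(balance_curve):
    """Max. drawdown - maximum relative drop from a running balance peak."""
    peak, worst = balance_curve[0], 0.0
    for b in balance_curve:
        peak = max(peak, b)
        worst = max(worst, (peak - b) / peak)
    return worst

trades = [120.0, -40.0, 75.0, -55.0, 60.0]
balance = [1000, 1120, 1080, 1155, 1100, 1160]
print(profit_factor(trades))          # 255 / 95 = 2.684...
print(round(expectation(trades), 2))  # 32.0
```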
Graphic statistics
Displays how the TWR value has changed. An option to view TWR values for each month is also available.
Displays how the balance has changed, as well as equity, if available
Displays how the value of the total amount of profit and loss has changed. Also available is the option to view profit values for each month.
Displays how the drawdown on the account has changed
Displays the size of open positions
Distribution by days of the week
Displays the total number and profit of positions opened on one of the days of the week.
Distribution by hour
Displays the total number and profit of positions opened at one of the hours of the day.
Distribution by hold time
Displays the number and profit of positions depending on the duration of their life.
Distribution by potential profit/loss
Displays the position of each position on the MFE charts - the maximum profit value that the position has achieved during its life. In other words, an estimate of lost profits. And MAE - the maximum
value of the loss, which was achieved by the position during its life. In other words, the estimate of oversitting the loss.
Character distribution
Displays the total number and profit of positions for each symbol.
Distribution by magics
Displays the total number and profit of positions for each magic number.
Evolution Mapping for Describing Statistics of the Non-Linear Cosmic Velocity Field Using N-body Simulations
Core Concepts
Evolution mapping, a technique for simplifying the description of cosmological parameter dependence in non-linear structure formation, can be effectively applied to velocity statistics, specifically
the velocity divergence auto-power spectrum and its cross-power spectrum with the density field.
How might the inclusion of baryonic effects in the simulations impact the accuracy of evolution mapping for velocity statistics?
Answer: Including baryonic effects in simulations would likely impact the accuracy of evolution mapping for velocity statistics, particularly at small scales and late times. Here's why:
• Scale-dependent impact: Baryonic processes like star formation, supernova feedback, and AGN activity primarily affect the distribution of matter on small scales (clusters of galaxies and smaller). These processes can redistribute both mass and momentum, altering the velocity field in these regions compared to a dark-matter-only simulation.
• Breaking the degeneracy: Evolution mapping relies on the premise that cosmologies with the same shape parameters and σ12 exhibit similar non-linear evolution. However, baryonic physics can introduce additional scale-dependent features in the power spectra (both matter and velocity) that are not captured by the linear power spectrum amplitude alone. This can break the degeneracy between different cosmologies, making the mapping less accurate.
• Impact on velocity statistics: Baryonic feedback can inject energy into the intergalactic medium, leading to outflows that modify the velocity field. This can alter the velocity divergence power spectrum, Pθθ(k), and its cross-power spectrum with the density field, Pδθ(k), particularly at high k values corresponding to small scales.
Therefore, while evolution mapping provides a valuable framework for understanding the cosmological dependence of velocity statistics in dark-matter-only simulations, incorporating baryonic effects will be crucial for obtaining accurate predictions on small scales and for connecting to observations. This might involve developing more complex mapping relations or modifications to the existing framework that account for the scale-dependent impact of baryonic physics.
Could there be alternative explanations, beyond differences in growth histories, for the observed deviations from perfect evolution mapping in the non-linear regime?
Answer: Yes, besides differences in growth histories, several alternative explanations could contribute to the observed deviations from perfect evolution mapping in the non-linear regime:
• Non-locality of gravitational interactions: Evolution mapping assumes that the non-linear evolution of a given scale primarily depends on the linear power spectrum at that scale. However, gravitational interactions are inherently non-local, meaning that modes at different scales can couple. This coupling becomes more significant in the non-linear regime and could lead to deviations from the mapping, especially at smaller scales where mode coupling is stronger.
• Approximations in the simulation methods: Numerical simulations employ various approximations, such as finite resolution, discrete time-stepping, and specific force-calculation algorithms. These approximations can introduce numerical artifacts that might not scale identically with σ12 in different cosmologies, leading to deviations from perfect evolution mapping.
• Inadequacy of the suppression factor: While the suppression factor, g(a) = D(a)/a, effectively captures some of the differences in growth histories, it might not fully encapsulate all the relevant aspects of the non-linear evolution. Other factors, such as the growth rate of structure or the non-linear coupling between the density and velocity fields, could play a role and require a more sophisticated description beyond the simple suppression factor.
• Impact of massive neutrinos: Although the provided context focuses on cosmologies without massive neutrinos, their presence introduces a scale-dependent growth factor due to their free-streaming nature. This scale dependence can lead to deviations from evolution mapping, even in the linear regime, and would further complicate the picture in the non-linear regime.
Investigating these alternative explanations will require careful analysis and comparison of simulations with different codes, resolutions, and cosmological parameters. Developing more accurate theoretical models that capture the complexities of non-linear structure formation will also be crucial for fully understanding the limitations of evolution mapping and refining its applicability.
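As a concrete illustration of the suppression factor mentioned above, here is a sketch that evaluates g(a) = D(a)/a numerically for a flat ΛCDM cosmology (the parameter values and normalization are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def growth_suppression(a, omega_m=0.3, omega_l=0.7, n=20000):
    """g(a) = D(a)/a, with D(a) = (5/2) Om E(a) * int_0^a da' / (a' E(a'))^3,
    normalized so that D(a) = a (i.e. g = 1) in an Einstein-de Sitter universe."""
    E = lambda x: np.sqrt(omega_m * x**-3 + omega_l)  # H(a)/H0 for flat LCDM
    ap = np.linspace(1e-8, a, n)
    f = 1.0 / (ap * E(ap))**3
    integral = np.sum((f[1:] + f[:-1]) * np.diff(ap)) / 2.0  # trapezoid rule
    return 2.5 * omega_m * E(a) * integral / a

print(growth_suppression(1.0))            # roughly 0.78 for Omega_m = 0.3
print(growth_suppression(1.0, 1.0, 0.0))  # approximately 1.0: no suppression in EdS
```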
If the universe's expansion were to deviate significantly from the standard model, how might that affect the applicability of evolution mapping in cosmology?
Answer: If the universe's expansion deviated significantly from the standard model, the applicability of evolution mapping in cosmology would be challenged, potentially requiring substantial modifications or even rendering it ineffective. Here's why:
• Modified growth of structure: The standard model's expansion history, primarily governed by dark energy and dark matter, dictates the growth rate of density fluctuations. Significant deviations from this expansion history would directly impact how structures form and evolve over time. This could alter the relationship between the linear and non-linear power spectra, making it difficult to establish a simple mapping based on σ12.
• Scale dependence in growth: Deviations from the standard expansion could introduce scale-dependent growth, meaning that density fluctuations at different scales evolve differently. This would break the fundamental assumption of evolution mapping that the growth factor is scale-independent, making it challenging to map the evolution of different cosmologies using a single parameter like σ12.
• Impact on the velocity field: The universe's expansion history also influences the peculiar velocity field. Significant deviations from the standard model could lead to a more complex relationship between the density and velocity fields, making it harder to predict velocity statistics based on the matter power spectrum alone.
In essence, evolution mapping relies on the specific relationship between the linear and non-linear evolution of density fluctuations within the framework of the standard cosmological model. If the universe's expansion were to deviate significantly, this relationship would likely change, necessitating a reevaluation and potential modification of the evolution-mapping approach. Some possible scenarios and their implications:
• Early dark energy: The presence of early dark energy could alter the growth of structure at early times, potentially requiring a recalibration of the mapping relation or the inclusion of additional parameters to account for the modified growth history.
• Modified gravity: Theories of modified gravity often predict different growth rates for density fluctuations compared to General Relativity. This would necessitate developing entirely new mapping relations tailored to the specific predictions of the modified gravity model.
• Interactions between dark energy and dark matter: If dark energy interacts with dark matter, it could directly influence the growth of structure and the velocity field, making it challenging to establish a simple mapping based solely on the linear power spectrum amplitude.
Exploring these scenarios through simulations and theoretical modeling would be crucial for assessing the robustness of evolution mapping and adapting it to alternative cosmological models.
True Anomaly of Asymptote in Hyperbolic Orbit given Eccentricity Calculator | Calculate True Anomaly of Asymptote in Hyperbolic Orbit given Eccentricity
What is Asymptote in Hyperbolic orbit ?
In the context of hyperbolic orbits or hyperbolic trajectories, an asymptote refers specifically to the straight lines that the hyperbola approaches but never intersects. These asymptotes determine
the shape and orientation of the hyperbolic trajectory relative to its focus.
How to Calculate True Anomaly of Asymptote in Hyperbolic Orbit given Eccentricity?
True Anomaly of Asymptote in Hyperbolic Orbit given Eccentricity calculator uses True Anomaly of Asymptote in Hyperbolic Orbit = acos(-1/Eccentricity of Hyperbolic Orbit) to calculate the True
Anomaly of Asymptote in Hyperbolic Orbit, The True Anomaly of Asymptote in Hyperbolic Orbit given Eccentricity refers to the angle between the asymptote (the line that the hyperbola approaches but
never intersects) and the line connecting the focus of the hyperbola to the periapsis (the closest approach to the central body). This angle is important for understanding the orientation of the
hyperbolic orbit. Given the eccentricity (e) of the hyperbolic orbit, the true anomaly of the asymptote can be calculated using trigonometric functions. True Anomaly of Asymptote in Hyperbolic Orbit is
denoted by the θ∞ symbol.
How to calculate True Anomaly of Asymptote in Hyperbolic Orbit given Eccentricity using this online calculator? To use this online calculator for True Anomaly of Asymptote in Hyperbolic Orbit given
Eccentricity, enter Eccentricity of Hyperbolic Orbit (e[h]) and hit the calculate button. Here is how the True Anomaly of Asymptote in Hyperbolic Orbit given Eccentricity calculation can be explained
with given input values -> 2.414 = acos(-1/1.339), i.e. about 138.3° for e = 1.339.
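A minimal Python sketch of the same calculation (the helper name is mine, not from the site):

```python
import math

def true_anomaly_of_asymptote(e):
    """theta_inf = acos(-1/e); only defined for hyperbolic orbits (e > 1)."""
    if e <= 1.0:
        raise ValueError("a hyperbolic orbit requires eccentricity > 1")
    return math.acos(-1.0 / e)

theta = true_anomaly_of_asymptote(1.339)
print(math.degrees(theta))  # about 138.3 degrees
```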
The Physics of Hitting Home Runs
Since 2014, the number of home runs hit in MLB games has risen 47%. While some blame PEDs, the physics offers a simpler explanation. The primary factor in hitting a home run is bat speed:
every extra 1 mph of bat speed adds about 1.2 mph of ball speed, making the ball fly roughly 6 feet further. Launch angle also affects whether a hit becomes a home run; the best launch angle is
somewhere between 25 and 35 degrees. If the ball is hit on its upper half, it will be a ground ball with a downward velocity. Lastly, hitting the ball with the sweet spot of the bat (5-6 inches from
the end of the bat) minimizes the vibration of the bat and thus maximizes the energy transferred to the ball.
Players now are taught to try to hit the ball just below its center to create more home runs, and as a result, strikeout and, most importantly, fly-out percentages have skyrocketed since 2014.
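The quoted numbers imply a simple linear rule of thumb, sketched here (the coefficients come straight from the claim above, not from any physical model of my own):

```python
# Each extra 1 mph of bat speed -> ~1.2 mph of ball (exit) speed -> ~6 ft of carry.
def extra_ball_speed_mph(extra_bat_speed_mph):
    return 1.2 * extra_bat_speed_mph

def extra_carry_feet(extra_bat_speed_mph):
    return 6.0 * extra_bat_speed_mph

print(round(extra_ball_speed_mph(3.0), 1), extra_carry_feet(3.0))  # 3.6 18.0
```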
Source: NOVA
FIRE - Sampling strategy
May 27, 2014
On the issue of HMC step size, I wondered if anyone had published on the relationship between ideal step size and the log-posterior's second derivative. I found an answer in section 4.2 of Neal 2011
[pdf], which poses the question in the context of a Gaussian-shaped posterior. Using eigenvalue analysis, he shows that a step size larger than 2 standard deviations results in unstable dynamics, and
the state will diverge to infinity. From Neal (2011):
For low-dimensional problems, using a value for ε that is just a bit below the stability limit is sufficient to produce a good acceptance rate. For high-dimensional problems, however, the stepsize
may need to be reduced further than this to keep the error in H to a level that produces a good acceptance probability.
If we can estimate the Hessian of the log-posterior (perhaps diagonally), we can use this to choose the step size as some fraction of that (user-settable). Thus, our tuning run will perform a Laplace approximation:
1. perform local minimizations of the negative log posterior.
2. estimate the diagonal Hessian at the minimum.
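A toy sketch of those two tuning steps on a hypothetical 2-D Gaussian target (the target and the 0.5 fraction are illustrative assumptions, not project code):

```python
import numpy as np

def neg_log_post(q):
    """Hypothetical target: anisotropic Gaussian with variances 4 and 0.25."""
    return 0.5 * (q[0] ** 2 / 4.0 + q[1] ** 2 / 0.25)

def diag_hessian(f, q, h=1e-4):
    """Diagonal of the Hessian by central finite differences."""
    d = np.zeros_like(q)
    for i in range(len(q)):
        e = np.zeros_like(q)
        e[i] = h
        d[i] = (f(q + e) - 2.0 * f(q) + f(q - e)) / h ** 2
    return d

q_min = np.zeros(2)  # step 1: minimum of the negative log posterior
sigma = 1.0 / np.sqrt(diag_hessian(neg_log_post, q_min))  # step 2: per-dim std dev
eps = 0.5 * sigma.min()  # user-settable fraction; stability limit is ~2 sigma
print(sigma, eps)        # sigma is approximately [2.0, 0.5], eps approximately 0.25
```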
Sampling the clustering model
I've devised the strategy below for sampling the clustering model. I have determined that all variables can be Gibbs sampled except the latent piecewise linear model.
1. Gibbs sample new membership values
2. update each cluster's piecewise linear model using HMC
3. Gibbs sample observation model using known latent values
Step 3 needs some explanation.
Bayesian multiple linear regression
Let \(x\) be the column vector of latent immune activity values at each time \(t\). These are provided by the (fixed) piecewise linear model. Recall that the observation model is:
\[ y = A x + B + \epsilon \]
where \(\epsilon\) is a normally distributed random variable with variance \(\sigma^2\).
Thus, the error is given by
\[ e = \left( \begin{pmatrix} A & B \end{pmatrix} \begin{pmatrix} x^\top \\ 1 \end{pmatrix} - y^\top \right) / \sigma \]
Let \(\beta = (\mathrm{vec}(A)^\top\ \mathrm{vec}(B)^\top)^\top\) be the concatenated vectorization of A and B, and \(X = (x\ 1)^\top\). Following the [derivation provided by wikipedia](http://en.wikipedia.org/wiki/Bayesian_multivariate_linear_regression), the resulting log-likelihood can be written as a Gaussian w.r.t. \(\beta\): \[ -(\beta - \hat{\beta})^\top (\Sigma_\epsilon^{-1} \otimes X X^\top) (\beta - \hat{\beta}) \]
where \(\otimes\) is the Kronecker product; in our case, the observation noise variance \(\Sigma_\epsilon\) is simply \(I \sigma^2\).
If we assume a uniform prior over \(A\) and \(B\), then this Gaussian becomes our conditional posterior distribution, which we can easily sample from with Gibbs.
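A minimal numerical sketch of this Gibbs step for a single observed trace (the synthetic data and names are mine; the real model has matrix-valued A and B):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent trajectory x(t) and noisy 1-D observations y = a x + b + eps
T = 200
x = np.linspace(0.0, 1.0, T)
a_true, b_true, sigma = 2.0, -0.5, 0.1
y = a_true * x + b_true + sigma * rng.normal(size=T)

# Design matrix X = (x 1)^T; with a flat prior, the conditional posterior of
# beta = (a, b) is Gaussian with mean beta_hat (the least-squares fit) and
# covariance (X^T X / sigma^2)^{-1}.
X = np.column_stack([x, np.ones(T)])
precision = X.T @ X / sigma ** 2
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
beta_draw = rng.multivariate_normal(beta_hat, np.linalg.inv(precision))
print(beta_hat)  # close to (2.0, -0.5)
```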
Posted by
Kyle Simek
Boolean logic
Summarized Boolean logic: rules and transformations
Some of it also doubles as basic set theory, negation (¬, &not;) becoming complements, conjunction (∧, AND, &and;) becoming the intersection, disjunction (∨, OR, &or;) becoming the union.
De Morgan's laws
Turn ANDs into ORs and back again, as long as you have NOTs.
• ¬(A or B) = ¬A and ¬B
• ¬(A and B) = ¬A or ¬B
From this, you can derive:
• A or B = ¬(¬A and ¬B)
• A and B = ¬(¬A or ¬B)
Boolean distributivity
There are two rules of replacement. These are very handy when transforming into and between different normal forms.
1. A and (B or C) = (A and B) or (A and C)
2. A or (B and C) = (A or B) and (A or C)
Implication →
Wikipedia calls this the material conditional. (The truth table for this is the one I always forget; it has one false, but where?)
a → b is equivalent to, and CNF-encoded as, (¬a ∨ b). By extension, a → (b ∨ c) becomes (¬a ∨ b ∨ c), and a → (b ∧ c) becomes (¬a ∨ b) ∧ (¬a ∨ c).
Equivalence ↔

Or, as Wikipedia calls it, the logical biconditional. ↔'s HTML entity is &harr;.
CNF-encoding a ↔ b: equivalent to (a → b) ∧ (b → a), therefore equivalent to (¬a ∨ b) ∧ (a ∨ ¬b).
XOR, or the exclusive disjunction
Symbols: ⊻ (&veebar;, &#8891;), ⊕ (&oplus;).
CNF-encoding a ⊻ b: equivalent to (¬a ∨ ¬b) ∧ (a ∨ b).
Conjunctive normal form
CNF is an AND of OR clauses. Given a clause that already consists only of ANDs and ORs (and NOTs, of course), but haphazardly organized, transforming into CNF goes thusly:
1. Move NOTs inwards, with repeated De Morgan's laws and eliminating double negatives (¬¬X = X);
2. Distribute ORs inwards, by turning X or (Y and Z) into (X or Y) and (X or Z)
An example transformation:
1. Initial formula: (A and B) or (A and C) or (B and ¬C). All NOTs are as inwards as they can be.
2. Parenthesize, so we only deal with clauses of two: (A and B) or ((A and C) or (B and ¬C)).
3. Distribute the OR of the inner formula, with X=(A and C), Y=B and Z=¬C: (A and B) or [ ((A and C) or B) and ((A and C) or ¬C) ].
4. Shuffle the order around: (A and B) or [(B or (A and C)) and (¬C or (A and C))].
5. Apply the same rule again to the subclauses "B or (A and C)" and "¬C or (A and C)":
□ B or (A and C) is turned into (A or B) and (B or C), with "B or A" reordered into "A or B";
□ ¬C or (A and C) is turned into (A or ¬C) and (C or ¬C). Interestingly "(C or ¬C)" is a tautology: it cannot be false.
Pop the transformed subclauses back into the main formula: (A and B) or [((A or B) and (B or C)) and ((A or ¬C) and (C or ¬C))]
6. We can drop some extra parentheses within the right subclause, since it's in CNF now: (A and B) or [(A or B) and (B or C) and (A or ¬C) and (C or ¬C)]
7. Now, to deal with the other side. So we don't get mixed up, let's sweep the right subclause under the symbol D: D=[(A or B) and (B or C) and (A or ¬C) and (C or ¬C)], and our formula is (A and
B) or D = D or (A and B).
8. Distribute again: D or (A and B) becomes (D or A) and (D or B).
9. Expand D. The previous formula becomes ([(A or B) and (B or C) and (A or ¬C) and (C or ¬C)] or A) and ([(A or B) and (B or C) and (A or ¬C) and (C or ¬C)] or B).
10. Left subclause: A or [(A or B) and (B or C) and (A or ¬C) and (C or ¬C)].
1. Let's do this mechanically. Deep parenthesization first: A or ((A or B) and ((B or C) and ((A or ¬C) and (C or ¬C)))).
2. (A or (A or B)) and (A or (((B or C) and ((A or ¬C) and (C or ¬C)))))
3. (A or (A or B)) and ((A or (B or C)) and (A or (((A or ¬C) and (C or ¬C)))))
4. (A or (A or B)) and ((A or (B or C)) and ((A or (A or ¬C)) and (A or (C or ¬C))))
5. Collapse parentheses, because "(X or (Y or Z)) = X or Y or Z": (A or A or B) and (A or B or C) and (A or A or ¬C) and (A or C or ¬C)
6. Remove redundancies, because "X or X" is just X: (A or B) and (A or B or C) and (A or ¬C) and (A or C or ¬C)
11. Right subclause: B or [(A or B) and (B or C) and (A or ¬C) and (C or ¬C)].
1. We could do the same deep parenthesization thing as with the left subclause... or, since we're humans, we can skip ahead.
2. The rule, a bit informally, is that X or (Y and Z and W and ... and Я) = (X or Y) and (X or Z) and (X or W) and ... and (X or Я).
3. Therefore, B or [(A or B) and (B or C) and (A or ¬C) and (C or ¬C)] becomes...
4. (B or A or B) and (B or B or C) and (B or A or ¬C) and (B or C or ¬C), which is equivalent to...
5. (A or B) and (B or C) and (B or A or ¬C) and (B or C or ¬C)
12. Pop the left and right subclause back into the formula: ((A or B) and (A or B or C) and (A or ¬C) and (A or C or ¬C)) and ((A or B) and (B or C) and (B or A or ¬C) and (B or C or ¬C)).
13. Drop parentheses: (A or B) and (A or B or C) and (A or ¬C) and (A or C or ¬C) and (A or B) and (B or C) and (B or A or ¬C) and (B or C or ¬C).
14. Reorder: (A or B) and (A or B) and (A or B or C) and (A or B or ¬C) and (A or ¬C) and (A or C or ¬C) and (B or C) and (B or C or ¬C).
15. Drop the extra (A or B), because it's redundant. Solution: (A or B) and (A or B or C) and (A or B or ¬C) and (A or ¬C) and (A or C or ¬C) and (B or C) and (B or C or ¬C).
16. This still isn't in its most minimal form, though. Optimizations we can make:
□ ((A or B) and (A or B or C)) is equivalent to (A or B). This is clear if one writes out their truth tables. (A or B) is true 6 out of 8 times, while (A or B or C) is true 7 out of 8 times –
the same six that (A or B) is true, and also the A=0, B=0, C=1 case. When ANDed together, that one case drops out and we are left with the six cases of (A or B).
□ For the same reason, ((A or B) and (A or B or ¬C)) is equivalent to (A or B).
□ (A or C or ¬C) is always true, because (C or ¬C) is always true. Having an always-true clause in a CNF formula is pointless, because CNF formulas essentially work by defining where falsehoods
must be, and an always-true clause doesn't help with that; x and 1 equals x.
□ (B or C or ¬C) is likewise always true, and thus pointless in the formula.
17. After eliminating the useless clauses, we are left with (A or B) and (B or C) and (A or ¬C).
18. One further optimization becomes apparent if we write out truth tables: we can get rid of (A or B) and just have (B or C) and (A or ¬C) as the formula. All of the falsehoods we need are found in
(A or ¬C) and (B or C).
A B C | (A or B) | (A or ¬C) | (B or C) | target
0 0 0 |    0     |     1     |    0     |   0
0 0 1 |    0     |     0     |    1     |   0
0 1 0 |    1     |     1     |    1     |   1
0 1 1 |    1     |     0     |    1     |   0
1 0 0 |    1     |     1     |    0     |   0
1 0 1 |    1     |     1     |    1     |   1
1 1 0 |    1     |     1     |    1     |   1
1 1 1 |    1     |     1     |    1     |   1
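The equivalence claimed in step 18 is small enough to verify by brute force; a quick check in Python:

```python
from itertools import product

def original(a, b, c):
    # Starting formula: (A and B) or (A and C) or (B and not C)
    return (a and b) or (a and c) or (b and not c)

def minimized_cnf(a, b, c):
    # Final result: (A or not C) and (B or C)
    return (a or not c) and (b or c)

# Exhaustively compare all 8 assignments of (A, B, C).
assert all(original(*bits) == minimized_cnf(*bits)
           for bits in product([False, True], repeat=3))
```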
Applications: DIMACS and WCNF file formats
These are standard text-based file formats used to pass CNF and weighted-CNF formulas to SAT and MaxSAT solvers.
DIMACS consists of lines of ASCII text. A line starting with c is a comment and ignored. The mandatory preamble line has the form p cnf [num-variables] [num-clauses]. The lines after that provide
clauses, a clause being a series of (base-10) numbers, a positive number i referring to the variable x[i] and a negative number −i referring to the negated variable ¬x[i]. Each clause is terminated
by a single 0.
DIMACS WCNF is very similar, but gives weights to each clause. The preamble is changed to p wcnf [num-vars] [num-clauses] [top-weight], and every clause has a number – the weight of the clause –
prepended to it. The top-weight is the weight that is to be regarded as "infinite", used to mark hard clauses that must be satisfied. For example, p wcnf 5040 59049 9999 is the preamble of a MaxSAT
instance with 5040 variables, 59049 clauses, and with hard clauses having a weight of 9999, with any clause with a weight less than 9999 being soft, i.e. optional.
oatcookies.neocities.org | last edit: 2021-05-15, created on or before 2021-03-07 | index | {"url":"https://oatcookies.neocities.org/condisjunctions","timestamp":"2024-11-04T01:56:56Z","content_type":"text/html","content_length":"12164","record_id":"<urn:uuid:9588454d-470d-4cbc-9a2e-b143859bc8ea>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00770.warc.gz"} |
Servo Torque Modeling and Simulation in context of servo torque
05 Oct 2024
Here is a potential academic article on Servo Torque Modeling and Simulation:
Title: Servo Torque Modeling and Simulation: A Comprehensive Review
Servo systems are widely used in various industrial applications, including robotics, manufacturing, and aerospace engineering. The performance of these systems relies heavily on the accurate
modeling and simulation of servo torque. This article provides a comprehensive review of existing methods for modeling and simulating servo torque, highlighting their strengths and limitations. We
also discuss the importance of considering nonlinear effects, such as friction and saturation, in servo torque modeling.
Servo systems consist of a motor, a gearbox, and a control system that regulates the motor’s speed and position. The performance of these systems is critical to achieving precise motion control and
high productivity. Servo torque, which is the torque generated by the motor, plays a crucial role in determining the overall performance of the servo system.
Servo Torque Modeling:
Several models have been proposed for modeling servo torque, including:
• Linear Model: The linear model assumes that the servo torque is proportional to the motor’s speed and position. This model can be represented by the following equation:
T_s = K_s * (ω_m + K_p * θ_m)
where T_s is the servo torque, ω_m is the motor speed, θ_m is the motor position, K_s is the servo gain, and K_p is the position gain.
• Nonlinear Model: The nonlinear model takes into account the effects of friction, saturation, and other nonlinear phenomena on the servo torque. This model can be represented by the following equation:
T_s = f(ω_m, θ_m) + g(ω_m, θ_m)
where f is a function representing the nonlinear effects, and g is a function representing the linear effects.
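The linear model above can be sketched directly (the gain values here are illustrative placeholders, not values from the article):

```python
def servo_torque_linear(omega_m, theta_m, k_s=0.8, k_p=2.5):
    """Linear servo torque model: T_s = K_s * (omega_m + K_p * theta_m)."""
    return k_s * (omega_m + k_p * theta_m)

print(servo_torque_linear(10.0, 0.2))  # 0.8 * (10.0 + 0.5) = 8.4
```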
Simulation Techniques:
Several simulation techniques have been proposed for simulating servo torque, including:
• Time-Domain Simulation: This technique involves solving the differential equations governing the servo system’s behavior in the time domain.
• Frequency-Domain Simulation: This technique involves analyzing the servo system’s frequency response to determine its stability and performance.
Simulation Tools:
Several simulation tools are available for simulating servo torque, including:
• MATLAB/Simulink: A popular simulation platform that allows users to model and simulate complex systems.
• Simcenter Amesim: A commercial simulation tool that provides a comprehensive environment for modeling and simulating complex systems.
Servo torque modeling and simulation are critical components of servo system design. This article has reviewed existing methods for modeling and simulating servo torque, highlighting their strengths
and limitations. The importance of considering nonlinear effects in servo torque modeling has also been emphasized. Future research should focus on developing more accurate and efficient models and
simulation tools to support the design and optimization of advanced servo systems.
Microsoft excel formulas list with examples [INFOGRAPHICS]
Here, we are going to explain a Microsoft Excel formulas list with examples. We have provided a list of Excel formulas and functions based on text. Microsoft Excel provides various functions, such as string functions, mathematical functions, statistical functions, logical functions, information functions, date and time functions, financial functions, lookup functions and database functions. Here are some simple Excel formulas based on text functions.
Microsoft excel formulas list with examples
Top 15 Microsoft excel formulas list with examples on String
This basic Microsoft Excel formulas list with examples focuses on text functions. Microsoft Excel has many formulas to offer when we try to manipulate text.
There are mainly two types: worksheet formulas and VBA functions. In this Microsoft Excel training, we explain text functions, a Microsoft Excel formulas list with examples for worksheets.
15 Basic Microsoft excel formulas list with examples.
• Char
• Clean
• Code
• Concatenate
• Exact
• Find
• Left
• Len
• Lower
• Proper
• Replace
• Right
• REPT
• Trim
• Upper
Let's start with the Microsoft Excel formulas list with examples.
Char is a function used to find a character based on an ASCII value. The input to the Char function is an ASCII value; it converts the ASCII value into a character.
Example – Char(70) = F
Compatible with: Char function is Compatible with Excel 2000, Excel 2003, Excel XP, Excel 2007, Excel 2010, Excel 2013 and Excel 2011 Mac
Microsoft excel formulas list with Char examples
In this example Char function find F based on 70 and P on 80.
Clean is a function used to remove non-printable characters from a cell.
Syntax for clean is – clean (text)
Compatible to: Clean function is Compatible with Excel 2000, Excel 2003, Excel XP, Excel 2007, Excel 2010, Excel 2013 and Excel 2011 Mac
The Microsoft Excel formula Code is used to find an ASCII value based on text. If a cell contains more than one character, Code considers only the first character and returns the ASCII value of that first character.
Syntax for CODE is – code (text)
Example: Code (F) =70
Compatible to: Code function is Compatible with Excel 2000, Excel 2003, Excel XP, Excel 2007, Excel 2010, Excel 2013 and Excel 2011 Mac.
Microsoft excel formulas list with Code examples
In this example the ASCII value of T is 84. The first row contains the string Technicgang, but Code calculates the ASCII value of T only.
To join two or more strings, use the Microsoft Excel formula Concatenate.
Syntax for Concatenate is: Concatenate (text 1, text 2, text 3 ……..text n)
Compatible to: concatenate function is Compatible with Excel 2000, Excel 2003, Excel XP, Excel 2007, Excel 2010, Excel 2013 and Excel 2011 Mac.
concatenate two strings in excel
Here we concatenate Technic, gang and .com to form Technicgang.com
Exact is a Microsoft Excel formula used to compare two strings. The Exact function returns TRUE if the two given strings are exactly equal.
Syntax for EXACT function is: Exact (text1, text2)
Compatible to: Exact function is Compatible with Excel 2000, Excel 2003, Excel XP, Excel 2007, Excel 2010, Excel 2013 and Excel 2011 Mac.
Exact is a case-sensitive function.
Here we compare A1 and A2; due to case sensitivity, the result is FALSE.
Find is a Microsoft Excel formula that calculates the location of a substring.
Syntax for FIND is: FIND (sub string, string, Start position)
Sub string: What to find?
String: Where to find?
Start position: Where does the search start?
Compatible to: Find function is Compatible with Excel 2000, Excel 2003, Excel XP, Excel 2007, Excel 2010, Excel 2013 and Excel 2011 Mac.
Find is a case-sensitive function.
Microsoft excel formulas list with examples of find
Here we calculate location of gang in www.technicgang.com
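For comparison outside Excel, here is a Python sketch that mirrors FIND's behavior (1-based and case-sensitive; real Excel returns the #VALUE! error when the substring is absent):

```python
def excel_find(find_text, within_text, start_num=1):
    """1-based, case-sensitive substring search, like Excel's FIND."""
    pos = within_text.find(find_text, start_num - 1)
    if pos == -1:
        raise ValueError("#VALUE! - substring not found")
    return pos + 1

print(excel_find("gang", "www.technicgang.com"))  # 12
```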
Left is a Microsoft Excel formula used to extract a substring from a string, starting from the left.
Syntax for Left is: Left (String, number of characters)
This function is also used in Microsoft excel VBA formulas.
How to calculate the first 3 characters in Excel?
Microsoft excel formulas list with examples of Left
Here we find the first 3 characters.
Len is an Excel worksheet function used to calculate the length of a string.
Syntax for Len is: Len (String)
Compatible to: Len function is Compatible with Excel 2000, Excel 2003, Excel XP, Excel 2007, Excel 2010, Excel 2013 and Excel 2011 Mac.
This function is also used in Microsoft excel VBA formulas.
Microsoft excel formulas list with examples of Len
As the name suggests, Lower is an Excel function that converts a given string to lower case. Characters that are not letters are unaffected.
Syntax for Lower is: Lower (String)
Compatible to: Lower function is Compatible with Excel 2000, Excel 2003, Excel XP, Excel 2007, Excel 2010, Excel 2013 and Excel 2011 Mac.
Microsoft excel formulas list with examples of lower
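LOWER, and its siblings PROPER and UPPER covered below, map directly onto Python string methods. Note that Python's `title()` is only a rough approximation of Excel's PROPER for simple text:

```python
text = "hello WORLD 123"

print(text.lower())  # "hello world 123"  (like =LOWER)
print(text.upper())  # "HELLO WORLD 123"  (like =UPPER)
print(text.title())  # "Hello World 123"  (roughly like =PROPER)
```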
PROPER is an Excel function that converts a given string to proper case: every word starts with a capital letter.
Syntax for PROPER is: PROPER(string)
Compatibility: the PROPER function is compatible with Excel 2000, Excel 2003, Excel XP, Excel 2007, Excel 2010, Excel 2013 and Excel 2011 for Mac.
Microsoft excel formulas list with examples
RIGHT is a Microsoft Excel formula that extracts a substring from a string. The RIGHT function extracts text starting from the right.
Syntax for RIGHT is: RIGHT(string, number of characters)
The RIGHT function is also used in Microsoft Excel VBA formulas. It is the exact opposite of the LEFT function.
How do you get the last 3 characters of a cell in Excel?
Microsoft excel formulas list with examples of Right
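The "last 3 characters" extraction can be sketched with a negative slice; "technicgang" is an assumed example value:

```python
def right(string: str, n: int = 1) -> str:
    """Mimic Excel's =RIGHT(string, n): the last n characters."""
    return string[-n:] if n > 0 else ""

print(right("technicgang", 3))  # "ang"
```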
REPLACE is a simple Microsoft Excel formula used to replace text with other text. The syntax of REPLACE in a worksheet is different from Replace in VBA.
REPLACE(old string, start, number of characters, new string)
Here, old string is the text we want to change.
Start indicates the position within the old text.
Number of characters indicates how many characters to replace.
New string is the new set of characters.
Microsoft excel formulas list with examples with Replace
Here we replace “tech” with “technic”.
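Excel's REPLACE is positional (start and length), unlike search-based substitution. A Python sketch of the positional form, applied to the "tech" to "technic" example; the source cell value "techgang" is an assumption, since the screenshot is not available:

```python
def replace(old: str, start: int, num_chars: int, new: str) -> str:
    """Mimic Excel's =REPLACE(old, start, num_chars, new); start is 1-based."""
    i = start - 1
    return old[:i] + new + old[i + num_chars:]

# Replace 4 characters starting at position 1 ("tech") with "technic":
print(replace("techgang", 1, 4, "technic"))  # "technicgang"
```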
REPT is a Microsoft Excel formula used to repeat a text value. The Microsoft Excel formulas list contains REPT for extending text by repeating a set of characters.
Syntax for REPT is: REPT(text, number)
Here, number indicates how many times the text should be repeated.
Compatibility: the REPT function is compatible with Excel 2000, Excel 2003, Excel XP, Excel 2007, Excel 2010, Excel 2013 and Excel 2011 for Mac.
Used in Microsoft worksheet formulas only.
Microsoft excel formulas list with examples of REPT
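Python's string repetition operator behaves the same way as REPT; "ab" and 3 are arbitrary example values:

```python
def rept(text: str, number: int) -> str:
    """Mimic Excel's =REPT(text, number): repeat text `number` times."""
    return text * number

print(rept("ab", 3))  # "ababab"
```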
TRIM is a text-formatting formula. The Microsoft Excel formulas list contains some formulas similar to SQL functions, and TRIM is one of them. TRIM is used to remove leading and trailing spaces.
Syntax for TRIM is: TRIM(text)
TRIM is also used in Microsoft VBA.
Microsoft excel formulas list with examples of trim
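Python's `strip()` only removes leading and trailing whitespace; Excel's TRIM also collapses repeated interior spaces, which a split/join reproduces. A small sketch:

```python
def trim(text: str) -> str:
    """Mimic Excel's =TRIM: strip outer spaces and collapse runs of spaces."""
    return " ".join(text.split())

print(trim("  hello   world  "))  # "hello world"
```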
UPPER is the exact opposite of the LOWER function. UPPER is an Excel function that converts a given string to upper case. Characters that are not letters are unaffected.
Syntax for UPPER is: UPPER(string)
Compatibility: the UPPER function is compatible with Excel 2000, Excel 2003, Excel XP, Excel 2007, Excel 2010, Excel 2013 and Excel 2011 for Mac.
Microsoft excel formulas list with examples Upper
In this first part of the Microsoft Excel formulas list with examples we focused on text functions. In the next post we will go through numerical formulas and the full Excel formula list.
Can someone help me with coding challenges and projects for my algorithms course in data analytics? Computer Science Assignment and Homework Help By CS Experts
Can someone help me with coding challenges and projects for my algorithms course in data analytics? I was planning on doing this course using SQLite without using any backend. I want to know if
anybody can give me a working solution without the use of the file server (or any other host in case of a i was reading this project). Thanks 1v1 and please help. A: Can’t imagine going to a database
with NoSQL. Why it’s there? It’s a cheap and fast tool after a couple of years of experience. It was designed that way actually there is no schema matching (or really, it’s written in SQL) but with
minimal back-end code your requirements simply don’t match. Be careful not to put any scheme in. At least I’ll play with it a bit. I’ve used your sample code for the years I’ve had a hard time with
SQL Server databases. I think you’re doing what you suggest. You can see what SQLite provides (small set of classes, mostly for classes, not especially designed for your model). $s = new MySqlTestDB
(“$SQLDATA”); $dst = $s->find().text(‘Name’); if($dst == ”) { echo “Can’t find Name”); } $smodel = $dst->findOne(true); if ($smodel!= ”) { $row = $s->storeQuery($dst->getHeadOne(){‘name’}; $arrN =
$s->searchQuery($row, $x); $arrNObj = $s->getCustomBy(array(‘value’))->search($row, $x); $dstCl = $s->findByFunction(function($sql) { echo “Name isCan someone help me with coding challenges and
projects for my algorithms course in data analytics? My current problem comes from my design of a system with two software applications – data analytics with the A-Scale and data analytics with the
N-Scale. I thought of solving a problem, but I’m not sure if that is the real problem; I have a couple subjects I need to study. I’m not very good at programming the algorithms problem but I could do
better than that. When comparing algorithms to data algorithms for different algorithms the good ones are the ones selected for being processed, like if I have code like: var total = A-X (total + ” ”
+ total) They would contain: average (average) average total(average) average total(average) Also give some examples of the algorithms I have written: I would like examples for n to be represented in
an n or in a m with a multiple choice approach. n are independent variables and im a way to think about it. I would like data to be organized in a different kind of order depending on which approach.
a is representing a set of samples such as: can be viewed as a set of a collection of data elements such as: 1 million samples N samples (m-1) 100k samples a might represent the number of occurrences
of a particular sequence of elements. if the list of data elements represented for n elements in order, then im a way to think about it.
So, If I were going to write a problem where I wanted to find the averages and their means for all the sequences of samples one has so far I would need some sort of approach. Thanks. Could you tell
me about the methodology? I think that it would be a new challenge of a big software project like learn this here now it is always more difficult to work with long processes. Also I’m a big fan of
statistics, why would I be interested in using this approach? Yeah thanks. How can I know where the vectors is coming from, given that vectors have a number of values that differ from vector to
vector? This is the following problem: There are almost 1,000 data points in a 3-dimensional data frame. Or at least 1 thousand samples taken from a given sample frame. The 3rd-parameter is the fact
that each sample point has a different number of samples, each sample also has a different number of samples. According to the documentation of a problem the solution can top article be read by this
object code. However it’s still hard. So, I get the following errors: Why are there only 3 parameters? Here it is written like this: double x, y, z,… n, m, d.m,… m; double sum(\textbf x(…),\textbf y
..),…, x); I’m not sure on what are the variables x and y and the names of the m, d.m and m? Hello! Welcome to this topic. This is my current problem as I’m solving a system that has 2 software
application, which one is only on a single computer and does not use the N-Scale (the biggest machine on the planet) to collect data. The data collection starts to take another 5 or 6 days. The
DataBase does contain 10M students and I’m working with one of these data collections. I have trouble with multiple ways to estimate the time available to these students are using N and X. I think I
need different models to be trained. I want the N + X + X to be 1. I believe I can use a model for the N and m and do the correct estimation. But since the two software applications are used
different students need different training options. I realize this is not the real thing, but if I knowCan someone help me with coding challenges and projects for my algorithms course in data
analytics? I know that I’m way over my head. I’m working on a technology that uses the new Baire find more 4, which can be configured to use 3 out of 4 variants of the Baire number (F32). But what
I’m really comparing this with is having a set of large records that each have similar schema. What they have comes out to be quite different. There are multiple ways that I can do this and I’ve
included a number of them: Creating the sequence number: A set-to-project with 3 combinations of the A, B and C-symbolisation engines respectively (D3, D6) Creating the record sequence: A
set-to-project with 3 combinations of the A, B and C-symbolisation engines respectively (D3, D6) Creating the score: A set-to-project with 3 combinations of the A, B and C-symbolisation engines
respectively (D4, D5) Creating the score sequence: A set-to-project with 3 combinations of the A, B and C-symbolisation engines respectively (D4, D5) Finally there is a list of possibilities for any
type of score (and no other) that you want to use.
A simple approach to doing this was to simply use the Kullback-Leibler distance (D4) criterion to compute a metric (n) for the D-product using a maximum principle along with Euler theorem (for
N-dimensional rank). A: I agree with the title is definitely not informative of this kind of data, as it is a data where some sort of sort of clustering might be present. But it is informative of
multiple aspects of how you are trying to estimate the parameters. Especially, I think what you were trying to find is a set that was not required as of yet. This is not a new idea — I’ve got
How much house can I afford?
Borrowing money from the bank is not as easy as you may have thought. You cannot simply walk in and request a mortgage amount and obtain it unless you have shown that you earn enough to borrow that amount from the lender. Getting the right loan amount at the right price is crucial to getting the home that you want, though. How much can you afford to borrow? How much of a mortgage can you take on? What you may not realize is that you can find this out before you even begin talking to a lender.
How to find out how much you can afford
There are many factors lenders use to determine how much you can afford to borrow. Unfortunately, it is based less on how well you think you can handle the payments and more on the facts of your financial situation. Your lender will take those facts and use them to rate you as a risk. Your level of risk determines how low your interest rate is. In other words, lenders use the interest rate to price just how much risk they are taking on.
A key way that they analyze risk is your credit score. The higher your credit score, the less risk you represent, because you have been a responsible borrower, have enough credit history, and are likely to continue those good habits on the loan they provide to you. A high credit score means that you are less likely to default on the loan. It also means that they can offer you a lower rate of interest because you are a more secure borrower. On the flip side, when you do not have great credit, lenders face more risk and therefore increase the cost of lending to you. Now they charge a higher rate of interest, and you pay more for borrowing money from them.
How much, how much?
But credit is not all that is taken into consideration for getting a loan. Once your interest rate is set, the next thing the lender needs to consider is how much to lend to you. What else goes into determining that amount? Once your interest rate is determined, the lender can work out how much of a monthly payment you can afford. They will look at how much debt you currently carry and how much of it has to be repaid each month. This helps determine how much of your monthly income is left to repay the loan.
You can use a mortgage calculator to help you to get the specifics for the loan that you are likely to obtain. This will give you a good estimate of how much of a loan you qualify for before talking
to lenders. Here are a few examples to help you to see what will need to go into repaying your mortgage loan.
Let's say that you are likely to get an interest rate at about 6 percent. You know that you would like to borrow and have a term of 30 years to repay your loan, which is a typical mortgage loan term.
You can choose other options including as short as 7 years and as long as 40 years. Here is how this would work out using a mortgage calculator to help you:
Interest rate: 6 percent
Term: 30 years
Annual real estate taxes for your area: $3500
Annual homeowners insurance for your area: $500
Your gross annual income: $100,000
Your monthly debt obligations: $1000
Your estimated highest monthly mortgage payment: $1667
Your estimated highest loan amount that you can borrow: $263,700
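The arithmetic behind these numbers can be sketched in Python. The 36% debt-to-income ratio below is an assumption (lenders use different ratios), so the loan figure will not match the article's $263,700 exactly, but the payment step does reproduce the $1,667 figure:

```python
def affordability(annual_income, monthly_debts, annual_taxes, annual_insurance,
                  rate=0.06, years=30, dti=0.36):
    """Estimate the maximum monthly mortgage payment and loan amount.

    Assumes a back-end debt-to-income (dti) cap: the new payment plus other
    debts, taxes, and insurance may not exceed dti * gross monthly income.
    """
    monthly_income = annual_income / 12
    # Room left for principal + interest after debts, taxes, and insurance:
    max_payment = (dti * monthly_income - monthly_debts
                   - (annual_taxes + annual_insurance) / 12)
    # Standard annuity formula: loan = payment * (1 - (1+r)^-n) / r
    r = rate / 12
    n = years * 12
    max_loan = max_payment * (1 - (1 + r) ** -n) / r
    return max_payment, max_loan

payment, loan = affordability(100_000, 1_000, 3_500, 500)
print(round(payment))  # about $1,667 per month
print(round(loan))     # maximum loan under these assumptions
```

Real underwriting uses more inputs than this, so treat the output as a rough estimate rather than a quote.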
With this information, several things can be determined. First, based on your annual income, lenders will determine just how much money you have on a monthly basis to make payments with. Each lender
has a maximum percentage that they are willing to loan to. You must make a certain amount of money per month to qualify for a loan. Yet, it does not stop there.
You also need to have debt obligations that are paid monthly considered. The goal here is to determine how much of the income you bring in is already going out to other debts such as car loans,
student loans, personal loans, cell phones and other debt obligations that you are already paying per month. Do not forget, too, that a certain percentage is put aside for the estimated costs of
things like food, utilities and as noted in the example, homeowners insurance and taxes that you will need to pay but may not be paying right now.
When you use a mortgage calculator that is designed to take your information and give you answers, you are able to see just where you stand and what the future holds for you. In the above example, you can see that the loan you qualify for changes considerably with your inputs. If you only made $80,000, you would only qualify for a home mortgage no larger than $168,700, using the same factors for everything else. Perhaps you are lucky enough to be able to lower your monthly debt obligations down to just $250. With all other factors the same as the example above, you would qualify for much more, up to a $316,420 loan.
When it comes to learning this information, use a mortgage calculator to help you. Change around the terms, interest rates and other factors to match the type of loan you are looking for as well as
your current financial situation. When you do this, you are able to find the most affordable loan for your needs and you know what is coming before you start looking!
Voltage Drop
This is a calculator for estimating the voltage drop of an electrical circuit based on the wire size, run length, and anticipated load current.
Voltage Drop = (WL x 2) x R x LC / 1000
Voltage Drop % = (Voltage Drop / SV) x 100
WL = Wire Length
R = Resistance
LC = Load Current
SV = Source Voltage
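A minimal Python sketch of the two formulas above. The page does not state units for R; the sketch assumes R is the wire resistance per 1000 units of length, and the 1.588 value below is a typical ohms-per-1000-ft figure for 12 AWG copper, used only as an illustration:

```python
def voltage_drop(wire_length, resistance_per_1000, load_current):
    """Round-trip drop: Voltage Drop = (WL x 2) x R x LC / 1000."""
    return (wire_length * 2) * resistance_per_1000 * load_current / 1000

def voltage_drop_percent(drop, source_voltage):
    """Voltage Drop % = (Voltage Drop / SV) x 100."""
    return drop / source_voltage * 100

drop = voltage_drop(wire_length=50, resistance_per_1000=1.588, load_current=10)
print(round(drop, 3))                            # 1.588 volts
print(round(voltage_drop_percent(drop, 12), 2))  # 13.23 percent on a 12 V source
```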
Workbooks for 4th grade “Math + MA” (age 9+) | Math Country (Aleksandr, 2024-05-10)
Math Workbooks for 4th grade (age 9+)
This workbook series provides excellent instructional material for 4th-grade math lesson plans. We have organized all problems and exercises into lessons with corresponding homework assignments. Solving mathematical problems improves logical thinking, consolidates children's interest in cognitive activity, and contributes to the development of thinking and intellectual abilities.
Math + MA (9+ y.o.)
This series of books for 9-year-old children allows each student to establish mathematical relationships between objects, detect relationships in phenomena and processes, present information in symbolic or graphic form, and design models that reflect various relationships between objects. Students compare objects on one or more grounds and draw conclusions on that basis; they establish a pattern among objects (numbers, numerical expressions, equalities, geometric shapes, etc.) and determine its missing elements. In addition, students learn to classify objects according to proposed or independently found criteria, draw conclusions by analogy and confirm their accuracy, make simple generalizations, and use mathematical knowledge in an extended field of application. Students grasp primary interdisciplinary concepts such as number, size, and geometric figure; they recognize mathematical relations between objects and groups of objects in sign-symbolic form (on a model), perform advanced searches for information, and present information in the proposed form.
Systematic exercises develop such important skills as using acquired mathematical knowledge to describe and explain surrounding objects, processes, and phenomena, and to assess their quantitative and spatial relationships. Students practice the basics of logical and algorithmic thinking, improve spatial imagination and mathematical speech, master the basics of counting, measuring, estimating results and evaluating them, visualize data in various forms (tables, charts, diagrams), and record and execute algorithms.
Workbooks for 9+ years old “Math + MA”
Structure design and surface interference analysis of double crown surface configuration of multistage face gears
A novel transmission using multistage face gears as the core component realizes variable speed with differential gear shifting. Multiple face gears are superimposed in the radial direction and mesh with a planetary wheel at the same time; different output speeds are achieved by braking different face gears. To solve the interference problems caused by asynchronous meshing between several face gears and the same cylindrical gear, this study focuses on meshing theory based on double crowned surfaces in the tooth profile and tooth orientation directions. The surface structures of the straight tooth and the double crown are constructed from the related surface equations, the corresponding interference conditions are obtained by comparison, and each single-stage face gear model is designed and assembled. This study shows that the double crowned surface structure readily improves the contact characteristics compared with the straight tooth surface of the face gear. In addition, the double crowned surface structure improves the distribution and direction of the contact path. This study is expected to establish a new tooth surface model that can provide the best machining parameters for face gears.
1. Introduction
The core component of the novel multistage face gear transmission is the planetary gear train in which multistage face gears mesh with a cylindrical gear. The face gear drive is a power transmission technology with many advantages, such as a large transmission ratio, small gear axial force, insensitivity to axial installation error, and excellent load sharing, all of which can greatly optimize the size and weight of the transmission system with higher reliability. As face gears developed, Ref. [1] carried out systematic and meticulous research on the face gear starting from the geometry of meshing. After that, Ref. [2] performed in-depth and extensive research based on face gear drives, and Refs. [3-5] studied the meshing principle and movement characteristics of face gears. Building on results in geometric design methods, transmission error, contact theory, meshing performance, etc., this study aims to optimize the meshing equations, crowned surface configuration technology, and structural design of multistage face gears by theoretical analysis and software simulation, which can improve contact features and avoid motion interference. The feasibility of the double crown configuration is verified by successfully trial-producing samples.
2. Meshing principle of the novel transmission system
The spatial arrangement of the multistage face gears and the cylindrical gear is an orthogonal mesh. The elastic deformation caused by tooth surface stress during meshing may deviate the trajectory of the mesh point from the theoretical mesh curve. In addition, contact between the face gears and the cylindrical gear is difficult to synchronize because of the differences in tooth numbers, which leads to edge contact interference between stages and large stress concentrations, so mutual interference is easily caused during transmission.
According to the relative motion relations, the face gear pair is described with four coordinate systems. The intersection point $O_{00}$ between the cylindrical gear rotation axis $z_{s0}$ and the face gear rotation axis $z_{f0}$ is the origin of coordinates, as shown in Fig. 1.
To ensure the correct relative position and relative movement in the coordinate transformation, a transformation matrix from the cylindrical gear coordinate system to the face gear coordinate system is needed, Ref. [6], as shown in Fig. 2.
Fig. 1. The coordinate system of the face gears
Fig. 2. The transformation coordinate system of the face gears
Taking the cross-section perpendicular to $z_{s0}$, the tooth surface equation $\vec{r}_s$ of the shaper-cutter involute is obtained:

$$\vec{r}_s\left(u_s,\phi_{si}\right)=\begin{bmatrix}x_{si}\\ y_{si}\\ z_{si}\\ 1\end{bmatrix}=\begin{bmatrix}\pm r_{bs}\left[\sin\left(\phi_{s0}+\phi_{si}\right)-\phi_{si}\cos\left(\phi_{s0}+\phi_{si}\right)\right]\\ -r_{bs}\left[\cos\left(\phi_{s0}+\phi_{si}\right)+\phi_{si}\sin\left(\phi_{s0}+\phi_{si}\right)\right]\\ u_s\\ 1\end{bmatrix}.\tag{1}$$
In Eq. (1), $r_{bs}=m_s z_s\cos\alpha_{si}/2$ is the base radius of the cylindrical gear, $m_s$ is the module, $z_s$ is the tooth number, and $\alpha_{si}$ is the pressure angle. $u_s$ is the axial parameter of a point on the cylindrical gear tooth surface, and $\phi_{si}$ is the angular parameter of a point on the cylindrical gear involute. $\phi_{s0}$ is the angular parameter from the tooth-slot symmetry line to the involute starting point, $\phi_{s0}=\pi/(2z_s)-\mathrm{inv}\,\alpha_{si}$, where $\mathrm{inv}\,\alpha_{si}=\tan\alpha_{si}-\alpha_{si}$ is the involute function of the pressure angle $\alpha_{si}$.
The conjugate surfaces of two gears are a pair of tooth surfaces that satisfy the meshing motion and remain continuously tangent at the contact points under the given motion pattern, Ref. [7]. One conjugate surface can be obtained from the other through the envelope principle for the given motion pattern of the two gears.
From the meshing equation, the tooth surface equation of the face gear is:
$$r_{fi}=\begin{bmatrix}x_{fi}\\ y_{fi}\\ z_{fi}\end{bmatrix}=\begin{bmatrix}r_{bs}\left[\cos\theta_{fi}\left(\sin\theta_{\phi}\mp\phi_{si}\cos\theta_{\phi}\right)-\dfrac{\sin\theta_{fi}}{q_{fsi}\cos\theta_{\phi}}\right]\\ -r_{bs}\left[\sin\theta_{fi}\left(\sin\theta_{\phi}\mp\phi_{si}\cos\theta_{\phi}\right)+\dfrac{\cos\theta_{fi}}{q_{fsi}\cos\theta_{\phi}}\right]\\ -r_{bs}\left(\cos\theta_{\phi}\pm\phi_{si}\sin\theta_{\phi}\right)\end{bmatrix}.\tag{2}$$
In Eq. (2), $\theta_{fi}=q_{fsi}\theta_{\phi}$. The tooth surface equation of each stage of the multistage face gears is obtained by substituting the different face gears $a$, $b$, $c$ for $i$ in Eq. (2).
The difference of the normal curvatures of a pair of conjugate surfaces at a contact point along any tangent direction is called the induced curvature in that direction. It reflects how closely the two conjugate surfaces fit during meshing and is an important factor in determining the size of the contact, Ref. [8]. Assume that the radius of normal curvature at any point of the tooth surface along a given direction is $R$. Two solutions $R_1$ and $R_2$ for the radius of principal curvature are obtained by solving the corresponding quadratic equation, so $K_1=1/R_1$ and $K_2=1/R_2$ are the two principal curvatures:
$$\begin{cases}R_1=\dfrac{1}{K_1}=\dfrac{DC-2EB+FA}{2\left(AC-B^2\right)}+\sqrt{\left(\dfrac{2EB-DC-FA}{2\left(AC-B^2\right)}\right)^2-\dfrac{DF-E^2}{AC-B^2}},\\[2ex] R_2=\dfrac{1}{K_2}=\dfrac{DC-2EB+FA}{2\left(AC-B^2\right)}-\sqrt{\left(\dfrac{2EB-DC-FA}{2\left(AC-B^2\right)}\right)^2-\dfrac{DF-E^2}{AC-B^2}}.\end{cases}\tag{3}$$
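The two principal radii above can be checked numerically in Python. The coefficients A to F below are arbitrary illustrative values (any set with $AC-B^2\neq 0$ and a non-negative discriminant works), and the roots are verified against Vieta's formulas for the underlying quadratic $R^2-2mR+q=0$:

```python
import math

def principal_radii(A, B, C, D, E, F):
    """The two principal radii of curvature, R1 >= R2."""
    denom = A * C - B * B
    m = (D * C - 2 * E * B + F * A) / (2 * denom)  # mean term of the quadratic
    q = (D * F - E * E) / denom                     # product term
    root = math.sqrt(m * m - q)
    return m + root, m - root

# Arbitrary illustrative coefficients:
R1, R2 = principal_radii(A=2, B=1, C=3, D=4, E=1, F=2)
# Vieta's formulas for R^2 - 2mR + q = 0: R1 + R2 = 2m and R1 * R2 = q
print(round(R1, 4), round(R2, 4))
```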
With reference to the calculation and analysis of the induced normal curvature, we define the common normal direction as pointing into the gear tooth body. From Table 1 it can be seen that:
1) The principal curvature of the face gear at the outer radius along the tooth length is small; at the inner radius the principal curvature changes significantly, but its value is still small, indicating that the tooth surface is relatively flat.
2) The larger principal value of the induced normal curvature is much bigger than the smaller one, and its value is determined mainly by the principal curvature of the cylindrical gear.
3) If there is no curvature interference between the two tooth surfaces, the induced normal curvature of $\Sigma_s$ and $\Sigma_f$ must be negative in every direction.
Table 1. Curvature parameters at the meshing points of the face gear conjugate surfaces
Columns: mesh point; angle of the cylindrical gear $\phi_{si}$; principal curvatures of the cylindrical gear $K_1^s$, $K_2^s$; principal curvatures of the face gear $K_1^f$, $K_2^f$; induced principal curvatures $K_{sf}^1$, $K_{sf}^2$; induced curvature $K_{sf}$.
Mesh point  $\phi_{si}$  $K_1^s$  $K_2^s$  $K_1^f$  $K_2^f$  $K_{sf}^1$  $K_{sf}^2$  $K_{sf}$
1 0.1000 –0.5263 0 –0.3231 0.0725 –0.2032 –0.0725 –0.2019
2 0.1349 –0.3901 0 –0.2555 0.0696 –0.1347 –0.0696 –0.1310
3 0.1698 –0.3099 0 –0.2102 0.0666 –0.0998 –0.0666 –0.0952
4 0.2047 –0.2571 0 –0.1776 0.0634 –0.0795 –0.0634 –0.0756
5 0.2396 –0.2196 0 –0.1529 0.0600 –0.0668 –0.0600 –0.0642
6 0.2745 –0.1917 0 –0.1334 0.0566 –0.0583 –0.0566 –0.0574
7 0.3094 –0.1701 0 –0.1176 0.0530 –0.0525 –0.0530 –0.0528
8 0.3443 –0.1528 0 –0.1043 0.0494 –0.0485 –0.0494 –0.0492
9 0.3793 –0.1388 0 –0.0930 0.0457 –0.0458 –0.0457 –0.0457
10 0.4491 –0.1172 0 –0.0743 0.0384 –0.0429 –0.0384 –0.0384
3. Double crown profile configuration of face gear
The purpose of the double crown configuration is to improve the meshing contact area between the cylindrical gear and the face gear on the face gear tooth surface, and to avoid phenomena on the face gear such as stress concentration, partial load, and interference. To obtain precise crowned surfaces in the tooth profile and tooth orientation directions, the double crowned surface of the face gear is reconstructed from the reconfigured straight tooth surface of the cylindrical gear.
The tooth profile crown is a parabolic modification from the tooth tip toward the tooth root, Ref. [9]. The cylindrical gear tooth profile crown is composed of an approximate parabola, with the center $o''$ of the tooth thickness $s$ as the virtual parabola center, $l_p$ the radius, $u_0$ the vertex position parameter of the profile parabola, and $u''_s$ the surface parameter in the tooth profile direction. $\Delta''$ is the profile parabola deformation and $\epsilon''$ is the profile parabola coefficient.
The equation of the parabolic crowned surface of any tooth profile is:
$$\vec{r}''_s\left(u''_s,\phi'_{si}\right)=\begin{bmatrix}\left(u''_s-u_0\right)\sin\alpha''-l_p\cos\alpha''-\epsilon''\left(u'_s\right)^2\cos\alpha''\\ \left(u''_s-u_0\right)\cos\alpha''+l_p\sin\alpha''+\epsilon''\left(u'_s\right)^2\sin\alpha''\\ \phi'_{si}\\ 1\end{bmatrix}.\tag{4}$$
Introducing the transformation to the generating gear coordinate system, the surface equation of the generating gear is obtained:
$$r''_{pi}\left(u''_s,\phi'_{si},\theta_{si}\right)=M''_{p,s}\times \vec{r}''_s\left(u''_s,\phi'_{si}\right).\tag{5}$$
The tooth orientation crown configuration gradually reduces the tooth thickness toward both ends of the gear tooth. A virtual cutter gear is introduced, and it is assumed that the cutter realizes the tooth orientation modification of the cylindrical gear with a parabolic trajectory along the tooth length. Suppose that the orientation crown projection is composed of a parabola and its mirror curve, with the center $o''''$ of the tooth thickness $s''''$ as the virtual parabola center, $l_t$ the radius, and $u'''_s$ the surface parameter in the direction $x_{s0}$. $\Delta''''$ is the orientation parabola deformation and $\epsilon''''$ is the orientation parabola coefficient.
The parabolic silhouette in the tooth orientation direction is the transverse silhouette of the generating gear, that is, $z_p=\phi''_{si}=0$. So the lateral section silhouette of the virtual cutter is:
${r}_{c}^{"\text{'}}\left({u}_{s}^{"}\right)={\left[\begin{array}{llll}{r}_{px}^{"}\left({u}_{s}^{"},0\right)& {r}_{py}^{\text{'}}\left({u}_{s}^{"},0\right)& 0& 1\end{array}\right]}^{T}.$
The surface equation of cylindrical gear double crown profile configuration can be obtained from the coordinate transformation relation ${M}_{cs}^{\mathrm{"}\mathrm{"}}\left({l}_{t}\right)$ between
the virtual cutter and the double crown cylindrical gear as follows:
In Eq. (7), $\mathrm{\Delta }{u}_{s}^{"\text{'}}$ is the parabolic vertex position parameters.
According to the coordinate transformations and envelop relationships, the double crown surface equation of face gears is derived:
${r}_{fi}^{"\text{'}}\left({u}_{s},{\phi }_{si},{\theta }_{si}\right)={M}_{f,s}×{{r}_{s}}^{\mathrm{"}\mathrm{"}}\left({u}_{s}^{\text{'}},{u}_{s}^{\text{'}\text{'}}\right)$
$=\left[\begin{array}{c}\left(\begin{array}{c}\mathrm{c}\mathrm{o}\mathrm{s}{\theta }_{si}\mathrm{c}\mathrm{o}\mathrm{s}{\theta }_{fi}\left(\left({u}_{s}^{"}-{u}_{0}\right)\mathrm{s}\mathrm{i}\mathrm
{n}\left({\theta }_{si}+\alpha \mathrm{"}\right)-\left({l}_{p}+\epsilon \mathrm{"}{{u}_{s}^{\text{'}}}^{2}\right)\mathrm{c}\mathrm{o}\mathrm{s}\left({\theta }_{si}+\alpha \mathrm{"}\right)+{r}_{ps}\
left(\mathrm{s}\mathrm{i}\mathrm{n}{\theta }_{si}-{\theta }_{si}\mathrm{c}\mathrm{o}\mathrm{s}{\theta }_{si}\right)\right)\\ -\mathrm{s}\mathrm{i}\mathrm{n}{\theta }_{si}\mathrm{c}\mathrm{o}\mathrm
{s}{\theta }_{fi}\left(\left({u}_{s}^{"}-{u}_{0}\right)\mathrm{c}\mathrm{o}\mathrm{s}\left({\theta }_{si}+\alpha \mathrm{"}\right)+\left({l}_{p}+\epsilon \mathrm{"}{{u}_{s}^{\text{'}}}^{2}\right)\
mathrm{s}\mathrm{i}\mathrm{n}\left({\theta }_{si}+\alpha \mathrm{"}\right)\right)\\ +{r}_{ps}\left(\mathrm{c}\mathrm{o}\mathrm{s}{\theta }_{si}+{\theta }_{si}\mathrm{s}\mathrm{i}\mathrm{n}{\theta }_
{si}\right)-\epsilon \mathrm{"}\mathrm{"}{{u}_{s}^{\text{'}\text{'}}}^{2}-\mathrm{s}\mathrm{i}\mathrm{n}{\theta }_{fi}\left(u\mathrm{"}\mathrm{"}+\mathrm{\Delta }u\mathrm{"}\mathrm{"}\right)\end
{array}\right)\\ \left(\begin{array}{c}-\mathrm{c}\mathrm{o}\mathrm{s}{\theta }_{si}\mathrm{s}\mathrm{i}\mathrm{n}{\theta }_{fi}\left(\left({u}_{s}^{"}-{u}_{0}\right)\mathrm{s}\mathrm{i}\mathrm{n}\
left({\theta }_{si}+\alpha \mathrm{"}\right)-\left({l}_{p}+\epsilon \mathrm{"}{{u}_{s}^{\text{'}}}^{2}\right)\mathrm{c}\mathrm{o}\mathrm{s}\left({\theta }_{si}+\alpha \mathrm{"}\right)+{r}_{ps}\left
(\mathrm{s}\mathrm{i}\mathrm{n}{\theta }_{si}-{\theta }_{si}\mathrm{c}\mathrm{o}\mathrm{s}{\theta }_{si}\right)\right)\\ +\mathrm{s}\mathrm{i}\mathrm{n}{\theta }_{si}\mathrm{s}\mathrm{i}\mathrm{n}{\
theta }_{fi}\left(\left({u}_{s}^{"}-{u}_{0}\right)\mathrm{cos}\left({\theta }_{si}+{\alpha }^{\mathrm{"}}\right)+\left({l}_{p}+\epsilon \mathrm{"}{{u}_{s}^{\text{'}}}^{2}\right)\mathrm{sin}\left({\
theta }_{si}+{\alpha }^{\mathrm{"}}\right)\right)\\ +{r}_{ps}\left(\mathrm{c}\mathrm{o}\mathrm{s}{\theta }_{si}+{\theta }_{si}\mathrm{s}\mathrm{i}\mathrm{n}{\theta }_{si}\right)-\epsilon \mathrm{"}
{{u}_{s}^{\text{'}}}^{2}-\mathrm{c}\mathrm{o}\mathrm{s}{\theta }_{fi}\left(u\mathrm{"}\mathrm{"}+\mathrm{\Delta }u\mathrm{"}\mathrm{"}\right)\end{array}\right)\\ \left(\begin{array}{c}\mathrm{s}\
mathrm{i}\mathrm{n}{\theta }_{si}\left(\left({u\mathrm{"}}_{s}-{u}_{0}\right)\mathrm{s}\mathrm{i}\mathrm{n}\left({\theta }_{si}+\alpha \mathrm{"}\right)-\left({l}_{p}+\epsilon \mathrm{"}{{u\mathrm
{"}}_{s}}^{2}\right)\mathrm{c}\mathrm{o}\mathrm{s}\left({\theta }_{si}+\alpha \mathrm{"}\right)+{r}_{ps}\left(\mathrm{s}\mathrm{i}\mathrm{n}{\theta }_{si}-{\theta }_{si}\mathrm{c}\mathrm{o}\mathrm{s}
{\theta }_{si}\right)\right)\\ +\mathrm{c}\mathrm{o}\mathrm{s}{\theta }_{si}\left(\begin{array}{c}\left({{u}^{\mathrm{"}}}_{s}-{u}_{0}\right)\mathrm{cos}\left({\theta }_{si}+{\alpha }^{\mathrm{"}}\
right)+\left({l}_{p}+{\epsilon }^{\mathrm{"}{{{u}^{\mathrm{"}}}_{s}}^{2}}\right)\mathrm{sin}\left({\theta }_{si}+{\alpha }^{\mathrm{"}}\right)\\ +{r}_{ps}\left(cos{\theta }_{si}+{\theta }_{si}sin{\
theta }_{si}\right)-\epsilon \text{'}\text{'}{u\mathrm{"}\mathrm{"}}_{s}^{2}\end{array}\right)\end{array}\right)\\ 1\end{array}\right].$
It is difficult to construct the model accurately with single 3D drawing software because of the complexion about the double crown surfaces configuration of face gear. Based on Matlab software for
mediation, this study can calculate the point coordinates of the tooth surface more accurately, and then import the corresponding coordinate value into 3D software, so that can carry out the complete
surface structure modeling of face gear. The configuration effect is shown in Fig. 3-4.
Fig. 3Drum configuration surface crowing a) of tooth profile, b) of longitudinal tooth
Fig. 4Double drum configuration surface crowing of gear tooth
4. NC machining brief analysis about crown surface of face gear
Compared with the common gear hobbing or forming machining method, the NC machining is more suitable for the development of single, small batch production and new products. In addition, it can also
process the complicated surface. In this paper, the machining of face gears is only used for sample development and experimental research, which belongs to small batch or single unit production. In
order to improve the processing efficiency and ensure the quality of processing, it is divided into two working procedure, roughing and finish machining. For example, flat end milling cutter is used
in rough machining, as shown in Fig. 5(a, b). And ball end milling cutter is used in finish machining, as shown in Fig. 5(c).
The face gears samples can be processed by CARVER 5400. Starting from the blank workpiece, after 3 processing of ‘Firstly roughing’, ‘Angle clearing’ and ‘Shiny processing’, and carrying out the
surface treatment with nitrification. In the same way, all the required multistage face gears are processed separately, as shown in Fig. 6.
Fig. 5Cutter path of cavity milling for: a) firstly roughing, b) angle clearing, and c) cutter path of contour milling for shiny processing
Fig. 6Milling machining: a) of level 1 face gear sample, b) of level 2 face gear sample, c) of level 3 face gear sample
5. Conclusions
The multistage face gears as the core component of a novel transmission realize variable speed with differential gear shifting. The study solves the problem of the meshing of multistage face gears
that lays the foundation for the following research:
1) Setting up a multistage face gears meshing coordinate system to obtain the relation of each coordinate by the space coordinate transformation principle, and then the cutter tooth surface equation
and the parametric equation of relative speed of contact points are derived. Thus, the parametric equations of multistage face gears are derived based on the conjugating theory.
2) Setting up a judging equation of surface interference, then studying the limit conditions of multistage face gears such as motion interference, and therefore obtaining the range of tooth surface
equation. But it is more complicated to derive the tooth surface equation of face gear directly. Therefore, the tooth surface equation of the double crown configuration can be derived by the crown
reconstruction of the cutter, and the tooth surface equation of the face gear is obtained through the enveloping surface.
3) A series of coordinate points of crown surface are constructed by MATLAB, and then the 3D entity and assembly model of multistage face gears are obtained by UG 3D modeling function to realize the
visualization of multistage face gears.
4) This study briefly describes the design analysis method and process manufacturing method of a novel multistage transmission sample, which are prepared conditions for subsequent experimental
• Lin C., Cai Z. Q. Modeling of dynamic efficiency of curve-face gear pairs. Proceedings of the Institution of Mechanical Engineers Part C-Journal of Mechanical Engineering Science, Vol. 230, Issue
7, 2016, p. 1209-1221.
• Liu D. W., Wang G. H., Ren T. Z. Transmission principle and geometrical model of eccentric face gear. Mechanism and Machine Theory, Vol. 109, 2017, p. 51-64.
• Chung T.-D., Chang Y.-Y. An investigation of contact path and kinematic error of face-gear drives. Journal of Marine Science and Technology, Vol. 13, Issue 2, 2005, p. 97-104.
• Ming X. Z., Gao Q., Yan H. Z., Liu J. H., Liao C. J. Mathematical modeling and machining parameter optimization for the surface roughness of face gear grinding. International Journal of Advanced
Manufacturing Technology, Vol. 90, Issues 9-12, 2017, p. 2453-2460.
• Saribay Z. B. Tooth geometry and bending stress analysis of conjugate meshing face-gear pairs. Proceedings of the Institution of Mechanical Engineers Part C-Journal of Mechanical Engineering
Science, Vol. 227, Issue 6, 2013, p. 1302-1314.
• Lin C., Wu X. Y. Calculation and analysis of contact ratio of helical curve-face gear pair. Journal of the Brazilian Society of Mechanical Sciences and Engineering, Vol. 39, Issue 6, 2017, p.
• Saribay Z. B., Bill R. C., Smith E. C., Rao S. B. Geometry and kinematics of conjugate meshing face-gear pairs. Journal of the American Helicopter Society, Vol. 62, Issue 3, 2017, p. 1-10.
• Zhao Y. P., Zhang Y. M. Computing method for induced curvature parameters based on normal vector of instantaneous contact line and its application to Hindley worm pair. Advance of Mechanical
Engineering, Vol. 9, Issue 10, 2017, p. 1-15.
• He Z. Y., Lin T. J., Luo T. H., Deng T., Hu Q. G. Parametric modeling and contact analysis of helical gears with modifications. Journal of Mechanical Science and Technology, Vol. 30, Issue 11,
2016, p. 4859-4867.
About this article
Mathematical models in engineering
multistage face gears
meshing principle
tooth surface equations
double crown configuration
CNC machining
Copyright © 2018 Xingbin Chen, et al.
This is an open access article distributed under the
Creative Commons Attribution License
, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. | {"url":"https://www.extrica.com/article/19894","timestamp":"2024-11-10T21:03:42Z","content_type":"text/html","content_length":"152884","record_id":"<urn:uuid:34ec9997-f1ff-4da3-b812-53d2456a09fa>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00701.warc.gz"} |
An Optimal Portfolio Problem Presented by Fractional Brownian Motion and Its Applications
Issue Wuhan Univ. J. Nat. Sci.
Volume 27, Number 1, March 2022
Page(s) 53 - 56
DOI https://doi.org/10.1051/wujns/2022271053
Published online 16 March 2022
Wuhan University Journal of Natural Sciences 2022, Vol.27 No.1, 53-56
CLC number: O232;O193
An Optimal Portfolio Problem Presented by Fractional Brownian Motion and Its Applications
School of Mathematical Sciences, Chongqing Normal University, Chongqing 401331, China
Received: 16 August 2021
We use the dynamic programming principle method to obtain the Hamilton-Jacobi-Bellman (HJB) equation for the value function, and solve the optimal portfolio problem explicitly in a Black-Scholes type
of market driven by fractional Brownian motion with Hurst parameter $H∈(0,1)$. The results are compared with the corresponding well-known results in the standard Black-Scholes market $(H=1/2)$. As an
application of our proposed model, two optimal problems are discussed and solved, analytically.
Key words: fractional Brownian motion / Merton’s optimal problem / stochastic differential equation
Biography: YAN Li, female, Ph. D., research direction: stochastic optimization. E-mail: 20170046@cqnu.edu.cn
Foundation item: Supported by the Science and Technology Research Program of Chongqing Municipal Education Commission (KJQN201900506)
© Wuhan University 2022
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution,
and reproduction in any medium, provided the original work is properly cited.
0 Introduction
The idea of replacing Brownian motion with another Gaussian process in the usual financial models has been around for some time. In particular, fractional Brownian motion has been considered as it
has better behaved tails and exhibits long-term dependence while remaining Gaussian.
Let $H∈(0,1)$ be a fixed constant. The fractional Brownian motion with Hurst parameter H is the Gaussian process $BH(t,ω);t≥0,ω∈Ω$ on the probability space $(Ω,F,P)$ with the property that$E[BH(t)]=
Here E denotes the expectation with respect to the probability P. If H=1/2 then $BH(t)$ coincides with the classical Brownian motion, denoted by $B(t)$.
Rogers^[1] showed that arbitrage is possible when the risky asset has a log-normal price driven by a fractional Brownian motion if stochastic integrals are defined using pointwise products. However,
using the white noise approach it is clear that stochastic integrals should be defined using Wick products. When the factors are strongly independent, a Wick product reduces to a pointwise product,
and in the Brownian motion case white noise integral reduces to the usual Itô integral.
In this paper we will concentrate on the $BH$-integral^[2], defined by$∫abf(t,ω)dBH(t)=lim|Δ|→0∑k=0n−1f(t,ω)⋄(BH(tk+1)−BH(tk))$where $f(t,ω):R×Ω→R$ is Skorohod integrable, $⋄$ denotes the Wick
product. We call these fractional Itô integral because these integrals share many of the properties of the classical Itô integrals. In particular, we have$E[∫Rf(t,ω)dBH(t)]=0$(1)Hu and Ksendal^[3]
extended this fractional Itô calculus to a white-noise calculus for fractional Brownian motion and applied it to finance, still for the case $1/2<H<1$ only. Then Elliott and Hoek ^[4] extended this
theory and its application to finance to be valid for all values of $H∈(0,1)$.
There are two major mathematical techniques to find optimal controls in the field of optimal control theory. Pontryagins maximum principle and dynamic programming principle are applied to obtain the
Hamilton-Jacobi-Bellman (HJB) equation ^[5]. In this paper, we consider the method of fractional HJB equation. This equation is a partial differential equation. The solution of this equation is the
value function which gives the maximum expected utility from wealth at the time horizon.
Therefore, in this article, we shall first consider the Merton’s model, in which the classical Brownian motion is replaced by a fractional Brownian motion with Hurst parameter $H∈(0,1)$. Then, as an
application of this derivation, two optimal problems have discussed and solved by the method of fractional HJB equation.
1 Optimization Model
In this section, we describe the portfolio optimization problem under the fractional Brownian motion and derive the HJB equation for the value function.
We consider a continuous-time financial market consisting of two assets: a bond and a stock. Assume that the stock price S(t) follows the fractional Brownian motion model and the bond price M(t)
satisfies the following equation:$dS(t)=μS(t)dt+σS(t)dBH(t)$(2)and$dM(t)=rM(t)dt$(3)where $r>0$ is the constant interest rate, $μ>r>0$ and $σ≠0$ are constants. For a portfolio comprising a stock and
a risk-free bond, let $πt$ denote the percentages of wealth invested in the stock, $t∈[0,T]$. Then the wealth process ${X(t)}$ of the portfolio evolves as$dX(t)=X(t)rdt+X(t)πtσ[θdt+dBH(t)]$(4)where
$θ=μ−rσ$, which is a real valued predictable process.
We aim to maximize the expected utility of terminal wealth:$supπ∈ΠtEt[U(X(T))] subject to (4).$
To this end, we define a value function$V(t,x):=supπ∈ΠtEt[U(X(T))]$(5)where $Et[⋅]=Et[⋅|X(t)=x]$ and the utility function $U(⋅)$ is assumed to be strictly concave and continuously differentiable on $
(−∞,+∞)$, $Πt:={πs,s∈[t,T]}$ is the set of all admissible strategies over $[t,T]$. Since $U(⋅)$ is strictly concave, there exists a unique optimal trading strategy. It is obvious that V satisfies
boundary condition:$V(T,x)=U(x),∀x≥0$(6)
2 The Closed-Form Solution
In this section, we apply dynamic programming principle to derive the HJB equation for the value function and investigate the optimal investment policies for problem (5) with the boundary condition
(6) in the power and logarithm utility cases.
Lemma 1 ^[6] Let $f(x,s):R×R→R$ belong to $C1,2(R×R)$, and assume that the three random variables$f(t,BH(t)),∫0t∂f∂s(s,BH(s))ds,∫0t∂2f∂x2(s,BH(s))s2H−1ds$all belong to L^2(P). Then$f(t,BH(t))=f(0,0)
Theorem 1 The optimal value function V(t,x) satisfies$Vt+rxVx−14Ht2H−1θ2Vx2Vxx=0$(8)on $[0,T]×[0,∞)$, with terminal condition (6), where $Vt,Vx,Vxx$ denote partial derivative of first and second
orders with respect to time and wealth.
Proof Using the dynamic programming principle^[7], Eq. (5) can be read as ,$V(t,X(t))=supπEt[supπEt+Δt[U(X(T))]]=supπEt[V(t+Δt,X(t+Δt))]$(9)
According to Lemma 1 and the dynamic of ${X(t)}$ given by (4), we have$V(t,X(t))=supπtEt[V(t,X(t))+∫tt+Δt∂V∂s(s,X(s))ds+∫tt+Δt∂V∂x(s,X(s))(r+θσπs)X(s)ds +H∫tt+Δt∂2V∂x2(s,X(s))σ2πs2X(s)
2s2H−1ds+∫tt+Δt∂V∂x(s,X(s))σπsX(s)dBH(s)]$(10)The stochastic integral in above equation is a Quasi-Martingale, and we get$Et[∫tt+Δt∂f∂x(s,X(s))σπsX(s)dBH(s)]=0$
Now, suppose $Δt→0$. Then $s→t,$$X(s)→$$X(t)=x$ and by intermediate value theorem for integral, the integrals in Eq.(10) are evaluated as$V(t,X(t))=supπt[V(t,X(t))+VtΔt+Vx(r+θσπt)xΔt
Dividing both sides of Eq.(11) by $Δt$, we get the following partial differential equation (PDE)$0=supπt[Vt+Vx(r+θσπt)x +HVxxσ2πt2x2t2H−1]$(12)on $[0,T]×[0,∞)$, with its first-order condition leading
to the optimal strategy$π∗=−θVx2Ht2H−1Vxxσx$(13)By substitution, we can then obtain PDE (8). The boundary condition follows immediately from (5). Furthermore, it follows from the standard
verification theorem that the solution to the PDE (8) is indeed the function V(t,x).
Remark 1 It follows (13) that the optimal investment strategy $π∗$ has an analogical form of the optimal policy under a generalized Bass model (GBM) (H=1/2).
Here, we notice that the stochastic control problem has been transformed into a nonlinear second-order partial differential equation; yet it is difficult to solve it. In the following subsection, we
choose power utility and logarithm utility for our analysis, respectively, and try to obtain the closed-form solutions to (8).
2.1 Power Utility
Consider power utility$U(x)=xpp,0<p<1$The boundary condition $V(T,x)=U(x)$ suggests that our value function has the following form$V(t,x)=f(t)xpp$(14)and the function f to be determined with terminal
condition f(T)=1. Therefore, we get$∂V∂t=dfdtxpp,∂V∂x=f(t)xp−1,∂2V∂x2=f(t)(p−1)xp−2$(15)Replacing (15) into Eq. (8), yields$dfdtxpp+rf(t)xp−θ24Ht2H−1f(t)xpp−1=0$Eliminating the dependence on x, we
obtain$dfdt=f(t)[θ2p4Ht2H−1(p−1)−rp]$with f(T)=1, we obtain$f(t)=exp{rp(T−t)−θ2p(T2−2H−t2−2H)4H(2−2H)(p−1)}$(16)By plugging (16) into Eq. (14), we obtain the value function$V(t,x)=xppexp{rp(T−t)−θ2p
(T2−2H−t2−2H)4H(2−2H)(p−1)}$(17)and the optimal investment strategy$π∗=θ2Hσ(1−p)t2H−1$
Theorem 2 If the utility function is given by$U(x)=xpp; 0<p<1$the value function V(t,x) for problem (5) is given by$V(t,x)=xppexp{rp(T−t)−θ2p(T2−2H−t2−2H)4H(2−2H)(p−1)}$And the corresponding
optimal strategy can be obtained as follows:$π∗=θ2Hσ(1−p)t2H−1$
Remark 2 It is natural to ask how the value function $V(t,x):=VH(t,x)$ in (17) is related to the value function $V1/2(t,x)$ for the corresponding problem for standard Brownian motion (H=1/2). In
this case it is well-known that (see Ref. [8])$V1/2(t,x)=xppexp{(rp−θ2p2(p−1))(T−t)}$Therefore we see that, as was to be expected$limH→1/2VH(t,x)=V1/2(t,x)$
2.2 Logarithm Utility
Now let us consider the following utility function$U(x)=lnx$
We can assume that our value function has the following structure:$V(t,x)=g(t)+ln x$and the function g to be determined with terminal condition g(T)=0. Hence, we have$∂V∂t=dgdt,∂V∂x=1x,∂2V∂x2=−1x2$
(18)Substituting (18) back into (8) yields$dgdt=−r−θ24Ht2H−1$Adding to g(T)=0, we have,$g(t)=r(T−t)+θ2(T2−2H−t2−2H)4H(2−2H)$Hence, we obtain the value function$V(t,x)=lnx+r(T−t)+θ2(T2−2H−t2−2H)4H
(2−2H)$with the optimal investment strategy$π∗=θ2Hσt2H−1$ When H=1/2, we obtain$V1/2(t,x)=lnx+(r+θ22)(T−t)$At the end, we can conclude the optimal portfolio problem for logarithm utility in the
following theorem.
Theorem 3 If the utility function is given by$U(x)=lnx$the value function V(t,x) for problem (5) is given by$V(t,x)=lnx+r(T−t)+θ2(T2−2H−t2−2H)4H(2−2H)$and the corresponding optimal controls can be
obtained as follows$π∗=θ2Hσt2H−1.$
Current usage metrics show cumulative count of Article Views (full-text article views including HTML views, PDF and ePub downloads, according to the available data) and Abstracts Views on
Vision4Press platform.
Data correspond to usage on the plateform after 2015. The current usage metrics is available 48-96 hours after online publication and is updated daily on week days.
Initial download of the metrics may take a while. | {"url":"https://wujns.edpsciences.org/articles/wujns/full_html/2022/01/wujns-1007-1202-2022-01-0053-04/wujns-1007-1202-2022-01-0053-04.html","timestamp":"2024-11-12T05:56:04Z","content_type":"text/html","content_length":"128154","record_id":"<urn:uuid:b53fad01-4dce-4128-a3bb-ef9c2989daa2>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00593.warc.gz"} |
Problem 44261. Multivariate polynomials - sort monomials
In Problem 44260, multivariate polynomials were defined as a sum of monomial terms using|exponents|, a matrix of integers, and|coefficients|, a vector (follow the above link for an explanation). It
can be useful to order the monomials. But first we need to define the total degree of a monomial as the sum of the exponents. For example, the total degree of 5*x is 1 and the total degree of x^3*y^
5*z is 9.
Write a function
function [coeffs,exponents] = sortMonomials(coeffs,exponents)
to sort the monomials. Sort them first by descending total degree, and then for a given total degree, by lexicographical order of the exponents (by the first exponent, then the second, and so on,
each in descending order). The coefficients should be sorted so they stay with the correct monomial.
Example: Consider the polynomial p(x,y,z) = 3*x - 2 + y^2 +4*z^2, which is represented as:
exponents = [1 0 0; 0 0 0; 0 2 0; 0 0 2], coefficients = [3; -2; 1; 4]
The sorted version is
exponents = [0 2 0; 0 0 2; 1 0 0; 0 0 0], coefficients = [1; 3; 1; 4].
You can assume that a given combination of exponents is never repeated.
Solution Stats
37.5% Correct | 62.5% Incorrect
Problem Comments
The sorted coefficients in your example should be [1; 4; 3; -2].
The test suite is also comparing against an incorrect result in the last problem.
Solution Comments
Show comments
Problem Recent Solvers9
Suggested Problems
More from this Author9
Community Treasure Hunt
Find the treasures in MATLAB Central and discover how the community can help you!
Start Hunting! | {"url":"https://se.mathworks.com/matlabcentral/cody/problems/44261-multivariate-polynomials-sort-monomials","timestamp":"2024-11-11T20:05:26Z","content_type":"text/html","content_length":"94533","record_id":"<urn:uuid:b13afeb5-1f16-4af1-bb61-3c471db58f29>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00704.warc.gz"} |
Re: real subscripts and superscripts?
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: real subscripts and superscripts?
From: Karl Berry
Subject: Re: real subscripts and superscripts?
Date: Wed, 3 Dec 2014 00:51:20 GMT
can't be that difficult to implement a suitable \superscript macro.
I committed the changes to texinfo.tex (and the documentation) so that
@sub/@sup stay in math mode inside @math, and do a textual
sub/superscript outside @math.
The remaining complication (unsolved) is if someone wants a word as a
sub/superscript inside @math, like my original example,
You suggested using @asis, but that doesn't make sense to me; @asis
should just mean, well, as-is. @r works, more or less by accident, but
@i (what one would typically want) doesn't, for two reasons: 1) @math
switches to (99% of) @tex mode, where @i = plain \i = dotless i. That
would be easy enough (though undesirable in principle, it seems to me)
to kludge around, but then there is (2) the TeX font families aren't all
set up since we haven't needed them before.
It all gets pretty painful, and it seems an unlikely usage, so I'm
hoping we can just leave it unsolved for now.
[Prev in Thread] Current Thread [Next in Thread] | {"url":"https://mail.gnu.org/archive/html/bug-texinfo/2014-12/msg00004.html","timestamp":"2024-11-06T22:12:29Z","content_type":"text/html","content_length":"5752","record_id":"<urn:uuid:5a3564ee-b316-47cc-ae28-38b4eab60824>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00776.warc.gz"} |
Multiplying And Dividing Mixed Numbers Worksheet Pdf - Divisonworksheets.com
Multiplication And Division Of Integers Worksheet Pdf – It is possible to help your child learn and refresh their division skills with the help of division worksheets. There are a variety of
worksheets available and you are able to make your own. They’re great because you can easily download them and alter them according to … Read more | {"url":"https://www.divisonworksheets.com/tag/multiplying-and-dividing-mixed-numbers-worksheet-pdf/","timestamp":"2024-11-13T16:14:16Z","content_type":"text/html","content_length":"48091","record_id":"<urn:uuid:a0879330-9bc5-460b-8a12-5e40c9847339>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00566.warc.gz"} |
Carbonyl sulfide (COS) is the most abundant sulfur compound in the atmosphere, with tropospheric mixing ratios around 500ppt. During the last decade, studies on COS have grown in number, mainly motivated by the use of COS exchange as a tracer for photosynthetic carbon uptake (also known as gross primary productivity, GPP). COS shares the same diffusional pathway in leaves as carbon dioxide (CO2), but in contrast to CO2, COS is destroyed completely by hydrolysis and is not emitted. This one-way flux makes it a promising proxy for GPP.
Eddy covariance (EC) measurements are the backbone of gas flux measurements at the ecosystem scale. Protocols for instrument setup, monitoring, and data processing have recently been harmonized for CO2 as well as for methane (CH4) and nitrous oxide (N2O) within the Integrated Carbon Observation System (ICOS) flux stations. The EC data processing chain includes despiking and filtering raw data, rotating the coordinate system to align with the prevailing streamlines, determining the time lag between the sonic anemometer and the gas analyzer signals, removing trends to separate the turbulent fluctuations from the mean trend, calculating covariances, and correcting for flux losses at low and high frequencies. After processing, fluxes are quality-filtered and flagged according to atmospheric turbulence characteristics and stationarity.
Studies on ecosystem COS flux measurements with the EC technique are still limited, and there is no standardized flux processing protocol for COS EC fluxes. The table below summarizes the processing steps used in earlier studies. Most studies do not report all necessary steps and in particular often ignore the storage change correction. COS EC flux measurements and data processing have similarities with other trace gases (e.g., CH4 and N2O) that often have low signal-to-noise ratios, especially regarding time lag determination and frequency response corrections. Time lag determination is essential for aligning wind and gas concentration measurements to minimize biases in flux estimates. Frequency response corrections, on the other hand, are needed to correct the flux underestimation due to signal losses at both high and low frequencies. Unlike for CH4 or N2O, no sudden bursts or sinks are expected for COS, and in that sense some processing steps for COS are more like those for CO2 (e.g., despiking, storage change correction, and friction velocity, u*, filtering). Earlier work has described the issues of different detrending methods, high-frequency spectral correction, time lag determination, and u* filtering. However, no study has compared different methods for time lag determination or high-frequency spectral correction in terms of their effects on COS fluxes. This weakens our ability to assess uncertainties in COS flux measurements.
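As a minimal illustration of empirical time lag determination — the "maximum cross-covariance" approach listed in the table — the function below scans candidate lags and keeps the one with the largest absolute covariance. The synthetic data and variable names are ours:

```python
import numpy as np

def find_lag(w, c, max_lag):
    """Lag (in samples, gas behind wind) maximizing |cross-covariance|."""
    wp, cp, n = w - w.mean(), c - c.mean(), len(w)
    cov = [np.mean(wp[:n - k] * cp[k:]) for k in range(max_lag + 1)]
    return int(np.argmax(np.abs(cov)))

# Synthetic check: gas signal is the wind signal delayed by 7 samples
rng = np.random.default_rng(1)
w = rng.normal(size=5000)
c = np.concatenate((np.zeros(7), w[:-7]))
print(find_lag(w, c, max_lag=20))  # -> 7
```

With a low signal-to-noise gas such as COS, the cross-covariance peak can be flat or noisy, which is one motivation for constraining the search window or borrowing the lag from a high-signal species such as CO2 (the "Max |w′χCO2′|" entries in the table).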
In this study, we compare different methods for detrending, time lag determination, and high-frequency spectral correction. In addition, we compare two methods for storage change flux calculation,
discuss the nighttime low-turbulence problem in the context of COS EC measurements, introduce a method for gap-filling COS fluxes for the first time, and discuss the most important sources of random
and systematic errors. Through the evaluation of these processing steps, we aim to settle on a set of recommended protocols for COS flux calculation.
Processing steps used in previous COS eddy covariance studies. Detrending methods include linear detrending (LD), block averaging (BA), and recursive filtering (RF). Spectral corrections include an experimental method and a theoretical method. Processing steps not specified in the original articles are marked with a dash. The last row summarizes the recommended processing options in this study.
| Reference | Sampling frequency | Detrending | Time lag | Spectral corrections | Storage change correction | u* filtering |
|---|---|---|---|---|---|---|
|  | 1Hz | LD | Max \|w′χCOS′\| | Exp. method | – | – |
|  | 10Hz | – | Max \|w′χCOS′\| | – | – | u* > 0.15ms-1 |
|  | 10Hz | – | Max \|w′χCOS′\| | – | – | u* > 0.15ms-1 |
|  | 4Hz | BA | Max \|w′χH2O′\| | – | Neglected | u* > 0.17ms-1 |
|  | 10Hz | BA/LD/RF | Max \|w′χCOS′\| | Exp. method | EC meas. | u* > 0.12ms-1 |
|  | 10Hz | LD | Max \|w′χCO2′\| | Exp. method | Profile meas. | u* > 0.3ms-1 |
|  | 4Hz | BA | – | Exp. method, CO2 spectrum | Profile meas. | u* > 0.17ms-1 |
|  | 1Hz | – | – | – | Neglected | – |
|  | 10Hz | LD | Max \|w′χCO2′\| | Exp. method | Profile meas. | u* > 0.3ms-1 |
|  | 5 or 10Hz | LD | Max \|w′χCOS′\| | Exp. method | – | – |
| Recommendation by this study | 10Hz | Site-specific | Max \|w′χCO2′\| | Exp. method, CO2 spectrum | Profile meas. | Site-specific threshold |
In this study we used COS and CO2 EC flux datasets collected at the Hyytiälä ICOS station in Finland from 26 June to 2 November 2015. The site has a long history of flux and concentration observations, and a COS analyzer was introduced to the site in March 2013. In this section, we describe methods used in the reference and alternative data processing schemes.
Measurements were made in a boreal Scots pine (Pinus sylvestris) stand at the Station for Measuring Forest Ecosystem–Atmosphere Relations (SMEAR II) in Hyytiälä, Finland (61°51′N, 24°17′E; 181m above sea level). The Scots pine stand was established in 1962 and reaches at least 200m in all directions and about 1km to the north. The site is characterized by modest height variation, and an oblong lake is situated about 700m to the southwest of the forest station. Canopy height was 17m, and the all-sided leaf area index (LAI) was approximately 8m2m-2 in 2015. EC measurements were
done at 23m height. Sunrise time varied from 02:37 in June to 07:55 in November, while sunset was at 22:14 at the beginning and 16:17 at the end of the measurement period. All results are presented
in Finnish winter time (UTC+2), and nighttime is defined as periods with a photosynthetic photon flux density (PPFD) of <3µmolm-2s-1.
The EC setup consisted of an ultrasonic anemometer (Solent Research HS1199, Gill Instruments Ltd., England, UK) for measuring wind speed in three dimensions and sonic temperature; an Aerodyne quantum
cascade laser spectrometer (QCLS; Aerodyne Research Inc., Billerica, MA, USA) for measuring COS, CO2, and water vapor (H2O) mole fractions; and an LI-6262 infrared gas analyzer (LI-COR, Lincoln, NE,
USA) for measuring H2O and CO2 mole fractions. All measurements were recorded at 10Hz frequency and were made with a flow rate of approximately 10Lmin-1 for the QCLS and 14Lmin-1 for the
LI-6262. The PTFE sampling tubes were 32 and 12m long for QCLS and LI-6262, respectively, and both had an inner diameter of 4mm. Two PTFE filters were used upstream of the QCLS inlet to prevent any
contaminants from entering the analyzer sample cell: one coarse filter (0.45µm, Whatman) followed by a finer filter (0.2µm, Pall Corporation) at approximately 50cm distance to the analyzer inlet.
The Aerodyne QCLS used an electronic pressure control system to control the pressure fluctuations in the sampling cell. The QCLS was run at 20Torr sampling cell pressure. An Edwards XDS35i scroll
pump (Edwards, England, UK) was used to pump air through the sampling cell, while the LI-6262 had flow control by an LI-670 flow control unit.
Background measurements of high-purity nitrogen (N2) were done every 30min for 26s to remove background spectral structures in the QCLS . In addition, a calibration cylinder was sampled each night
at 00:00:45 for 15s. The calibration cylinder consisted of COS at 429.6±5.6ppt, CO2 at 408.37±0.05ppm, and CO at 144.6±0.2ppb. The cylinder was calibrated against the NOAA-2004 COS scale,
WMO-X2007 CO2 scale, and WMO-X2004 CO scale using cylinders that were calibrated at the Center for Isotope Research of the University of Groningen in the Netherlands . The standard deviation
calculated from the cylinder measurements was 19ppt for COS mixing ratios and 1.3ppm for CO2 at 10Hz measurement frequency.
It has previously been shown that water vapor in the sample air can affect the measurements of COS through spectral interference of the COS and H2O absorption lines . This spectral interference was
corrected for by fitting the COS spectral line separately from the H2O spectral line.
The computer embedded in the Aerodyne QCLS and the computer that controlled the sonic anemometer and logged the LI-6262 data were synchronized once a day with a separate server computer. Analog data
signals from the LI-6262 were acquired by the Gill anemometer sensor input unit, which digitized the analog data and appended them to the digital output data string. Digital Aerodyne data were
collected on the same computer with a serial connection and recorded in separate files with custom software (COSlog).
Atmospheric concentration profiles were measured with another Aerodyne QCLS at a sampling frequency of 1Hz. Air was sampled at five heights: 125, 23, 14, 4, and 0.5m. A multiposition Valco valve
(VICI, Valco Instruments Co. Inc.) was used to switch between the different profile heights and calibration cylinders. Each measurement height was sampled for 3min each hour. One calibration
cylinder was measured twice for 3min each hour to correct for instrument drift, and two other calibration cylinders were measured once for 3min each hour to assess the long-term stability of the
measurements. A background spectrum was measured once every 6h using high-purity nitrogen (N 7.0; for more details, see ). The overall uncertainty of this analyzer was determined to be 7.5ppt for
COS and 0.23ppm for CO2 at 1Hz frequency . The measurements are described in more detail in .
In this section, we describe the processing steps of EC flux calculation from raw data handling to final flux gap filling and uncertainties. Figure provides a graphical outline of all processing
steps. The different processing options presented here are applied and discussed in Sect. . In the next section, the different processing schemes are compared to a “reference scheme”, which consists
of linear detrending, planar-fit coordinate rotation, using CO2 time lag for COS, and experimental spectral correction. A subset of the data – nighttime fluxes processed with the reference scheme –
was published in .
Different EC processing steps summarized. Yellow boxes refer to steps only used for COS data processing, blue boxes to steps used only for CO2 data, and green boxes to steps that are relevant for
both gases. Recommended options are written in bold. Options that are used in the reference processing scheme for COS in this study are planar-fit coordinate rotation, linear detrending, CO2 time
lag, experimental high-frequency correction, low-frequency correction according to , and storage change flux from measured concentration profile. The abbreviations most commonly used throughout the
text are written in parentheses.
For flux calculation, the sonic anemometer and gas analyzer signals need to be synchronized. This is particularly relevant for fully digital systems where digital data streams are gathered from
different instruments that can be completely asynchronous to each other . The following procedure was used to combine two data files of 30min length (of which one includes sonic anemometer and
LI-6262 data and the other includes Aerodyne QCLS data): (1) the cross-covariance of the two CO2 signals (QCLS and LI-6262) was calculated; (2) the QCLS data were shifted so that the cross-covariance
of the CO2 signals was maximized. Note that this results in the same time lag for the QCLS and LI-6262. The time shift between the two computers was at most 10s, with most values between 0 and 2s within a day. It is also possible to shift the time series by maximizing the covariance of CO2 and w, which then already accounts for the time lag (Fig. S1 in the Supplement), or
combine files according to their time stamps and allow a longer window in which the time lag is searched. However, in this case it is important that the time lag (and computer time shift) is
determined from CO2 measurements only as using COS data might result in several covariance peaks in longer time frames due to low signal-to-noise ratios and small fluxes.
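The clock-alignment step described above can be sketched as follows. This is a minimal illustration only: the function name `best_shift` and loop formulation are ours, and a 10 s shift at 10 Hz sampling corresponds to `max_shift = 100`.

```python
import numpy as np

def best_shift(ref, sig, max_shift):
    """Integer sample shift maximizing the cross-covariance between two
    records (e.g. the LI-6262 and QCLS CO2 signals). A positive result
    means `sig` lags `ref` by that many samples."""
    ref = np.asarray(ref, float) - np.mean(ref)
    sig = np.asarray(sig, float) - np.mean(sig)
    n = ref.size
    best, best_cov = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            cov = np.mean(ref[: n - s] * sig[s:])   # sig delayed by s
        else:
            cov = np.mean(ref[-s:] * sig[: n + s])  # sig advanced by -s
        if cov > best_cov:
            best, best_cov = s, cov
    return best
```

After the shift is found, one file is simply re-indexed by that number of samples before the two streams are merged.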
Raw data were then despiked so that the difference between subsequent data points was a maximum of 5ppm for CO2, 1mmolmol-1 for H2O, 200ppt for COS, and 5ms-1 for w. After despiking, the
missing values were gap-filled by linear interpolation.
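A minimal sketch of the despiking and gap-filling step; the helper name is ours, and comparing each point against the last accepted value is one possible reading of the "difference between subsequent data points" criterion.

```python
import numpy as np

def despike_and_fill(x, max_step):
    """Flag points whose jump from the last accepted value exceeds
    max_step (e.g. 5 ppm for CO2, 200 ppt for COS, 5 m/s for w), then
    fill the flagged points by linear interpolation. The first point is
    always kept in this simple sketch."""
    x = np.asarray(x, float).copy()
    bad = np.zeros(x.size, dtype=bool)
    last = x[0]
    for i in range(1, x.size):
        if abs(x[i] - last) > max_step:
            bad[i] = True          # spike: mark for interpolation
        else:
            last = x[i]            # accepted: becomes new reference
    idx = np.arange(x.size)
    x[bad] = np.interp(idx[bad], idx[~bad], x[~bad])
    return x
```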
We used the planar-fit method to rotate the coordinate frame so that the turbulent flux divergence is as close as possible to the total flux divergence. In this method, w‾ is assumed to be zero only
on longer timescales (weeks or even longer). A mean streamline plane is fitted to a long set of wind measurements. Then the z axis is fixed as perpendicular to the plane, and the v‾ wind component is
fixed to be zero . In addition, we used 2D coordinate rotation for coordinate rotation in two processing schemes to determine the flux uncertainty that is related to flux data processing (Sect. ).
First, the average u component was forced to be along the prevailing wind direction. The second rotation was performed to force the mean vertical wind speed (w‾) to be zero . In this way, the x axis
is parallel and z axis perpendicular to the mean flow. While 2D coordinate rotation is the most commonly used rotation method, the planar-fit method brings benefits especially in complex terrain and
is nowadays recommended as the preferred coordinate rotation method .
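The long-term plane fit underlying the planar-fit method can be illustrated as follows (a sketch in the spirit of Wilczak et al., 2001; the function name and least-squares details are ours).

```python
import numpy as np

def planar_fit_plane(u, v, w):
    """Fit the mean streamline plane w = b0 + b1*u + b2*v over a long
    set of (e.g. half-hourly mean) wind components. b1 and b2 describe
    the sonic tilt; the unit normal of the plane defines the new z axis."""
    A = np.column_stack([np.ones(len(u)), u, v])
    b, *_ = np.linalg.lstsq(A, np.asarray(w, float), rcond=None)
    b0, b1, b2 = b
    # unit vector perpendicular to the fitted plane (new z axis)
    k = np.array([-b1, -b2, 1.0]) / np.sqrt(b1**2 + b2**2 + 1.0)
    return b0, b1, b2, k
```

Once `k` is known, individual 30-min wind vectors are rotated so that the vertical axis coincides with `k`, and the remaining rotation fixes v‾ to zero.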
To separate the mixing ratio time series into mean and fluctuating parts, we tested three different detrending options: (1) 30min block averaging (BA), (2) linear detrending (LD), and (3) recursive
filtering (RF) with a time constant of 30s. BA is the most commonly used method for averaging the data with the benefit of damping the turbulent signal the least. On the other hand, BA may lead to
an overestimation of the fluctuating part (and thus overestimation of the flux), for example due to instrumental drift or large-scale changes in atmospheric conditions that are not related to
turbulent transfer . The LD method fits a linear regression to the averaging period and thus gets rid of instrumental drift and to some extent weather changes but may lead to underestimation of the
flux if the linear trend was related to actual turbulent fluctuations in the atmosphere. The third method, RF, uses a time window (here 30s) for a moving average over the whole averaging period. RF
brings the biggest correction and thus lowest flux estimate compared to other methods but effectively removes biased low-frequency contributions to the flux. An Allan variance was determined for a
time period when the instrument was sampling from a gas cylinder . The time constant of 30s for RF was determined from the Allan plot (Fig. S4) as the system starts to drift in a nonlinear fashion
at 30s following the approach suggested by . The effect of different detrending methods is shown and discussed in Sect. .
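The three detrending options can be sketched as below; the RF branch implements a recursive (exponential) running-mean filter, which is our assumption about the exact filter form, with the 30 s time constant from the Allan analysis.

```python
import numpy as np

def fluctuations(x, method="LD", dt=0.1, tau=30.0):
    """Fluctuating part x' of one 30-min block using block averaging
    (BA), linear detrending (LD), or recursive filtering (RF) with time
    constant tau (s); dt is the sampling interval (0.1 s at 10 Hz)."""
    x = np.asarray(x, float)
    if method == "BA":
        return x - x.mean()
    if method == "LD":
        t = np.arange(x.size)
        return x - np.polyval(np.polyfit(t, x, 1), t)
    if method == "RF":
        a = np.exp(-dt / tau)              # filter coefficient
        mean = np.empty_like(x)
        mean[0] = x[0]
        for i in range(1, x.size):
            mean[i] = a * mean[i - 1] + (1 - a) * x[i]
        return x - mean
    raise ValueError(method)
```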
The time lag between w and COS signals was determined using the following five methods:
The time lag was determined from the maximum difference of the cross-covariance of the COS mixing ratio and w (w′χCOS′‾) to a line between covariance values at the lag window limits (referred to
hereafter as COSlag). This also applies to other covariance methods explained below and prevents the time lag from being exactly at the lag window limits. Lag window limits (from 1.5 to 3.8s) were
determined based on the nominal time lag of 2.6s calculated from the flow rate and tube dimensions. More flexibility was given to the upper end of the lag window as time lags have been found to be
longer than the nominal time lag .
The time lag was determined from the maximum difference of the cross-covariance of the CO2 mixing ratio and w (w′χCO2′‾) to a line between covariance values at the lag window limits within the lag
window of 1.5–3.8s (referred to hereafter as CO2lag).
The time lag was determined using a constant time lag of 2.6s, which was the nominal time lag and the most common lag for CO2 with our setup (referred to hereafter as Constlag).
The time lag was determined from the maximum difference of the smoothed w′χCOS′‾ to a line between covariance values at the lag window limits. The cross-covariance was smoothed with a 1.1s running
mean (referred to hereafter as RMlag). The averaging window was chosen so that it provided a more distinguishable covariance maximum while still preventing a shift in the timing of the maximum.
The time lag was determined from a combination of COSlag and CO2lag. First, the random flux error due to instrument noise was calculated as RE = √(σ²_cnoise σ²_w / N), where the instrumental noise σ_cnoise was estimated with the method proposed by , σ_w is the standard deviation of the vertical wind speed, and N is the number of data points in the averaging period. The random error was then compared to the raw maximum covariance. If the maximum covariance was higher than 3 times the random flux error, the COSlag method was used for time lag determination. If the covariance was dominated by noise (i.e., the covariance was smaller than 3 times the random error) or COSlag was at the lag window limit, the CO2lag method was selected, as proposed in Nemitz et al. (2018) (referred to hereafter as DetLimlag).
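The maximum-difference-to-line criterion used in methods (1), (2), and (4) above can be sketched as follows; the function name and the discrete lag grid are ours, with the 1.5–3.8 s window from the text.

```python
import numpy as np

def lag_from_covariance(w, c, dt=0.1, lag_min=1.5, lag_max=3.8):
    """Time lag as the maximum absolute *difference* between the w'-c'
    cross-covariance and the straight line joining the covariance values
    at the lag-window limits, which prevents lags from sticking to the
    window edges."""
    w = np.asarray(w, float) - np.mean(w)
    c = np.asarray(c, float) - np.mean(c)
    k0 = int(round(lag_min / dt))
    k1 = int(round(lag_max / dt))
    lags = np.arange(k0, k1 + 1)
    # cov[k]: gas signal delayed by k samples relative to w
    cov = np.array([np.mean(w[: w.size - k] * c[k:]) for k in lags])
    line = np.linspace(cov[0], cov[-1], cov.size)
    best = lags[np.argmax(np.abs(cov - line))]
    return best * dt
```

Using the absolute difference makes the criterion work for both uptake (negative covariance) and emission.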
Some of the turbulence signal is lost at both high and low frequencies due to losses in sampling lines, inadequate frequency response of the instrument, and inadequate averaging times among other
reasons . In this section, we describe both high- and low-frequency loss corrections in detail. We tested two high-frequency correction methods, described below, simultaneously correcting for
low-frequency losses. One run was performed with neither low-frequency nor high-frequency response corrections.
Especially in closed-path systems, the high-frequency turbulent fluctuations of the target gas damp at high frequencies due to long sampling lines. Other reasons for high-frequency losses include
sensor separation and inadequate frequency response of the sensor. In turn, high-frequency losses cause the normalized cospectrum of the gas with w to be lower than expected at high frequencies,
resulting in lower fluxes. The flux attenuation factor (FA) for a scalar s is defined as FA = (w′s′‾)_meas / w′s′‾ = ∫0^∞ T_ws(f) Co_ws(f) df / ∫0^∞ Co_ws(f) df, where (w′s′‾)_meas and w′s′‾ are the measured and unattenuated covariances, respectively; T_ws(f) is the net transfer function, specific to the EC system and scalar s; and Co_ws(f) is the cospectral density of the scalar flux w′s′‾. For solving FA, a cospectral
model and transfer function are needed. In this study, we tested the effect of high-frequency spectral correction by applying either an analytical correction for high-frequency losses or an
experimental correction . The analytical correction was based on scalar cospectra defined in , the experimental approach was based on the assumption that temperature cospectrum is measured without
significant error, and the normalized scalar cospectra were compared to the normalized temperature cospectrum .
In the analytical approach, the integral in Eq. () is solved analytically by assuming a model cospectrum of the form f Co_ws(f)/w′s′‾ = (2/π) (f/f_m) / (1 + (f/f_m)²) and a transfer function T_ws(f) = 1/(1 + (2πfτ_s)²). The flux attenuation then results in FA_H = [1 + (2πτ_s f_m)^α]^(-1), where α = 7/8 for neutral and unstable stratification and α = 1 for stable stratification in the surface layer, τ_s is the overall EC system time constant, and f_m is the frequency of the logarithmic cospectrum peak, estimated from f_m = n_m ū/(z_m - d), where n_m is the normalized frequency of the cospectral peak, ū the mean wind speed, z_m the measurement height, and d the displacement height. The normalized frequency of the cospectral peak n_m depends on atmospheric stability ζ = (z_m - d)/L: n_m = 0.085 for ζ ≤ 0 and n_m = 2.0 - 1.915/(1 + 0.5ζ) for ζ > 0, where L is the Obukhov length (Fig. S8). The model cospectrum for the analytical high-frequency spectral correction was adapted from and given as f Co_wθ(n)/w′θ′‾ = 1.05 (n/n_m)/(1 + 1.33 n/n_m)^(7/4) for ζ ≤ 0, n < 1; 0.387 n
In the experimental approach, we solved Eq. () numerically and used fits to the measured temperature cospectra to define a site-specific scalar cospectral model: f Co_wθ(n)/w′θ′‾ = 10.36 (n/n_m)/(1 + 4.82 n/n_m)^(3.05) for ζ ≤ 0, n < 1; 1.85 (n/n_m)/(1 + 3.80 n/n_m)^(7/3) for ζ ≤ 0, n ≥ 1; and 0.094 (n/n_m)/(1 + 9.67 n/n_m)^(1.74) for ζ > 0, where the stability dependency of the cospectral peak frequency n_m (Fig. S8) followed Eq. (8). In both approaches (analytical and experimental), the time constant τ_s was empirically estimated by fitting the transfer function T_ws(f) to the normalized ratio of cospectral densities, T_ws = [N_θ Co_ws(f)] / [N_s Co_wθ(f)], where N_θ and N_s are normalization factors, and Co_ws and Co_wθ are the scalar and temperature cospectra, respectively. The estimated time constant was 0.68s for the Aerodyne QCLS and 0.35s for the LI-6262.
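Solving Eq. () numerically for the attenuation factor can be sketched as below, using the unstable-stratification branches of the site-specific cospectral model quoted in the text; the frequency grid, function name, and restriction to unstable conditions are our illustrative choices.

```python
import numpy as np

def flux_attenuation(tau_s, n_m, u_mean, z_minus_d):
    """Numerical FA = integral(T_ws * Co) / integral(Co) with a
    first-order transfer function and the site-specific unstable-branch
    cospectral model from the text (10.36/4.82/3.05 for n < 1,
    1.85/3.80/(7/3) for n >= 1)."""
    f = np.logspace(-4, 2, 4000)              # frequency grid (Hz)
    n = f * z_minus_d / u_mean                # normalized frequency
    x = n / n_m
    fco = np.where(n < 1.0,
                   10.36 * x / (1.0 + 4.82 * x) ** 3.05,
                   1.85 * x / (1.0 + 3.80 * x) ** (7.0 / 3.0))
    co = fco / f                              # Co(f) from f*Co(f)
    T = 1.0 / (1.0 + (2.0 * np.pi * f * tau_s) ** 2)
    trap = lambda y: np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(f))
    return trap(T * co) / trap(co)
```

The measured flux is then divided by FA to correct for the high-frequency loss.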
Detrending the turbulent time series, especially with LD or RF methods, may also remove part of the real low-frequency variations in the data , which should be corrected for in order to avoid flux
underestimation. Low-frequency correction in this study for different detrending methods was done according to .
The calculated fluxes were accepted when the following criteria were met: the second wind rotation angle (θ) was below 10∘, the number of spikes in 1 half hour was less than 100, the COS mixing ratio
was higher than 200ppt, the CO2 mixing ratio ranged between 300 and 650ppm, and the H2O mixing ratio was higher than 1ppb.
We used a similar flagging system for micrometeorological quality criteria as for COS and CO2: flag 0 was given if flux stationarity was less than 0.3 (meaning that covariances calculated over 5min
intervals deviate less than 30% from the 30min covariance), kurtosis was between 1 and 8, and skewness was within a range from -2 to 2. Flag 1 was given if flux stationarity was from 0.3 to 1 and
if kurtosis and skewness were within the ranges given earlier. Flag 2 was given if these criteria were not met.
In addition to these filtering and flagging criteria, we added friction velocity (u*) filtering to screen out data collected under low-turbulence conditions. A decrease in measured EC flux is usually
observed under low-turbulence conditions, although it is assumed that gas exchange should not decrease due to low turbulence. While this assumption holds for CO2, it may not be justified for COS , as
is further discussed in Sect. . The appropriate u* threshold was derived from a 99% threshold criterion . The lowest acceptable u* value was determined from both COS and CO2 nighttime fluxes.
Storage change fluxes were calculated from gas mixing ratio profile measurements and by assuming a constant profile throughout the canopy using EC system mixing ratio measurements. Storage change
fluxes from mixing ratio profile measurements were calculated using the formula F_stor = [p/(R T_a)] ∫0^(z_m) ∂χ_c(z,t)/∂t dz, where p is the atmospheric pressure, T_a the air temperature, R the universal gas constant, and χ_c(z) the gas mixing ratio at each measurement height. The integral was determined from the hourly measured χ_c profile at 0.5, 4, 14, and 23m by integrating an exponential fit through the data.
When the profile measurement was not available, storage was calculated from the COS (or CO2) mixing ratio measured by the EC setup.
Another storage change flux calculation was done assuming a constant profile from the EC measurement height (23m) to the ground level. A running average over a 5h window was applied to the COS
mixing ratio data to reduce the random noise of measurements.
The storage change fluxes were used to correct the EC fluxes for storage change of COS and CO2 below the flux measurement height.
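The storage flux integral can be sketched with a trapezoidal rule over the measurement heights (the text integrates an exponential profile fit instead, so this is only an approximate stand-in; the function name is ours).

```python
import numpy as np

def storage_flux(heights, chi_before, chi_after, dt, p, Ta, R=8.314):
    """F_stor = p/(R*Ta) * integral of d(chi)/dt from 0 to z_m,
    approximated by the trapezoidal rule over the profile heights.
    chi_* are mole fractions (mol/mol), dt the time between consecutive
    profiles (s); the result is in mol m-2 s-1."""
    heights = np.asarray(heights, float)
    dchi_dt = (np.asarray(chi_after, float)
               - np.asarray(chi_before, float)) / dt
    integral = np.sum(0.5 * (dchi_dt[1:] + dchi_dt[:-1])
                      * np.diff(heights))
    return p / (R * Ta) * integral
```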
The flux uncertainty was calculated according to ICOS recommendations. First, the flux random error was estimated from the variance of the covariance as ϵ_rand = √{(1/N) [Σ_(j=-m)^(m) γ̂_(w,w)(j) γ̂_(c,c)(j) + Σ_(j=-m)^(m) γ̂_(w,c)(j) γ̂_(c,w)(j)]}, where N is the number of data points (18000 for 30min of EC measurements at 10Hz) and m the number of samples sufficiently large to capture the integral timescale (18000 was used in this study). γ̂_(w,w) is the variance and γ̂_(w,c) the covariance of the measured variables w and c (in this case, the vertical wind velocity and gas mixing ratio).
As the chosen processing scheme affects the resulting flux, the uncertainty related to the chosen processing options has to be accounted for. This uncertainty was estimated as ϵ_proc = [max(F_c,j) - min(F_c,j)] / √12, where F_c,j is the flux calculated according to j = 1, …, N different processing schemes, and N is the number of possible processing scheme combinations that are equally reliable but cause variability in the fluxes. For simplicity, the processing steps considered here are detrending, coordinate rotation, and high-frequency spectral correction, leading to N = 8 processing schemes: BA with 2D coordinate rotation and experimental spectral correction, BA with 2D rotation and analytical correction, BA with planar fitting and experimental correction, BA with planar fitting and analytical correction, LD with 2D rotation and experimental correction, LD with 2D rotation and analytical correction, LD with planar fitting and experimental correction, and LD with planar fitting and analytical correction. As all the different processing schemes also lead to slightly different random errors, the flux random error was estimated as the root mean square of Eq. () over the different processing schemes: ϵ̄_rand = √(Σ_(j=1)^(N) ϵ²_rand,j / N). The combined flux uncertainty is then the sum of ϵ̄_rand and ϵ_proc in quadrature: ϵ_comb = √(ϵ̄_rand² + ϵ_proc²). The total uncertainty at a 95% confidence level is then ϵ_total = 1.96 ϵ_comb.
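The uncertainty combination for a single 30-min flux can be sketched as follows; the function name is ours.

```python
import numpy as np

def total_uncertainty(eps_rand_schemes, flux_schemes):
    """Combine the uncertainty sources for one 30-min flux:
    eps_proc = (max - min)/sqrt(12) over N equally plausible processing
    schemes (the standard deviation of a uniform distribution over that
    range), the random error as the RMS of the per-scheme random
    errors, both summed in quadrature and expanded by 1.96 to a 95%
    confidence interval."""
    eps_proc = (np.max(flux_schemes) - np.min(flux_schemes)) / np.sqrt(12.0)
    eps_rand = np.sqrt(np.mean(np.square(eps_rand_schemes)))
    return 1.96 * np.hypot(eps_rand, eps_proc)
```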
It should be noted, though, that this uncertainty estimate holds for single 30min fluxes only. When working with fluxes averaged over time, the total uncertainty cannot be directly propagated to the long-term averages because the two uncertainty sources behave differently: the random uncertainty is expected to decrease with an increasing number of observations, while the processing-related uncertainty is not much affected by time averaging. The random uncertainty of a flux averaged over multiple observations is obtained as ⟨ϵ̄_rand⟩ = √(Σ_(i=1)^(N) ϵ̄²_rand,i / N²), where N is the number of observations and ϵ̄_rand,i the random uncertainty of each observation. In this study, we calculated the time-averaged processing uncertainty from Eq. () with time-averaged fluxes from the four different processing schemes. The total uncertainty of the time-averaged flux was then calculated similarly from Eqs. () and () with time-averaged random and processing uncertainties.
Missing CO2 fluxes were gap-filled according to , while missing COS fluxes were replaced by simple model estimates, or by hourly mean fluxes if model estimates were not available, in a way comparable to the gap filling of CO2 fluxes.
The COS gap-filling function was parameterized in a moving time window of 15d to capture the seasonality of the fluxes. To calculate gap-filled fluxes, the parameters were interpolated daily. Gaps
where any driving variable of the regression model was missing were filled with the mean hourly flux during the 15d period.
We tested different combinations of linear or saturating (rectangular hyperbola) functions of the COS flux on PPFD and linear functions of the COS flux against vapor pressure deficit (VPD) or
relative humidity (RH). The saturating light response function with the mean nighttime flux as a fixed offset term explained the short-term variability of COS flux relatively well, but the residuals
as a function of temperature, RH, and VPD were clearly systematic. Therefore, for the final gap filling, we used a combination of saturating function on PPFD and linear function on VPD that showed
good agreement with the measured fluxes while having a relatively small number of parameters: F_COS = a I/(I + b) + c D + d, where I is PPFD (µmolm-2s-1); D is VPD (kPa); and a (pmolm-2s-1), b (µmolm-2s-1), c (pmolm-2s-1kPa-1), and d (pmolm-2s-1) are fitting parameters. Parameter d was set to the median nighttime COS flux over the 15d window, and the other parameters were estimated from the fit.
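A possible implementation of the gap-filling fit: with d fixed to the median nighttime flux, the model is linear in a and c for any given b, so a simple grid over b combined with linear least squares can stand in for a nonlinear optimizer. The grid range and helper names are our illustrative choices.

```python
import numpy as np

def fcos_model(I, D, a, b, c, d):
    """Saturating light response plus linear VPD term:
    F_COS = a*I/(I+b) + c*D + d."""
    return a * I / (I + b) + c * D + d

def fit_gapfill(I, D, F, d):
    """Fit a, b, c with d fixed: for each candidate b the model is
    linear in (a, c) and solved by least squares; b is taken from a
    coarse grid (hypothetical range 10-2000 umol m-2 s-1)."""
    best = None
    for b in np.linspace(10.0, 2000.0, 200):
        X = np.column_stack([I / (I + b), D])
        coef, *_ = np.linalg.lstsq(X, F - d, rcond=None)
        sse = np.sum((fcos_model(I, D, coef[0], b, coef[1], d) - F) ** 2)
        if best is None or sse < best[0]:
            best = (sse, coef[0], b, coef[1])
    return best[1], best[2], best[3]  # a, b, c
```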
In order to check the contribution of different detrending methods to the resulting flux, we made flux calculations with different methods: block averaging (BA), linear detrending (LD), and recursive
filtering (RF) using the same time lag (CO2 time lag) for all runs (Fig. S5).
The largest median COS flux (most negative) was obtained from RF (-12.0pmolm-2s-1), while the smallest median (least negative) flux resulted from BA (-10.7pmolm-2s-1), and LD
(-11.3pmolm-2s-1) differed by 5.3% from BA (Table ). The range of the 30min COS flux was the largest (from -183.2 to 82.56pmolm-2s-1) when using the BA detrending method, consistent with a
similar comparison in . For comparison, the range was from -107.3 to 73.1pmolm-2s-1 for LD and from -164.9 to 36.8pmolm-2s-1 for RF. While it was surprising that RF resulted in a more negative median flux than BA, this is likely explained by the larger variation in BA, in which high negative and positive flux values partly compensate each other: the positive flux values with BA and LD are higher than with RF. In addition, it has to be kept in mind that the fluxes were corrected for low-frequency losses, which in part also accounts for the effects of detrending.
For CO2, the largest median flux resulted from BA (-0.62µmolm-2s-1). The smallest median flux resulted from LD (-0.56µmolm-2s-1) and RF (-0.59µmolm-2s-1). The difference between median CO2
flux with BA and LD was 10.7%.
The most commonly recommended averaging methods are BA and LD because they have less of an impact on spectra and require fewer low-frequency corrections. These are also the most used detrending
methods in previous COS EC flux studies (Table ). RF may underestimate the flux , but as spectroscopic analyzers are prone to fringe effects under field conditions, for example, the use of RF might
still be justified . Regular checking of raw data provides information on instrumental drift and helps to determine the optimal detrending method. It is also recommended to check the contribution of
each detrending method to the final flux to better understand what the low-frequency contribution in each measurement site and setup is.
Median COS fluxes (pmolm-2s-1) and CO2 fluxes (µmolm-2s-1) during 26 June to 2 November 2015 and their difference to the reference fluxes when using different processing options. Median
reference fluxes are -11.3pmolm-2s-1 and -0.56µmolm-2s-1 for COS and CO2, respectively, and median daytime (nighttime) fluxes are -16.8 (-4.1)pmolm-2s-1 and -4.58 (1.23)µmolm-2s-1 when
using linear detrending, CO2 time lag, and experimental high-frequency spectral correction. NA denotes that data are not available.
Processing option (group) | Median FCOS | Difference to reference | Daytime median FCOS | Nighttime median FCOS | Median FCO2 | Difference to reference | Daytime median FCO2 | Nighttime median FCO2
BA (detrending) | -10.7 | 5.3% | -16.0 | -4.2 | -0.62 | 10.7% | -4.49 | 1.17
RF (detrending) | -12.0 | 6.2% | -17.6 | -4.1 | -0.59 | 5.4% | -5.00 | 1.26
COSlag (time lag) | -11.5 | 1.8% | -17.4 | -4.4 | NA | NA | NA | NA
Constlag (time lag) | -11.1 | 1.8% | -16.6 | -4.1 | NA | NA | NA | NA
RMlag (time lag) | -11.1 | 1.8% | -16.6 | -4.3 | NA | NA | NA | NA
DetLimlag (time lag) | -13.1 | 15.9% | -19.2 | -4.9 | NA | NA | NA | NA
Horst (1997) (spectral corr.) | -11.0 | 2.7% | -16.9 | -4.0 | -0.54 | 3.6% | -4.77 | 1.18
None (spectral corr.) | -9.7 | 14.2% | -15.0 | -3.4 | -0.48 | 14.3% | -4.18 | 1.02
2D (coordinate rotation) | -11.7 | 3.5% | -17.8 | -4.7 | -0.55 | 1.8% | -4.88 | 1.33
Different time lag methods resulted in slightly different time lags and COS fluxes. The most common time lags were 2.6s from the COSlag, CO2lag, and DetLimlag methods and 1.5s from RMlag, which was
the lag window limit (Fig. ). Time lags determined from the COSlag and RMlag methods were distributed evenly throughout the lag window, whereas the lags from the CO2lag and DetLimlag methods were
normally distributed, with most lags detected at the window center.
We also tested determining time lags from the most commonly used method of maximizing the absolute covariance. If the time lag was determined from the absolute covariance maximum instead of the
maximum difference to a line between covariance values at the lag window limits, the COSlag and RMlag had most values at the lag window limits (Fig. S2). This resulted in a “mirroring effect” ; i.e.,
fluxes close to zero were not detected as often as with other methods since the covariance is always maximized, and the derived flux switches between positive and negative values of similar magnitude
(Fig. S3). This issue should be taken into account in COS EC flux processing as absolute COS covariance maximum is by far the most commonly used method in determining the time lag in COS studies
(Table ). To make the different methods more comparable, the time lags were, in the end, determined from the maximum difference to a line between covariance values at the lag window limits in this
study. In this way, time lags were not determined at the window borders, and most of the methods had the final flux probability distribution function (PDF) peak approximately at the same flux values
and had otherwise small differences in the distributions (Fig. ). The only method that was clearly different from the others was DetLimlag, which produced higher fluxes than other methods.
A constant time lag has been found to bias the flux calculation as the time lag can likely vary over time due to, for example, fluctuations in pumping speed . However, as the CO2lag was often the
same as the chosen constant lag of 2.6s, we did not observe major differences between these two methods. A reduced bias in the flux calculation with smoothed cross-covariance was introduced by , who
recommended using this method for any EC system with a low signal-to-noise ratio. However, we do not recommend this as a first choice since the time lags do not have a clear distribution, and if the
maximum covariance method is used, we find a mirroring effect with the RMlag in the final flux distributions.
By using the DetLimlag method, the COS time lag was estimated for 54% of cases from COSlag, while the CO2lag was used as a proxy for the COS time lag in about 46% of cases. Figure shows that the
raw covariance of COS only exceeds the noise level at higher COS flux values, and thus the COSlag is chosen by this method only at higher fluxes, as expected. At lower flux rates, and especially
close to zero, the COS fluxes are not high enough to surpass the noise level, and thus the CO2lag is chosen.
The median COS fluxes were highest when the time lag was determined from the DetLimlag (-13.1pmolm-2s-1) and COSlag (-11.5pmolm-2s-1) methods (Table ). Using the COSlag results in both higher
positive and negative fluxes and might thus have some compensating effect on the median fluxes. Constlag and RMlag produced the smallest median uptake (-11.1pmolm-2s-1), while the CO2lag had a
median flux of -11.3pmolm-2s-1. The difference between the reference flux and the flux from the DetLimlag method was clearly higher (15.9%) than with other methods (1.8%) and had the largest
variation (from -113.8 to 81.6pmolm-2s-1) and standard deviation (16.1pmolm-2s-1). This difference might become important at the annual scale, and as the most commonly used covariance
maximization method does not produce a clear time lag distribution for DetLimlag or COSlag, we recommend using the CO2lag for COS fluxes as in most ecosystems the CO2 cross-covariance with w is more
clear than the cross-covariance of COS and w signals.
Distribution of time lags derived from different methods: (a) COSlag, (b) CO2lag, (c) COS time lag from a running mean cross-covariance (RMlag), and (d) a combination of COSlag and CO2lag
Normalized COS flux distributions using different time lag methods: (a) COSlag, (b) CO2lag, (c) constant time lag of 2.6s (Constlag), (d) time lag from a running mean COS cross-covariance (RMlag),
(e) a combination of COS and CO2 time lags (DetLimlag), and (f) a summary of all probability distribution functions (PDFs).
The mean COS cospectrum was close to the normal mean CO2 cospectrum (compare Fig. a and c). The power spectrum of COS was dominated by noise as can be seen from the increasing power spectrum with
increasing frequency for normalized frequencies greater than 0.2, which is similar to what was measured for COS by and for N2O by . The fact that COS measurements are dominated by noise at high
frequencies means that those measurements are limited by precision and that they likely do not capture the true variability in COS turbulence signals. This is less of a problem for CO2, where white
noise only starts to dominate at higher frequencies (normalized frequency higher than 3). Cospectral attenuation was found for both COS and CO2 at high frequency.
High-frequency losses due to, for example, attenuation in sampling tubes and limited sensor response times are expected to decrease fluxes if not corrected for . The median COS flux, when using the CO2 time lag and keeping low-frequency correction and quality filtering the same, was the least negative without any high-frequency correction (-9.7 pmol m-2 s-1), most negative with the experimental correction (-11.3 pmol m-2 s-1), and in between with the analytical correction (-11.0 pmol m-2 s-1; Fig. S6). Correcting for the high-frequency attenuation thus made a maximum of 14.2% difference in the median COS flux. In addition, daytime median flux magnitudes increased from -15.0 to -16.9 pmol m-2 s-1 with the analytical correction and to -16.8 pmol m-2 s-1 with the experimental high-frequency correction. However, the relative difference was larger during nighttime, when flux magnitudes increased from -3.4 to -4.0 and -4.1 pmol m-2 s-1 with analytical and experimental methods, respectively.
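The quoted maximum effect of the correction follows directly from the medians above (values copied from the text; the percentage is taken relative to the corrected flux):

```python
uncorrected = -9.7     # pmol m-2 s-1, median COS flux without high-frequency correction
experimental = -11.3   # pmol m-2 s-1, median COS flux with the experimental correction

# Maximum relative effect of the high-frequency correction on the median COS flux
max_rel_diff = (abs(experimental) - abs(uncorrected)) / abs(experimental) * 100
print(round(max_rel_diff, 1))  # 14.2
```

The same convention reproduces the 14.3% quoted below for CO2, i.e. (0.56 - 0.48)/0.56.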
Similar results were found for the CO2 flux, but the differences were smaller: without any high-frequency correction the median flux was the least negative at -0.48 µmol m-2 s-1, most negative with experimental correction (-0.56 µmol m-2 s-1), and in between with the analytical correction (-0.54 µmol m-2 s-1), thus making a maximum of 14.3% difference in the median CO2 flux, similar to COS.
Very similar results were found for CH4 and CO2 fluxes in , where the high-frequency correction made the largest difference in the final flux processing for closed-path analyzers. Similar to COS, CO2
flux magnitudes were also increased more during nighttime due to spectral correction than during daytime. Daytime median flux magnitude increased from -4.18 to -4.77 and -4.58 µmol m-2 s-1, and nighttime fluxes increased from 1.02 to 1.18 and 1.23 µmol m-2 s-1 when using analytical and experimental high-frequency spectral corrections, respectively. Flux attenuation was dependent on
stability and wind speed for both correction methods, as also found in Mammarella et al. (2009; Fig. S7).
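The analytical route can be sketched as follows: the measured cospectrum is modeled as the true cospectrum multiplied by a low-pass transfer function, and the correction factor is the ratio of the unattenuated to the attenuated cospectral integral. The cospectral shape, response time, and peak frequency below are generic placeholders, not the paper's site-specific model:

```python
import numpy as np

tau = 0.3    # s, effective first-order response time of the sampling system (illustrative)
fm = 0.08    # Hz, peak frequency of the model cospectrum (illustrative)

f = np.linspace(1e-4, 10.0, 200_000)             # natural frequency grid, Hz
co = (1.0 / fm) / (1.0 + (f / fm) ** 2)          # generic normalized cospectral density
h = 1.0 / (1.0 + (2.0 * np.pi * f * tau) ** 2)   # first-order low-pass transfer function

df = f[1] - f[0]
correction = (co * df).sum() / ((co * h * df).sum())
print(correction)   # > 1: applying the correction increases the flux magnitude
```

The experimental alternative replaces the assumed transfer function with one derived from the measured site spectra, which is why it captures site-specific losses better.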
The site-specific model captures the cospectrum better than the model cospectrum by , as shown in Fig. . For high-frequency spectral corrections, it is recommended to use the site-specific cospectral
model, as has been done in most previous COS studies (Table ).
Cospectrum and power spectrum for COS (panels a and b, respectively) and CO2 (panels c and d, respectively) in July 2015. All data were filtered by the stability condition -2<ζ<-0.0625, and COS data
were only accepted when the covariance was higher than 3 times the random error due to instrument noise (Eq. ). The cospectrum models by the experimental method and that were used in the
high-frequency spectral correction are shown in continuous and dashed gray lines, respectively.
In the following, storage change fluxes based on profile measurements are listed as default, with fluxes based on the constant profile assumption listed in brackets.
The COS storage change flux was negative from 15:00 until 06:00, with a minimum of -1.0 pmol m-2 s-1 (-0.6 pmol m-2 s-1) reached at 20:00. A negative storage change flux of COS indicates that there is a COS sink in the ecosystem when the boundary layer and effective mixing layer are shallow. Neglecting this effect would lead to overestimated uptake at the ecosystem level later when the air at the EC sampling height is better mixed. The COS storage change flux was positive from around 06:00 until 15:00 and peaked at 09:00 with a magnitude of 1.9 pmol m-2 s-1 (0.8 pmol m-2 s-1). The storage
change flux made the highest relative contribution to the sum of measured EC and storage change fluxes at midnight with 18% (13%; Fig. c). The difference between the two methods was a minimum of
13% at 11:00 and a maximum of 56% at 09:00. The two methods made a maximum of 7% difference in the resulting cumulative ecosystem flux, as already reported in .
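The profile-based storage change flux amounts to integrating the concentration change over height below the EC level. A minimal sketch with hypothetical layer values (all numbers are illustrative, not site data):

```python
import numpy as np

# Layer boundaries up to the EC sampling height (m) -- illustrative
z_bounds = np.array([0.0, 4.0, 10.0, 23.0])
dz = np.diff(z_bounds)                            # layer thicknesses

# COS dry mole fractions (pmol mol-1) at the start and end of the 30-min period
c_start = np.array([480.0, 490.0, 495.0])
c_end   = np.array([470.0, 484.0, 492.0])

rho_air = 41.6        # mol m-3, molar density of dry air (approx. 20 C, 1 atm)
dt = 1800.0           # s, 30-min averaging period

# pmol mol-1 * mol m-3 * m / s  ->  pmol m-2 s-1
f_storage = ((c_end - c_start) / dt * rho_air * dz).sum()
print(f_storage)      # negative: COS stored below the EC level is being depleted
```

With the constant-profile assumption, the concentration change at a single level would simply be multiplied by the full 23 m column instead of being summed layer by layer.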
The CO2 storage change flux was positive from 15:00 until around 04:00, with a maximum value of 0.62 µmol m-2 s-1 (0.38 µmol m-2 s-1) reached at 21:00. A positive storage change flux indicates that the ecosystem contains a source of carbon when the boundary layer is less turbulent and accumulates the respired CO2 within the canopy. As turbulence would increase later in the morning, the accumulated CO2 would result in an additional flux that could mask the gas exchange processes occurring at that time step. The CO2 storage change flux minimum was reached with both methods at 08:00, with a magnitude of -1.01 µmol m-2 s-1 (-0.52 µmol m-2 s-1) when the boundary layer had already started expanding, and leaves are assimilating CO2. The maximum contribution of the storage change flux
was as high as 89% (36%) compared to the EC flux at 18:00, when the CO2 exchange turned to respiration, and storage change flux increased its relative importance (Fig. d). The difference between
the two storage change flux methods for CO2 was a maximum of 53% at 21:00 and a minimum of 13% at midnight. The maximum difference of 5% was found in the cumulative ecosystem CO2 flux when the
different methods were used.
In conclusion, the storage change fluxes are not relevant for budget calculations, as expected, and have not been widely applied in previous COS studies (Table ) even though storage change flux
measurements are mandatory in places where the EC system is placed at a height of 4 m or above according to the ICOS protocol for EC flux measurements . In addition, storage change fluxes are
important at the diurnal scale to account for the delayed capture of fluxes by the EC system under low-turbulence conditions.
Diurnal variation in the storage change flux, determined from (a) COS and (d) CO2 profile measurements (blue) and by assuming a constant profile up to 23 m height (purple), and diurnal variation in
the EC flux (black) and storage change flux with the profile method (blue; panels b and e for COS and CO2 fluxes, respectively) during the measurement period 26 June to 2 November 2015. Contribution
of storage change flux to the total ecosystem EC flux with the profile measurements and assuming a constant profile for (c) COS and (f) CO2.
Nighttime median ecosystem fluxes (black) of (a) COS and (b) CO2 binned into 15 equal-sized friction velocity bins. Error bars indicate ranges between the 25th and 75th percentiles. Friction velocity
thresholds are shown with dashed red lines and single data points of 30 min fluxes with light gray dots.
Calm and low-turbulence conditions are especially common during nights with stable atmospheric stratification. In this case, storage change and advective fluxes have an important role, and the
measured EC flux of a gas does not reflect the atmosphere–biosphere exchange, typically underestimating the exchange. This often leads to a systematic bias in the annual flux budgets . Even after
studies of horizontal and vertical advection, u* filtering still keeps its place as the most efficient and reliable tool to filter out data that are not representative of the surface–atmosphere
exchange under low turbulence .
For COS, nighttime filtering is a more complex issue than it is for CO2. In contrast to CO2, COS is taken up by the ecosystem during nighttime depending on stomatal conductance and the concentration gradient between the leaf and the atmosphere. When the atmospheric COS mixing ratio decreases under low-turbulence conditions (due to nighttime COS uptake in the ecosystem), the concentration gradient between the leaf and the atmosphere goes down such that a decrease in COS uptake can be expected . Thus, the assumption that fluxes do not go down under low-turbulence conditions, as is the case for respiration of CO2, does not necessarily apply to COS uptake. Gap-filling the u*-filtered COS fluxes may therefore create a bias due to false assumptions if the gap filling is only based on data from periods with high turbulence. However, as we did not see the u* dependency disappear even with a concentration-gradient-normalized flux, u* filtering and the subsequent gap filling were applied here as usual to overcome the EC measurement limitations under low-turbulence conditions.
We determined u* limits of 0.23 m s-1 for COS and 0.22 m s-1 for CO2 (Fig. ). Filtering according to these u* values would remove 12% and 11% of data, respectively. If the storage change flux was excluded when determining the u* threshold, the limits were 0.39 and 0.24 m s-1 from CO2 and COS fluxes, respectively. The increase in the u* threshold with CO2 is because the fractional storage change flux is larger for CO2 than for COS (Fig. c and f). On the other hand, the u* limit for COS stayed similar to the previous one. With these u* thresholds the filtering would exclude 30% and 13% of the data for CO2 and COS, respectively.
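One common way to locate such a u* threshold (a moving-point style criterion; the paper's exact procedure is not reproduced here) is to bin nighttime fluxes into equal-sized u* classes, as in the figure above, and take the lowest class whose mean flux reaches a fixed fraction of the plateau formed by the higher classes. A sketch on synthetic data:

```python
import numpy as np

def ustar_threshold(ustar, flux, n_bins=15, frac=0.95):
    """Lowest u* class whose mean flux reaches `frac` of the higher-class plateau."""
    order = np.argsort(ustar)
    u, f = ustar[order], flux[order]
    bins = np.array_split(np.arange(u.size), n_bins)      # equal-sized u* classes
    u_mean = np.array([u[b].mean() for b in bins])
    f_mean = np.array([f[b].mean() for b in bins])
    for i in range(n_bins - 1):
        if abs(f_mean[i]) >= frac * abs(f_mean[i + 1:].mean()):
            return u_mean[i]
    return u_mean[-1]

# Synthetic nighttime CO2 fluxes: suppressed below u* ~ 0.2 m s-1 (illustrative)
rng = np.random.default_rng(1)
ustar = rng.uniform(0.02, 0.8, 3000)
flux = 2.0 * np.minimum(ustar / 0.2, 1.0) + rng.normal(0.0, 0.1, 3000)
print(ustar_threshold(ustar, flux))   # close to the built-in 0.2 m s-1 breakpoint
```

Data below the threshold are discarded and later gap-filled, which is why the threshold choice propagates into the budget.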
If fluxes are not corrected for storage before deriving the u* threshold, there is a risk of flux overestimation due to double accounting. The flux data filtered for low turbulence would be
gap-filled, thereby accounting for storage by the canopy, but then accounted for again when the storage is released and measured by the EC system during the flushing hours in the morning . Thus, it
is necessary to make the storage change flux correction before deriving u* thresholds and applying the filtering.
Diurnal variation in the measured COS flux (black) and the flux from different gap-filling methods: gap filling with only saturating PAR function (yellow), saturating PAR and linear dependency on RH
(blue), and saturating PAR and linear dependency on VPD (purple). Diurnal variation is calculated from 1 July to 31 August 2015 for periods when measured COS flux existed. Dashed lines represent the
25th and 75th percentiles.
Three combinations of environmental variables (PAR, PAR and RH, and PAR and VPD) were tested using the gap-filling function (Eq. ). These environmental parameters were chosen because COS exchange has
been found to depend on stomatal conductance, which in turn depends especially on radiation and humidity . Development of the gap-filling parameters a, b, c, and d over the measurement period is
presented in Fig. S10. While the saturating function of PAR alone already captured the diurnal variation relatively well, adding a linear dependency on VPD or RH made the diurnal pattern even closer
to the measured one, although some deviation is still observed, especially in the early morning (Fig. ). Therefore, the combination of saturating light response curve and linear VPD dependency was
chosen. Furthermore, we chose a linear VPD dependency instead of a linear RH dependency due to smaller residuals in the former (Fig. S9). The mean residual of the chosen model was -0.54 pmol m-2 s-1, and the root mean square error (RMSE) was 18.7 pmol m-2 s-1, while the saturating PAR function with linear RH dependency had a mean residual of -0.84 and an RMSE of 19.3 pmol m-2 s-1, and the saturating PAR function had a residual of 0.97 pmol m-2 s-1 and an RMSE of 22.8 pmol m-2 s-1.
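As an illustration of how such a four-parameter fit works, here is a sketch with one plausible functional form: a rectangular-hyperbola (saturating) light response plus an additive linear VPD term. The functional form, parameter values, and data are assumptions for demonstration only; the paper's actual Eq. is not reproduced here:

```python
import numpy as np
from scipy.optimize import curve_fit

def gap_fill_model(X, a, b, c, d):
    """Saturating PAR response with an additive linear VPD dependency (assumed form)."""
    par, vpd = X
    return a * par / (par + b) + c + d * vpd

# Synthetic half-hourly drivers and noisy COS fluxes (illustrative only)
rng = np.random.default_rng(7)
par = rng.uniform(0.0, 1500.0, 500)     # umol m-2 s-1
vpd = rng.uniform(0.0, 2.5, 500)        # kPa
flux = gap_fill_model((par, vpd), -25.0, 300.0, -1.0, 2.0) + rng.normal(0.0, 1.0, 500)

# Fit a, b, c, d; bounds keep the light response negative (uptake) and b positive
popt, _ = curve_fit(gap_fill_model, (par, vpd), flux, p0=(-10.0, 100.0, 0.0, 0.0),
                    bounds=([-100.0, 1.0, -10.0, -10.0], [0.0, 2000.0, 10.0, 10.0]))
residuals = flux - gap_fill_model((par, vpd), *popt)
rmse = float(np.sqrt(np.mean(residuals ** 2)))
print(rmse)   # close to the injected noise level
```

Gaps are then filled by evaluating the fitted model at the measured PAR and VPD of each missing half hour.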
For COS fluxes, 44% of daytime flux measurements were discarded due to the above-mentioned quality criteria (Sect. ) and low-turbulence filtering. As expected, more data (66%) were discarded during nighttime. Altogether, 52% of all COS flux data were discarded and gap-filled with the gap-filling function presented in Eq. (). The average of the corrected and gap-filled COS fluxes during the whole measurement period was -12.3 pmol m-2 s-1, while without gap filling the mean flux was 14% more negative, -14.3 pmol m-2 s-1. This indicates that most gap filling was done for the nighttime data, when COS fluxes are less negative than during daytime.
Uncertainty of (a) COS and (b) CO2 fluxes, binned into 15 equal-sized bins that represent median values. Error bars show the 25th and 75th percentiles. Total uncertainty is represented as the 95%
confidence interval (1.96ϵcomb). Panels (c) and (d) represent the relative uncertainty, i.e., the uncertainty divided by the flux for COS and CO2, respectively. Panels (e) and (f) represent the
relative uncertainties of time-averaged COS and CO2 fluxes, respectively.
For CO2, 41% of daytime CO2 fluxes were discarded, while 67% of fluxes were discarded during nighttime, altogether comprising 53% of all CO2 flux data. CO2 fluxes were gap-filled according to standard gap-filling procedures presented in . The average CO2 flux after all corrections and gap filling was -2.14 µmol m-2 s-1, while without gap filling the mean flux was 39% more negative, -3.53 µmol m-2 s-1. Similar to COS, CO2 fluxes are also mostly gap-filled during nighttime. As nighttime CO2 fluxes are positive, gap-filled fluxes include more positive values than non-gap-filled fluxes, thus making the mean flux less negative.
Although the COS community has not been interested in the cumulative COS fluxes or yearly COS budget so far, it is important to fill short-term gaps in COS flux data to properly capture the diurnal
variation, for example. The gap-filling method presented here is one option to be tested at other measurement sites as well.
The uncertainty due to the chosen processing scheme was determined from a combination of eight different processing schemes, as described in Sect. , that were equally reliable but caused the most
variation in the final COS flux (Table ). This processing uncertainty contributed 36% to the total uncertainty of the 30 min COS flux, while the rest was composed of the random flux uncertainty (Fig. ). For the CO2 flux uncertainty, the processing was more important than for COS (48%), but the random uncertainty still dominated the combined flux uncertainty. The random error of the CO2 flux was found to be lower than in for the same site, probably related to differences in the gas analyzers and overall setup. The mean noise estimated from was 0.06 µmol m-2 s-1 for our QCLS CO2 fluxes, while in it was approximately 0.08 µmol m-2 s-1 for LI-6262 CO2 fluxes at the same site. found the total random uncertainty of COS fluxes to be mostly around 3–8 pmol m-2 s-1, comparable to
our results.
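The way the components combine can be written out explicitly: random and processing errors add in quadrature, and the 95% confidence interval is 1.96 times the combined error. The numbers below are illustrative, chosen so that processing contributes the 36% variance share reported for COS:

```python
import math

eps_random = 4.0        # pmol m-2 s-1, random flux error (illustrative)
eps_processing = 3.0    # pmol m-2 s-1, processing-scheme error (illustrative)

# Combine in quadrature and form the 95% confidence interval
eps_comb = math.hypot(eps_random, eps_processing)
ci95 = 1.96 * eps_comb
share_processing = eps_processing**2 / eps_comb**2

print(eps_comb, ci95, share_processing)   # 5.0, 9.8, 0.36
```

With these numbers the processing variance share is 9/25 = 0.36, matching the COS figure quoted above.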
The relative flux uncertainty for COS was very high at low flux values (-3 pmol m-2 s-1 < FCOS < 3 pmol m-2 s-1), reaching 8 times the actual flux value, but leveled off to 45% at fluxes higher (meaning more negative) than -27 pmol m-2 s-1 (Fig. c). The total uncertainty of the CO2 flux was also high at low fluxes (-1.5 µmol m-2 s-1 < FCO2 < 1 µmol m-2 s-1, with the uncertainty reaching 130% of the flux at 0.17 µmol m-2 s-1) and decreased to 15% at fluxes more negative than -11 µmol m-2 s-1 (Fig. d). The higher relative uncertainty at low flux levels is probably due to the detection limit of the measurement system.
The median relative random uncertainty of the COS flux decreased from 0.35 for a single 30 min flux to 0.013 for the monthly averaged flux (Fig. e). The processing uncertainty had a less prominent decrease, from 0.15 for 30 min fluxes to 0.05 for monthly fluxes. The strongest decrease in processing uncertainty was when moving from single 30 min flux values to daily average fluxes, after which the averaging period did not affect the processing uncertainty. This is probably due to the large scatter between the two detrending methods, which levels off when averaging over several flux values (Fig. S11a). The relative random uncertainty of the CO2 flux decreased from 0.11 for 30 min fluxes to 0.021 for monthly fluxes. The processing uncertainty, however, did not change significantly between the
different averaging periods, as would be expected.
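The drop in relative random uncertainty with averaging is roughly what independent-error statistics predict. A back-of-envelope check (assuming independent 30 min values and ignoring autocorrelation and diurnal structure):

```python
import math

rel_30min = 0.35          # median relative random uncertainty of a single 30 min COS flux
n_month = 30 * 48         # number of 30 min periods in one month

# 1/sqrt(N) scaling for averages of independent values
rel_month = rel_30min / math.sqrt(n_month)
print(round(rel_month, 4))   # same order of magnitude as the reported 0.013
```

That this simple estimate lands near the observed monthly value, while the processing uncertainty does not shrink the same way, is consistent with the processing error being systematic rather than random.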
In this study, we examined the effects of different processing steps on COS EC fluxes and compared them to CO2 flux processing. COS fluxes were calculated with five time lag determination methods,
three detrending methods, two high-frequency spectral correction methods, and with no spectral corrections. We calculated the storage change fluxes of COS and CO2 from two different concentration
profiles and investigated the diurnal variation in the storage change fluxes. We applied u* filtering and introduced a gap-filling method for COS fluxes. We also quantified the uncertainties of COS
and CO2 fluxes.
The largest differences in the final fluxes came from time lag determination and detrending. Different time lag methods made a difference of a maximum of 15.9% in the median COS flux. Different
detrending methods, on the other hand, made a maximum of 6.2% difference in the median COS flux, while it was more important for CO2 (10.7% difference between linear detrending and block
averaging). Omitting high-frequency spectral correction resulted in a 14.2% lower median flux, while different methods used in high-frequency spectral correction resulted in only 2.7% difference in
the final median fluxes. We suggest using CO2 time lag for COS flux calculation so that potential biases due to a low signal-to-noise ratio of COS mixing ratio measurements can be eliminated. The CO2
mixing ratio is measured simultaneously with the COS mixing ratio with the Aerodyne QCLS, and in most cases it has a higher signal-to-noise ratio and a clearer cross-covariance with w than COS.
Experimental high-frequency correction is recommended for accurately correcting for site-specific spectral losses. We recommend comparing the effect of different detrending methods on the final flux
for each site separately to determine the site- and instrument-specific trends in the raw data.
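For reference, the lag search that all of these time-lag methods build on can be sketched as a cross-covariance maximization between vertical wind speed w and the scalar; with the recommended CO2 method, the lag found for the CO2 series would then be applied to the COS series. A minimal sketch on synthetic 10 Hz data (all values illustrative):

```python
import numpy as np

def find_lag(w, c, max_lag):
    """Sample lag of scalar c behind w that maximizes |cross-covariance|."""
    w = w - w.mean()
    c = c - c.mean()
    best_k, best_cov = 0, -1.0
    for k in range(max_lag + 1):                    # c delayed by k samples (tube lag)
        cov = abs(np.mean(w[:w.size - k] * c[k:]))
        if cov > best_cov:
            best_k, best_cov = k, cov
    return best_k

# Synthetic demo: scalar responds to w with a 26-sample delay (2.6 s at 10 Hz)
rng = np.random.default_rng(3)
w = rng.standard_normal(20_000)
c = -0.5 * np.roll(w, 26) + rng.normal(0.0, 0.2, 20_000)   # delayed, noisy, negative flux
print(find_lag(w, c, 50))   # 26
```

With a noisy scalar such as COS, the covariance peak can drown in noise and the search may lock onto a spurious maximum, which is exactly the bias the CO2-lag method avoids.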
Flux uncertainties of COS and CO2 followed a similar trend of higher relative uncertainty at low flux values and random flux uncertainty dominating over uncertainty related to processing in the total
flux uncertainty. The relative uncertainty was more than 5 times higher for COS than for CO2 at low flux values (absolute COS flux of less than 3 pmol m-2 s-1), while at higher fluxes they were more similar. When averaging fluxes over time, the relative random uncertainty decreased with increasing averaging period for both COS and CO2. Relative processing uncertainty decreased from single 30 min
COS fluxes to daily averages but remained at the same level for longer averaging periods.
We emphasize the importance of time lag method selection for small fluxes, whose uncertainty may exceed the flux itself, to avoid systematic biases. COS EC flux processing follows similar steps as
other fluxes with low signal-to-noise ratios, such as CH4 and N2O, but as there are no sudden bursts of COS expected, and its diurnal behavior is close to CO2, some processing steps are more similar
to CO2 flux processing. In particular, time lag determination and high-frequency spectral corrections should follow the protocol of low signal-to-noise ratio fluxes , while quality assurance and
quality control, despiking, u* filtering, and storage change correction should follow the protocol produced for CO2 flux measurements . Our recommendation for time lag determination (CO2
cross-covariance) differs from the most commonly used method so far (COS cross-covariance), while experimental high-frequency spectral correction has already been widely applied before (Table ). Many
earlier studies have neglected the storage change flux, but we emphasize its importance in the diurnal variation in COS exchange. In addition, we encourage implementing gap filling in future COS flux
calculations for eliminating short-term gaps in data. | {"url":"https://amt.copernicus.org/articles/13/3957/2020/amt-13-3957-2020.xml","timestamp":"2024-11-09T20:25:20Z","content_type":"application/xml","content_length":"290325","record_id":"<urn:uuid:58054786-0803-4fe2-8f17-f57a470bbc05>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00689.warc.gz"} |
linear search in data structure with Algo and Program - Quescol
A linear search, or sequential search, is a method for finding a desired element in a given array.
In the linear search technique, each element of the array is compared with the value we are searching for. If a match is found, the search returns the array index where the value is present. In the worst case, the complete array is traversed.
Linear search algorithm in data structure
Suppose we have an array A of size n, with elements denoted A0 ... An-1, and E is the element we are searching for.
Step 1: set i = 0 and traverse the array elements using a loop
Step 2: if i < n, compare the search value with the array element Ai
Step 3: if Ai = E, go to step 5
Step 4: increase i by 1 and go to step 2
Step 5: return i if the search is successful; otherwise terminate unsuccessfully
Pseudo code of Linear search technique
Below is pseudo-code for linear search. It is not hardcoded and can vary from developer to developer; it is just one way to implement a linear search.
Here K is an array of elements, N is the total number of elements in the array, and X is the value we want to search for in the given array.
Step 1: [INITIALIZE SEARCH] SET I = 0
Step 2: Repeat Step 3 while I < N; otherwise go to Step 4
Step 3: [SEARCH FOR VALUE]
        IF K[I] = X
            Successful search: return I and go to Step 5
        ELSE
            I = I + 1
            Go to Step 2
Step 4: Unsuccessful search: return -1 and go to Step 5
Step 5: Exit
Space and Time Complexity of Linear Search
• Best-case performance O(1)
• Average performance O(n)
• Worst-case performance O(n)
• Worst-case space complexity O(1) iterative
C Program to perform linear search
#include <stdio.h>

int main()
{
    int arr[6] = {2, 1, 5, 3, 2, 9};
    int i, a;
    int len = sizeof(arr) / sizeof(arr[0]);

    printf("enter a value you want to search: ");
    scanf("%d", &a);

    /* compare each array element with the search value */
    for (i = 0; i < len; i++) {
        if (arr[i] == a)
            break;
    }

    if (i < len)
        printf("search found at index = %d", i);
    else
        printf("no match found");

    return 0;
}
Some more explanation of linear search
• A sequential search or linear search is the simplest search technique, where we start searching at the beginning of the array or list. To find the desired value, we compare each element until we reach the end. If we find the search value, we record the index where it was found. If we reach the end of the list without finding the value, we return -1, meaning the search value is not present in the list or array.
• Linear search can be applied to both ordered and unordered arrays. Its complexity does not depend on the order of the list, because we do not know in advance where the value we are looking for is located: it may turn up anywhere, whether the list is sorted or not.
Efficiency of Linear search or sequential search
• The time taken and the number of comparisons determine the efficiency of this search technique. The number of comparisons depends on the position of the target element. If the target element is found in the first position, only one comparison is made; if the target element is found at the end of the list, then n comparisons are made.
• The best case occurs when the element is found at the first position, denoted O(1); in the worst case the element is at the end of the list, denoted O(n).
What Works For Me – Combining Our Finances
August 7, 2009
My wife and I have been married for almost thirteen years. For that entire time, even before we got serious about managing our money, we believed in combining our finances. Here’s our simple
system and what works for us.
Our Combined Finances –
Checking Accounts –
We have one, joint, primary checking account. My wife and I can both write checks from this account. We both have debit cards associated with this checking account. Since I’m the nerd, I keep
up with the checkbook balance. We write less than ten checks per month, mainly to pay for babysitters and daycare. I reconcile the checkbook once or twice a week, which takes less than five
minutes. We both carry a book of checks, for the sake of convenience.
We have one, joint, online checking account. We both have debit cards associated with this checking account. My wife rarely uses this account, which is used primarily for online transactions.
Our expenses are pretty consistent, month after month. Because we live on a budget, it’s relatively easy to predict when and where we’ll write a check or use a debit card. We hold on to our
receipts, write down check amounts, and then I will enter all transactions in our checkbook register. I also regularly log-in to our checking accounts, just to make sure that we haven’t forgotten
to record a particular transaction.
Cash –
We use the envelope system to manage our cash. My wife has a set of envelopes. I have a set of envelopes. Click here to watch a video describing, in detail, how the envelope system works.
Again, there is a real advantage to living on a budget. I usually shop for groceries, while my wife usually shops for clothing. So, she gets the clothing envelope, and I get the grocery envelope
. This works for us, but other couples might need two clothing envelopes or two grocery envelopes, one for each spouse.
Saving Accounts –
We have one, joint, online savings account. We both have access to this account.
How It Works –
The key to combined finances is open communication. When we create our monthly budget, we are honest about our expectations. If I think I’m going to play golf, or my wife thinks she’s going to
visit the manicurist, then we talk about those things. I’ve seen couples struggle to live on a budget – even a budget to which they have both agreed – only to find out that one spouse is hiding
certain expenses from another spouse.
We only need one checking account (from which we both write checks) because we are constantly talking about our finances. And, we have the freedom to talk about our finances (and avoid arguing)
because we are both committed to our budget.
I trust my wife. She trusts me. We have three children, and we are teaching them to trust us. Our system for combining our finances works because I know that, at all times, my wife is doing what
she believes is best for me and for our children – and she knows, that at all times, I’m doing what I believe is best for her and our children.
Inventory of Financial Accounts –
Because I am the finance nerd in our family, I’ve created an inventory of financial accounts for my wife. I regularly update the inventory, so that she will have quick access to our financial
information, should something happen to me. If you manage your household finances, be sure that your spouse knows where important documents and accounts are located.
16 thoughts on “What Works For Me – Combining Our Finances”
1. I only have a year under my belt, but we have a very different set up. We have our individual accounts where our paychecks go, they then filter into a joint checking account for budgetary items,
and into joint savings.
I even did a Power point of the break down:
2. Hi I’m new to your site. I am getting married soon and I have debt coming into this marriage and my fiance knows this. How do you suggest we handle our finances. Should we use separate accounts
until I get my debt down?
3. My husband and I use a similar system, though we have 2 joint accounts (we’re each a “primary” on one account but have access to both). This keeps us from writing checks on the same day and
overdrafting the account–basically you have to have physical posession of the checkbook (so that if I write a check or take money from “his” joint account, I don’t do so until I’m holding his
checkbook and vice versa).
We always keep the checkbooks up to date and have the same ideas/goals about money. We also have two joint credit credit cards and discuss anything over $75-100 if it’s not a staple item.
4. Cool post-
I have a checking account at a local bank while having an ING savings account. But now I’m looking at the high interest checking accounts, 5.03% seems TOO good…
5. Heh, I’ll be the “other” example. My husband and I have maintained separate finances for over 10 years now, and despite the shocked responses I’ve gotten when I mention it to other people, it
works great. Perhaps it’s because we were both hedging our bets when we first moved in together, but it just seemed like a lot of work to set up new accounts and adopt a whole new system when we
had one that worked already, and now that we’ve done it for so long, it’s second nature to us.
What we do is that periodically, we look over all the bills, and split them up between us. So, for example, he pays the mortgage, but I pay child care, utilities, and put aside a monthly amount
into savings. Beyond our household responsibilities, any additional money is our own to spend freely. If we go out to eat, sometimes he pays for it, and sometimes I do. If we get hit with a big
unexpected bill, we work together to decide where the money will come from. I like the fact that this method keeps both of us highly involved in our finances and that it prevents us from having
communication problems (i.e. “Oh sorry! I forgot to write that in the checkbook!”)
6. The respect you and your wife have for each other just shines through this post. I predict you will have a long and happy marriage!
7. We have joint account since our marriage but also have individual account with $100/month, so we can spend some money as we like, without argument, it has worked so far. Money matter works best,
when you and your spouse in sync. Good post, NCN.
8. My wife and I have been married for three months and pretty much mirror you guys’ philosophy. So far it has been great! Having to do a budget together forces us to be open with each other and
holds us accountable for the goals we have set. It also good to know that we trust each other with our money. Great post!
9. Why give up a credit?
All our accounts are joint, but we have plenty of them. 5 checking, 3 savings, 12 credit cards and we still live on budget plus enjoy some free money made on credit cards rewards.
10. Good article. Nice twist on budgeting. Combining finances in marriage can always be a challenge.
I like the idea on envelopes. However, I really like to take advantage of points on a credit card, so I have not tried that. However, it is something I plan to evaluate.
11. I have to say that your situation is a rare germ right now. The economy is down and the stress is high. It’s tough for a single person to openly communicate about his own financial situation, not
to mention couples with a family to support. I have known some marriages become rocky lately because one of the spouses lose the job. Money is sensitive issue and you are a lucky man who have
such an understanding lady on your side.
12. Organization, communication and mutual respect: the perfect recipe for both a solid relationship with each other and the family finances. I appreciate the insight.
13. Nice blog. I just got married, and we have two separate personal checking accounts, one joint checking for bills (we split the bills evenly each month), and one joint savings account that we both
contribute to each month. So far, it’s working out well. Again, nice blog.
14. Cool, seems that it caught each person who has and has no checking account just like me. I also use envelop to deposit money in the bank and just like Kandice when I got married I want me and my
husband have a separate account which will contribute each month.
Regression Analysis for Social Sciences
Alexander von Eye Department of Psychology Michigan State University East Lansing, Michigan
Christof Schuster Institute for Social Research University of Michigan Ann Arbor, Michigan
Academic Press: San Diego, New York
This book is printed on acid-free paper.

Copyright © 1998 by Academic Press
All Rights Reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher.

Academic Press, a division of Harcourt Brace & Company
525 B Street, Suite 1900, San Diego, California 92101-4495, USA
http://www.apnet.com

Academic Press Limited
24-28 Oval Road, London NW1 7DX, UK
http://www.hbuk.co.uk/ap/

Library of Congress Card Catalog Number: 98-84465
International Standard Book Number: 0-12-724955-9

Printed in the United States of America
Contents

Preface

2 SIMPLE LINEAR REGRESSION
  2.1 Linear Functions and Estimation
  2.2 Parameter Estimation
    2.2.1 Criteria for Parameter Estimation
    2.2.2 Least Squares Parameter Estimators
    2.2.3 The Goodness of Fit of the Regression Model
  2.3 Interpreting Regression Parameters
    2.3.1 Interpreting the Intercept Parameter
    2.3.2 Interpreting the Slope Parameter
  2.4 Interpolation and Extrapolation
  2.5 Testing Regression Hypotheses
    2.5.1 Null Hypothesis: Slope Parameter Equals Zero
    2.5.2 Data Example
    2.5.3 Null Hypothesis: Slope Parameter Constant
    2.5.4 Null Hypothesis 2: Two Slope Parameters are Equal
    2.5.5 More Hypotheses for Slope Coefficients
    2.5.6 Hypotheses Concerning Intercept Coefficients
    2.5.7 Confidence Intervals

3 MULTIPLE LINEAR REGRESSION
  3.1 Ordinary Least Squares Estimation
  3.2 Data Example
  3.3 Multiple Correlation and Determination
    3.3.1 Expectancy of R² and Significance Testing
  3.4 Significance Testing

4 CATEGORICAL PREDICTORS
  4.1 Dummy and Effect Coding
  4.2 More Than Two Categories
  4.3 Multiple Categorical Predictors

5 OUTLIER ANALYSIS
  5.1 Leverage Outliers
  5.2 Remedial Measures
    5.2.1 Scale Transformations
    5.2.2 Weighted Least Squares

6 RESIDUAL ANALYSIS
  6.1 Illustrations of Residual Analysis
  6.2 Residuals and Variable Relationships

7 POLYNOMIAL REGRESSION
  7.1 Basics
  7.2 Orthogonal Polynomials
  7.3 Example of Non-Equidistant Predictors

8 MULTICOLLINEARITY
  8.1 Diagnosing Multicollinearity
  8.2 Countermeasures to Multicollinearity
    8.2.1 Data Example

10 INTERACTION TERMS IN REGRESSION
  10.1 Definition and Illustrations
  10.2 Multiplicative Terms
    10.2.1 Predicting the Slope Parameter from a Second Predictor
    10.2.2 Predicting Both Slope and Intercept
  10.3 Variable Characteristics
    10.3.1 Multicollinearity of Multiplicative Regression Terms
    10.3.2 Leverage Points as Results of Multiplicative Regression Terms
    10.3.3 Problems with ANOVA-Type Interactions
    10.3.4 The Partial Interaction Strategy

11 ROBUST REGRESSION
  11.1 The Concept of Robustness
  11.2 Models of Robust Regression
    11.2.1 Ridge Regression
    11.2.2 Least Median of Squares Regression
    11.2.3 Least Trimmed Squares
    11.2.4 M-Estimators
  11.3 Computational Issues
    11.3.1 Ridge Regression
    11.3.2 Least Median of Squares and Least Trimmed Squares Regression

12 SYMMETRIC REGRESSION
  12.1 Pearson's Orthogonal Regression
    12.1.1 Symmetry and Inverse Prediction
    12.1.2 The Orthogonal Regression Solution
    12.1.3 Inferences in Orthogonal Regression
    12.1.4 Problems with Orthogonal Regression
  12.2 Other Solutions
    12.2.1 Choice of Method for Symmetric Regression
  12.3 A General Model for OLS Regression
    12.3.1 Discussion
  12.4 Robust Symmetrical Regression
  12.5 Computational Issues
    12.5.1 Computing a Major Axis Regression
    12.5.2 Computing Reduced Major Axis Solution

13 VARIABLE SELECTION TECHNIQUES
  13.1 A Data Example
  13.2 Best Subset Regression
    13.2.1 Squared Multiple Correlation, R²
    13.2.2 Adjusted Squared Multiple Correlation
    13.2.3 Mallow's Cp
  13.3 Stepwise Regression
    13.3.1 Forward Selection
    13.3.2 Backward Elimination
    13.3.3 Efroymson's Algorithm
  13.4 Discussion

14 REGRESSION FOR LONGITUDINAL DATA
  14.1 Within Subject Correlation
  14.2 Robust Modeling of Longitudinal Data
  14.3 A Data Example

15 PIECEWISE REGRESSION
  15.1 Continuous Piecewise Regression
  15.2 Discontinuous Piecewise Regression

17 COMPUTATIONAL ISSUES
  17.1 Creating a SYSTAT System File
  17.2 Simple Regression
  17.3 Curvilinear Regression
  17.4 Multiple Regression
  17.5 Regression Interaction
  17.6 Regression with Categorical Predictors
  17.7 The Partial Interaction Strategy
  17.8 Residual Analysis
  17.9 Missing Data Estimation
  17.10 Piecewise Regression

A ELEMENTS OF MATRIX ALGEBRA
  A.1 Definition of a Matrix
  A.2 Types of Matrices
  A.3 Transposing Matrices
  A.4 Adding Matrices
  A.5 Multiplying Matrices
  A.6 The Rank of a Matrix
  A.7 The Inverse of a Matrix
  A.8 The Determinant of a Matrix
  A.9 Rules for Operations with Matrices
  A.10 Exercises

D POLYNOMIALS
  D.1 Systems of Orthogonal Polynomials
  D.2 Smoothing Series of Measures

  E.1 Recall Performance Data
  E.2 Examination and State Anxiety Data
Preface

Regression analysis allows one to relate one or more criterion variables to one or more predictor variables. The term "to relate" is an empty formula that can be filled with an ever-increasing number of concepts, assumptions, and procedures. Introductory textbooks of statistics for the social and behavioral sciences typically include only a description of classical simple and
multiple regression where parameters are estimated using ordinary least-squares (OLS). OLS yields estimates that can have very desirable characteristics. Therefore it is deemed useful under many
circumstances, as attested to by the many applications of this method. Indeed, OLS regression is one of the most frequently used methods of applied statistics. Yet, there are many instances in which
standard OLS regression is only suboptimal. In these instances, researchers may not wish to give up the goals of regressing one set of variables onto another and predicting values from the
predictors. Instead, researchers may wish to apply more appropriate methods of regression, methods that are custom-tailored to their hypotheses, data, and assumptions. A very large number of special
methods for regression analysis are available. For instance, researchers can apply regression analysis to model dependent data or to depict curvilinear relationships. Researchers can apply various
criteria when optimizing parameter estimates, or they can include categorical and continuous predictors simultaneously in a multiple regression equation. Most of these methods are easy to use, and
many of these methods are already part of general purpose statistical software packages. This book presents both the classical OLS approach to regression and modern approaches that increase the range of applications of regression considerably. After the introduction in chapter one, the second chapter explains the basics of regression analysis. It begins
with a review of linear functions and explains parameter estimation and interpretation in some detail. Criteria for parameter estimation are introduced and the characteristics of OLS estimation are
explained. In the third chapter we extend the approach of simple, one-predictor regression to multiple regression. This chapter also presents methods of significance testing and a discussion of the
multiple R², a measure used to estimate the portion of criterion variance accounted for by the regression model. The fourth chapter deals with simple and multiple regression using categorical
predictors. An important part of each regression analysis is outlier analysis. Outliers are extreme measures that either are far from the mean or exert undue influence on the slope of a regression
equation. Both types of outliers are threats to the validity of results from regression analysis. Chapter 5 introduces readers to methods of outlier analysis. In a similar fashion, residual analysis
is of importance. If residuals display systematic patterns, there is systematic variance that remains to be explained. Chapter 6 presents methods for residual analysis. One example of systematic
variability that standard regression using straight regression lines cannot detect is the presence of curvilinear relationships between predictors and criteria. Methods of polynomial regression,
presented in Chapter 7, allow researchers to model virtually any shape of curve. Multicollinearity can be a major problem in virtually any multiple regression analysis. The only exception is the use
of perfectly orthogonal predictor variables. Multicollinearity problems result from dependence among predictors and can invalidate estimates of parameters. Chapter 8 presents methods for diagnosing
multicollinearity and remedial measures. Chapter 9 extends the curvilinear, polynomial approach to regression to multiple curvilinear regression. Another instance of nonlinear relationships is the
presence of interactions between predictors. Multiplicative terms, which are routinely used to identify the existence of possible interactions, pose major problems for parameter interpretation.
Specifically, multiplicative terms tend to create leverage points and multicollinearity
problems and come with very uneven power, which varies with the location of the interaction effect. Chapter 10 deals with interaction terms in regression. When there is no obvious way to solve
problems with outliers or the dependent variable is not normally distributed, methods of robust regression present a useful alternative to standard least-squares regression. Chapter 11 introduces
readers to a sample of methods of robust regression. Chapter 12 presents methods of symmetrical regression. These methods minimize a different goal function in the search for optimal regression
parameters. Thus, this approach to regression allows researchers to elegantly deal with measurement error on both the predictor and the criterion side and with semantic problems inherent in standard,
asymmetric regression. The chapter also presents a general model for ordinary least squares regression which subsumes a large array of symmetric and asymmetric approaches to regression and von Eye
and Rovine's approach to robust symmetric regression. In many instances researchers are interested in identifying an optimal subset of predictors. Chapter 13 presents and discusses methods for
variable selection. Of utmost importance in longitudinal research is the problem of correlated errors of repeatedly observed data. Chapter 14 introduces readers to regression methods for longitudinal
data. In many instances, the assumption of a monotonic regression line that is equally valid across the entire range of admissible or observed scores on the predictor cannot be justified. Therefore,
researchers may wish to fit a regression line that, at some optimally chosen point on X, assumes another slope. The method of piecewise regression provides this option. It is presented in Chapter 15.
The chapter includes methods for linear and nonlinear piecewise regression. Chapter 16 presents an approach for regression when both the predictor and the criterion are dichotomous. Chapter 17
illustrates application of regression analysis to a sample data set. Specifically, the chapter shows readers how to specify command files in SYSTAT that allow one to perform the analyses presented in
the earlier chapters. This volume closes with five appendices. The first presents elements of matrix algebra. This appendix is useful for the sections where least-
squares solutions are derived. The same applies to Appendices B and C, which contain basics of differentiation and vector differentiation. Chapters 7 and 9, on polynomial regression, can be best
understood with some elementary knowledge about polynomials. Appendix D summarizes this knowledge. The last appendix contains the data used in Chapter 14, on longitudinal data. This book can be used
for a course on regression analysis at the advanced undergraduate and the beginning graduate level in the social and behavioral sciences. As prerequisites, readers need no more than elementary
statistics and algebra. Most of the techniques are explained step-by-step. For some of the mathematical tools, appendices are provided and can be consulted when needed. Appendices cover matrix
algebra, the mechanics of differentiation, the mechanics of vector differentiation, and polynomials. Examples use data not only from the social and behavioral sciences, but also from biology. Thus,
the book can also be of use for readers with biological and biometrical backgrounds. The structure and amount of materials covered are such that this book can be used in a one-semester course. No
additional reading is necessary. Sample command and result files for SYSTAT are included in the text. Many of the result files have been slightly edited by only including information of specific
importance to the understanding of examples. This should help students and researchers analyze their own data. We are indebted to a large number of people, machines, and structures for help and
support during all phases of the production of this book. We can list only a few here. The first is the Internet. Most parts of this book were written while one of the authors held a position at the
University of Jena, Germany. Without the Internet, we would still be mailing drafts back and forth. We thank the Dean of the Social Science College in Jena, Prof. Dr. Rainer K. Silbereisen, and the
head of the Department of Psychology at Michigan State University, Prof. Dr. Gordon Wood for their continuing encouragement and support. We are grateful to Academic Press, notably Nikki Levy, for
interest in this project. We are deeply impressed by Cheryl Uppling and her editors at Academic Press. We now know the intersection of herculean and sisyphian tasks. We are most grateful to the
creators of LaTeX. Without them, we would have missed the stimulating discussions between the two of us in which one of the authors claimed that all this could have been written in WordPerfect in at least as pretty a format as in LaTeX. Luckily or not, the other author prevailed. Most of all, our thanks are owed to Donata, Maxine, Valerie, and Julian. We also thank Elke.
Without them, not one line of this book would have been written, and many other fun things would not happen. Thank you, for all you do!
December 1997
Alexander von Eye
Christof Schuster
Chapter 1
INTRODUCTION Regression analysis is one of the most widely used statistical techniques. Today, regression analysis is applied in the social sciences, medical research, economics, agriculture,
biology, meteorology, and many other areas of academic and applied science. Reasons for the outstanding role of regression analysis include that its concepts are easily understood, that it is implemented in virtually every all-purpose statistical computing package, and that it can therefore be readily applied to the data at hand. Moreover, regression analysis lies at the heart of a wide range of
more recently developed statistical techniques such as the class of generalized linear models (McCullagh & Nelder, 1989; Dobson, 1990). Hence a sound understanding of regression analysis is
fundamental to developing one's understanding of modern applied statistics. Regression analysis is designed for situations where there is one continuously varying variable, for example, sales profit,
yield in a field experiment, or IQ. This continuous variable is commonly denoted by Y and termed the dependent variable, that is, the variable that we would like to explain or predict. For this
purpose, we use one or more other variables, usually denoted by X1, X 2 , . . . , the independent variables, that are related to the variable of interest. To simplify matters, we first consider the
situation where we are only interested in a single independent variable. To exploit the information that the independent variable carries about the dependent variable, we try to find a mathematical
function that is a good description of the assumed relation.

Figure 1.1: Scatterplot of aggressive impulses against incidences of physical aggression.

Of course, we do not expect the function to describe the dependent variable perfectly, as in statistics we always allow for randomness in the data, that is, some sort of variability,
sometimes referred to as error, that on the one hand is too large to be neglected but, on the other hand, is only a nuisance inherent in the phenomenon under study. To exemplify the ideas we present,
in Figure 1.1, a scatterplot of data that was collected in a study by Finkelstein, von Eye, and Preece (1994). One goal of the study was to relate the self-reported number of aggressive impulses to
the number of self-reported incidences of physical aggression in adolescents. The sample included n = 106 respondents, each providing a pair of values: X, that is, Aggressive Impulses, and Y, that is, open Physical Aggression against Peers. In shorthand notation, (Xi, Yi), i = 1, ..., n.
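In code, such a sample of paired observations can be represented directly as a list of (Xi, Yi) tuples. The values below are invented stand-ins, since the actual 106 observations from the study are not reproduced here; the sketch only illustrates the notation and a first summary of the two variables:

```python
# Hypothetical stand-in for the paired sample (X_i, Y_i), i = 1, ..., n:
# X = self-reported aggressive impulses,
# Y = self-reported incidences of physical aggression.
sample = [(9, 13), (11, 16), (19, 21), (14, 17), (22, 24)]

n = len(sample)
mean_x = sum(x for x, _ in sample) / n
mean_y = sum(y for _, y in sample) / n
print(f"n = {n}, mean X = {mean_x:.1f}, mean Y = {mean_y:.1f}")
# prints: n = 5, mean X = 15.0, mean Y = 18.2
```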
While it might be reasonable to assume a relation between Aggressive Impulses and Physical Aggression against Peers, scientific practice involves demonstrating this assumed link between the two
variables using data from experiments or observational studies. Regression analysis is
one important tool for this task. However, regression analysis is not only suited to suggesting decisions as to whether or not a relationship between two variables exists. Regression analysis goes
beyond this decision making and provides a different type of precise statement. As we already mentioned above, regression analysis specifies a functional form for the relationship between the
variables under study that allows one to estimate the degree of change in the dependent variable that goes hand in hand with changes in the independent variable. At the same time, regression analysis
allows one to make statements about how certain one can be about the predicted change in Y that is associated with the observed change in X. To see how the technique works we look at the data
presented in the scatterplot of Figure 1.1. On purely intuitive grounds, simply by looking at the data, we can try to make statements similar to the ones that are addressed by regression analysis.
First of all, we can ask whether there is a relationship at all between the number of aggressive impulses and the number of incidences of physical aggression against peers. The scatterplot shows a
very wide scatter of the points in the plot. This could be caused by imprecise measurement or a naturally high variability of responses concerning aggression. Nevertheless, there seems to be a slight
trend in the data, confirming the obvious hypothesis that more aggressive impulses lead to more physical aggression. Since the scatter of the points is so wide, it is quite hard to make very
elaborate statements about the supposed functional form of this relation. The assumption of a linear relation between the variables under study, indicated by the straight line, and a positive trend
in the data seems, for the time being, sufficiently elaborate to characterize the data. Every linear relationship can be written in the form Y = βX + α. Therefore, specifying this linear relation is equivalent to finding reasonable estimates for β and α. Every straight line or, equivalently, every linear function is determined by two points in a plane through which the line passes. Therefore, we expect to obtain estimates of β and α if we can only find these two points in the plane. This could be done in the following way. We select a value on the scale of the
independent variable, X, Aggressive Impulses in the example, and select all pairs of values that have a score on the independent variable that is close to this value. Now,
a natural predictor for the value of the dependent variable, Y, Physical Aggression against Peers, that is representative for these observations is the mean of the dependent variable of these values.
For example, when looking up in the scatterplot those points that have a value close to 10 on the Aggressive Impulse scale, the mean of the associated values on the physical aggression scale is near
15. Similarly, if we look at the points with a value close to 20 on the Aggressive Impulse scale, we find that the mean of the values of the associated Physical Aggression scale is located slightly
above 20. So let us take 22 as our guess. Now, we are ready to obtain estimates of β and α. It is a simple exercise to transform the coordinates of our hypothetical regression line, that is, (10, 15) and (20, 22), into estimates of β and α. One obtains as the estimate for β a value of 0.7 and as an estimate for α a value of 8. If we insert these values into the equation Y = βX + α and set X = 10, we obtain for Y a value of 15, which is just the corresponding value of Y from which we started. This can be done for the second point, (20, 22), as well. As we have already mentioned, the scatter of the points is very wide, and if we use our estimates for β and α to predict physical aggression for, say, a value of 15 or 30 on the Aggressive Impulse scale, we do not expect it to be very
accurate. It should be noted that this lack of accuracy is not caused by our admittedly very imprecise eyeballing method. Of course, we do not advocate using this method in general. Perhaps the most
obvious point that can be criticized about this procedure is that if another person is asked to specify a regression line from eyeballing, he or she will probably come to a slightly different set of
estimates for α and β. Hence, the conclusion drawn from the line would be slightly different as well. So it is natural to ask whether there is a generally agreed-upon procedure for obtaining the
parameters of the regression line, or simply the regression parameters. This is the case. We shall see that the regression parameters can be estimated optimally by the method of ordinary least
squares given that some assumptions are met about the population the data were drawn from. This procedure will be formally introduced in the next chapters. If this method is applied to the data in
Figure 1.1, the parameter estimates turn out to be 0.6 for β and 11 for α. When we compare these estimates to the ones above, we see that our intuitive method yields estimates that are not too
different from the least squares
estimates calculated by the computer. Regardless of the assumed functional form, obtaining parameter estimates is one of the important steps in regression analysis. But as estimates are obtained from
data that are to a certain extent random, these estimates are random as well. If we imagine a replication of the study, we would certainly not expect to obtain exactly the same parameter estimates
again. They will differ more or less from the estimates of the first study. Therefore, a decision is needed as to whether the results are merely due to chance. In other words, we have to deal with
the question of how likely it would be that we will not get the present positive trend in a replication study. It will be seen that the variability of parameter estimates depends not on a single
factor, but on several factors. Therefore, it is much harder to find an intuitively reasonable guess of this variability than a guess of the point estimates for β and α. With regression analysis we
have a sound basis for examining the observed data. This topic is known as hypotheses testing for regression parameters or, equivalently, calculating confidence intervals for regression parameters.
This topic will also be discussed in the following chapters. Finally, having estimated parameters and tested hypotheses concerning these parameters, the whole procedure of linear regression rests on
certain assumptions that have to be fulfilled at least approximately for the estimation and testing procedures to yield reliable results. It is therefore crucial that every regression analysis is
supplemented by checking whether the inherent model assumptions hold. As an outline of these would not be reasonable without having formalized the ideas of estimation and hypothesis testing, we delay
this discussion until later in this book. But it should be kept in mind that model checking is an essential part of every application of regression analysis. After having described the main ideas of
regression analysis, we would like to make one remark on the inconsistent use of the term "regression analysis." As can be seen from the example above, we regarded X, the independent variable, as a
random variable that is simply observed. There are other situations where X is not a random variable but is determined prior to the study. This is the typical situation when data from a planned
experiment are analyzed. In these situations the independent variable is usually under the experimenter's control and its values can therefore be set arbitrarily. Hence, X is fixed and not random. In
both situations the
analysis is usually referred to as regression analysis. One reason for this terminological oddity might be that the statistical analysis, as outlined in this
book, does not vary with the definition of the randomness of X. With X fixed, or at least considered fixed, formulas are typically easier to derive because there is no need to consider the joint
distribution of all the variables involved, only the univariate distribution of the dependent variable. Nevertheless it is important to keep these two cases apart as not all methods presented in this
book can be applied to both situations. When X is fixed the model is usually referred to as the linear model (see, for instance, Searle, 1971; Graybill, 1976; Hocking, 1996). From this perspective,
regression analysis and analysis of variance are just special cases of this linear model.
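The two estimation routes sketched in this chapter can be written out in a few lines of code. The first function reproduces the eyeballing shortcut: it turns the two guessed points (10, 15) and (20, 22) into the estimates β = 0.7 and α = 8 quoted above. The second implements the closed-form ordinary least squares estimator for simple regression; note that this formula is only derived formally in the next chapters, and the function names here are our own illustration, not part of the text:

```python
def line_through(p1, p2):
    """Slope and intercept of the straight line through two points."""
    (x1, y1), (x2, y2) = p1, p2
    beta = (y2 - y1) / (x2 - x1)   # rise over run
    alpha = y1 - beta * x1         # shift so the line passes through p1
    return alpha, beta

def ols_simple(xs, ys):
    """Closed-form OLS estimates for Y = alpha + beta * X."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    alpha = my - beta * mx
    return alpha, beta

# The eyeballed line of this chapter:
alpha, beta = line_through((10, 15), (20, 22))
print(alpha, beta)  # 8.0 0.7
```

Applying `ols_simple` to the actual study data would yield the 0.6 and 11 reported above; on the two guessed points alone, both routes coincide, since a straight line through two points fits them exactly.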
Chapter 2
SIMPLE LINEAR REGRESSION Starting from linear functions, Section 2.1 explains simple regression in terms of functional relationships between two variables. Section 2.2 deals with parameter estimation
and interpretation.
2.1 Linear Functions and Estimation
The following illustrations describe a functional relationship between two variables. Specifically, we focus on functions of the form

Y = α + βX,   (2.1)

where Y is the criterion variable, β is some parameter, X is the predictor variable, and α is also some parameter. Parameters, variables, and their relationship can be explained using Figure 2.1. The thick line in Figure 2.1 depicts the graph of the function Y = α + βX, with β = 2, α = 1, and X ranging between -1 and +3. Consider the following examples:
• for X = 2 we obtain Y = 1 + 2 · 2 = 5
• for X = 3 we obtain Y = 1 + 2 · 3 = 7
• for X = 0 we obtain Y = 1 + 2 · 0 = 1

Figure 2.1: Functional linear relationship between variables Y and X.

Using these examples and Figure 2.1 we find that

1. α is the intercept parameter, that is, the value assumed by Y when X = 0. For the example in the figure and the third numerical example we obtain Y = 1 when X = 0, that is, Y = α; the level of the intercept is indicated as a horizontal line in Figure 2.1.

2. β is the slope parameter, that is, the increase in Y that is caused by an increase of one unit in X. For the example in the figure and the above numerical examples we find that when X increases by 1, for example, from X = 2 to X = 3, Y increases by 2, that is, β = 2.

3. Changes in α cause a parallel shift of the graph. Consider the upper thin line in Figure 2.1. This line is parallel to the thick line. The only difference between these two lines is that whereas the thinner line has an intercept of α = 4, the thicker line has an intercept of α = 1; that is, the thicker line depicts the function Y = 1 + 2X, and the thinner line depicts the function Y = 4 + 2X. Positive changes in intercept move a curve upward. Negative changes in intercept move a curve downward.

4. Changes in β cause the angle that the line has with the X-axis to change. Consider the steepest line in Figure 2.1. This line has the
same intercept as the thick line. However, it differs from it in its slope parameter. Whereas the thick line increases by two units in Y for each unit increase in X, the thinner line increases by three units in Y for each unit in X; the thinner line depicts the function Y = 1 + 3X. Positive values of β cause the curve to increase as X increases. Negative values of β cause the curve to decrease as X increases.

Figure 2.1 depicts the relationship between X and Y so that X is the predictor and Y the criterion. Consider a researcher who wishes
to predict Cognitive Complexity from Reading Time. This researcher collects data in a random sample of subjects and then estimates a regression line. However, before estimating regression parameters,
researchers must make a decision concerning the type or shape of the regression line. For example, a question is whether an empirical relationship can be meaningfully depicted using linear functions of the type that yield straight regression lines.¹ In many instances, this question is answered affirmatively. When estimating regression lines, researchers immediately face the problem that, unlike in
linear algebra, there is no one-to-one relationship between predictor values and criterion values. In many instances, one particular predictor value is responded to with more than one, different
criterion values. Similarly, different predictor values can be responded to with the same criterion value. For instance, regardless of whether one drives on a highway in Michigan at 110 miles per
hour or 120 miles per hour, the penalty is loss of one's driver license. As a result of this less than perfect mapping, a straight line will not be able to account for all the variance in a data set.
There will always be variance left unexplained. On the one hand, this may be considered a shortcoming of regression analysis. Would it not be nice if we could explain 100% of the variance of our
criterion variable? On the other hand, explaining 100% of the variance is, maybe, not a goal worth pursuing. The reason for this is that not only in social sciences, but also in natural sciences,
measurements are never perfect. Therefore, explaining 100% of the variance either means that there is no error variance, which is very unlikely to be the case, or that the explanation confounded error variance with true variance. While the claim of no error variance is hard to make plausible, the latter is hardly defensible.

¹The distinction between linear functions and linear regression lines is important. Chapters 7 and 9 of this book cover regression lines that are curvilinear. However, parameters for these lines are estimated using linear functions.

Therefore, we use for regression analysis a function that is still linear but differs from (2.1) in one important
aspect: it contains a term for unexplained variance, also termed error or residual variance. This portion of variance exists for two reasons. The first is that measures contain errors. This is almost always the case. The second reason is that the linear regression model does not allow one to explain all of the explainable variance. This is also almost always the case. However, whereas
the first reason is a reason one has to live with, if one can, the second is curable, at least in part: one can try alternative regression models. But first we look at the simple linear regression.
The function one uses for simple linear regression, including the term for residual variance, is

y_i = α + β x_i + ε_i,
where

- y_i is the value that person i has on the criterion, Y.
- α is the intercept parameter. Later, we will show how the intercept can be determined so that it is the arithmetic mean of Y.
- β is the slope parameter. Note that neither α nor β has the person subscript, i. Parameters are typically estimated for the entire sample; only if individuals are grouped can two individuals have different parameter estimates.
- x_i is the value that person i has on the predictor, X.
- ε_i is that part of person i's value y_i that is not accounted for by the regression equation; this part is the residual. It is defined as

ε_i = y_i − ŷ_i,    (2.2)

where ŷ_i is the value predicted using the regression equation (2.1), that is, ŷ_i = α + β x_i.

Figure 2.2: Illustration of residuals.
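As a numerical sketch of these definitions (with invented data and parameter values, purely for illustration, not Meumann's example), the predicted values and residuals can be computed as follows:

```python
# Sketch of the model y_i = alpha + beta * x_i + e_i and the residual
# e_i = y_i - yhat_i.  Data and parameters are made up for illustration.
alpha, beta = 1.0, 2.0          # hypothetical intercept and slope
x = [0.0, 1.0, 2.0, 3.0]
y = [1.3, 2.8, 5.2, 6.9]        # observed values, not exactly on the line

y_hat = [alpha + beta * xi for xi in x]        # predicted values
resid = [yi - yh for yi, yh in zip(y, y_hat)]  # residuals, in units of Y

print(y_hat)   # [1.0, 3.0, 5.0, 7.0]
print([round(r, 1) for r in resid])
```

Note that the residuals are measured vertically, in units of Y, exactly as in Figure 2.2.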
Figure 2.2 displays a regression line and a number of data points around this line and the residuals. There is only one point that sits directly on the regression line. The other six data points vary
in their distances from the regression line. Residuals are expressed in units of Y. Thus, the distance from the regression line, that is, the size of the residual, is depicted by the vertical lines.
These lines are not the shortest connections to the regression line. The shortest lines would be those that are orthogonal to the regression line and originate in the data points. However, there
is an explanation for the solution that expresses residuals in units of Y, that is, lines parallel to the Y-axis. The explanation is that differences between estimated values (regression line) and
observed values (square data points) are best expressed in units of the criterion, Y (see Equation (2.2)). For example, weather forecasts typically include temperature forecasts for the coming days.
It seems reasonable to express both the predicted temperature and the difference between measured and predicted temperatures in units of the temperature scale. Therefore, standard regression
expresses residuals in units of Y. Chapter 12 of this book presents a solution for regression that expresses residuals in units of the orthogonal distance between the regression line and the data points.

CHAPTER 2. SIMPLE LINEAR REGRESSION

Parameters α and β are not subscripted by individuals.
This suggests that these parameters describe the entire population. Basics about matrix algebra are given in Appendix A. This variant of algebra allows one to express algebraic expressions, in
particular multiple equations, in an equivalent yet far more compact form. Readers not familiar with matrix algebra will benefit most from Section 2.2 and the following chapters if they make
themselves familiar with it. Readers already familiar with matrix algebra may skip the excursus or use it as a refresher.
Parameter Estimation
This section is concerned with estimation and interpretation of parameters for regression equations. Using the methods introduced in Appendix B, we present a solution for parameter estimation. This
solution is known as the ordinary least squares solution. Before presenting this solution, we discuss the method of least squares within the context of alternative criteria for parameter estimation.
2.2.1 Criteria for Parameter Estimation
A large number of criteria can be used to guide optimal parameter estimation. Many of these criteria seem plausible. The following paragraphs discuss five optimization criteria. Each of these
criteria focuses on the size of the residuals, ε_i, where i indexes subjects. The criteria apply to any form of regression function, that is, to linear, quadratic, or any other kinds of regression
functions. One of the main characteristics of these criteria is that they are not statistical in nature. When devising statistical tests, one can exploit characteristics of the residuals that result
from optimization according to a given criterion. However, the criteria themselves can be discussed in contexts other than statistics. For instance, the least squares criterion finds universal
use. It was invented by Gauss when he solved the problem of constructing a road system that had to be optimal for military use.
All Residuals ε_i Must Be Zero

More specifically, this criterion requires that

ε_i = 0, for i = 1, ..., n.
This first criterion may seem plausible. However, it can be met only under rare conditions. It implies that the relationship between X and Y is perfectly linear, and that the regression line goes
exactly through each data point. As was discussed in Section 2.1, this is impractical for several reasons. One reason is that data are typically measured with error. Another reason is that, as
soon as there is more than one case for values of X, researchers that employ this criterion must assume that the variation of Y values is zero, for each value of X. This assumption most probably
prevents researchers from ever being able to estimate regression parameters for empirical data under this criterion.

Minimization of the Sum of Residuals

Let ŷ_i be the estimated value for case i and y_i the observed value. Then, minimization of the sum of residuals can be expressed as follows:

Q = Σ_{i=1}^{n} (y_i − ŷ_i) = 0.    (2.5)
This criterion seems most reasonable. However, as it is stated, it can yield misleading results. One can obtain Q = 0 even if the regression line is the worst possible. This can happen when the
differences y_i − ŷ_i cancel each other out when summed. This is illustrated in Figure 2.3. Figure 2.3 displays a string of data points. The first regression line suggests that, for
these data points, the criterion given in (2.5) can be fulfilled. This line goes through each and every data point. As a result, we obtain for the sum of residuals Q = 0. Unfortunately, the same
applies to the second, perpendicular line. The residuals for this line cancel each other out, because they lie at equal distances y_i − ŷ_i on either side of this regression line. As a result, the sum of residuals is Q = 0.
Figure 2.3: Two solutions for Q = 0.
Because solutions are not unique, and because one would have to devise additional criteria to discriminate between such cases, this second criterion has been largely abandoned.
The next criterion seems more practical.
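The cancellation problem can be demonstrated with a toy data set (made up here for illustration): a horizontal line through the mean of Y satisfies the sum-of-residuals criterion just as well as the perfect line, and only the absolute residuals tell the two apart.

```python
# Two candidate lines for the points (1,2), (2,4), (3,6), which lie
# exactly on y = 2x.  Made-up toy data for illustration.
x = [1, 2, 3]
y = [2, 4, 6]

def residuals(b0, b1):
    return [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]

perfect = residuals(0, 2)   # the true line y = 2x
flat    = residuals(4, 0)   # horizontal line through the mean of y

# Both lines satisfy the "sum of residuals = 0" criterion ...
print(sum(perfect), sum(flat))          # 0 0
# ... but only the absolute-residual criterion tells them apart:
print(sum(abs(e) for e in perfect))     # 0
print(sum(abs(e) for e in flat))        # 4
```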
Minimization of the Sum of the Absolute Residuals

To avoid the problems one can encounter when using (2.5), one can attempt minimization of the absolute residuals, |y_i − ŷ_i|, that is,

Q = Σ_{i=1}^{n} |y_i − ŷ_i| → min.    (2.6)
To illustrate Criterion (2.6), consider Figure 2.3 again. As a matter of course, the first regression line yields Q = 0. However, no other line yields this minimal value for Q. A large value for Q
results for the perpendicular line. While solving the biggest problem, that is, the problem of counterintuitive regression slopes, Criterion (2.6) presents another problem. This problem concerns the
uniqueness of solutions. Sole application of (2.6) cannot guarantee that solutions are unique. Solutions that are not unique, however, occur only rarely in the analysis of empirical data. Therefore, Criterion (2.6) is occasionally applied and can be found in some statistical software packages (L1 regression in S+; Venables & Ripley, 1994).
Tshebysheff Minimization

Whereas the previous criteria focused on the residuals jointly, the Tshebysheff Criterion focuses on the individual residuals, ε_i. The criterion proposes that

max_i |ε_i| → min.
In words, this criterion yields a curve so that the largest absolute residual is as small as possible. This is an intuitively sensible criterion. Yet, it is problematic for it is deterministic. It
assumes that there is no error (or only negligible error) around measures. As a result, solutions from the Tshebysheff Criterion tend to severely suffer from outliers. One extreme value is enough to
create bias, that is, dramatically change the slope of the regression line. Therefore, social science researchers rarely use this criterion. In contrast, the following criterion is the most
frequently used.
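The outlier sensitivity of the minimax criterion can be illustrated with a small brute-force sketch (made-up data and a coarse grid search, not an algorithm from the text):

```python
# Brute-force illustration of how sensitive the Tshebysheff (minimax)
# criterion is to a single extreme value.  We fit a line by grid search,
# minimizing the largest absolute residual.
def minimax_fit(x, y):
    """Grid search for the line minimizing the largest absolute residual."""
    best_b0, best_b1, best_worst = 0.0, 0.0, float("inf")
    for i in range(-120, 121):            # intercepts -6.00 .. 6.00
        b0 = i / 20
        for j in range(0, 121):           # slopes 0.00 .. 6.00
            b1 = j / 20
            worst = max(abs(yi - (b0 + b1 * xi)) for xi, yi in zip(x, y))
            if worst < best_worst:
                best_b0, best_b1, best_worst = b0, b1, worst
    return best_b0, best_b1, best_worst

x = [0, 1, 2, 3, 4]
_, slope_clean, _ = minimax_fit(x, [0, 2, 4, 6, 8])      # points on y = 2x
_, slope_out, q_out = minimax_fit(x, [0, 2, 4, 6, 20])   # last point is extreme

print(slope_clean, slope_out)   # 2.0 5.0
```

A single extreme value moves the minimax slope from 2.0 to 5.0, confirming that this criterion is driven entirely by the worst-fitting observation.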
Minimization of the Squared Residuals: The Method of Ordinary Least Squares (OLS)

While solutions from Criterion (2.6) may be appropriate in many instances, lack of uniqueness of solutions minimizes enthusiasm for this criterion. Curiously, what was probably the idea that led to the development of OLS estimation² was the notion of distance between two points in n-space. Note that this distance is unique and can also be easily calculated. It is well known that in a plane the distance between the two points p1 and p2 can be obtained by using the Pythagorean theorem through

d²(p1, p2) = (x_{11} − x_{12})² + (x_{21} − x_{22})² = Σ_{i=1}^{2} (x_{i1} − x_{i2})²,

²On a historical note, OLS estimation was created by the mathematician Gauss, the same person who, among other mathematical achievements, derived the formula for the normal distribution.
Figure 2.4: Distance between two points.
where p1 and p2 are two points in a plane with coordinates p1 = (x_{11}, x_{21}) and p2 = (x_{12}, x_{22}), and d²(p1, p2) denotes the squared distance between the two points. This is illustrated in Figure 2.4. Now, suppose we have two points p1 and p2 in n-space, that is, p1 = (x_{11}, x_{21}, x_{31}, ..., x_{n1}) and p2 = (x_{12}, x_{22}, x_{32}, ..., x_{n2}). The squared distance is calculated as before, with the sole exception that the index i now ranges from 1 to n, that is,

d²(p1, p2) = Σ_{i=1}^{n} (x_{i1} − x_{i2})².
The least squares criterion for parameter estimation in regression analysis has the following completely analogous form:

Q = Σ_{i=1}^{n} (y_i − ŷ_i)² → min.    (2.7)
In words, Criterion (2.7) states that we select the predicted values ŷ such that Q, that is, the sum of the squared distances between the observations and the predictions, becomes a minimum. Of course, we will not select an arbitrary point in n-space for our prediction, because then we could choose ŷ = y and the distance would always be zero. Only predictions that meet the model

ŷ_i = b0 + b1 x_i    (2.8)
are considered. Once the data are collected, the x_i values in (2.8) are known. Thus, the squared distance Q depends only on the values of b0 and b1. For every possible pair of b0 and b1 we could calculate Q, and the OLS principle states that we should select as reasonable estimates of the true but unknown regression coefficients β0 and β1 the values of b0 and b1 for which Q becomes a minimum. Thus far we have given some reasons why OLS might be a sensible choice for estimating regression coefficients. In fact, when examining OLS estimators it turns out that they have a number of
very desirable characteristics:

1. Least squares solutions are unique; that is, there is only one minimum.
2. Least squares solutions are unbiased; that is, parameters estimated using least squares methods do not contain any systematic error.
3. Least squares solutions are consistent; that is, when the sample size increases the solution converges toward the true population values.
4. Least squares solutions are efficient; that is, their variance is finite and there is no solution with smaller variance among all unbiased linear estimators.

Moreover, these estimators can be determined using closed forms. As a result, it is straightforward to determine this minimum computationally.

2.2.2
This section presents a solution to the least squares minimization problem; that is, it yields the estimates for regression parameters. But before presenting the solution for estimation of regression
parameters we take a closer and more formal look at the regression model and its characteristics.
The Regression Model: Form and Characteristics

As was stated in Section 2.1, the population model for simple regression can be formulated as

y_i = β0 + β1 x_i + ε_i,    (2.9)

where y_i is subject i's value on the observed variable, β0 and β1 are the intercept and the slope parameters, respectively, x_i is a known constant, and ε_i is a random residual term. This regression model has the following characteristics:

1. It is simple; that is, it involves only one predictor variable.
2. It is linear in the parameters; that is, parameters β0 and β1 have exponents equal to 1, do not appear as exponents, and are not multiplied or divided by any other parameter.
3. It is linear in the predictor; that is, the predictor also appears only in the first power. A model linear in its parameters and the predictor, that is, the independent variable, is also termed a first-order linear model.³
4. It is assumed that the random residual term, ε_i, has an expected mean of E(ε_i) = 0 and variance V(ε_i) = σ². In words, the residuals are assumed to have a mean of zero and a constant variance.
5. The residuals for two different observations i and j are assumed to be independent. Therefore, their covariance is zero, that is, C(ε_i, ε_j) = 0.
6. Variable X is a constant.⁴
7. The regression model contains a systematic component, β0 + β1 x_i, and a random term, ε_i. Therefore, the observed y_i is a random variable.

³It should be noted that, in curvilinear regression, predictor variables can appear in nonlinear form. However, as long as the parameters appear only in the first power, the regression model can be termed linear.
⁴Remember the distinction between fixed and random predictors in the Introduction.
8. The observed value, y_i, differs from the value expected from the regression equation by an amount given by ε_i.
9. Because the residual terms, ε_i, are assumed to have constant variance, the observed values, y_i, have the same constant variance: V(y_i) = σ². This implies that the variance of Y does not change with X.
10. Because the residual covariance C(ε_i, ε_j) = 0 for i ≠ j, the responses are uncorrelated also: C(y_i, y_j) = 0, for i ≠ j.
11. Y must be real-valued. The regression equation given in (2.9) can yield estimates that are positive, negative, or zero. In addition, (2.9) can yield predicted values at
virtually any level of resolution. If the criterion cannot assume these values, the linear regression model in tandem with OLS estimators may not be the appropriate model. There are no such
constraints placed on X. X can even be categorical (see Chapter 4 of this volume). In order to obtain the estimates of the regression coefficients the easy way, we need some calculus that is not
covered in the excursus on matrix algebra. The excursus in Appendix B reviews this calculus (for more details see, e.g., Klambauer, 1986). Readers well trained in calculus may wish to skip the
excursus. It should be noted that, for valid estimation of regression parameters, neither X nor Y is required to be normally distributed. In contrast, the significance tests and the construction of
confidence intervals developed later require that Y be normally distributed.
Estimating Regression Slope and Intercept Parameters

This section presents the standard OLS solution for the slope and intercept regression parameters. Recall from the above section that the
function to be minimized is

Q = Σ_{i=1}^{n} (y_i − ŷ_i)² → min.    (2.10)

The estimated Y value for case i is

ŷ_i = b0 + b1 x_i.    (2.11)

Inserting (2.11) into (2.10) yields

Q(b0, b1) = Σ_{i=1}^{n} (y_i − b0 − b1 x_i)².
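Before deriving the closed-form solution, the OLS principle can be checked numerically: with made-up data, Q evaluated at the least squares estimates is smaller than at nearby pairs of coefficients. A sketch:

```python
# A numerical check, with made-up data, that Q(b0, b1) = sum (y_i - b0 - b1*x_i)^2
# is smallest at the least squares estimates and grows when either
# coefficient is moved away from them.
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 8.1, 9.7]

n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
    / sum((xi - xbar) ** 2 for xi in x)
b0 = ybar - b1 * xbar                      # the solution derived below

def Q(c0, c1):
    return sum((yi - c0 - c1 * xi) ** 2 for xi, yi in zip(x, y))

q_min = Q(b0, b1)
assert q_min < Q(b0 + 0.1, b1)             # nudging the intercept increases Q
assert q_min < Q(b0, b1 - 0.1)             # nudging the slope increases Q
print(round(b0, 2), round(b1, 2), round(q_min, 3))   # 0.18 1.94 0.124
```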
Note that we consider Q(b0, b1) as a function of b0 and b1, and not as a function of y_i and x_i. The reason for this is that once the data are collected they are known to us. What we are looking for is a good guess for b0 and b1; therefore, we treat b0 and b1 as variables. We now take the two partial derivatives of Q(b0, b1). After this, all that remains to be done is to find that point (b0, b1) where these partial derivatives both become zero. This is accomplished by setting them equal to zero, yielding two equations in two unknowns. These equations are then solved for b0 and b1. The solutions obtained are the OLS estimates of β0 and β1.

First we take the partial derivative of Q(b0, b1) with respect to b0. Interchanging summation and differentiation and then applying the chain rule,

∂Q/∂b0 = ∂/∂b0 Σ_{i=1}^{n} (y_i − b0 − b1 x_i)²
       = Σ_{i=1}^{n} (−2)(y_i − b0 − b1 x_i)
       = −2 Σ_{i=1}^{n} (y_i − b0 − b1 x_i).

The last equation is set to zero and can then be further simplified by dropping the factor −2 in front of the summation sign and resolving the expression enclosed by parentheses:

Σ_{i=1}^{n} (y_i − b0 − b1 x_i) = 0
Σ_{i=1}^{n} y_i − n b0 − b1 Σ_{i=1}^{n} x_i = 0
n ȳ − n b0 − n b1 x̄ = 0.

Note that in this derivation we have used the fact that Σ_{i=1}^{n} x_i = n x̄. This applies accordingly to the y_i. Finally, solving for b0 yields

b0 = ȳ − b1 x̄.    (2.12)
It can be seen that we need b1 to obtain the estimate of b0. Now we take the partial derivative of Q(b0, b1) with respect to b1, that is,

∂Q/∂b1 = ∂/∂b1 Σ_{i=1}^{n} (y_i − b0 − b1 x_i)²
       = Σ_{i=1}^{n} (−2 x_i)(y_i − b0 − b1 x_i)
       = −2 Σ_{i=1}^{n} x_i (y_i − b0 − b1 x_i).

Again, setting this equation to zero and simplifying, we obtain

Σ_{i=1}^{n} x_i (y_i − b0 − b1 x_i) = 0
Σ_{i=1}^{n} x_i y_i − b0 Σ_{i=1}^{n} x_i − b1 Σ_{i=1}^{n} x_i² = 0
Σ_{i=1}^{n} x_i y_i − n b0 x̄ − b1 Σ_{i=1}^{n} x_i² = 0.
Substituting the expression for b0 from (2.12) yields

Σ_{i=1}^{n} x_i y_i − (ȳ − b1 x̄) n x̄ − b1 Σ_{i=1}^{n} x_i² = 0
Σ_{i=1}^{n} x_i y_i − n x̄ ȳ = b1 (Σ_{i=1}^{n} x_i² − n x̄²).

Finally, solving for b1,

b1 = (Σ_{i=1}^{n} x_i y_i − n x̄ ȳ) / (Σ_{i=1}^{n} x_i² − n x̄²).
Regression parameters are thus obtained by first calculating b1 and then using this estimate in the formula for calculating b0. It should be noted that, from a purely mathematical viewpoint, we would have to go a step further and prove that the parameter estimates just obtained indeed give the minimum of the function surface Q(b0, b1) in three-dimensional space, and not a maximum or a saddle point. But we can assure the reader that this solution indeed describes the minimum. With further algebra the slope parameter, b1, can be put in the following form:

b1 = Σ_{i=1}^{n} (x_i − x̄)(y_i − ȳ) / Σ_{i=1}^{n} (x_i − x̄)².    (2.13)

The numerator of (2.13) contains the cross product of the centered⁵ variables X and Y, defined as the inner product of the vectors (x − x̄) and (y − ȳ), that is, (x − x̄)′(y − ȳ).
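Since (2.13) is algebraically equivalent to the computational form derived above, both must produce the same estimate. A quick check with invented data:

```python
# The two algebraically equivalent forms of b1 -- the form derived above
# and the centered cross-product form (2.13) -- give the same value.
# Made-up data for illustration.
x = [1.0, 2.0, 4.0, 7.0]
y = [3.0, 5.0, 6.0, 10.0]

n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n

b1_raw = (sum(xi * yi for xi, yi in zip(x, y)) - n * xbar * ybar) \
         / (sum(xi * xi for xi in x) - n * xbar ** 2)
b1_centered = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
              / sum((xi - xbar) ** 2 for xi in x)

print(b1_raw, b1_centered)   # identical up to floating point error
```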
Data Example

The following numerical example uses data published by Meumann (1912). The data describe results from a memory experiment. A class of children in elementary school (sample size not reported) learned a sample of 80 nonsense syllables until each student was able to reproduce 100% of the syllables. Five minutes after everybody had reached this criterion, students were asked to recall the syllables again. This was repeated a total of eight times at varying time intervals. After some appropriate linearizing transformation, trials were equidistant. Because the experimenter determined when to recall syllables, the independent variable is a fixed predictor. At each time the average number of recalled syllables was calculated. These are the data that we use for illustration. We regress the dependent variable Y, number of nonsense syllables recalled, onto the independent variable X, trial without intermittent repetition.⁶

⁵Centering a variable is defined as subtracting the variable's arithmetic mean from each score.

Table 2.1 presents the raw data and the centered variable values. Table 2.1 shows the raw data in the left-hand panel. The first
column contains the Trial numbers, X. The second column contains the average recall rates, Y. The mean value for the trial number is x̄ = 4.5, while the mean for the recall rate is ȳ = 52.32. The next four columns illustrate the numerical steps one needs to perform for the calculation of parameter estimates. The model we investigate for the present example can be cast as follows:

Recall = β0 + β1 · Trial + ε.

The results displayed in the first four columns of the right panel are needed for estimating the slope parameter, b1, using Equation (2.13). We use this formula for the following calculations. The first of these columns contains the centered trial variable. We calculate these values by subtracting from each trial number the mean of the trials, that is, x̄ = 4.5. The next column contains the square of the centered trial numbers. We need the sum of these values for the denominator of Formula (2.13). This sum appears at the bottom of this column. The fifth column contains the centered recall rates. The summands of the cross product of the centered predictor, Trials, and the centered
⁶Considering the well-known, nonlinear shape of decay, that is, forgetting curves, one may wonder whether the distances between recall trials were equal. Without having access to the original data,
we assume that the time distance between recall trials increased systematically with the number of trials. If this assumption is correct, one cannot validly regress recall rates onto time, unless one
possesses information about the time passed between the trials.
Table 2.1: Estimation of Slope Parameter for Data from Forgetting Experimentᵃ

  x_i    y_i      x_i'    (x_i')²    y_i'      (y_i')²    x_i'·y_i'
  1      78.00    −3.5    12.25      25.68     659.46     −89.88
  2      70.88    −2.5     6.25      18.56     344.47     −46.40
  3      56.56    −1.5     2.25       4.24      17.98      −6.36
  4      37.92    −0.5     0.25     −14.40     207.36       7.20
  5      54.24     0.5     0.25       1.92       3.69       0.96
  6      48.72     1.5     2.25      −3.60      12.96      −5.40
  7      39.44     2.5     6.25     −12.88     165.89     −32.20
  8      32.80     3.5    12.25     −19.52     381.03     −68.32
  Sums                    42.00               1792.84    −240.40

ᵃThe variables x_i' and y_i' denote the centered x and y values, respectively. For instance, x_i' = (x_i − x̄).
criterion, Recall, appear in the last column of Table 2.1. The sum of the cross products appears at the bottom of this column. It is needed for the numerator of Formula (2.13). Using the calculated
values from Table 2.1 in Formula (2.13) we obtain

b1 = Σ_{i=1}^{n} (x_i − x̄)(y_i − ȳ) / Σ_{i=1}^{n} (x_i − x̄)² = −240.4 / 42 = −5.72.
For the intercept parameter we obtain b0 = ȳ − b1 x̄ = 52.3 − (−5.72) · 4.5 = 78.04. Figure 2.5 presents the Trial × Recalled Syllables plot. It also contains the regression line with the parameters
just calculated. Figure 2.5 suggests that, with Trial 4 being the only exception, the linear regression line provides a good rendering of the decay of the learned material. In order to quantify how
well a regression line depicts a relationship, most researchers use methods that include statistical significance tests and measures of variance accounted for. The former are presented in
Section 2.5. One example of the latter is presented in the following section.

Figure 2.5: Scatterplot showing the dependence of the recall performance on trial.

2.2.3 Goodness of Fit of the Regression Model
One measure of how well a statistical model explains the observed data is the coefficient of determination, that is, the square of the Pearson correlation coefficient, r, between y and ~. This
measure describes the percentage of the total variance that can be explained from the predictor-criterion covariance. In its general form, the correlation coefficient is given by

r = Σ_i (x_i − x̄)(y_i − ȳ) / √(Σ_i (x_i − x̄)² Σ_i (y_i − ȳ)²).

If we replace x with ŷ in the above formula we obtain the multiple correlation R. The reasons for making a distinction between r and R are that

- r is a measure of association between two random variables, whereas R is a measure of association between a random variable y and its prediction from a regression model.
- r lies in the interval −1 ≤ r ≤ 1, while the multiple correlation R cannot be negative; that is, it lies in the interval 0 ≤ R ≤ 1.
- R is always well defined, regardless of whether the independent variable is assumed to be random or fixed. In contrast, calculating the correlation between a random variable, Y, and a fixed predictor variable, X, that is, a variable that is not considered random, makes no sense.

The square of the multiple correlation, R², is the coefficient of determination. If
the predictor is a random variable, the slope parameter can be expressed as

b1 = √(Σ_i (y_i − ȳ)² / Σ_i (x_i − x̄)²) · r = (s_y / s_x) · r,

where s_x² and s_y² are the variances of the predictor and the criterion, respectively. If the predictor, X, is a random variable it makes sense to calculate the correlation between X and Y. Then, R² is identical to r², as ŷ is just a linear transformation of X, and correlations are invariant against linear transformations. Alternatively, R² can be expressed as the ratio of the variance explained
by regression, SSR, and the total variance, SSTO, or

R² = SSR / SSTO = 1 − SSE / SSTO,    (2.15)

where SSE is the sum of the squared residuals. The right-hand term in (2.15) suggests that R² is a measure of proportionate reduction in error. In other words, R² is a measure of the proportionate reduction in the variability of Y that can be accomplished by using predictor X.
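Applied to the forgetting data of Table 2.1, Formula (2.15) gives the following (the text does not report R² for this example; the value below is computed here):

```python
# R^2 for the Meumann forgetting data, computed as 1 - SSE/SSTO, with
# b0 and b1 recomputed (unrounded) from the data in Table 2.1.
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [78.00, 70.88, 56.56, 37.92, 54.24, 48.72, 39.44, 32.80]

n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
    / sum((xi - xbar) ** 2 for xi in x)
b0 = ybar - b1 * xbar

sse = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
ssto = sum((yi - ybar) ** 2 for yi in y)
r2 = 1 - sse / ssto
print(round(r2, 2))   # about 0.77
```

The regression on Trial thus accounts for roughly three quarters of the variability of the recall rates.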
Interpreting Regression Parameters
This section presents aids for the interpretation of the regression parameters b0 and b1. We begin with b0.

2.3.1 The Intercept Parameter
Formula (2.12) suggests that b0 is composed of two summands. The first summand is the arithmetic mean of Y, ȳ. The second is the product of the slope parameter, b1, and the arithmetic mean of X, x̄. Now suppose that X is centered, that is, the arithmetic mean of X has been subtracted from each value x_i. Centering transforms the array of x values as follows:

x_1 → x_1 − x̄, ..., x_n → x_n − x̄.
Centering constitutes a parallel shift in the data. Parallel shifts are linear transformations. Regression slopes are invariant against parallel shifts. The effect of centering is that the mean of an
array of data is shifted to be zero. Centering X has the effect that the second summand in (2.12) is zero. Therefore, whenever X is centered, the intercept parameter equals the arithmetic mean of Y.
When X is not centered, the intercept parameter equals the difference between the arithmetic mean of Y and the product of the slope parameter and the arithmetic mean of X. Consider the example given
in Table 2.1. The criterion in this example, Recall, had an arithmetic mean of ȳ = 418.56/8 = 52.3. The intercept parameter was b0 = 78.04. We conclude that X had not been centered for this example. The difference between 52.3 and 78.04 must then be b1 x̄ = −5.72 · 4.5 = −25.74. Subtracting this value from 52.3 yields 78.04. Readers are invited to recalculate b0 using the centered values for Trial
in the third column of Table 2.1. The interpretation of intercept parameters across samples is possible, in particular when X has been centered. It has the same implications as mean comparisons
across samples.
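This centering property can be verified with the forgetting data: refitting after centering Trial leaves the slope unchanged and turns the intercept into the mean of Y. A sketch:

```python
# Centering the predictor: refitting the forgetting data with centered
# trial numbers leaves the slope unchanged and makes the intercept
# equal to the mean of Y.
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [78.00, 70.88, 56.56, 37.92, 54.24, 48.72, 39.44, 32.80]

def ols(x, y):
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
        / sum((xi - xbar) ** 2 for xi in x)
    return ybar - b1 * xbar, b1

xbar = sum(x) / len(x)
xc = [xi - xbar for xi in x]          # centered predictor

b0_raw, b1_raw = ols(x, y)
b0_cen, b1_cen = ols(xc, y)

print(round(b1_raw, 2), round(b1_cen, 2))   # slope is invariant: -5.72 -5.72
print(round(b0_cen, 2))                     # intercept = mean of Y: 52.32
```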
2.3.2 The Slope Parameter
Slope parameters indicate how steep the increase or decrease of the regression line is. However, there is a simpler interpretation. The slope parameter in simple regression indicates the number of
steps one moves
on Y when taking one step on X. Consider again the example in Table 2.1. For this example we calculated the following regression equation:

Recall = 78.04 − 5.72 · Trial + Residual.    (2.16)

Inserting a value from the interval between Trial = 1 and Trial = 8 yields estimates of Recall that are located on the regression line in Figure 2.5. For example, for Trial = 3 we calculate an average estimated Recall = 60.91. Taking one step on X, from Trial = 3 to Trial = 4, yields the estimate for Recall of 55.18. The difference between these two estimates, 55.18 − 60.91 = −5.73, is, up to rounding, exactly the estimated regression slope.
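The one-step interpretation can be sketched directly with the rounded estimates from (2.16); because of the rounding, the individual predictions differ slightly from the 60.91 and 55.18 reported above, but the one-step difference is the slope exactly:

```python
# One step on X changes the estimate on Y by exactly the slope.
# Coefficients are the rounded estimates reported in the text.
b0, b1 = 78.04, -5.72
recall = lambda trial: b0 + b1 * trial

step = recall(4) - recall(3)
print(round(step, 2))   # -5.72, the slope
```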
Interpolation and Extrapolation
Interpolation is defined as estimating Y values for X values that are within the interval between the maximum and the minimum X values used for estimating regression parameters. Specifically,
interpolation refers to X values not realized when estimating regression parameters. In many instances, interpolation is a most useful and meaningful enterprise. Assuming linear relationships, one
can use interpolation to estimate the effects of some intermediate amount of a drug or some intermediate amount of a fertilizer. In these examples, the predictor is defined at the ratio scale level,
and assuming fractions of predictor values is reasonable. The same applies to predictors defined at the interval level. In other instances, however, when predictors are defined at the ordinal or
nominal scale levels, interpolation does not make much sense. Consider the example in Table 2.1 as illustrated in Figure 2.5. This example involves a predictor at the ratio scale level. For instance,
Trial 3.3 indicates that three tenths of the time between the 3rd and the 4th trial has passed. Therefore, interpolation using the trial scale is meaningful. Thus, if the trial variable is closely related
to some underlying variable such as time, interpolation can make sense. Extrapolation is defined as the estimation of Y values for X values beyond the range of X values used when estimating
regression parameters. As far as scale levels are concerned, the same caveats apply to
extrapolation as to interpolation. However, there is one more caveat. Extrapolation, when not sensibly performed, can yield results that are implausible, or worse. Predictor values that are
conceivable can yield criterion values that simply cannot exist. The predicted values may be physically impossible or conceptually meaningless. Consider the regression equation given in (2.16).
Inserting predictor values using the natural numbers between 1 and 8 yields reasonable estimates for Recall. Inserting natural numbers beyond these boundaries can yield implausible results. For
example, consider a hypothetical Trial 10. From our regression equation we predict a recall rate of 20.84. This is conceptually meaningful and possible. Consider, however, hypothetical Trial 15.
Inserting it into our regression equation results in an estimated recall rate of -7.78. This value is conceptually meaningless. One cannot forget more than what one had in memory. Thus, while
extrapolating may use conceptually meaningful predictor values, the resulting estimates must be inspected and evaluated as to their meaningfulness. The same applies to predictor values. In the
present example, there can be no trial number -5. We can insert this number into the equation and calculate the meaningless estimated recall rate of 106.70 out of 80 items on the list. However,
regardless of how conceptually meaningful the estimate will be, when predictor values are impossible, results cannot be interpreted.
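The arithmetic behind these cautionary examples can be sketched in a few lines. The intercept and slope below are back-calculated from the predicted values quoted above (Trial 10 yields 20.84, Trial 15 yields -7.78) and should be treated as illustrative, not as the chapter's official estimates:

```python
# Hedged sketch: extrapolation beyond the observed trials can produce
# impossible recall values. Intercept and slope are inferred from the
# predictions quoted in the text; treat them as illustrative.
b0, b1 = 78.08, -5.724

def predicted_recall(trial):
    """Predicted recall for a given trial number."""
    return b0 + b1 * trial

print(predicted_recall(10))   # within the plausible range
print(predicted_recall(15))   # negative recall: impossible
print(predicted_recall(-5))   # impossible predictor value, recall above 100
```

Negative recall rates and trial numbers below 1 are both conceptually meaningless, which is exactly the point of the example.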
2.5 Testing Regression Hypotheses
The two most important methods for evaluating how well a regression line describes the predictive relationship between predictor X and criterion Y focus on the proportion of variance accounted for, R², and on statistical significance. R² was explained in Section 2.2.3. Recall that R² describes how valuable the regression model is for predicting Y from knowledge of X. If R² is near 1, the observations in the X-Y scatterplot lie very close to the regression line. If R² is near 0, the scatter around the regression line is very wide. The present section introduces readers to statistical significance testing. Three types of hypotheses are covered. The first concerns a single
CHAPTER 2. SIMPLE LINEAR REGRESSION
regression slope coefficient,⁷ b1. The hypothesis to be tested is whether β1 is different from zero (Section 2.5.1) or equal to some constant (Section 2.5.3). The second hypothesis concerns the comparison of two regression slope coefficients from independent samples (Section 2.5.4). The third hypothesis asks whether the intercept coefficient is different from a constant (Section 2.5.6). Finally, the construction of confidence intervals for regression coefficients is covered in Section 2.5.7. Chapter 3, concerned with multiple regression, covers additional tests. However, before
presenting hypotheses and significance tests in detail, we list conditions that must be fulfilled for significance tests to be valid and meaningful. These conditions include:

1. The residuals must be normally distributed.
2. The criterion variable must be measured at the interval level or at the ratio scale level.
3. Researchers must have specified a population.
4. The sample must be representative of this population.
5. Members of the sample must be independent of each other.
6. The sample must be big enough to give the alternative hypothesis a chance to prevail (for power analyses, see Cohen, 1988).
7. The sample must be small enough to give the null hypothesis a chance to prevail.
8. The matrix X is assumed to be of full column rank; that is, the columns of the matrix X must be linearly independent of each other. This is typically the case if the columns of matrix X represent numerical variables. For categorical variables, full column rank can always be achieved by suitable reparameterization. Therefore, X can typically be assumed to be of full column rank.

⁷The term regression coefficient is used as shorthand for estimated regression parameter.
2.5.1 Null Hypothesis: Slope Parameter
The most frequently asked question concerning statistical significance of the regression slope coefficient is whether the regression coefficient is significant. In more technical terms, this question concerns the hypothesis that the slope coefficient describes a sample from a population with a slope parameter different from zero. In null hypothesis form one formulates H0: β1 = 0. There are three possible alternative hypotheses to this null hypothesis. First, there is the two-sided alternative hypothesis H1: β1 ≠ 0. Using this hypothesis, researchers do not specify whether they expect a regression slope to go upward (positive parameter) or downward (negative parameter). The null hypothesis can be rejected in either case, if the coefficient is big enough. In contrast, when researchers expect a slope parameter to be positive, they test the following one-sided alternative hypothesis: H1: β1 > 0. Alternatively, researchers specify the following one-sided alternative hypothesis when they expect the regression slope to be negative: H1: β1 < 0. In the memory experiment analyzed in Section 2.2.2, the obvious hypothesis to be tested is H1: β1 < 0, because we expect recall rates to decrease over time. Statistical tests of the two-sided alternative hypothesis are sensitive to large slope coefficients, regardless of sign. In contrast, tests of the one-sided alternative hypotheses are sensitive only to slope coefficients with the right sign. Whenever the null hypothesis can be rejected, researchers conclude that the sample was drawn from a population with slope parameter β1 > 0, β1 < 0, or β1 ≠ 0, depending on what null hypothesis was
tested. This conclusion is based on a probability statement. This statement involves two components, the significance level, α, and the population. The significance level is typically set to α = 0.05 or α = 0.01. If the null hypothesis can be rejected, the probability is less than α that the sample was drawn from a population with parameter β1 = 0. Obviously, this statement makes sense only if the population was well defined and the sample is representative of this population (see Condition 3, above). The null hypothesis prevails only if there is no (statistically significant) way to predict
the criterion from the predictor. In the following we present an F test that allows one to test the above hypotheses. In Chapter 3, when we cover multiple regression, we present a more general
version of this test. The F test for simple regression can be given as follows:

F = (r²/1) / ((1 - r²)/(n - 2)) = r²(n - 2) / (1 - r²), for r² < 1.    (2.17)
The F test given in (2.17) relates two fractions to each other. The fraction in the numerator involves the portion of variance that is accounted for by the linear regression of Y on X, r². This portion, given by the coefficient of determination, is weighted by the numerator's degrees of freedom, df1 = 2 - 1 = 1, that is, the number of predictors (including the constant) minus one. The fraction in the denominator involves the portion of variance that remains unaccounted for by the linear regression of Y on X, 1 - r². This portion is weighted by the denominator's degrees of freedom, df2 = n - 2, that is, the sample size minus the number of predictors (including the constant). In brief, this F ratio relates the weighted variance accounted for to the weighted unexplained
variance. Formula (2.17) also contains a constraint that is important when employing tests of statistical significance. This constraint requires that less than 100% of the variance be accounted for.
If 100% of the variance is accounted for, and the authors of this book have yet to see a case where empirical data can be explained to 100%, there is no need for statistical testing. If the residual
variance is zero, there is nothing to test against. In this case, one may consider the relationship between the predictor and the criterion deterministic and may skip the significance test. When
deciding whether or not to reject the null hypothesis, one proceeds as follows: If F ≥ F(α; df1, df2), then reject H0. This F test is two-sided.
Many statistical software packages, for example, SYSTAT and SAS, use a t test instead of this F test. These two tests are equivalent, with no differences in power. The t test for the hypothesis that β1 = 0 has the form

t = b1 / s(b1),    (2.18)
where s(b1) denotes the estimated standard deviation of b1. The estimated variance of b1 is

s²(b1) = MSE / Σ(xi - x̄)² = [(1/(n - 2)) Σ ei²] / Σ(xi - x̄)²,    (2.19)

with both sums running over i = 1, ..., n, and where the mean squared error, MSE, is the sum of the squared residuals divided by its degrees of freedom, n - 2. The degrees of freedom of this t test appear in the denominator of the MSE.
The following numerical example uses data from the Vienna Longitudinal Study on cognitive and academic development of children (Spiel, 1998). The example uses the variables Performance in Mathematics in Fourth Grade, M4, and Fluid Intelligence, FI. Using data from a sample of n = 93 children, representative of elementary school children in Vienna, Austria, we regress M4 on FI. Figure 2.6 displays the raw data and the linear regression line. The regression function for these data is M4 = 2.815 + 0.110 · FI + Residual. The correlation between M4 and FI is r = 0.568, and r² = 0.323. We now ask whether the regression of M4 on FI has a slope parameter that is statistically significantly different from zero. We calculate F by inserting into (2.17):

F = (0.323 · 91) / (1 - 0.323) = 43.417.
Figure 2.6: Scatterplot showing the dependence of the performance in Mathematics on Intelligence.
The critical test statistic for a two-sided test is F(0.05; 1, 91) = 3.94. The empirical F = 43.417 is greater than this value. We therefore conclude that the null hypothesis of a zero slope parameter can be rejected, and favor the alternative hypothesis that the slope parameter is different from zero. The t value for the same hypothesis is t = 6.589. Squaring yields t² = 43.417, that is, the same value as was calculated for F. This illustrates the equivalence of t and F.
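The F value and the equivalent t value can be reproduced in a few lines; n and r² are taken from the Vienna example above:

```python
import math

# F test of H0: slope = 0 from r^2 and n alone, Eq. (2.17) with
# df1 = 1 and df2 = n - 2; values are from the M4-on-FI example.
r2, n = 0.323, 93
F = (r2 * (n - 2)) / (1 - r2)
t = math.sqrt(F)          # magnitude of the equivalent t statistic
print(round(F, 3), round(t, 3))
```

Because df1 = 1, squaring t returns F exactly, which is the equivalence the text describes.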
2.5.3 Null Hypothesis: Slope Parameter Equals a Constant

This section presents a t test for the null hypothesis that an empirical slope coefficient is equal to some a priori determined value, k. The t statistic is of the same form as that of the statistic for the hypothesis that β1 = 0 and can be described as

t = (b1 - k) / s(b1),    (2.20)
Figure 2.7: Comparing two samples in the forgetting experiment.
where s(b1) is defined as in Equation (2.19). The following data example compares results from the Meumann (1912) forgetting experiment (Section 2.2.2) with a hypothetical sample from 1995. Participants in the replication experiment learned the same syllables until they were able to recall 100%. Forgetting was tested using the same number of trials in the same time intervals. Figure 2.7 displays the original data together with the data from the replication experiment. The data for the new sample also appear in Table 2.2. The data for the original sample appear in Table 2.1. The figure is arranged so that forgetting is regressed onto trials. The figure suggests that forgetting is less brisk in the replication sample than in the original sample. We test the hypothesis that the slope parameter for the regression of Recall on Trial in the 1995 sample is β1 = -10. Inserting the values from the table into (2.20) yields

t = (-6.543 + 10) / s(b1) = 1.21.
The tail probability for this value is, for df = 6, p = 0.2718 (two-sided test). This value is greater than α = 0.05. Thus, we cannot reject the null hypothesis that the slope coefficient in the population is equal to -10.
Table 2.2: Estimation of Slope Parameter for Data from Replication of Forgetting Experiment: Comparison of Two Samplesᵃ

xi     yi      zx     zx²      zy       zy²       zx·zy
1      97.8   -3.5    12.25    29.7     882.09   -103.95
2      90.0   -2.5     6.25    21.9     479.61    -54.75
3      67.0   -1.5     2.25    -1.1       1.21      1.65
4      65.0   -0.5     0.25    -3.1       9.61      1.55
5      63.0    0.5     0.25    -5.1      26.01     -2.55
6      59.0    1.5     2.25    -9.1      82.81    -13.65
7      55.0    2.5     6.25   -13.1     171.61    -32.75
8      48.0    3.5    12.25   -20.1     404.01    -70.35
Sums                  42.00            2056.96   -274.80

ᵃThe variables zx and zy denote the centered x and y values, respectively. For instance, zx = (xi - x̄).
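The slope and intercept implied by Table 2.2 can be checked with a short script; the intermediate sums mirror the table's columns:

```python
# Estimate b1 and b0 for the 1995 replication sample via centered
# sums, exactly as laid out in Table 2.2.
xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [97.8, 90.0, 67.0, 65.0, 63.0, 59.0, 55.0, 48.0]
xbar = sum(xs) / len(xs)
ybar = sum(ys) / len(ys)
sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))  # sum of zx*zy
sxx = sum((x - xbar) ** 2 for x in xs)                      # sum of zx^2
b1 = sxy / sxx
b0 = ybar - b1 * xbar
print(round(b1, 3), round(b0, 3))
```

The results agree with the regression equation for the 1995 sample reported in this section.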
2.5.4 Null Hypothesis: Two Slope Coefficients Are Equal

Many researchers consider it desirable to replicate studies using independent samples. Results are deemed more trustworthy when replication studies exist that confirm results of earlier
studies. Comparing regression coefficients can be an important part of comparing studies. Nobody expects that two estimated regression coefficients, b1 and b2, from independent samples are numerically exactly the same. Rather, one has two expectations:

1. The significance status (significant versus nonsignificant) of the two coefficients, b1 and b2, is the same.
2. The numerical difference between the two estimates is "small"; that is, the two coefficients are not significantly different from each other.

Whereas the first expectation can be tested using individual tests of b1 and b2, the second involves a comparison of slope coefficients. This
2.5. T E S T I N G R E G R E S S I O N H Y P O T H E S E S
comparison can be performed using the following t test. Let b1 be the slope coefficient estimated for the first sample, and b2 the slope coefficient for the second, independent sample. Then, the t test for the null hypothesis that β1 = β2, or equivalently, that β1 - β2 = 0, has the form

t = (b1 - b2) / s(b1 - b2),    (2.21)

where s(b1 - b2) is the standard deviation of the difference between the two slope coefficients. This test has n1 + n2 - 4 degrees of freedom. The variance, s²(b1 - b2), of the difference between two slope coefficients is

s²(b1 - b2) = [Σ(y1i - ȳ1)² + Σ(y2i - ȳ2)²] / (n1 + n2 - 4) · [1/Σ(x1i - x̄1)² + 1/Σ(x2i - x̄2)²],    (2.22)

where the sums run over the cases of the first sample (i = 1, ..., n1) and the second sample (i = 1, ..., n2), respectively.
When deciding whether or not to reject the null hypothesis that the difference between two regression slopes is zero, one proceeds as follows:

1. Two-sided alternative hypothesis: If |t| > t(α/2; df), then reject H0.
2. One-sided alternative hypothesis: If |t| > t(α; df) and b1 - b2 has the right sign, then reject H0.

For a data example we use the data in Figure 2.7 again. The thinner regression line for the new sample has a less steep slope than the thicker regression line for the original sample. In the following we answer the question of whether this difference is statistically significant. The null hypothesis for the statistical test of this question is β1 = β2. If this hypothesis prevails, we can conclude that the two samples stem from one population with one regression slope parameter. If the alternative hypothesis prevails, we conclude that the samples stem from different populations with different slope parameters. Table 2.2 contains the data and calculations needed for the comparison of the two samples. (Data and calculations for the original sample are in Table 2.1.) Using these data we calculate the following regression equation for the 1995 sample:

Recall2 = 97.543 - 6.543 · Trial + Residual.
We now ask whether the b1 = -7.16 for the original sample is significantly different from the b2 = -6.543 for the present sample. We perform a two-sided test because there is no a priori knowledge that guides more specific expectations. To calculate the variance of the difference between b1 and b2 we insert the values from Tables 2.1 and 2.2 into Formula (2.22) and obtain

s²(b1 - b2) = (1792.84 + 2056.96) / (8 + 8 - 4) · (1/42 + 1/42) = 15.27.

Inserting into (2.21) yields the t statistic

t = (-7.16 + 6.54) / √15.27 = -0.158.

This test statistic has 8 + 8 - 4 = 12 degrees of freedom. The two-sided tail probability for this t value is p = 0.877. This value is greater than α = 0.05. Thus, we are in no position to reject the null hypothesis.
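The comparison can be verified numerically; the sums of squares below are the values read off Tables 2.1 and 2.2:

```python
import math

# t test for equal slopes in two independent samples,
# Eqs. (2.21)-(2.22); inputs are the table values quoted in the text.
ss_y1, ss_y2 = 1792.84, 2056.96   # centered sums of squares of Y
sxx1 = sxx2 = 42.0                # centered sums of squares of X
n1 = n2 = 8
var_diff = (ss_y1 + ss_y2) / (n1 + n2 - 4) * (1 / sxx1 + 1 / sxx2)
t = (-7.16 - (-6.543)) / math.sqrt(var_diff)
print(round(var_diff, 2), round(t, 3))
```

The tiny |t| relative to the critical value at 12 degrees of freedom is why the null hypothesis of equal slopes survives.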
2.5.5 More Hypotheses for Slope Coefficients
Sections 2.5.1, 2.5.3, and 2.5.4 presented a selection of tests for the most frequently asked questions concerning slope coefficients in simple regression. As one can imagine, there are many more
questions one can ask. We list three sample questions. For details concerning test procedures please consult the cited sources. First, one can ask whether slope coefficients are constant in the same
sample. Consider a repeated measures design where predictors and criteria are repeatedly observed. This design allows researchers to estimate regression parameters separately for each observation
point. These estimates can be tested using hierarchical linear models (Bryk & Raudenbush, 1992). Second, one can ask whether one regression line is consistently located above another, within a given
interval of predictor values. There is a t test by Tsutakawa and Hewett (1978) that allows one to answer this question. Third, one can ask whether the decision to fit a linear regression line is
supported by the data. Analysis of variance methods allow one to analyze this question (Neter, Kutner, Nachtsheim, & Wasserman, 1996).
2.5.6 Hypotheses Concerning the Intercept
The preceding sections introduced readers to methods for testing hypotheses concerning the slope coefficient, β1. The following three hypotheses were covered in detail: (1) H0: β1 = 0, (2) H0: β1 = k, and (3) H0: β1 = β2. Hypothesis (1) can be shown to be a special case of Hypothesis (2) when we set k = 0. This section presents a t test for testing the hypothesis H0: β0 = k. As before, we first present a general form for the t test for this hypothesis. This form can be given as

t = (b0 - β0) / s(b0),    (2.23)

where, as before, b0 is the estimate of the population parameter, β0, and k = β0. Degrees of freedom are df = n - 2. The variance, s²(b0), is given by

s²(b0) = MSE · (1/n + x̄² / Σ(xi - x̄)²),    (2.24)

where MSE is the sum of the squared residuals, divided by n - 2, that is, its degrees of freedom. The standard deviation in the denominator of (2.23) is the square root of (2.24). For a numerical
illustration consider again the data in Table 2.2. We test the hypothesis that the estimate, b0 = 97.543, describes a sample that stems from a population with β0 = 0. Inserting into (2.24) yields the variance

s²(b0) = 43.164 · (1/8 + (4.5)²/42) = 43.164 · 0.607 = 26.207.

Inserting into (2.23) yields the t value

t = (97.543 - 0) / √26.207 = 97.543 / 5.119 = 19.054.
The critical two-sided t value for α = 0.05 and df = 6 is t = 2.447. The calculated t is greater than the critical value. Thus, we can reject the null hypothesis that β0 = 0.
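These two steps can be scripted directly from Equations (2.23) and (2.24):

```python
import math

# t test of a zero intercept using the replication-sample quantities:
# MSE, n, x-bar, and the centered sum of squares of X.
mse, n, xbar, sxx = 43.164, 8, 4.5, 42.0
var_b0 = mse * (1 / n + xbar ** 2 / sxx)   # Eq. (2.24)
t = (97.543 - 0) / math.sqrt(var_b0)       # Eq. (2.23) with k = 0
print(round(var_b0, 3), round(t, 3))
```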
After this example, a word of caution seems in order. The test of whether β0 = 0 does not always provide interpretable information. The reason for this caution is that in many data sets researchers simply do not assume that the intercept is zero. As was indicated before, the intercept is equal to the mean of Y only when X was centered before the analysis. If this is the case, the above test is equivalent to the t test of H0: μ = 0. Again, this test is meaningful only if the observed variable Y can reasonably be assumed to have a mean of zero. This applies accordingly when the predictor variable, X, was not centered. Easier to interpret and often more meaningful is the comparison of the calculated b0 with some population parameter, β0. This comparison presupposes that the regression model used to determine both values is the same. For instance, both must be calculated using either centered or noncentered X values. Otherwise, the values are not directly comparable.

2.5.7 Confidence Intervals for Regression Coefficients
The t distribution can be used for both significance testing and estimating confidence intervals. The form of the interval is always

lower limit ≤ βj ≤ upper limit,

where βj is the parameter under study. The lower limit of the confidence interval is

bj - s(bj) · t(α/2; n - 2),

where t(α/2; n - 2) is the usual significance threshold. Accordingly, the upper limit is

bj + s(bj) · t(α/2; n - 2).

For a numerical example consider the slope parameter estimate b1 = -6.543. The 95% confidence interval for this estimate is

-6.543 - 1.014 · 2.447 ≤ β1 ≤ -6.543 + 1.014 · 2.447,
-9.024 ≤ β1 ≤ -4.062.

This confidence interval includes only negative parameter estimates.
This is an illustration of the general rule that if both limits of a confidence interval have the same sign, the parameter estimate is statistically significantly different from zero.
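The interval limits can be reproduced from the estimate, its standard error, and the critical t value:

```python
# 95% confidence interval for the 1995-sample slope: b1 = -6.543,
# s(b1) = 1.014, critical t(0.025; 6) = 2.447.
b1, se_b1, t_crit = -6.543, 1.014, 2.447
lower = b1 - se_b1 * t_crit
upper = b1 + se_b1 * t_crit
print(round(lower, 3), round(upper, 3))
# Both limits are negative, so zero lies outside the interval.
```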
Chapter 3

Multiple Linear Regression
Most typically and frequently, researchers predict outcome variables from more than one predictor variable. For example, researchers may wish to predict performance in school, P, from such variables as intelligence, I, socioeconomic status, SES, gender, G, motivation, M, work habits, W, number of siblings, NS, sibling constellation, SC, pubertal status, PS, and number of hours spent watching TV cartoons, C. Using all these variables simultaneously, one arrives at the following multiple regression equation:

P̂ = b0 + b1·I + b2·SES + b3·G + b4·M + b5·W + b6·NS + b7·SC + b8·PS + b9·C.    (3.1)

This chapter introduces readers to concepts and techniques for multiple regression analysis. Specifically, the following topics are covered: ordinary least squares estimation and testing of parameters for multiple regression (Section 3.1) and multiple correlation and determination (Section 3.3).
CHAPTER 3. MULTIPLE LINEAR REGRESSION
3.1 Ordinary Least Squares Estimation
The model given in (3.1) is of the form

yi = β0 + Σ βj·xij + εi,    (3.2)

where the sum runs over j = 1, ..., p, j indexes parameters, p + 1 is the number of parameters to be estimated, p is the number of predictors in the equation, and i indexes cases. The multiple regression model has the same characteristics as the simple regression model. However, the number of predictors in the equation is increased from one to p. Accordingly, the deterministic part of the model must be extended to

β0 + Σ βj·xij.
To derive a general OLS solution that allows one to simultaneously estimate all p + 1 parameters we use the tools of matrix algebra. In principle the derivation could be achieved without matrix algebra, as was outlined in Section 2.1. We insert the equation for the prediction,

ŷi = b0 + Σ bj·xij,

into Function (2.7), that is,

Q = Σ (yi - [b0 + Σ bj·xij])².    (3.3)
This is now a function of the p + 1 b-coefficients. We calculate all partial derivatives of Q with respect to each b-coefficient and set them equal to zero. This results in a system of p + 1 equations that can be uniquely solved for the p + 1 unknown b-coefficients. With three or more b-coefficients, formulas quickly become inconvenient. However, the general strategy remains the same; only the notation changes. Matrix algebra allows a very compact notation. Therefore, we use matrix notation. For the
3.1. ORDINARY LEAST SQUARES ESTIMATION
following derivations we need results of vector differentiation. Appendix C reviews these results. Before we can apply the rules of vector differentiation we first have to recast Equation (3.3) using matrix algebra. The model given in (3.2) can be written in matrix notation as

y = Xβ + e,

where y is the vector of observed values, X is the design matrix, and e is the residual vector. The design matrix contains the vectors of all predictors in the model. In matrix notation, the OLS criterion to be minimized, Q, is expressed as

Q = (y - Xb)'(y - Xb).    (3.4)
The expressions in the parentheses describe vectors. Transposing a vector and then multiplying it with another vector (in this case, with itself) yields the inner vector product. The result of an inner vector product is a scalar, that is, a single number. What we need to find is the vector b for which this number is the smallest possible. We now show in detail how one arrives at the well-known least squares solution. The minimization of the sum of squared residuals proceeds in two steps.

1. We determine the first vectorial derivative with respect to b; or, equivalently, we calculate all partial derivatives of Q with respect to the p + 1 elements in b.
2. The necessary condition for a minimum of Q at point b is that the vectorial derivative, that is, all partial derivatives, be zero at that point. The vector b at which this condition is met can be found by setting this vectorial derivative equal to zero and then solving for b.

We begin with (3.4).
Multiplying out yields

Q = y'y - b'X'y - y'Xb + b'X'Xb.    (3.5)
To complete the first step of the minimization process we determine
the first vectorial derivative of (3.5) using the rules of Appendix C. Before we do this, we inspect (3.5). In this expression vector y and matrix X are known. Only the parameter vector, b, is unknown. Therefore, we need to determine the derivative with regard to b. Because the vectorial derivative of a sum is equal to the sum of the vectorial derivatives, we apply the rules from the Appendix separately to the four terms of (3.5), that is,

∂Q/∂b = ∂(y'y)/∂b - ∂(b'X'y)/∂b - ∂(y'Xb)/∂b + ∂(b'X'Xb)/∂b.    (3.6)

For the first term in (3.6) we apply Rule 3 (Appendix C), for the second and third terms we apply Rule 2 and Rule 1, respectively, and for the fourth term we use Rule 5. We obtain

∂Q/∂b = 0 - X'y - X'y + [X'X + (X'X)']b.    (3.7)

Because (X'X)' = X'X we can simplify (3.7) and obtain

∂Q/∂b = -2X'y + 2X'Xb.    (3.8)
However, (3.8) does not yet provide us with the estimate for the parameter vector b that yields the smallest possible sum of the squared residuals. To obtain this vector, we need to perform the second step of the minimization process. This step involves setting (3.8) equal to zero and transforming the result so that b appears on one side of the equation and all the other terms on the other side. Setting (3.8) equal to zero yields

-2X'y + 2X'Xb = 0.    (3.9)

Dividing all terms of (3.9) by 2 and moving the first term to the right-hand side of the equation yields what is known as the normal equations,

X'Xb = X'y.    (3.10)

To move the term X'X to the right-hand side of the equation, we premultiply both sides of the equation with (X'X)⁻¹, that is, with the
inverse of X'X. This yields

(X'X)⁻¹X'Xb = (X'X)⁻¹X'y,

and because (X'X)⁻¹(X'X) = I, we obtain

b = (X'X)⁻¹X'y.    (3.11)
Equation (3.11) is the ordinary least squares solution for parameter vector b. Recall from Appendix A on matrix algebra that not all matrices have an inverse. That the inverse of X'X exists follows from the assumption that X is of full column rank, made in Section 2.5. Vector b contains all p + 1 parameter estimates for the multiple regression problem, that is, the intercept and the coefficients for the variables. To illustrate the meaning of the coefficients in multiple regression, consider the case where p independent variables, Xj, along with their coefficients are used to explain the criterion, Y. The first of these coefficients, that is, b0, is, as in simple regression, the intercept. This is the value that Y assumes when all independent variables are zero. As was explained in the chapter on simple regression, this value equals the mean of Y, ȳ, if all variables Xj are centered (for j = 1, ..., p). The remaining p coefficients constitute what is termed a regression surface or a response surface. This term is appropriate when there are two X variables, because two variables indeed span a surface or plane. When there are three or more X variables, the term "regression
hyperplane" may be more suitable. To illustrate, consider the following multiple regression equation, where we assume the regression coefficients to be known:

Ŷ = 14 + 0.83X1 - 0.34X2.

When X1 = X2 = 0, the criterion is Ŷ = 14. The independent variables, X1 and X2, span the regression surface given in Figure 3.1. The grid in Figure 3.1 depicts the regression surface for -1.0 ≤ X1, X2 ≤ +1.0.

Figure 3.1: Regression hyperplane Ŷ = 14 + 0.83X1 - 0.34X2.

The regression coefficients can be interpreted as follows:

• Keeping X2 constant, each one-unit increase in X1 results in a 0.83-unit increase in Ŷ; this change is expressed in Figure 3.1 by the grid lines that run parallel to the X1 axis.
• Keeping X1 constant, each one-unit increase in X2 results in a 0.34-unit decrease in Ŷ; this change is expressed in Figure 3.1 by the grid lines that run parallel to the X2 axis.

In more general terms, parameters in multiple regression indicate the amount of change in Y that results from a given variable when all other variables are kept constant. Therefore, coefficients in multiple regression are often called partial regression coefficients. The change is expressed in units of Y. When
a multiple regression equation involves more than two predictors, the regression hyperplane cannot be depicted as in Figure 3.1. Consider an equation with four predictors. The hyperplane spanned by
four predictors has four dimensions. The entire data space under study has five dimensions and cannot be graphed. One important characteristic of the regression surface depicted in Figure 3.1 is that
it is completely linear. Linear regression lines and surfaces
are valid if there is no interaction between predictors. In this case, regression predictors are said to have additive effects. If, however, interactions exist, the regression surface is no longer linear. It may be curved or show jumps. As a matter of course, this can also be the case if the relation between predictor and criterion is basically nonlinear. To test whether the parameters estimated in multiple regression are significantly different from zero, researchers typically employ either the t test or the F test. The F test will be explained later in this chapter.
The t test can be specified in the following two steps:

1. Estimation of the covariance matrix of the b vector.
2. Division of bk by the square root of the kth diagonal element of the estimated covariance matrix.

These two steps result in a t test that has the well-known form

t = bk / s(bk),    (3.12)

where bk is the kth estimate of the coefficient of the multiple regression model, and s(bk) is the estimate of the standard deviation of this coefficient. To derive the covariance matrix of the b
vector we use the following result:

C(Ay) = A C(y) A'.

Recall from Section 2.2.2 that C(y) = σ²I, which expresses the assumptions that the observations are independent and have constant variance. With A = (X'X)⁻¹X', the covariance matrix of the b vector, C(b), can then be calculated as follows:

C(b) = (X'X)⁻¹X' C(y) X(X'X)⁻¹
     = (X'X)⁻¹X' σ²I X(X'X)⁻¹
     = σ² (X'X)⁻¹X'X(X'X)⁻¹
     = σ² (X'X)⁻¹.
These equations give the true covariance matrix. When analyzing real-life data, however, we rarely know the population variance, σ². Thus, we substitute the mean squared error, MSE, for σ², and obtain

Ĉ(b) = MSE (X'X)⁻¹,    (3.13)

where the hat indicates that this is an estimate of the true covariance matrix. The mean squared error,

MSE = (1/(n - p - 1)) Σ ei²,

with the sum running over i = 1, ..., n, is an unbiased estimate of the error variance σ². Taking the square roots of the diagonal elements of the covariance matrix given in (3.13) yields the estimates for the standard deviations used in the denominator in (3.12). These are the values that typically appear in regression analysis output tables as the standard error of bk.
3.2 Data Example
The following data example uses a classical data set, Fisher's iris data. The set contains information on n = 150 iris flowers. The following five variables were observed: Species (three categories), Sepal Length, Sepal Width, Petal Length, and Petal Width. To illustrate multiple regression with continuous variables, we predict Petal Length from Petal Width and Sepal Width. We use all cases; that is, we do not estimate parameters by Species.¹ The distribution of cases appears in Figure 3.2, coded by Species. We estimate parameters for the following regression equation:

Petal Length = b0 + b1 · Petal Width + b2 · Sepal Width + Residual.

¹It should be noted that disregarding Species as a possibly powerful predictor could, when analyzing real data, result in omission bias. This is the bias caused by the absence of powerful predictors in a regression model. In the present context, however, we have to omit predictor Species, because multicategorical predictors will be covered later, in Chapter 4.
[Figure 3.2: Three species of iris flowers.]
OLS multiple regression yields the following parameter estimates:

Petal Length = 2.26 + 2.16 · Petal Width − 0.35 · Sepal Width + Residual.

Figure 3.3 depicts the regression plane created by this function. To determine whether the regression coefficients differ from zero we use the t test as explained above. Having already obtained the parameter estimates as given in the last equation, all we need to be able to perform the t test is the estimated standard error of each estimated regression coefficient, that is, the square roots of the diagonal elements of MSE(X'X)⁻¹. The design matrix X has 150 rows and 3 columns; therefore X'X is a square matrix with 3 rows and columns. In our example the inverse of X'X yields

(X'X)⁻¹ = (  0.470  −0.042  −0.135 )
          ( −0.042   0.013   0.009 )
          ( −0.135   0.009   0.041 ).

Note that this matrix is symmetric. For these data, MSE = 0.209.
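As a quick check, multiplying the printed (X'X)⁻¹ by MSE = 0.209 and taking square roots of the diagonal reproduces the standard errors reported below, up to rounding of the three-decimal inputs:

```python
import numpy as np

# (X'X)^-1 as printed in the text, accurate to three decimals.
XtX_inv = np.array([[ 0.470, -0.042, -0.135],
                    [-0.042,  0.013,  0.009],
                    [-0.135,  0.009,  0.041]])
MSE = 0.209

C_b = MSE * XtX_inv         # estimated covariance matrix of b, Eq. (3.13)
se = np.sqrt(np.diag(C_b))  # standard errors of intercept, b1, b2
print(np.round(se, 3))      # close to the 0.313, 0.053, 0.092 reported in the text
```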
[Figure 3.3: Regression surface for prediction of iris petal length.]
Now, multiplying (X'X)⁻¹ by MSE yields the estimated covariance matrix of the estimated parameter vector:

Ĉ(b) = 0.209 (X'X)⁻¹ = (  0.0983  −0.0088  −0.0282 )
                       ( −0.0088   0.0028   0.0018 )
                       ( −0.0282   0.0018   0.0085 ).
The diagonal elements of this matrix are the estimated variances of the estimated regression coefficients. The off-diagonal elements contain the covariances. The first diagonal element is the
estimated variance of the intercept estimate, the second the estimated variance of the Petal Width coefficient, and the third the estimated variance of the Sepal Width coefficient. Because we do not
need the variance estimates, but rather the corresponding standard errors, we take the square roots of the three diagonal elements. The coefficients with standard errors in parentheses are
Intercept Petal Width Sepal Width
2.26 2.16 -0.35
(0.313) (0.053) (0.092).
We finally perform the t test by simply dividing the estimate of the regression coefficient by its estimated standard error. Since the intercept parameter is not of interest, we do not consider testing it. Both slope parameters are statistically significantly different from zero. More specifically, we obtain t₁₄₇ = 40.804 and t₁₄₇ = −3.843 for b₁ and b₂, respectively. The two-tailed probabilities for both parameters are p < 0.01. Both t tests have 147 degrees of freedom. For these t tests the degrees of freedom are given by the formula df = n − p − 1, where n is the number of observations and p is the number of predictor variables. (For significance testing in multiple regression using the F test see Section 3.4.) Computer programs for regression analysis usually report
in their output of regression analyses the estimated regression coefficients, the corresponding standard errors, the t values, and the p values. Often, the whole covariance matrix of the estimated
regression coefficients is not needed. Therefore, it is typically not printed by default, but can often be requested as an output option. Many researchers ask whether, beyond statistical
significance, a good portion of variance of the criterion can be accounted for by the predictors. This portion of variance can be measured via the square of the multiple correlation coefficient. The
next chapter covers the multiple R.
3.3 Multiple Correlation and Determination
To explain the concept of multiple correlation we use centered variables. Centering is a linear transformation that involves subtracting the arithmetic mean from each value. The basic equation that underlies the concept of multiple correlation and determination is that of decomposition of variability:

Total Variability = Residual Variability + Explained Variability.   (3.14)

The total variability, also termed variability of the criterion, can be expressed as follows:

Total Variability = y'y.   (3.15)

For the sake of simplicity, and because we do not need this detail for the following explanation, we express the residual variability as the Sum of Squared Residuals, SSR:

Residual Variability = SSR.   (3.16)

In a fashion analogous to the total variability we express the explained variability as

Explained Variability = ŷ'ŷ.   (3.17)

Using (3.15), (3.16), and (3.17) we can reexpress the decomposition Formula (3.14) as

y'y = SSR + ŷ'ŷ.   (3.18)

Dividing (3.18) by the criterion variability expresses the variability components as proportions of unity:

y'y/y'y = SSR/y'y + ŷ'ŷ/y'y,

that is,

1 = SSR/y'y + ŷ'ŷ/y'y.   (3.19)

Equation (3.19) can be read as follows: The criterion variability, expressed as 1, is composed of one portion that remains unexplained, the residual variability, and one portion that is explained. The latter portion appears in the second term on the right-hand side of (3.19). This term is the ratio of explained variability to criterion variability. It is termed the coefficient of multiple determination, R².
In other words,

R² = ŷ'ŷ / y'y.

Because ŷ'ŷ ≤ y'y, we always have 0 ≤ R² ≤ 1 and, accordingly, 0 ≤ R ≤ 1. In general, the coefficient of multiple determination is a measure of the strength of the association between one criterion and multiple predictors. In the iris flowers data example of the last section we predicted Petal Length from Petal Width and Sepal Width. This predictor set allowed us to explain R² = 0.933, that is, 93.3% of the variance of Petal Length. The multiple correlation was R = 0.966.
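The decomposition (3.18) and the definition of R² can be verified numerically. The following sketch uses simulated (made-up) data and centered variables, as in the text:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 2
X_raw = rng.normal(size=(n, p))
y_raw = 1.0 + 2.0 * X_raw[:, 0] - 0.5 * X_raw[:, 1] + rng.normal(size=n)

# Center criterion and predictors; with centered data no intercept is needed.
X = X_raw - X_raw.mean(axis=0)
y = y_raw - y_raw.mean()

b = np.linalg.lstsq(X, y, rcond=None)[0]
y_hat = X @ b
e = y - y_hat

total = y @ y               # total variability        y'y        (3.15)
explained = y_hat @ y_hat   # explained variability    y_hat'y_hat (3.17)
SSR = e @ e                 # residual variability     SSR        (3.16)

R2 = explained / total      # coefficient of multiple determination
print(R2)
```

Because the OLS residuals are orthogonal to the fitted values, total, SSR, and explained satisfy (3.18) exactly (up to floating-point error).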
Significance Tests of R² and R
There exist statistical significance tests that allow one to test the null hypothesis

H₀: ρ² = 0,   (3.20)

where ρ is the multiple correlation in the population. The F statistic for this null hypothesis is

F = R²(n − p − 1) / ((1 − R²)p),   (3.21)

where p denotes the number of predictors and n is the sample size; p and n − p − 1 are the degrees of freedom for the F statistic. Taking up the iris flower example from Section 3.2, we insert the estimate for the coefficient of multiple determination into (3.21) and obtain

F = 0.933(150 − 2 − 1) / ((1 − 0.933) · 2) = 137.15/0.134 = 1023.52,

a value that is statistically significant. While appropriate in many instances, the null hypothesis given in (3.20) may not always be reasonable to ask (Huberty, 1994). The reason for this caution is that the expectancy of R², E(R²), that is, the long-run mean of R², is greater than zero if ρ² = 0 (Morrison, 1990).
Specifically,

E(R²) = p/(n − 1), if ρ² = 0.

For example, in a sample of n = 26 with p = 6 parameters to estimate (that is, five predictors and the intercept) we have E(R²) = 5/25 = 0.20. This means that, across many independent samples, one can expect to obtain, by chance, R² = 0.20. Thus, it can occur that an R² is statistically greater than zero but is hardly different from, or even smaller than, what one can expect it to be from chance. Therefore, it has been proposed to replace the null hypothesis H₀: ρ² = 0 by

H₀: ρ² = ρ₀², where ρ₀² = E(R²).

This null hypothesis can be tested using the F statistic

F = R²(n − p − 1)² / ((1 − R²)(p − 1)(2n − p − 2)),   (3.22)
with numerator degrees of freedom

df₁ = 4p(n − 1) / (3n + p − 3)   (3.23)

and denominator degrees of freedom df₂ = n − p − 1 (Darlington, 1990; Huberty, 1994).
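Both F statistics are easy to compute. The sketch below implements (3.21) through (3.23); note that the (p − 1) factor in the denominator of (3.22) follows the worked example in the text, and the chapter's own numbers are used as checks:

```python
def f_classical(R2, n, p):
    """F for H0: rho^2 = 0, Eq. (3.21); degrees of freedom (p, n - p - 1)."""
    return (R2 * (n - p - 1)) / ((1 - R2) * p)

def f_chance(R2, n, p):
    """F for H0: rho^2 = E(R^2), Eq. (3.22), with df1 from Eq. (3.23)."""
    F = (R2 * (n - p - 1) ** 2) / ((1 - R2) * (p - 1) * (2 * n - p - 2))
    df1 = 4 * p * (n - 1) / (3 * n + p - 3)
    df2 = n - p - 1
    return F, df1, df2

# Iris example: R^2 = 0.933, n = 150, p = 2 predictors.
print(round(f_classical(0.933, 150, 2), 1))   # about 1023.5

# Chance-level example: n = 26, p = 6, R^2 = 0.50.
F, df1, df2 = f_chance(0.50, 26, 6)
print(round(F, 2), round(df1, 2), df2)        # about 1.64, 7.41, 19
```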
Consider the data example with n = 26 and p = 6. Suppose we have an estimated R² = 0.50. By inserting into (3.22) we obtain

F = 0.5(26 − 6 − 1)² / ((1 − 0.5)(6 − 1)(2 · 26 − 6 − 2)) = 180.5/110 = 1.64,

with numerator degrees of freedom

df₁ = 4 · 6(26 − 1) / (3 · 26 + 6 − 3) = 600/81 = 7.41.

Denominator degrees of freedom are df₂ = 26 − 6 − 1 = 19. With these degrees of freedom the F value has a tail probability of p = 0.18. Thus, we have no reason to reject the null hypothesis that ρ² = 0.20. Now, in contrast to this decision, we calculate F = 3.17 from inserting into (3.21). With df₁ = 6 and df₂ = 19, this value has a tail probability of p = 0.0251. Thus, we would reject the null hypothesis that ρ² = 0. When interpreting results, we now have a complicated situation: while ρ² is greater than zero, it is not greater than what one would expect from chance. In other words, ρ² is greater than some value that is smaller than what one would expect from chance. Since this smaller value may be arbitrary, it is recommended to use the test statistic given in (3.22). It allows one to test whether ρ² is greater than one would expect in the long run for a sample of size n and for p predictors. In contrast (see Huberty, 1994, p. 353), the null hypothesis H₀: R² = 0 is equivalent to the test that all correlations between the predictors and the criterion are simultaneously equal to 0. Thus, the F test for this hypothesis can be applied in this sense. The paper by
Huberty (1994) has stimulated a renewed discussion of methods for testing hypotheses concerning R². Snijders (1996) discusses an estimate of ρ² that goes back to Olkin and Pratt (1958). This estimate is very hard to calculate. A second-order approximation of this estimator is

R²_OP2 = 1 − ((n − 3)/(n − p − 1)) (1 − R²) { 1 + (2/(n − p + 1))(1 − R²) + (8/((n − p + 1)(n − p + 3)))(1 − R²)² }.
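A sketch of the second-order approximation, assuming the series form given above (which follows the expansion discussed by Snijders, 1996):

```python
def r2_olkin_pratt2(R2, n, p):
    """Second-order approximation of the Olkin-Pratt estimator of rho^2.
    Illustrative implementation of the formula above."""
    u = 1 - R2
    factor = (n - 3) / (n - p - 1)
    series = 1 + 2 * u / (n - p + 1) + 8 * u ** 2 / ((n - p + 1) * (n - p + 3))
    return 1 - factor * u * series

# For n = 26, p = 6, R^2 = 0.50 the estimate shrinks noticeably:
print(round(r2_olkin_pratt2(0.50, 26, 6), 3))   # about 0.363
```

Note that the estimator equals 1 when R² = 1 and is always smaller than the ordinary R² for R² < 1, reflecting the upward bias it removes.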
As long as n < 50, this estimator has considerably less bias than the ordinary R². Yung (1996) notes that Huberty's (1994) numerator degrees of freedom formula [Formula (3.23)] is incorrect. The correct formula is

df = [(n − 1)a + p]² / (2(n − 1)a + p),
where a = ρ²/(1 − ρ²). In addition, Yung (1996) notes that the test proposed by Huberty is inconsistent, and cites Stuart and Ord (1991) with a more general formula for the expected value of R², given ρ² = k. The formula is

E(R² | ρ² = k) ≈ k + (p/(n − 1))(1 − k) − (2(n − p − 1)/(n² − 1)) k(1 − k).
The last term in this formula vanishes when ρ² = 0 or when ρ² = 1. Most important for practical purposes is that Yung's results suggest that a test of the null hypothesis that the observed R² is statistically not different from some chance value can proceed just as the above classical significance test; only the significance level α must be adjusted to accommodate the probability of the chance score. If the classical test is performed, for instance, in a sample of n = 30 with p = 5 predictors, and the population ρ² = 0 and the specified α* = 0.1, then the appropriate significance level is α = 0.04. Yung (1996, p. 296) provides figures for ρ² = 0, 0.15, 0.30, and 0.90 that cover between 1 and 8 predictors, and give the appropriate significance thresholds for the sample sizes n = 10, 11, 14, 17, 20, 30, 50, and 100. More elaborate tables need to be developed, and the test needs to be incorporated into statistical software packages.
3.4 Significance Testing
Section 2.5.1 presented an F test suitable for simple regression. This chapter presents a more general version of this test. The test requires the same conditions to be met as listed in Section 2.5.
The F test to be introduced here involves comparing two nested models, the constrained model and the unconstrained model. The unconstrained model involves all predictors of interest. Consider a researcher who is interested in predicting performance in school from six variables. The unconstrained model would then involve all six
variables. To measure whether the contribution made by a particular variable is statistically significant, the researcher eliminates this variable. The resulting model is the constrained model. If
this elimination leads to statistically significant reduction of explained variance, the contribution of this variable is statistically significant. This applies accordingly when more than one
variable is eliminated. This way one can assess the contributions not only of single variables but also of entire variable groups. Consider the following example. As is well known, intelligence tests
are made of several subtests, each one yielding a single test score. Using different subtests researchers try to separate, say, verbal intelligence from abstract and more formal mental abilities.
Thus, it is often of interest to assess the contribution made by a group of intelligence variables to predicting, for instance, performance in school. To assess the contribution of the constrained
model relative to the unconstrained model, we compare the R² values of these two models. Let R²u denote the portion of variance accounted for by the unconstrained model, and R²c the portion of variance accounted for by the constrained model. Then, the F statistic for comparing the constrained with the unconstrained model is

F = ((R²u − R²c) / (pu − pc)) / ((1 − R²u) / (n − pu − 1)),   for R²u < 1,   (3.24)

where pu and pc denote the numbers of parameters estimated in the unconstrained and the constrained model, respectively.
The upper numerator of (3.24) subtracts the portions of variance accounted for by the unconstrained and the constrained models from each other. What remains is the portion of variance accounted for
by the eliminated variable(s). The upper denominator contains the number of parameters estimated by the eliminated variable(s). In the lower numerator we find the portion of variance that remains
unaccounted for by the unconstrained model, weighted by the degrees of freedom for this model. The upper denominator in (3.24) contains the numerator degrees of freedom. The bottom denominator
contains the denominator degrees of freedom.
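A minimal sketch of this nested-model comparison; the degrees of freedom are passed in explicitly, using the values the text reports for its example (df₂ = 321):

```python
def nested_f(R2_u, R2_c, df1, df2):
    """F test comparing an unconstrained with a constrained model, Eq. (3.24).
    df1 = number of eliminated parameters, df2 = residual degrees of freedom
    of the unconstrained model (supplied by the caller)."""
    return ((R2_u - R2_c) / df1) / ((1 - R2_u) / df2)

# One predictor (EDUC) eliminated: R^2 drops from 0.7097 to 0.7064.
print(round(nested_f(0.7097, 0.7064, 1, 321), 2))   # about 3.65

# Two predictors (CC2 and OVC) eliminated: R^2 drops from 0.710 to 0.090.
print(round(nested_f(0.710, 0.090, 2, 321), 1))     # about 343.1
```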
The null hypothesis tested with (3.24) is

H₀: β = 0

or, in words, the weight of the eliminated variables is zero in the population under study. The alternative hypothesis is

H₁: β ≠ 0

or, in words, the weight of the eliminated variables is unequal to zero. Note that this test is applicable in situations where β is a vector. This includes the case that β has length
one, or equivalently, that β is a real number. For the latter case, that is, when there is only one regression coefficient to be tested, we have already described the t test that can be used. It will be illustrated in the exercises that the t test and the F test just described are equivalent when β is one real number. This equivalence between the two tests was already noted in Section 2.5.2, where significance tests for the simple linear regression were illustrated. The F test given above can be considered a generalization of the t test. In the following data example we attempt to predict
Dimensionality of Cognitive Complexity, CC1, from the three predictors Depth of Cognitive Complexity, CC2, Overlap of Categories, OVC, and Educational Level, EDUC. Dimensionality measures the number
of categories an individual uses to structure his or her mental world. Depth of Cognitive Complexity measures the number of concepts used to define a category of Cognitive Complexity. Overlap of
Categories measures the average number of concepts two categories share in common. Educational Level is measured by terminal degree of formal school training, a proxy for number of years of formal
training. A sample of n = 327 individuals from the adult age groups of 20-30, 40-50, and 60 and older provided information on Educational Level and completed the cognitive complexity task. Using these data we illustrate two types of applications for the F test given in (3.24). The first is to determine whether the β for a given single predictor is statistically significantly different from zero. The second application concerns groups of predictors. We test whether two predictors as a group make a statistically significant contribution. The first step of data analysis requires estimation of the unconstrained
model. We insert all three predictors in the multiple regression equation. The following parameter estimates result:

CC1 = 22.39 − 0.21 · CC2 − 22.40 · OVC + 0.39 · EDUC + Residual.

All predictors made statistically significant contributions (one-tailed tests), and the multiple R² = 0.710 suggests that we explain a sizeable portion of the criterion variance. Readers are invited to test whether this R² is greater than what one would expect from chance. To illustrate the use of the F test when determining whether a single predictor makes a statistically significant contribution, we recalculate the equation omitting EDUC. We obtain the following parameter estimates:

CC1 = 31.40 − 0.21 · CC2 − 22.83 · OVC + Residual.

Both predictors make statistically significant contributions, and we calculate a multiple R² = 0.706. To test whether the drop from R² = 0.710 to R² = 0.706 is statistically significant we insert in (3.24):

F = ((0.7097 − 0.7064) / (4 − 3)) / ((1 − 0.7097) / (327 − 4 − 2)) = 3.6490.
Degrees of freedom are df₁ = 1 and df₂ = 321. The tail probability for this F value is p = 0.0570. The t value for predictor EDUC in the unconstrained model was t = 1.908 (p = 0.0573; two-sided), and t² = 1.908² = 3.6405 ≈ F. The small difference between the two F values is due to rounding. Thus, we can conclude that predictor EDUC makes a statistically significant contribution to predicting CC1. However, the value added by EDUC is no more than 0.4% of the criterion variance. One more word concerning the equivalence of the t test and the F test. The t distribution, just like the normal, is bell-shaped and symmetrical. Thus, the t test can, by only using one half of the distribution, be performed as a one-sided test. In contrast, the F distribution is asymmetric. It may be the only sampling distribution that never approximates the normal, even when sample sizes are gigantic. Thus, there is no one-sided version of the F test. Information concerning the contribution of single predictors is part of standard computer printouts.² In addition, one also finds the F test that contrasts the unconstrained model with the model that only involves the intercept. However, one rarely finds the option to assess the contribution made by groups of predictors. This option is illustrated in the following example. In the example we ask whether the two cognitive complexity variables, Depth and Overlap, as a group make a statistically significant contribution to explaining Dimensionality (CC1). This is equivalent to asking whether these two predictors combined make a significant contribution. It is
possible that groups of predictors, none of which individually makes a significant contribution, do contribute significantly. To answer this question we first calculate the parameters for the simple regression that includes only Education as predictor. The resulting equation is

CC1 = 1.963 + 1.937 · EDUC + Residual.

The β for predictor Education is statistically significant (t = 5.686; p < 0.01), and R² = 0.090. Inserting into (3.24) yields

F = ((0.710 − 0.090) / (4 − 2)) / ((1 − 0.710) / (327 − 4 − 2)) = 0.31/0.0009 = 343.15.
Degrees of freedom are df₁ = 2 and df₂ = 321. The tail probability for this F value is p < 0.01. Thus, we can conclude that the two variables of cognitive complexity, CC2 and OVC, as a group make a statistically significant contribution to predicting CC1. As an aside it should be noted that the F test that tests whether the β for a group of variables is unequal to zero can be applied in analysis of variance to test the so-called main effects. Each main effect involves one (for two-level factors) or more (for multiple-level factors) coding vectors in the design matrix, X. Testing together all coefficients that describe one factor yields a test for the main effect. The same applies when testing together all vectors that describe an interaction. Testing single vectors is equivalent to testing single contrasts.
²What we discuss here is standard output in SYSTAT's MGLH module and in the Type III Sum of Squares in the SAS GLM module. Variable groups can be specified in SYSTAT.
Chapter 4

CATEGORICAL PREDICTORS

We noted in Section 2.2.1 that the linear regression model does not place constraints on the predictor, X, that would require X to be distributed according to some sampling
distribution. In addition, we noted that X can be categorical. The only constraint placed on X was that it be a constant, that is, a measure without error. In the present chapter we discuss how to
perform simple regression analysis using categorical predictors. Specifically, we discuss regression analysis with the predictor measured at the nominal level. Examples of such predictors include
religious denominations, car brands, and personality types. An example of a two-category predictor is gender of respondents. This chapter focuses on two-category predictors. Section 4.2 covers
multiple-category predictors. In each case, the scale that underlies X does not possess numerical properties that would allow one to perform operations beyond stating whether two objects are equal.
Therefore, the step on X that determines the regression slope is not quantitatively defined. Not even the order of categories of X is defined. The order of categories can be changed arbitrarily
without changing the information carried by a categorical variable. However, there are means for using categorical, nominal-level predictors in regression analysis. One cannot use categorical
predictors in the usual way, that is, inserting them in the regression equation "as is." Rather, one must decompose categorical predictors. This decomposition involves creating dummy variables or effect coding variables that contrast categories or groups of categories.

[Figure 4.1: Perceived attitudes toward issues in education in four religious groups.]

Consider the following example. A researcher asks whether adolescents view faith as predicting liberality of attitudes toward key issues of
education. Adolescents from the following four groups responded to a questionnaire: Roman Catholic, Protestant, Agnostics, and Muslims. The Catholics were assigned a 1, the Protestants were assigned
a 2, the Agnostics were assigned a 3, and the Muslims were assigned a 4. These numbers must be interpreted at the nominal scale level, that is, they only serve to distinguish between these four
groups. There is no ranking or interpretation of intervals. Any other assignment of numbers would have served the same purpose, as long as the numbers are different from each other. Assigning symbols
or characters is an alternative to assigning numbers. The response scale, Y, ranged from 0 to 7, with 0 indicating the least liberal attitudes and 7 indicating the most liberal attitudes. Figure 4.1
displays the responses of the N = 40 adolescents (overlapping values are indicated as one value). The figure suggests that Agnostics are viewed as displaying the most
Table 4.1: Means and Standard Deviations of Responses to Attitude Questionnaire

Group of Respondents    Mean    Standard Dev.
Catholics (1)           2.7     0.95
Protestants (2)         4.1     1.20
Agnostics (3)           4.9     2.10
Muslims (4)             1.8     0.79
liberal attitudes. Muslims are viewed as displaying the least liberal attitudes. Protestants and Catholics are in between. Table 4.1 presents means and standard deviations for the four groups of respondents. Figure 4.1 also displays a regression line. This line is incorrect! It was estimated under the assumption that the numerical values assigned to the four respondent groups operate at the interval level. However, the numerical values simply serve as names of the respondent groups. Thus, they operate at the nominal level and are arbitrary. Readers are invited to assign the "1" to the Muslims, to increase all other group indicators by 1, and to recalculate the regression line. The result, a regression line with a positive slope, seemingly contradicts the result presented in Figure 4.1. This contradiction, however, is irrelevant, because both solutions are false.¹
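The arbitrariness of the slope can be illustrated with the four group means from Table 4.1 alone. The sketch below weights the groups equally, so its coefficients need not match the footnote's fit to the raw N = 40 responses; the point is only that relabeling the categories changes, and here even flips, the slope:

```python
import numpy as np

# Group means from Table 4.1.
means = {"Catholics": 2.7, "Protestants": 4.1, "Agnostics": 4.9, "Muslims": 1.8}

def slope(codes):
    """OLS slope of mean attitude on the arbitrary numeric group codes."""
    groups = sorted(codes, key=codes.get)
    x = np.array([codes[g] for g in groups], dtype=float)
    y = np.array([means[g] for g in groups])
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / (xc @ xc)

s1 = slope({"Catholics": 1, "Protestants": 2, "Agnostics": 3, "Muslims": 4})
s2 = slope({"Muslims": 1, "Catholics": 2, "Protestants": 3, "Agnostics": 4})
print(round(s1, 2), round(s2, 2))   # the slope changes sign under recoding
```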
4.1 Dummy and Effect Coding
There are several equivalent ways to create dummy variables and effect coding variables for analysis of variance and regression analysis. In the present section we introduce two of these methods. A
dummy variable (also termed indicator variable or binary variable) can only assume the two values 0 and 1. Dummy variables can be used to discriminate between the categories of a predictor. Consider the following example. A researcher is interested in predicting durability of cars from car type. Two types of cars are considered, pickup trucks and sedans. The researcher sets up dummy variables as follows:

X₁ = 1 if car is a pickup, 0 if car is a sedan,   (4.1)
X₂ = 1 if car is a sedan, 0 if car is a pickup.   (4.2)

¹The calculated regression equation for the line presented in Figure 4.1 is: Attitude = 0.38 − 0.19 · Group + Residual, and R² = 0.017; the calculated regression line for the recoded group values is: Attitude = 0.25 + 0.35 · Group + Residual, and R² = 0.059.
The constant in the design matrix, X, is often also considered a dummy variable (although it only assumes the value 1). While intuitively plausible, the approach demonstrated using Formulas (4.1) and
(4.2) leads to a problem. The problem is that the design matrix, X, will have linearly dependent columns and, therefore, not have full column rank. Recall that in Section 2.2.1 we assumed that the
design matrix was of full column rank, and that this can always be achieved by reparameterization. Consider the following example. The researcher investigating durability of cars analyzes data from
the first five cars: three pickups and two sedans. The design matrix, X, for these five cars appears below:

X = ( 1 1 0 )
    ( 1 1 0 )
    ( 1 1 0 )
    ( 1 0 1 )
    ( 1 0 1 ).   (4.3)
In this matrix, the first column is the sum of the last two columns. Thus, X is not of full column rank. As a result, the product X'X is singular also, and there are no unique estimates of regression coefficients. The problem is solved by reparameterization. The term reparameterization indicates that the meaning of the parameters depends on how the rank deficiency of X is overcome. There are several ways to achieve this. The most obvious, simple, and frequently used option involves dropping one of the vectors of the indicator variable. In the car-type example,
dropping the second vector results in the following design matrix:

X = ( 1 1 )
    ( 1 1 )
    ( 1 1 )
    ( 1 0 )
    ( 1 0 ).
In contrast to (4.3), this matrix is easily analyzed using standard simple regression because now the two columns of X are linearly independent and, therefore, X is of full column rank. In general,
for a variable with c categories, one sets up no more than c - 1 independent dummy coding vectors. The following data example analyzes data that describe the length of usage of three pickup trucks
and two sedans, measured in years. The regression equation for these data can be presented in the following form:

( 6  )   ( 1  1 )            ( e1 )
( 10 )   ( 1  1 )   ( b0 )   ( e2 )
( 8  ) = ( 1  1 ) · ( b1 ) + ( e3 )
( 9  )   ( 1  0 )            ( e4 )
( 10 )   ( 1  0 )            ( e5 )
Estimating the parameters for this equation yields

Years = 9.5 − 1.5 · Type of Car + Residual.

The coefficient of determination is R² = 0.24 and the t test for the slope coefficient is not significant (t = −0.976; p = 0.401). That the slope coefficient is not statistically different from zero is most probably due to the small sample size and the resulting lack of power. The interpretation of the estimated slope coefficient is that for pickup trucks length of usage is about one and a half years shorter than that for sedans. To be more specific, we estimate length of usage in years for pickup trucks to be 9.5 − 1.5 · 1 = 8 years, while for sedans we calculate estimated length of usage to be 9.5 − 1.5 · 0 = 9.5 years. The estimated difference is therefore 1.5 years. The intercept parameter estimates the mean value for sedans, and the slope coefficient estimates the difference
of mean length of usage between sedans and pickup trucks. As an alternative to dummy coding, effect coding is often used. Effect coding makes it easy to set up comparisons. Comparisons always involve two groups of cases (subjects, data carriers, respondents, etc.). As this section focuses on two-category predictors, effect coding is established quite easily. Vectors are set up according to the following rule: Members of the first group are assigned a 1, and members of the second group are assigned a −1. It is of no importance which of the groups is assigned the 1 and which is assigned the −1. Reversing the assignment results in a change in the sign of the regression coefficient. Results of significance testing remain the same. Using effect coding, the regression equation is set up as
follows:

( 6  )   ( 1   1 )            ( e1 )
( 10 )   ( 1   1 )   ( b0 )   ( e2 )
( 8  ) = ( 1   1 ) · ( b1 ) + ( e3 )
( 9  )   ( 1  −1 )            ( e4 )
( 10 )   ( 1  −1 )            ( e5 )
Estimating parameters using this design matrix, one obtains

Years = 8.75 − 0.75 · Type of Car + Residual.

Both the t test for the slope coefficient and the coefficient of determination do not change from the results obtained using dummy coding. We therefore come to the same conclusion as before. What has changed are the values of the regression coefficients. The intercept parameter now estimates the overall mean of the five cars in the analysis plus b₁x̄₁, and the slope coefficient estimates how much length of usage (in years) differs for the two types of cars from this overall mean. For pickups we estimate, as before, 8.75 − 0.75 · 1 = 8 years, and for sedans we estimate 8.75 − 0.75 · (−1) = 9.5 years, also as before. Again, the difference between the two car types is 1.5 years. It is often seen as problematic in this analysis that the intercept parameter is not a very reliable estimate of overall length of usage, as it depends on more sedans than pickup trucks. Imagine that we have data available from 998 sedans and only two pickup trucks. When we calculate the average length of usage for the 1000 cars, this mean would be just an estimate of the mean for sedans, as the two values of the pickup trucks virtually do not influence the overall mean. In
cases where group sizes differ it is often more meaningful to calculate a weighted mean that reflects group size. This can be accomplished by selecting scores for the effect coding vector that discriminates the groups such that the sum of the scores is zero. The following example analyzes the car-type data again. Using effect coding with equal sums of weights for the two types of cars we arrive at the following regression equation:

( 6  )   ( 1   1   )            ( e1 )
( 10 )   ( 1   1   )   ( b0 )   ( e2 )
( 8  ) = ( 1   1   ) · ( b1 ) + ( e3 )
( 9  )   ( 1  −1.5 )            ( e4 )
( 10 )   ( 1  −1.5 )            ( e5 )
Once again, the t test for the slope parameter and R² have not changed. The interpretation of the intercept and the slope parameter is the same as in the last model, but the intercept is now a more reliable estimate of overall length of usage of sedans and pickup trucks. The regression equation is now

Years = 8.6 − 0.6 · Type of Car + Residual.

This equation suggests that the estimated duration for pickups is 8.6 − 0.6 · 1 = 8 years, and for sedans it is 8.6 − 0.6 · (−1.5) = 9.5 years, which is the same as before. What should be noted from these comparisons is that different coding schemes do change the values and interpretation of the regression coefficients, that is, the parameters of the model; hence the name reparameterization. However, model fit, predicted values, and the significance test for the slope parameter stay the same, regardless of which coding scheme is employed. This section covered the situation in which researchers attempt to predict one continuous (or categorical, dichotomous) variable from one dichotomous predictor. The following section extends this topic in two ways. First, it covers predictors with more than two categories. Second, it covers multiple predictors.
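The three codings of the five-car example can be fit side by side. The sketch below (with the cars ordered as in the text's design matrices: three pickups, then two sedans) shows that the coefficients change with the coding while the model fit, R², does not:

```python
import numpy as np

y = np.array([6., 10., 8., 9., 10.])   # years of usage; first three cars are pickups

codings = {
    "dummy":    [1, 1, 1,  0,    0  ],   # pickup = 1, sedan = 0
    "effect":   [1, 1, 1, -1,   -1  ],   # pickup = 1, sedan = -1
    "weighted": [1, 1, 1, -1.5, -1.5],   # scores sum to zero over the 5 cars
}

fits = {}
for name, x in codings.items():
    X = np.column_stack([np.ones(5), x])       # intercept plus coding vector
    b, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS estimates (b0, b1)
    e = y - X @ b
    R2 = 1 - (e @ e) / (((y - y.mean()) ** 2).sum())
    fits[name] = (b, R2)
    print(name, np.round(b, 2), round(R2, 2))
```

All three fits predict 8 years for pickups and 9.5 years for sedans; only the parameterization of those two group means differs.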
4.2 More Than Two Categories
Many categorical predictors have more than two categories. Examples include car brand, religious denomination, type of cereal, type of school, topic to study, type of mistake to make, race,
citizenship, ice cream flavor, swimming stroke, soccer rule, and belief system. To be able to use categorical predictors in regression analysis one creates dummy coding or effect coding variables as
was previously explained. Recall that the way in which the predictors are coded determines the way the regression coefficient is interpreted. Therefore, if the coding of the design matrix can be done
such that the parameters to be estimated correspond to research hypotheses of interest, then these hypotheses tests are equivalent to the tests of the related regression coefficient. How this can be
achieved is demonstrated in this section by applying effect coding of multicategory variables. Consider a predictor with k categories. For k categories one can create up to k - 1 independent
contrasts. Contrasts compare one set of categories with another set, where each set contains one or more categories. There are several ways to create contrasts for these k categories. If the number
of cases per category is the same, researchers often create orthogonal contrasts, that is, contrasts that are independent of each other. If the number of cases per category differs, researchers
typically create contrasts that reflect category comparisons of interest and either accept bias in the parameter estimates or deal with multicollinearity.

Orthogonal Contrasts

To begin with an
example, consider the variable Swimming Stroke with the four categories Back Stroke (BS), Butterfly (BF), Breast (BR), and Crawl (CR). Since we focus in the following paragraphs on the mechanics of
constructing a design matrix we do not have to consider a particular dependent variable. The meaning of contrasts will be illustrated in the subsequent data examples. To create orthogonal contrasts
one proceeds as follows.

1. Select a pair of categories to contrast. This first selection is arbitrary from a technical viewpoint. The following orthogonal contrasts may not be, depending on the number of categories. In the present example, we select the category pair BS - BF. The effect coding variable identifies members of this pair by assigning value +1 to members of one category and -1 to members of the other category. If one group contains more categories than the other, values other than 1 and -1 can be assigned. For instance, if one group contains four categories and the other group contains two categories, the scores for each category in the first group can be +0.5, and the scores for each category in the second group can be -1. The values assigned must always sum up to zero. Cases that do not belong to either pair are assigned a 0. For the present example we thus create the contrast vector c1' = (1, -1, 0, 0).

2. Repeat Step 1 until either all k - 1 contrast vectors have been created or the list of contrasts that are of interest is exhausted, whichever comes first.² If the goal is to create orthogonal contrasts, it is not possible to create k - 1 contrasts where each comparison involves only two single categories. One must combine categories to create orthogonal contrasts. Specifically, for even k one can create k/2 single-category orthogonal contrasts. For odd k one can create (k - 1)/2 single-category orthogonal contrasts. For the present example we can create 4/2 = 2 single-category orthogonal contrasts, for example c1' = (1, -1, 0, 0) and c2' = (0, 0, 1, -1). Alternatively, we could have created c1' = (1, 0, -1, 0) and c2' = (0, 1, 0, -1), or c1' = (1, 0, 0, -1) and c2' = (0, 1, -1, 0). All contrasts after the single-category orthogonal contrasts combine two or more categories. (One can, however, start creating contrasts with combinations of categories and use single-category contrasts only when necessary.) One complete set of k - 1 = 3 orthogonal contrasts for the present example is c1' = (1, -1, 0, 0), c2' = (0, 0, 1, -1), and
²Note that some authors recommend including contrasts, even if they are not of particular substantive interest, if they allow researchers to bind significant amounts of systematic variance. The price
for each of these vectors is one degree of freedom. The benefit is that the residual variance will be reduced and, therefore, the contrasts of particular interest stand a better chance of capturing
significant portions of variance.
c3' = (0.5, 0.5, -0.5, -0.5). It should be noted that the term orthogonality refers to the columns of the design matrix and not to the orthogonality of the ci'. Only if the number of observations in each group is the same is the orthogonality of ci' and cj' (where i ≠ j) equivalent to the orthogonality of the corresponding columns of the design matrix. Note that some authors recommend that for weighting purposes the sum of all absolute values of contrast values be a constant. Therefore, the last vector would contain four 0.5 values rather than four 1's. If the absolute values of coefficients are of no interest, the following specification of c3' is equivalent: c3' = (1, 1, -1, -1).
3. Create coding vectors for the design matrix. Each contrast vector corresponds to one coding vector in the design matrix. Contrast vectors, ci', are translated into design matrix coding vectors, xi, by assigning each case the same value as its group in the contrast vector. In the present example, each case that belongs to category BS is coded 1 for the first coding vector, 0 for the second coding vector, and 0.5 for the third coding vector. Accordingly, each case that belongs to category BF is coded -1 for the first coding vector, 0 for the second coding vector, and 0.5 for the third coding vector.
Suppose a researcher investigates four athletes in each of the four swimming stroke categories. Using the above set of coding vectors, the design matrix displayed in Table 4.2 results. Readers are invited to create alternative design matrices using the alternative first two contrast vectors listed above. To make sure the coding vectors in the design matrix are orthogonal, one calculates the inner product of each pair of vectors. Specifically, there are (m choose 2) pairs of coding vectors, where m = k - 1. If the inner product equals zero, two vectors are orthogonal. For example, the inner product, x1'x3, of the first and the last vectors is x1'x3 = 1·0.5 + 1·0.5 + 1·0.5 + 1·0.5 - 1·0.5 - 1·0.5 - 1·0.5 - 1·0.5 + 0·(-0.5) + ... + 0·(-0.5) = 0. Readers are invited to calculate the inner products for the other two pairs of vectors, x1'x2 and x2'x3.
Table 4.2: Design Matrix for Analysis of Four Swimming Strokes
Case   x1   x2    x3
1       1    0   0.5
2       1    0   0.5
3       1    0   0.5
4       1    0   0.5
5      -1    0   0.5
6      -1    0   0.5
7      -1    0   0.5
8      -1    0   0.5
9       0    1  -0.5
10      0    1  -0.5
11      0    1  -0.5
12      0    1  -0.5
13      0   -1  -0.5
14      0   -1  -0.5
15      0   -1  -0.5
16      0   -1  -0.5
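The orthogonality check described above can be sketched in a few lines of plain Python (the vectors are the three columns of Table 4.2):

```python
# Coding vectors from Table 4.2: 16 swimmers, 4 per stroke (BS, BF, BR, CR).
x1 = [1]*4 + [-1]*4 + [0]*8       # BS vs. BF
x2 = [0]*8 + [1]*4 + [-1]*4       # BR vs. CR
x3 = [0.5]*8 + [-0.5]*8           # {BS, BF} vs. {BR, CR}

def inner(u, v):
    """Inner product of two coding vectors."""
    return sum(a * b for a, b in zip(u, v))

# All three pairwise inner products are zero, so the columns are orthogonal.
print(inner(x1, x2), inner(x1, x3), inner(x2, x3))
```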
Selection and Characteristics of Contrasts
In the following paragraphs we discuss a number of issues related to selection and characteristics of contrasts in regression.

1. As was indicated earlier, the selection of the first contrast is largely arbitrary. Therefore, researchers often select the substantively most interesting or most important groups of categories for the first pair.

2. If one creates orthogonal contrasts for k categories, the kth contrast is always a linear combination of the k - 1 orthogonal ones. Consider the following example. For a three-category variable, we create the following two orthogonal contrast vectors: c1' = (1, -1, 0) and c2' = (0.5, 0.5, -1). A third vector could be c3' = (1, 0, -1). Vector c3' is linearly dependent upon c1' and c2' because c3' = 0.5c1' + c2'. Specifically, we obtain for the first value of c3', 1 = 0.5·1 + 0.5; for the second value of c3', 0 = 0.5·(-1) + 0.5; and for the third value of c3', -1 = 0.5·0 - 1. It is important to note that the bivariate correlations among linearly dependent variables are not necessarily equal to one. The correlations among the three vectors c1, c2, and c3 are r(c1, c2) = 0.00, r(c1, c3) = 0.50, and r(c2, c3) = 0.87. Thus, inspection of correlation matrices does not always reveal patterns of linear dependencies. Only if correlations are numerically zero, that is, r = 0.0, can variables be assumed to be independent.

3. The sign given to categories is arbitrary also. Reversing the sign only reverses the sign of the parameter estimated for a particular vector. Orthogonality and magnitude of parameter estimates will not be affected, and neither will results of significance testing. Signs are often selected such that those categories that are assumed to predict larger values on Y are assigned positive values in the coding vectors. In other words, the signs in coding vectors often reflect the direction of hypotheses.

4. Effect coding vectors that are created the way introduced here are identical to effect coding vectors created for analysis of variance main effects. Defining analysis of variance main effects using this type of effect coding vectors is known as the regression approach to analysis of variance (Neter et al., 1996).

5. There is no necessity to always create all possible effect coding vectors. If researchers can express the hypotheses they wish to test using fewer than k - 1 vectors, the remaining possible vectors do not need to be included in the design matrix.

The following data example analyzes user-perceived durability of German cars, measured in number of miles driven between repairs. Five brands of cars are considered: Volkswagen (V), Audi (A), BMW (B), Opel (O), and Mercedes (M). Five customers per brand responded to the question as to how many miles they typically drive before their car needs repair. Figure 4.2 displays the distribution of values. Note that in this example the sample sizes in each category are equal. The figure suggests that customers perceive BMW and Mercedes cars as equally durable, and as more durable than the other three brands, which seem to be perceived as equal. To compare these responses using regression
[Figure 4.2: Perceived durability of cars (miles between repairs by brand: A, B, M, O, V).]
analysis we have to first specify contrast vectors. For the present example we create the following orthogonal vectors: c1' = (1, -1, 0, 0, 0), c2' = (0, 0, 1, -1, 0), c3' = (1/2, 1/2, -1/2, -1/2, 0), and c4' = (1/4, 1/4, 1/4, 1/4, -1), where the category order is (A, B, M, O, V). The first of these vectors compares user perceptions of Audis and BMWs. The second vector compares Mercedes and Opels. The third compares perceptions of Audi and BMW in one group with Mercedes and Opel in the other. The fourth contrast juxtaposes the largest German car producer, Volkswagen, and all the other brands in the sample. After specifying contrasts one translates the contrast vectors into coding vectors. Each respondent is assigned the values of the respective contrast category. Table 4.3 displays the raw data and the resulting coding vectors. The constant vector, consisting of only ones, is omitted in the table. Using the data from Table 4.3 we estimate parameters for the following multiple regression equation: Miles Between Repairs = b0 + b1x1 + b2x2 + b3x3 + b4x4 + Residual.
Table 4.3: Perceived Durability of Cars: Raw Data and Design Matrix

Brand  Miles    x1    x2     x3    x4
A      24        1     0    1/2   1/4
A      21        1     0    1/2   1/4
A      29        1     0    1/2   1/4
A      26        1     0    1/2   1/4
A      30        1     0    1/2   1/4
B      29.5     -1     0    1/2   1/4
B      29       -1     0    1/2   1/4
B      34       -1     0    1/2   1/4
B      30       -1     0    1/2   1/4
B      33       -1     0    1/2   1/4
M      28        0     1   -1/2   1/4
M      29        0     1   -1/2   1/4
M      34        0     1   -1/2   1/4
M      30        0     1   -1/2   1/4
M      32        0     1   -1/2   1/4
O      20        0    -1   -1/2   1/4
O      23        0    -1   -1/2   1/4
O      29        0    -1   -1/2   1/4
O      30        0    -1   -1/2   1/4
O      27        0    -1   -1/2   1/4
V      19        0     0      0    -1
V      25        0     0      0    -1
V      27        0     0      0    -1
V      28        0     0      0    -1
V      31        0     0      0    -1
Table 4.4: Regression Analysis of Perceived Car Durability

Predictor   Estimate   Std. Error   t value   p value, 2-tailed
Intercept      27.90         0.71     39.60              < 0.01
x1             -2.55         1.11     -2.29                0.03
x2              2.40         1.11      2.15                0.04
x3              0.35         1.58      0.22                0.83
x4              1.90         1.41      1.35                0.19
Results of this analysis appear in Table 4.4. The portion of criterion variance accounted for is R² = 0.37. Table 4.4 suggests that two parameters are unequal to zero. Specifically, the parameters for x1 and x2 are statistically significant. Since each regression parameter corresponds to the hypothesis associated with the corresponding contrast vector, we can conclude that BMWs are perceived as more durable than Audis, and that Mercedes are perceived as more durable than Opels. Readers are invited to

1. recalculate this analysis including only the first two coding vectors and to calculate whether omitting the last two vectors leads to a statistically significant loss in explained variance;

2. recalculate this analysis using different sets of coding vectors; for instance, let the first two contrast vectors be c1' = (0, 1, -1, 0, 0) and c2' = (1, 0, 0, 0, -1) (hint: the remaining three contrast vectors can be constructed in a fashion analogous to Table 4.3);

3. recalculate this analysis using a program for ANOVA and compare the portions of variance accounted for.
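Because the five cells have equal size and the coding vectors sum to zero and are pairwise orthogonal, each coefficient can be computed directly as bj = xj'y / xj'xj, and the intercept is the grand mean. A sketch in plain Python reproducing the estimates in Table 4.4:

```python
# Raw data from Table 4.3: five responses per brand, order (A, B, M, O, V).
miles = {
    "A": [24, 21, 29, 26, 30],
    "B": [29.5, 29, 34, 30, 33],
    "M": [28, 29, 34, 30, 32],
    "O": [20, 23, 29, 30, 27],
    "V": [19, 25, 27, 28, 31],
}
# Effect coding scores (x1, x2, x3, x4) per brand, as in Table 4.3.
codes = {
    "A": (1, 0, 0.5, 0.25),
    "B": (-1, 0, 0.5, 0.25),
    "M": (0, 1, -0.5, 0.25),
    "O": (0, -1, -0.5, 0.25),
    "V": (0, 0, 0.0, -1.0),
}

y, X = [], []
for brand, values in miles.items():
    for v in values:
        y.append(v)
        X.append(codes[brand])

# With orthogonal, zero-sum columns the intercept is the grand mean and
# each slope is b_j = x_j'y / x_j'x_j.
b0 = sum(y) / len(y)
slopes = []
for j in range(4):
    xj = [row[j] for row in X]
    slopes.append(sum(a * b for a, b in zip(xj, y)) / sum(a * a for a in xj))

print(round(b0, 2), [round(b, 2) for b in slopes])
# -> 27.9 [-2.55, 2.4, 0.35, 1.9], matching Table 4.4
```

This shortcut holds only in the orthogonal, equal-n case; with unequal cell sizes the full least squares solution is needed.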
Multiple Categorical Predictors
In general, one can treat multiple categorical predictors just as single categorical predictors. Each categorical predictor is transformed into effect coding variables. Specifically, let the jth predictor have kj categories,
[Figure 4.3: Scatterplot of the predictors, Educational Level and Text Concreteness, and the criterion, Recall.]
with kj > 1. Then, the maximum number of independent coding variables for this predictor is kj - 1. Interactions among predictors are rarely considered in regression analysis. However, interactions among coding variables can easily be taken into account, in particular when the number of cases for each category is the same, that is, in orthogonal designs. Chapter 10 covers interactions in regression in detail. In the following paragraphs we present a data example that involves two categorical predictors. The data describe Recall Rates, REC, of n = 37 participants of a memory experiment. From the number of participants it should already be clear that this time the numbers within each group cannot be the same. The demographic variables describing the participants included the variable Education (4, high school; 5, baccalaureate; 6, masters or equivalent). Subjects read texts (Text Groups, TG) that were either concrete (TG = 1) or abstract (TG = 2). Figure 4.3 presents the scatterplot of the predictors, TG and EDUC, and the criterion, REC.
Table 4.5: Regression of Recall on Education and Text Group

Variable    Estimate   Std. Error   t Value   p Value
Intercept     147.00        14.42     10.19    < 0.01
EDUC1           7.16         6.26      1.15      0.26
EDUC2          12.98         8.86      1.46      0.15
TG            -45.12         9.43     -4.78    < 0.01
The plot suggests that concrete texts are recalled better than abstract texts. In contrast, Education does not seem to have any effect on Recall. To test the predictive power of EDUC and TG we first create contrast and coding vectors for EDUC. There is no need to create contrast or coding vectors for TG, because it only has two categories. Thus, the only possible contrast compares these two categories, and the contrast vector for TG is cTG' = (1, -1). When the number of respondents in the two categories of TG is unequal, the coding of TG is arbitrary and has no effect on results (except for scaling of the parameter estimate). If, however, this number is equal, the above contrast coding for TG will yield a contrast vector that is orthogonal to the other vectors in the design matrix, X. For the three categories of EDUC, we create the following two contrast vectors: c1' = (1, -1, 0) and c2' = (-0.5, -0.5, 1). These two contrast vectors are inserted into the design matrix, along with the codes for TG. The regression equation that we now estimate is Recall = b0 + b1·EDUC1 + b2·EDUC2 + b3·TG + Residual, where EDUC1 is the coding vector from c1 and EDUC2 is the coding vector from c2. Parameter estimates and their standard errors, t values, and tail probabilities (two-tailed) appear in Table 4.5. The multiple R² = 0.482 suggests that we can explain 48.2% of the variation of Recall Rates from Education and Text Group. Table 4.5 indicates that Text Group is the only variable that has a statistically significant effect on Recall. The two EDUC variables do not have statistically significant effects. This result confirms the impression we had when inspecting Figure 4.3. In the following paragraphs we ask whether the coding vectors in X
are orthogonal or are correlated. If they are correlated they could possibly suffer from multicollinearity. Vector correlations are possible because the predictor categories did not occur at equal frequencies. Specifically, we obtain the following univariate frequency distributions for the predictors:

EDUC1 (-1: 6 times; 0: 6 times; 1: 25 times),
EDUC2 (-0.5: 31 times; 1: 6 times),
TG (1: 19 times; 2: 18 times).
From the uneven distribution of the Education categories one cannot expect predictors to be orthogonal. Indeed, correlations are nonzero. Table 4.6 displays the correlations among the predictors, EDUC1, EDUC2, and TG, and the criterion, REC. It is obvious from these correlations that the strategy that we use to create contrast vectors leads to orthogonal coding vectors only if variable categories appear at equal frequencies. The magnitude of correlations suggests that there might be multicollinearity problems.

Table 4.6: Spearman Correlations among the Predictors, EDUC1, EDUC2, and TG, and the Criterion, Recall

         EDUC2     TG     REC
EDUC1    -0.47   0.17   -0.01
EDUC2            -0.28   0.25
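The dependence of orthogonality on cell frequencies can be illustrated with a sketch (plain Python; note that Table 4.6 reports Spearman correlations, so the value below, a Pearson correlation, differs from the tabled -0.47; the point, a clearly nonzero correlation, is the same):

```python
from math import sqrt

# Coding vectors for the three education categories, with the unequal
# frequencies from the text: 25, 6, and 6 cases (n = 37).
educ1 = [1]*25 + [-1]*6 + [0]*6       # from c1' = (1, -1, 0)
educ2 = [-0.5]*25 + [-0.5]*6 + [1]*6  # from c2' = (-0.5, -0.5, 1)

def pearson(x, y):
    """Pearson product-moment correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

r = pearson(educ1, educ2)
print(round(r, 2))  # nonzero: unequal cell sizes break the orthogonality
```

With equal frequencies in the three categories the same computation would return exactly zero.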
Chapter 5

OUTLIER ANALYSIS

While in many instances researchers do not consider fitting alternative functions to empirical data, outlier analysis is part of the standard arsenal of data analysis. As will be
illustrated in this chapter, outliers can be of two types. One is the distance outlier type, that is, a data point extremely far from the sample mean of the dependent variable. The second is the
leverage outlier type. This type is constituted by data points with undue leverage on the regression slope. The following sections first introduce leverage outliers and then distance outliers.
Leverage Outliers
Leverage outliers are defined as data points that exert undue leverage on the regression slope. Leverage outliers' characteristics are expressed in terms of the predictor variables. In general, a data point's leverage increases with its distance from the average of the predictor. More specifically (for more details on the following sections see Neter et al., 1996), consider the hat matrix, H,

    H = X(X'X)^(-1)X',

where X is the design matrix. X has dimensions n × p, where p is the number of predictors, including the term for the constant, that is, the intercept. Therefore, H has dimensions n × n. It can be shown that
the estimated values, ŷi, can be cast in terms of the hat matrix and the criterion variable, Y,

    ŷ = Hy,    (5.1)

and that, accordingly, the residuals can be expressed as e = y - Hy. From (5.1) it can be seen that matrix H puts the hat on y. The ith element of the main diagonal of the hat matrix is

    hii = xi'(X'X)^(-1)xi,

where xi is the ith row in X, that is, the row for case i. The element hii has the following properties:

1. It has range 0 ≤ hii ≤ 1.

2. The sum of the hii is p: Σ(i=1 to n) hii = p.

3. hii indicates how far the x value of case i lies from the mean of X.

4. hii is known as the leverage of case i. Large values of hii indicate that the x value of case i lies far from the arithmetic mean of X; if hii is large, case i has a large influence on determining the estimated value ŷi.

5. The variances of the residual ei and hii are related to each other as follows:

    σ²(ei) = σ²(1 - hii).    (5.2)

From (5.2) one can see that the variance of a residual depends on the value of the predictor for which the residual was calculated. Specifically, σ²(ei) decreases as hii increases. Thus, we have zero variance for ei if
hii = 1. In other words, when hii increases, the difference between the observed and the estimated y values decreases, and thus the observed y value will lie closer to the regression line. The observed y value lies exactly on the regression line if hii = 1. Because of the dependence of the variance of a residual upon the predictor value, one often uses the studentized residual, defined as ei/σ(ei), for comparisons. In empirical data analysis the following rules of thumb are often used when evaluating the magnitude of leverage values:

1. hii is large if it is more than twice as large as the average leverage value. The average leverage value is h̄ = p/n.

2. hii > 0.5 indicates very high leverage; 0.2 ≤ hii ≤ 0.5 suggests moderate leverage.

3. There is a gap between the leverage for the majority of cases and a small number of large leverage values.
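For a single-predictor regression the diagonal elements of the hat matrix have the closed form hii = 1/n + (xi - x̄)²/Σj(xj - x̄)², so properties 1-4 and the first rule of thumb can be checked directly (hypothetical x scores; plain Python):

```python
# Hypothetical predictor scores; the last one lies far from the mean.
x = [10, 12, 13, 14, 15, 16, 18, 35]
n = len(x)
p = 2                                 # intercept + one slope

xbar = sum(x) / n
sxx = sum((v - xbar) ** 2 for v in x)
# Leverage for simple regression: h_ii = 1/n + (x_i - xbar)^2 / Sxx
h = [1 / n + (v - xbar) ** 2 / sxx for v in x]

print(round(sum(h), 6))               # the leverages sum to p = 2
avg = p / n                           # average leverage, h-bar = p/n
flagged = [v for v, hv in zip(x, h) if hv > 2 * avg]
print(flagged)                        # only x = 35 exceeds twice the average
```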
For the following example we use data from the Finkelstein et al. (1994) study. Specifically, we use the scores in verbal aggression (VA85) and physical aggression against peers (PAAP85), collected in 1985. The sample consists of n = 77 boys and girls. We ask whether verbal aggression allows one to predict physical aggression. The following parameters are estimated for this regression problem: PAAP85 = 9.51 + 0.46·VA85 + Residual. The slope parameter is significantly greater than zero (t = 4.07; p < 0.01). The scatterplot of VA85 and PAAP85 appears in Figure 5.1. The figure shows that most of the data points nestle nicely around the regression line. It also suggests that applying a linear regression line matches data characteristics. However, there are two data points that are abnormal. One of these points describes a boy with slightly above average verbal aggression but a very large number of physical aggression acts against peers. While an outlier on the dependent variable's scale, this
[Figure 5.1: Regression of physical aggression on verbal aggression in adolescents.]
data point has no strong influence on the regression slope. In contrast, the data point in the lower right corner of the figure does have above average leverage. It is an outlier on the predictor variable's scale, and thus a leverage point, with a leverage value of h = 0.126. We now analyze the leverage values for the present sample. The mean of leverage values is h̄ = 2/77 = 0.026, with a minimum of h = 0.013 and a maximum of h = 0.126. The median is md = 0.021. Clearly, the leverage point meets the first of the above criteria by being more than twice as large as the average leverage. In addition, it meets the third criterion in that there is a gap between the crowd of leverage points and this individual point. This is illustrated by the histogram in Figure 5.2. Figure 5.2 shows that the majority of the leverage values is grouped around the median. The leverage value of this boy appears only after a large gap. For a more in-depth analysis of the effects of this leverage point we reestimate the regression parameters after excluding it. The resulting
[Figure 5.2: Bar graph of leverage scores of the data points in Figure 5.1.]
regression equation is PAAP85 = 8.113 + 0.53·VA85 + Residual. The slope parameter is significantly greater than zero (t = 4.52; p < 0.01). A comparison with the original regression parameter estimates suggests that the leverage point did indeed "pull the slope down." Figure 5.3 shows the new regression slope for the data without the leverage point. Inspection of leverage values for this analysis does not reveal any additional or new leverage point. Only the outlier, high up in the number of aggressive acts, is still there.

Distance Outliers

Leverage outliers are cases with extreme values on the predictor variable(s). Distance outliers are cases with extreme values on the criterion variable. They can be identified using the studentized residuals, that is, residuals transformed to be distributed as a t statistic. Specifically, for
[Figure 5.3: Regression of physical aggression on verbal aggression after removal of the leverage point.]

each case, the studentized residual, di*, is given by

    di* = ei · sqrt[ (n - p - 1) / ( SSE·(1 - hii) - ei² ) ],  with SSE = Σ(i=1 to n) ei²,    (5.3)
where p is the number of parameters estimated by the regression model, including the intercept parameter. Studentized residuals have the following characteristics:

1. Each value di* is distributed as a t statistic with n - p - 1 degrees of freedom.

2. The di* are not independent of each other.

3. To calculate the di* one only needs the residuals and the hii values.

To determine whether a given value, di*, is an outlier on the criterion, one (1) calculates the critical t value for α and the n - p - 1 degrees of freedom and (2) compares the di* with this value, tα,df. If di* > tα,df, case i is a distance outlier.
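A sketch of the full computation, a small simple regression fitted by the usual closed-form least squares followed by (5.3), on hypothetical data in which the last case lies far above the fitted line (plain Python):

```python
from math import sqrt

# Hypothetical (x, y) pairs; the last y value is far above the linear trend.
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2.1, 2.9, 4.2, 5.0, 6.1, 6.8, 8.2, 14.0]
n, p = len(x), 2                      # p counts the intercept as well

# Closed-form least squares for one predictor.
xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((v - xbar) ** 2 for v in x)
b1 = sum((a - xbar) * (b - ybar) for a, b in zip(x, y)) / sxx
b0 = ybar - b1 * xbar

e = [b - (b0 + b1 * a) for a, b in zip(x, y)]       # residuals
h = [1 / n + (v - xbar) ** 2 / sxx for v in x]      # leverages
sse = sum(r * r for r in e)

# Studentized deleted residuals, formula (5.3):
# d_i* = e_i * sqrt((n - p - 1) / (SSE * (1 - h_ii) - e_i^2))
d = [r * sqrt((n - p - 1) / (sse * (1 - hv) - r * r)) for r, hv in zip(e, h)]
print(max(d))   # the last case has by far the largest studentized residual
```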
[Figure 5.4: Density histogram of studentized residuals for the data in Figure 5.1.]
For a numerical example we use the verbal-physical aggression data from Figure 5.3. The critical t value for df = 76 - 2 - 1 = 73 and α = 0.05 is t(0.05, 73) = 1.666. The density histogram for the 76 cases appears in Figure 5.4. The histogram shows that there is one case that is far out on the t distribution scale. This is the case with the very large number of physically aggressive acts in Figure 5.3. This boy's studentized residual is di* = 6.77; this value is clearly greater than the critical t. However, the figure also suggests that there are more cases with di* values greater than the critical one. We find two such cases. The first (order is of no importance) has values VA85 = 12, PAAP85 = 27, and d* = 1.85. This is the case in the upper left of the data cloud in Figure 5.3. The second has values VA85 = 23, PAAP85 = 32, and d* = 1.85. This is the case with the second highest PAAP85 score in the distribution. These two data points suggest that it is both the absolute magnitudes of scores on Y and the difference between the observed and the estimated scores that determine whether or not a case is a distance outlier in terms of Y.
Cook's Distance, Di

Cook's distance is a measure of the magnitude of the influence that case i has on all of the parameter estimates in a regression analysis. The measure is defined as

    Di = ei² hii / [ p · MSE · (1 - hii)² ],  with MSE = Σ(i=1 to n) ei² / (n - p),    (5.4)

where p is the number of parameter estimates, including the intercept parameter. Cook's distances, Di, have the following characteristics:

1. Although Di is not distributed as an F statistic, it usually is evaluated in regard to Fα with p and n - p degrees of freedom.

2. The following rules of thumb apply: when p(Di) < 0.10, case i is said to have little influence on the magnitude of parameter estimates; when p(Di) > 0.50, case i is said to have considerable influence on the fit of the regression model.

3. As the di*, the Di can be calculated from the residuals and the hii values.

Instead of giving an application example of Cook's distance, we illustrate the relationship between Cook's distance and the studentized residual. A comparison of Formulas (5.3) and (5.4) suggests that, while di* and Di use the same information, they process it in different ways. Specifically, the relationship between the two measures does not seem to be linear. This is illustrated in
Figure 5.5. Figure 5.5 displays the scatterplot of the studentized residuals and the Cook distances for the data in Figure 5.1. In addition to the data points, the figure shows the quadratic
regression line for the regression of the Cook distances onto the studentized residuals. Obviously, the fit is very good, and both measures identify the same cases as extremes.
[Figure 5.5: Scatterplot and quadratic curve for the regression of the Cook statistic on studentized residuals.]

Around the value of di* = 0 the data are denser and follow a quadratic pattern. The outlier is identified by both measures. Thus, we conclude that, while di* and Di will not always suggest the same conclusions, their relationship is strong and they tend to agree in outlier identification.
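A sketch of (5.4) on hypothetical data containing one high-leverage case (plain Python; MSE is taken as SSE/(n - p)):

```python
# Hypothetical (x, y) pairs; the case at x = 20 is a leverage point
# whose low y value pulls the slope down.
x = [1, 2, 3, 4, 5, 6, 7, 20]
y = [1.2, 2.1, 2.9, 4.2, 4.8, 6.1, 7.0, 2.0]
n, p = len(x), 2

xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((v - xbar) ** 2 for v in x)
b1 = sum((a - xbar) * (b - ybar) for a, b in zip(x, y)) / sxx
b0 = ybar - b1 * xbar

e = [b - (b0 + b1 * a) for a, b in zip(x, y)]   # residuals
h = [1 / n + (v - xbar) ** 2 / sxx for v in x]  # leverages
sse = sum(r * r for r in e)
mse = sse / (n - p)

# Cook's distance, formula (5.4): D_i = e_i^2 h_ii / (p * MSE * (1 - h_ii)^2)
D = [r * r * hv / (p * mse * (1 - hv) ** 2) for r, hv in zip(e, h)]
worst = x[D.index(max(D))]
print(worst)   # the leverage point dominates all parameter estimates
```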
Remedial Measures
Thus far, this chapter has presented methods for identifying specific problems with the data-model fit when employing a particular regression model. It is most important to note that any of these problems is defined only with respect to the data estimation methods and statistical evaluation methods employed. Using other methods, the problems may not surface or even exist. For example, the problem with the residuals not being much different than the raw scores in the two panels of Figure 6.1 could have been avoided by using curvilinear regression. Nevertheless, the question is what one can do to overcome or remedy problems with the data-model fit. Obviously, problems with this fit can lead to misestimations of parameters and their standard errors and thus to misrepresentations of variable relationships. There is no common cure for lack of data-model fit. Each problem must be approached by specific remedial measures. The following sections present remedial measures for specific problems with the model-data fit.
Scale Transformations
When data show a pattern that is nonlinear, as, for example, in Figures 6.1 and 8.1, researchers may wish to consider either of the following measures:

1. Application of curvilinear regression;

2. Data transformation.

Solution 1, the application of curvilinear regression, is typically approached either from an exploratory perspective or from a theory-guided perspective. The exploratory approach is where researchers try out a number of types of curves and select the one that yields the smallest sum of squared residuals, that is, the smallest value for the least squares criterion (see Section 2.2.2). While appealing in many respects, this approach is seen by many as too close to data fitting, that is, as an approach where functions are fit to data with no reference to theory. Theory-guided approaches to curvilinear regression start from a type of function for a shape that a curve can possibly take. Then, the data are fit to a selection of such functions. Only functions with interpretable parameters are selected, and each of these functions corresponds to a particular interpretation of substantive theory. The second measure, data transformation, is a widely used and widely discussed tool. It can serve to deal not only with problems of nonlinearity, but also with problems of unequal error variances, skewness of the distribution of error terms, and nonnormality. The best known group of procedures to deal with the first three of these problems are known as Box-Cox transformations (Box & Cox, 1964; Cox, 1958; Neter et al., 1996; Ryan, 1997). Box-Cox transformations use power transformations of Y into Y' of the kind

    Y' = Y^λ,
[Figure 5.6: Sample transformations of Y: sqrt(X), ln(X), 1/sqrt(X), and 1/X.]
where λ is a parameter that is determined specifically for a given data set. Examples of λ and the resulting transformed variables Y' appear in Table 5.1 (cf. Ryan, 1997). The effects of the last four of these nonlinear transformations¹ on a variable Y that ranges from 1 to 50 (in steps of 1) are illustrated in Figure 5.6.

Table 5.1: Examples of Power Transformations of the Criterion Variable, Y

Value of Parameter λ    Transformed Variable Y'
2                       Y' = Y²
1/2                     Y' = sqrt(Y)
0                       Y' = log Y (by definition)
-1/2                    Y' = 1/sqrt(Y)
-1                      Y' = 1/Y

¹It should be noted that these transformations do not vary monotonically depending on λ when Y' = log Y for λ = 0. Therefore, Box and Cox (1964) specified that Y' = (Y^λ - 1)/λ if λ ≠ 0, and Y' = log Y if λ = 0.
Explanations of these transformations follow below. A typical application of Box-Cox transformations aims at minimizing the residual sum of squares. There is computer software that performs the
search for the optimal transformation. However, in many instances, researchers do not need to search. Rather, there is a data problem that can be solved by a specific transformation. In addition,
some transformations help solve more than just one data problem. The following paragraphs describe examples.
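As a hedged sketch of such a search (the text does not prescribe an algorithm), one can evaluate the residual sum of squares of a simple regression over a grid of λ values, using the geometric-mean standardization that makes the sums of squares comparable across λ (cf. Neter et al., 1996). All names and data here are illustrative assumptions:

```python
import math

def fit_sse(x, y):
    """Residual sum of squares of a simple OLS regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b1 = sxy / sxx
    b0 = my - b1 * mx
    return sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))

def boxcox_grid(x, y, lambdas):
    """Grid search for the Box-Cox lambda that minimizes the residual sum of squares.

    Uses the geometric-mean standardized transform so that sums of squares
    are comparable across lambdas (cf. Neter et al., 1996). Requires y > 0.
    """
    n = len(y)
    gm = math.exp(sum(math.log(v) for v in y) / n)  # geometric mean of y
    best = None
    for lam in lambdas:
        if abs(lam) < 1e-12:
            w = [gm * math.log(v) for v in y]
        else:
            w = [(v ** lam - 1.0) / (lam * gm ** (lam - 1.0)) for v in y]
        sse = fit_sse(x, w)
        if best is None or sse < best[1]:
            best = (lam, sse)
    return best

# Hypothetical data: y is exactly x squared, so lambda = 1/2 linearizes it.
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [1, 4, 9, 16, 25, 36, 49, 64]
lam, sse = boxcox_grid(x, y, [i / 10 for i in range(-20, 21)])
print(lam)   # → 0.5
```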
The Square Root Transformation

Both across the categories of categorical predictors and for continuous predictors it can occur that means and standard deviations of the criterion variable are functionally related. This is the case for Poisson processes, that is, rare events assessed by counting, given certain assumptions like independence of events for different time intervals. The standard square root transformation is

Y' = √Y.

When there are measures 0 < Y < 10, the following transformation seems more suitable:

Y' = √(Y + 1/2).

Square root transformations stabilize, that is, render more homogeneous, variances and, in addition, normalize distributions.
The Logarithmic Transformation

When, for a continuous predictor or across the categories of a categorical predictor, standard deviations are proportional to means, logarithmic transformations may be considered. Specifically, one transforms Y into Y' by Y' = log Y, where log is the natural logarithm, that is, the logarithm with base e ≈ 2.71828.... When there are values y_i = 0 one adds to all scores a constant
A, where A is a small constant that is often set to 1 or 0.5. In other words, one uses the transformation

Y' = log(Y + A)

instead of a simple logarithmic transformation. The logarithmic transformation is among the most frequently employed transformations. Some of the scales we use every day, for example, the phon scale for acoustic intensity, are logarithmic scales. In psychology, GSR (galvanic skin resistance) scales are typically logarithmic also.
Trigonometric Transformation

Chiefly to reduce instability of variances when the dependent measures are proportions, one employs trigonometric transformations. The best known of these is the inverse sine transformation

Y' = arcsin √Y.    (5.5)

Often researchers multiply the term on the right-hand side of (5.5) by 2 and add a constant to Y when there are values Y = 0.
Reciprocal Transformation

When the squares of means are proportional to some unit of Y, one may consider a reciprocal transformation such as

Y' = 1/Y    or    Y' = 1/(Y + 1),

where the second of these equations is appropriate when there are values y = 0. The reciprocal transformation is often employed when the dependent measure is time, for example, response times or
problem solving times. Many more transformations have been discussed, in particular, transformations to correct lack of normality (cf. Kaskey, Koleman, Krishnaiah, & Steinberg, 1980; Ryan, 1997). It
is most important to realize that nonlinear transformations can change many variable characteristics. Specifically, nonlinear transformations not only affect mean and standard
deviations, but also the form of distributions (skewness and kurtosis) and additivity of effects. In addition, the power of tests on transformed variables may be affected (Games, 1983, 1984; Levine &
Dunlap, 1982, 1983). Therefore, routine application of transformations can have side effects beyond the desired cure of data problems. Researchers are advised to make sure data have the desired
characteristics after transformation without losing other, important properties.

5.2.2 Weighted Least Squares
This section describes one of the most efficient approaches to dealing with unequal variances of the error terms: Weighted Least Squares (WLS) (for the relationship of WLS to the above
transformations see, for instance, Dobson (1990, Chapter 8.7)). In Section 2.2.2 the solution for the ordinary least squares minimization problem, that is,

(y − Xb)'(y − Xb) → min,
was given as b = (X'X)⁻¹X'y, where the prime indicates transposition of a matrix. The main characteristic of this solution is that each case is given the same weight. Consider the weight matrix, W, that is, a matrix with a weight for each case. Then, the OLS solution can be equivalently rewritten as

b = (X'WX)⁻¹X'Wy,    (5.6)

where W = I, the identity matrix. If W is a diagonal matrix with unequal diagonal elements, (5.6) is the WLS solution for the least squares problem.
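For the two-parameter model y = b0 + b1·x with a diagonal W, formula (5.6) reduces to solving a 2 × 2 system. The sketch below is my own illustration, not code from the text; names and data are assumptions:

```python
def wls(x, y, w):
    """Weighted least squares for y = b0 + b1*x with diagonal weights w.

    Solves b = (X'WX)^(-1) X'W y for the design matrix [1, x]; with all
    weights equal, this reduces to the ordinary least squares solution.
    """
    sw = sum(w)
    swx = sum(wi * xi for wi, xi in zip(w, x))
    swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    swy = sum(wi * yi for wi, yi in zip(w, y))
    swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    det = sw * swxx - swx * swx  # determinant of the 2x2 matrix X'WX
    b0 = (swxx * swy - swx * swxy) / det
    b1 = (sw * swxy - swx * swy) / det
    return b0, b1

# Illustrative data; with unit weights the result equals OLS,
# approximately (b0, b1) = (0.15, 1.94) here.
x = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 3.9, 6.2, 7.8]
print(wls(x, y, [1.0, 1.0, 1.0, 1.0]))
```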
Selection of Weights
In most instances, weights cannot be straightforwardly derived from theory or earlier results. Therefore, researchers typically select weights such that they address their particular data problem.
For example, if the heteroscedasticity problem, that is, the problem with unequal error variances, is such that the variance of residuals increases with X, one often finds the recommendation to
consider the following weight:

w_ii = 1/x_i.

Accordingly, when the variance of residuals decreases with X, one can find the recommendation to consider the weight

w_ii = 1/(x_max − x_i),
where x_max is the largest value that X can possibly assume. The following example illustrates the effects one can expect from estimating regression parameters using (5.7) and WLS. We analyze a data set with error variance depending on X. For Figure 5.7 we analyzed data that describe pubertal developmental status, operationalized by the Tanner score, T83, and the number of aggressive acts against peers, PAAP83, in a sample of n = 106 boys and girls in 1983, all in early puberty. The two panels of Figure 5.7 show that there is at least one outlier in this data set. In addition, the left panel suggests that the size of residuals depends heavily on the predictor, T83. In the following paragraphs we reestimate regression parameters for these data using WLS. Before we do this, we recapitulate the OLS regression equation obtained for these data:

PAAP83 = 21.86 − 0.13 · T83 + Residual.

The slope parameter for this analysis had not reached statistical significance (t = −0.32; p = 0.747). The left panel of Figure 5.7 suggests that the error variance decreases as the values of T83 increase. Therefore, we define as the weight variable W = 1/(13 − T83), with 13 being the largest Tanner score in this sample and age group. Using this weight variable we perform WLS regression as
Figure 5.7: Residual plots for OLS (left) and WLS (right) analyses.
follows:

1. We multiply PAAP83 by W to obtain PAAP83w = PAAP83 · W.
2. We multiply T83 by W to obtain T83w = T83 · W.
3. We estimate the regression parameters.

For the present example we calculate

PAAP83w = 18.04 · W + 0.69 · T83w + Residual.

The F test for the predictor, T83w, now suggests that the slope parameter is greater than zero. Specifically, we obtain F1,103 = 2.19 and p = 0.03.
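The three steps above can be sketched as follows, with hypothetical data in place of the study variables. Note that multiplying the criterion, the predictor, and the constant by w_i and regressing through the origin is equivalent to WLS with weights w_i²:

```python
def weighted_variable_regression(x, y, w):
    """WLS via the three steps: (1) multiply the criterion by w, (2) multiply
    the constant and the predictor by w, (3) regress through the origin on
    the columns [w, w*x].

    Squared residuals of the transformed model are w_i**2 * (y_i - b0 - b1*x_i)**2,
    so this is equivalent to weighted least squares with weights w_i**2.
    """
    yw = [wi * yi for wi, yi in zip(w, y)]  # step 1: weighted criterion
    u = list(w)                             # step 2: weighted constant column
    v = [wi * xi for wi, xi in zip(w, x)]   # step 2: weighted predictor
    # Step 3: normal equations of a no-intercept regression on [u, v].
    suu = sum(ui * ui for ui in u)
    suv = sum(ui * vi for ui, vi in zip(u, v))
    svv = sum(vi * vi for vi in v)
    suy = sum(ui * yi for ui, yi in zip(u, yw))
    svy = sum(vi * yi for vi, yi in zip(v, yw))
    det = suu * svv - suv * suv
    b0 = (svv * suy - suv * svy) / det
    b1 = (suu * svy - suv * suy) / det
    return b0, b1

# Hypothetical data; with unit weights the recipe reproduces plain OLS,
# approximately (0.15, 1.94) here.
x = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 3.9, 6.2, 7.8]
print(weighted_variable_regression(x, y, [1.0, 1.0, 1.0, 1.0]))
```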
Table 5.2 presents the parameter estimates for both solutions. The two standard errors for the predictor parameters show the benefit from WLS most dramatically. The WLS standard error is smaller than the OLS standard error, both in absolute terms and relative to the magnitude of the parameter estimate. As a result, the relationship between physical pubertal development and number of aggressive acts against peers,
Table 5.2: OLS and WLS Parameter Estimates for Aggression Data

Parameter     OLS Estimate    OLS Std. Error    WLS Estimate    WLS Std. Error
Constant      21.86           -                 -               -
Weight        -               -                 18.040          2.170
T83 (T83w)    −0.13           -                 0.692           0.315
inconspicuous from OLS analysis, is now statistically significant. Figure 5.7 contains the residual plots for the OLS analyses and the present WLS solution. The first panel of Figure 5.7 shows the
residual plot for the OLS solution. It suggests that the variance of the residuals decreases as the predictor values, T83, increase. The right panel of Figure 5.7 shows the effects of the weighting
for the WLS solution. The effect is that the error variance, that is, the variation of residuals around zero, is more even for the WLS solution than for the OLS solution. The data points are coded
according to the value of the original predictor variable, T83. As a result, one can make out that the weighting did not change the rank ordering of data points. It only changed the scale units and
reduced the error variance.

Caveats
In a fashion similar to the caveats given concerning variable transformations, caveats concerning WLS seem in order. The first concerns recommendations that are not grounded in substantive theory.
Whenever weights for WLS are estimated from the data, WLS loses its desirable optimality characteristics (even though it may still be better than OLS). Relatively unproblematic may be the options to
use the scatterplot of residuals with Y for weight estimation or to use the residuals for weight estimation. As was obvious from the present example, WLS can, sometimes, be used to change the
significance of results. However, WLS is not an all-encompassing cure for lack of statistical significance. The specification of weights can be as arbitrary as the selection of a transformation procedure. Therefore, researchers are well advised to switch from OLS to WLS only if either there is a priori knowledge of weights, e.g., in the form of a variance-covariance matrix of estimated
regression coefficients, or a derivation of weights is performed on substantive or theoretical grounds. Researchers have to resist abusing such tools as variable transformation and weighted least
squares for arbitrary manipulation of data.
Chapter 6

Residual Analysis
Using methods of residual analysis one can determine whether

1. the function type employed to describe data reflects the relationships present in the data; for example, if researchers chose a straight line for description, curved relationships can be captured only in part, if at all;

2. there are cases that contradict¹ this type of relationship, that is, whether there are outliers; and

3. there are anomalies in the data; examples of such anomalies include standard deviations
that vary with some predictor (heteroscedasticity).

This chapter presents analysis of residuals, which can be defined as

e_i = y_i − ŷ_i,

where e_i is the residual for case i, y_i is the value observed for case i, and ŷ_i is the expected value for case i, estimated from the regression equation. Before providing a more formal introduction to residual analysis we show the benefits of this technique by providing graphical examples of cases (1), (2), and (3).

¹The terms "contradicting" and "outliers" are used here in a very broad sense. Later in this chapter we introduce more specific definitions of these terms.
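The definition e_i = y_i − ŷ_i can be sketched for a simple regression; the data and names below are my own illustration:

```python
def ols_residuals(x, y):
    """Fit y = b0 + b1*x by ordinary least squares and return e_i = y_i - yhat_i."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b1 = sxy / sxx
    b0 = my - b1 * mx
    return [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]

# With an intercept in the model, the residuals always sum to (numerically) zero.
res = ols_residuals([1.0, 2.0, 3.0, 4.0], [2.0, 4.1, 5.9, 8.0])
print(abs(sum(res)) < 1e-9)   # → True
```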
Figure 6.1: Raw data and residual plot for straight line regression of a V-shaped relationship. (Left panel: Number of Misses vs. Stress Level; right panel: residuals vs. Estimate.)
6.1 Illustrations of Residual Analysis
First, we illustrate how far a chosen regression function can be from validly describing data. We select linear regression as the sample regression function. Similar examples can be constructed for
curvilinear regression. Figure 6.1 gives an example (two panels). The example describes the number of misses made by professional dart players under a range of stress intensities. Stress ranged from
very low as in a pub situation, to intermediate as in a local competition, to very high as in the world championships. There was a total of eight stress levels, with 1 indicating the lowest level.
Levels 2 through 8 were realized in the experiment. The dependent variable was the number of misses, averaged over teams of three players in a total of n = 180 games. The left-hand panel of Figure
6.1 displays the scatterplot of the misses in n = 180 games. The distribution of averaged misses suggests a Vshaped function of misses, depending on stress level. In addition, the curve suggests
that, on average, higher stress causes more misses than lower stress. This is also indicated by the regression line, which has a positive slope. The regression function for the relationship between Number of Misses M and Stress Level S is

M = 3.99 + 0.08 · S + Residual.
The F value for the regression slope is F = 6.66. This value is, for df1 = 1 and df2 = 178, statistically significant, p = 0.011. The squared multiple R is smallish: R² = 0.036. In standard
application of regression analysis researchers may be tempted to content themselves with this result. They might conclude that there is a statistically significant linear relationship between Number
of Misses and Stress Level such that increases in stress cause increases in the number of misses. However, an inspection of the residuals suggests that the linear regression model failed to capture
the most important aspect of the relationship between Stress Level and Number of Misses, that is, the curvilinear aspect. The right-hand panel of Figure 6.1 displays the residual plot (predictor x
residuals) for the above regression equation. It plots Estimate against Size of Residuals. The comparison of the two panels in Figure 6.1 reveals two important characteristics of the present example:
1. The curvilinear characteristics of the raw data and the residuals are exactly the same. This does not come as a surprise because the regression model employed in the left panel is not able to
capture more than the linear part of the variable relationship. 2. The increase in misses that comes with an increase in stress does not appear in the residual plot. The reason for this is that the
linear regression model captured this part of the variable relationship. Thus, we can conclude that when a regression model captures all of the systematic part of a variable relationship the
residuals will not show any systematic variation. In other words, when a regression model captures all of the systematic part of a variable relationship, the residuals will vary completely at random.
This characteristic is illustrated in Figure 6.2. The left-hand panel of this figure displays two random variates that correlate to r = 0.87. The joint distribution was created as follows. A first
variable, NRAN1, was created using a standard normal random number generator for n = 100. A second variable, NRAN2, was created the same way. A third variable, NRAN3, was created using the following formula:

NRAN3 = NRAN1 + 0.6 · NRAN2.
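The construction of NRAN1, NRAN2, and NRAN3 can be sketched as follows (the seed and generator are my own choices; the text does not specify them). The check at the end illustrates the point of the example: the OLS residuals carry no linear trace of the predictor:

```python
import random

random.seed(1)  # my choice, for reproducibility
n = 100
nran1 = [random.gauss(0.0, 1.0) for _ in range(n)]
nran2 = [random.gauss(0.0, 1.0) for _ in range(n)]
nran3 = [a + 0.6 * b for a, b in zip(nran1, nran2)]

# Regress NRAN3 on NRAN1 and compute the residuals.
m1, m3 = sum(nran1) / n, sum(nran3) / n
b1 = sum((a - m1) * (c - m3) for a, c in zip(nran1, nran3)) / \
     sum((a - m1) ** 2 for a in nran1)
b0 = m3 - b1 * m1
res = [c - (b0 + b1 * a) for a, c in zip(nran1, nran3)]

# OLS residuals are, by construction, uncorrelated with the predictor:
cov = sum((a - m1) * e for a, e in zip(nran1, res)) / n
print(abs(cov) < 1e-10)   # → True
```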
Figure 6.2: Residual plot for linear relationship. (Left panel: joint distribution of NRAN1 and NRAN3; right panel: residuals.)
The left-hand panel of Figure 6.2 displays the joint frequency distribution of NRAN1 and NRAN3. Because of the random character of these two variables and because of the built-in linear relationship between them, we expect no systematic variation in the residuals. This is illustrated in the right-hand panel of Figure 6.2. This panel displays a bivariate normal distribution. It is as perfect as a random number generator can create for the (relatively small) sample size of n = 100.

In the second example we illustrate outliers. From the many definitions of outliers, we use here distance outliers. These are data points located unusually far from the mean of the dependent measure. Among the many reasons why there are outliers, the following two are most often discussed:
1. Measurement error. This is the most often considered reason for the existence of outliers. The measurement instrument may have indicated wrong values or may have been misread; data typists may have hit the wrong key; coders may have miscoded a response; or respondents may have crossed the wrong answer. If the number found in the data is theoretically possible (as, for instance, an IQ of 210), it may be hard, if not impossible, to identify a value as a mistake. If, however, a value lies beyond the limits of the scale used (as, for instance, the value 9 on a rating scale with a range from 1 to 5), it can be detected relatively easily.

2. Presence of unique processes. If an outlier displays an extreme value that, in theory, is possible, this value is often explained as caused by unique processes. Examples of such processes include
luck, extreme intelligence, cheating, and pathological processes. If any of these or similar phenomena are considered, researchers often feel the temptation to exclude outliers from further analysis.
The reason given for excluding cases is that they belong to some other population than the one under study. Statistical analysis of outliers can provide researchers with information about how (un)likely a given value is, given particular population characteristics.
The following data example illustrates the presence and effects of distance outliers. Finkelstein et al. (1994) analyzed the development of aggressive behavior during adolescence in a sample of n =
106 adolescent boys and girls with a 1983 average age of 12 years. The authors estimated a Tanner score and an Aggression score for each adolescent. The Tanner score, T, is a measure of physical
development. The higher the Tanner score, the more advanced is a child's physical development. The Aggression score measured frequency of Physical Aggression Against Peers, PAAP83. The higher the
PAAP83 score, the more frequent are an adolescent's aggressive acts against peers. In their analyses the authors attempted to predict aggression from physical pubertal development. The left-hand
panel of Figure 6.3 displays the scatterplot of Tanner scores and PAAP83 scores in the sample of 106 adolescents. The plot displays the data points and the regression line. The regression function is
PAAP83 = 21.86 − 0.13 · T + Residual.

The t value² for the slope coefficient is t = −0.324 and has a tail probability of p = 0.747. Thus, physical development does not allow researchers to predict frequency of physical aggression against peers. In the left-hand panel of Figure 6.3 stars below the regression line

²It should be emphasized that, in the present context, the t test and the F test for regression coefficients are equivalent. Selection of test is arbitrary.
Figure 6.3: Leverage outlier and linear relationship. (Both panels: aggression scores vs. Tanner Score.)
indicate aggression values that are smaller than estimated from the regression function (bottom half of the scatterplot). Stars above the regression line indicate aggression values greater than
estimated from the regression function (top half of the scatterplot). The largest residual was calculated for a boy with a PAAP83 value of 44 (highest in the sample) and a Tanner score of 11 (tied
for second highest in the sample). From the regression function, this boy was expected to have an aggression score of PAAP83 = 20.46. Thus, the residual is e = 23.54. The star for this boy appears in
the upper right corner of the plot. Assuming this boy may not only have an extremely large PAAP83 score but also may have leveraged the slope of the regression line to be less steeply decreasing (see
leverage outliers in Section 5.1), we reestimated the regression parameters under exclusion of this boy. The resulting function is

PAAP83 = 23.26 − 0.49 · T + Residual.

This function suggests a steeper decrease of aggression frequency with increasing Tanner score. Yet, it is still not statistically significant (t = −1.25; p = 0.216). The right panel of Figure 6.3 displays the data points and the regression line for the reduced data set. The data points in both panels of Figure 6.3 are presented under the assumption that each point carries the
Figure 6.4: Heteroscedasticity in a linear relationship. (Both panels: x-axis is Tanner Score.)
same weight. The following, third data example illustrates the use of residual analysis for detection of data anomalies. One data characteristic that tends to inflate standard errors of parameter
estimates is that error variances are unequal (heteroscedasticity). An example of unequal error variances is that the error variance depends on the predictor. The following example shows error
variances that increase monotonically with the level of the predictor. In this example, the regression slope is positive and the variance of the residual term increases with the expected value of the
response variable. Indeed, this is a very common feature of many empirical data sets. Therefore, one should always check one's data for presence of heteroscedasticity. The data describe the n = 69 boys and girls available for a third examination of Tanner scores (T) and Aggression in 1987 (PAAP87) from the Finkelstein et al. (1994) study. The left-hand panel of Figure 6.4 presents the scatterplot of the predictor, Tanner Score, and the criterion, Frequency of Aggressive Acts. The regression equation estimated for the data in Figure 6.4 is

PAAP87 = 13.43 + 0.30 · T + Residual.

The statistical relationship between PAAP87 and T is not significant (t = 0.94; p = 0.35).
The left panel in Figure 6.4 illustrates that the variation around the regression line, that is, the error variance, increases with the level of the predictor. For small Tanner scores we notice a
relatively small error variance, that is, the residual scatter is very close to the regression line. As Tanner scores increase, the error variance increases as well, in other words, scatter of the
residuals from the regression line increases. The residual plot for this regression analysis appears in the right-hand panel of Figure 6.4. Section 5.2 described measures for remedying problems of this type.
6.2 Residuals and Variable Relationships
The two panels of Figure 6.1 illustrate that any type of regression function can depict only a certain type of variable relationship. In Figure 6.1 the portion of variance that can be captured using
straight regression lines is minimal (3.6%), although it is statistically significant. The residual plot (predictor x residuals) looks almost like the original data plot. It suggests that there is a
substantial portion of systematic variation that can be explained using curvilinear regression. We make two attempts to capture the curvilinear portion of the variation of the Dart Throw data. The
first is to fit a quadratic polynomial. Results of this attempt appear in Figure 6.5. The curve in Figure 6.5 suggests a relatively good fit. However, there is still a systematic portion of the data
unexplained. The figure suggests that, through Stress Level 4, data points are about evenly distributed to both sides of the quadratic curve. Between Stress Levels 4 and 6, the majority of the data
points are located below the regression line. Beyond Stress Level 6, most of the data points are located above the regression line. Only for the highest stress levels are data points below the
regression line again. Now, one way to further improve the model is to include a third- or even a higher-degree polynomial, as will be explained in Section 7.2. Alternatively, one can look for a
transformation of the predictor variable that gives a good model fit. For example, the scatter of the points for this example bears some similarity to a segment of the sine (or possibly a cosine)
curve. Therefore, a sine or a cosine transformation of the predictor
Figure 6.5: Quadratic regression of number of misses onto stress level.
may be a sensible alternative to fitting a higher-degree polynomial. To capture this portion of the data also, we substituted a sine function for the quadratic curve. Specifically, we fit the function

Number of Misses = Constant + sin(Stress Level).

The following estimates resulted:

Number of Misses = 4.44 + 0.99 · sin(Stress Level) + Residual.

The fit for this function is as perfect as can be. The significance tests show the following results: F1,178 = 961.83, p < 0.01. The residual plot for this function appears in Figure 6.6. Figure 6.6 suggests that, across the entire range of
sin (Stress Level), residuals are about evenly distributed to both sides of the regression line. There is no systematic portion of the variation of the Number of Misses left to be detected (cf.
Figure 6.2, right panel). One word of caution before concluding this section. What we have performed here is often criticized as data fitting, that is, fitting curves to data regardless of theory or
earlier results. Indeed, this is what we did. However, it was our goal to illustrate (1) how to explain data and
Figure 6.6: Residual misses after sine transformation of the predictor, Stress Level.

(2) what criteria to use when discussing residual plots. In real-life data analysis researchers are well advised to use theory as a guide for selecting functions to fit to data.

The following paragraphs present three methods for analysis of the distribution of residuals:

1. Statistical analysis of the distribution of standardized residuals.
2. Comparison of calculated vs. expected residuals.
3. Normal probability plots.

Analysis of the Distribution of Standardized Residuals

In addition to asking whether a particular type of function best fits the data, one can analyze the distribution of residuals. If the systematic portion of the criterion variation was
covered by a regression model, there is only random variation left. As a result, residuals are expected to be normally distributed. Any deviation from a normal distribution may suggest
either the existence of variance that needs to be explained or the presence of outliers or both. In the following sections we describe methods for analyzing the distribution of residuals. Consider
the following regression equation:

y_i = b_0 + Σ_{j>0} b_j x_{ij} + e_i.

For the analysis of the distribution of residuals we ask whether the appropriate portion of residuals, e_i, is located at normally distributed distances from the expectancy of e_i, E(e_i) = 0. For instance, we ask whether

- 68% of the residuals fall between z = −1 and z = +1 of the standard normal distribution of the residuals;
- 90% fall between z = −1.64 and z = +1.64;
- 95% fall between z = −1.96 and z = +1.96;
- 99% fall between z = −2.58 and z = +2.58;

and so forth. To be able to answer this question, we need to standardize the residuals. The standardized residual, z_{e_i}, can be expressed as

z_{e_i} = e_i / s_e,    (6.1)

where

s_e = √( Σ_i e_i² / (n − 2) ).    (6.2)

Using these terms, determining whether residuals are normally distributed can be performed via the following three steps:

- Calculate for each case i the standardized residual, z_{e_i};
- Count the number of cases that lie within a priori specified boundaries;
- Determine whether the number of cases within the a priori specified boundaries deviates from expectation; this step can be performed using χ² analysis.
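The three steps can be sketched as follows. The residual values are hypothetical; the divisor n − 2 assumes a simple regression, and the band boundaries follow the text:

```python
import math

def standardized_residuals(residuals):
    """Standardize residuals as z_i = e_i / s_e, with s_e = sqrt(sum(e_i^2)/(n - 2)).

    The divisor n - 2 is the residual degrees of freedom of a simple
    regression; other models would use n minus the number of parameters.
    """
    n = len(residuals)
    s_e = math.sqrt(sum(e * e for e in residuals) / (n - 2))
    return [e / s_e for e in residuals]

def band_counts(z):
    """Count values with |z| < 1, 1 <= |z| < 1.96, and |z| >= 1.96."""
    inner = sum(1 for v in z if abs(v) < 1.0)
    outer = sum(1 for v in z if abs(v) >= 1.96)
    return inner, len(z) - inner - outer, outer

# Hypothetical residuals, not the study data:
z = standardized_residuals([-3.0, -1.5, -0.5, 0.2, 0.4, 1.1, 1.6, 1.7])
print(band_counts(z))   # → (7, 1, 0)
```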
Figure 6.7: Residual plot for regression of Aggressive Acts on Tanner score.

For the following example consider the n = 40 boys of the Finkelstein et al. (1994) study that were still available in 1987 for assessment of pubertal status, Tanner Stage (T87), and Frequency of Aggressive Acts against Peers (A87). Regressing A87 onto T87 yields the following parameter estimates:

A87 = 1.59 + 1.10 · T87 + Residual.

The relationship between these two variables is statistically not significant (t = 1.746; p = 0.09). The residual plot appears in Figure 6.7. The figure suggests that the size of residuals increases with the level of the predictor, Tanner Stage (cf. Figure 6.4). For the present purposes, however, we focus on the distribution of the residuals.
Table 6.1: Raw Scores and Residuals for Aggression Study

T87   A87   Est.    Res.     Res²     Std. Res.
14    8     16.97   −8.97    80.49    −1.68
14    8     16.97   −8.97    80.49    −1.68
14    9     16.97   −7.97    63.54    −1.49
14    10    16.97   −6.97    48.60    −1.31
13    9     15.87   −6.87    47.24    −1.29
14    11    16.97   −5.97    35.66    −1.12
14    12    16.97   −4.97    24.72    −0.93
11    9     13.68   −4.68    21.86    −0.88
14    13    16.97   −3.97    15.77    −0.74
13    12    15.87   −3.87    15.00    −0.73
10    9     12.58   −3.58    12.79    −0.67
14    14    16.97   −2.97    8.83     −0.56
14    14    16.97   −2.97    8.83     −0.56
14    14    16.97   −2.97    8.83     −0.56
14    15    16.97   −1.97    3.89     −0.37
14    15    16.97   −1.97    3.89     −0.37
10    11    12.58   −1.58    2.49     −0.30
14    16    16.97   −0.97    0.94     −0.18
14    16    16.97   −0.97    0.94     −0.18
13    15    15.87   −0.87    0.76     −0.16
9     11    11.48   −0.48    0.23     −0.09
14    17    16.97   0.03     0.00     0.01
12    15    14.77   0.23     0.05     0.04
14    18    16.97   1.03     1.06     0.19
14    18    16.97   1.03     1.06     0.19
14    18    16.97   1.03     1.06     0.19
14    18    16.97   1.03     1.06     0.19
14    19    16.97   2.03     4.11     0.38
13    18    15.87   2.13     4.52     0.40
13    18    15.87   2.13     4.52     0.40
13    20    15.87   4.13     17.03    0.77
14    23    16.97   6.03     36.34    1.13
14    23    16.97   6.03     36.34    1.13
14    23    16.97   6.03     36.34    1.13
14    24    16.97   7.03     49.40    1.32
10    20    12.58   7.42     55.10    1.39
14    25    16.97   8.03     64.46    1.50
14    26    16.97   9.03     81.51    1.69
14    27    16.97   10.03    100.57   1.88
13    26    15.87   10.13    102.56   1.90
Table 6.1 contains the Tanner scores, the aggression frequencies, the estimates from the above regression equation, the residuals and their squares, and the standardized residuals, calculated using
(6.1) and (6.2). The n = 40 cases in Table 6.1 are rank ordered according to the size of their standardized residual, beginning with the largest negative residual. The counts are as follows:

- There are 25 z values with |z| < 1;
- There are 15 z values with 1 ≤ |z| < 1.96;
- There are no z values with |z| ≥ 1.96.
To determine whether this distribution fits what one would expect for 40 normally distributed values, we apply the Pearson χ² test. The arrays of observed and expected frequencies appear in Table 6.2. The χ² for the frequencies in Table 6.2 is χ² = 4.61 (df = 2; p = 0.10). This value suggests that the observed distribution of standardized residuals does not deviate significantly from the expected distribution. We therefore conclude that the linear regression model was appropriately applied to these data. Another motivation for standardization of raw residuals comes from the fact that
raw residuals do not have standard variance. Lack of constant variance is, in itself, not a problem. However, comparisons are easier when scores are expressed in the same units.
Table 6.2: Observed and Expected Frequencies of Residual z Values

Frequencies    |z| < 1    1 < |z| < 1.96    1.96 < |z|
Observed       25         15                0
Expected       27.2       10.4              2.4
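The χ² value of 4.61 can be reproduced directly from the frequencies in Table 6.2; this small check is my own illustration:

```python
def pearson_chi2(observed, expected):
    """Pearson chi-square statistic: sum over cells of (O - E)^2 / E."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Observed and expected band frequencies from Table 6.2:
chi2 = pearson_chi2([25, 15, 0], [27.2, 10.4, 2.4])
print(round(chi2, 2))   # → 4.61
```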
As an alternative to standardizing, researchers often studentize raw residuals, that is, standardize with respect to the t distribution (for examples see, for instance, Neter et al., 1996).
Studentization is often preferred when sample sizes are relatively small. However, conclusions drawn from inspection of standardized residuals and studentized residuals rarely differ. Therefore,
studentization will not be presented here in more detail.
Comparison of Calculated and Expected Residuals

A more detailed analysis of the distribution of residuals can be performed by comparing the calculated and the expected residuals. The expected residuals are estimated under the assumption that residuals are normally distributed. The two arrays of calculated and expected residuals can be compared by correlating them, by employing an F test of lack of fit (Neter et al., 1996), and by visual inspection (see below). Expected residuals can be estimated as

E(e_i) = √MSE · z[(i − 0.375)/(n + 0.25)],    (6.3)

where z(·) denotes the quantile of the standard normal distribution that corresponds to the given area.
To estimate the expected residuals, the calculated residuals must be rank ordered in ascending order. Beginning with the smallest residual, i = 1, the E(e_i) can be determined. For the sake of efficiency notice that the number of expected residuals that needs to be calculated will never exceed n/2. The reason is that, for even sample sizes n, only the first half of the E(e_i) need to be calculated. The second half mirrors the first around 0. For odd sample sizes n one can set the middle E(e_i), where i = n/2 + 0.5, to E(e_i) = 0 and proceed with the remaining residuals as with even sample sizes. To evaluate the degree to which the expected residuals parallel the calculated ones, one can apply the standard Pearson correlation. If the correlation is r ≥ 0.9, one can assume that deviations from normality are not too damaging.
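A common closed form for these expected residuals (cf. Neter et al., 1996) is E(e_i) = √MSE · z((i − 0.375)/(n + 0.25)). The sketch below uses the standard library's normal quantile function, with n = 40 and MSE ≈ 28.5 as in the aggression example; values computed this way differ slightly from tabulated ones whenever areas are rounded to two decimals before z is looked up:

```python
import math
from statistics import NormalDist

def expected_residuals(n, mse):
    """Expected ordered residuals under normality:
    E(e_i) = sqrt(MSE) * z((i - 0.375)/(n + 0.25)), i = 1..n,
    where z(.) is the standard normal quantile function.
    """
    s = math.sqrt(mse)
    nd = NormalDist()
    return [s * nd.inv_cdf((i - 0.375) / (n + 0.25)) for i in range(1, n + 1)]

# The array mirrors around zero: E(e_i) = -E(e_{n+1-i}), so for even n
# only the first half really needs to be computed.
exp_res = expected_residuals(40, 28.5)
print(abs(exp_res[0] + exp_res[-1]) < 1e-9)   # → True
```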
Table 6.3: Comparing Calculated and Expected Residuals for Data from Aggression Study

Res.     Exp. Res.   i     Area    z
−8.97    −10.94      1     0.02    −2.05
−8.97    −9.34       2     0.04    −1.75
−7.97    −7.90       3     0.07    −1.48
−6.97    −7.15       4     0.09    −1.34
−6.87    −6.57       5     0.11    −1.23
−5.97    −5.77       6     0.14    −1.08
−4.97    −5.28       7     0.16    −0.99
−4.68    −4.70       8     0.19    −0.88
−3.97    −4.32       9     0.21    −0.81
−3.87    −3.79       10    0.24    −0.71
−3.58    −3.42       11    0.26    −0.64
−2.97    −2.94       12    0.29    −0.55
−2.97    −2.67       13    0.31    −0.50
−2.97    −2.19       14    0.34    −0.41
−1.97    −1.92       15    0.36    −0.36
−1.97    −1.49       16    0.39    −0.28
−1.58    −1.23       17    0.41    −0.23
−0.97    −0.80       18    0.44    −0.15
−0.97    −0.53       19    0.46    −0.10
−0.87    −0.16       20    0.49    −0.03
−0.48    0.16        21    0.51    0.03
0.03     0.53        22    0.54    0.10
0.23     0.80        23    0.56    0.15
1.03     1.23        24    0.59    0.23
1.03     1.49        25    0.61    0.28
1.03     1.92        26    0.64    0.36
1.03     2.19        27    0.66    0.41
2.03     2.67        28    0.69    0.50
2.13     2.94        29    0.71    0.55
2.13     3.42        30    0.74    0.64
4.13     3.79        31    0.76    0.71
6.03     4.32        32    0.79    0.81
6.03     4.70        33    0.81    0.88
6.03     5.28        34    0.84    0.99
7.03     5.77        35    0.86    1.08
7.42     6.57        36    0.89    1.23
8.03     7.15        37    0.91    1.34
9.03     7.90        38    0.93    1.48
10.03    9.34        39    0.96    1.75
10.13    10.94       40    0.98    2.05
Table 6.3 displays the results of the calculations for the expected residuals. The residuals are taken from Table 6.1; the last four columns were created to illustrate the calculations. The rows of Table 6.3 are sorted in ascending order, with residual size as the only key. The counter i of Formula (6.3) appears in the second of the four right-hand columns. The third of these columns lists the values of the expression in parentheses in Formula (6.3); these values are areas under the normal curve. The last of these columns lists the z values that correspond to these areas. The values for the expected residuals, E(e_i), appear in the first of these four columns. Correlating the calculated residuals from Table 6.3 (termed "Res." in the header of the table) with the expected residuals yields a Pearson correlation of r = 0.989.³ This seems high enough to stay with the assumption of parallel calculated and expected residuals.
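The correlation check itself takes only a few lines of code; a plain helper (an illustrative sketch, not from the book):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)
```

Applied to the "Res." and "E(e_i)" columns of Table 6.3, a value of r ≥ 0.9 would support the normality assumption.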
Normal Probability Plots

Normal probability plots provide a graphical representation of how closely the expected and the calculated residuals track each other. Available in many graphics packages (for example, SYSTAT and SPSS for Windows), a normal probability plot is a scatterplot of expected against calculated residuals. The points in this scatterplot follow a straight diagonal line if the calculated residuals are perfectly normal.

³For the present purposes we focus on the size of the correlation rather than on significance testing. Raw correlation coefficients can be squared and then indicate the portion of variance shared in common. This information seems more important here than statistical significance.

Figure 6.8: Normal probability plot for data from aggression study.

Deviations from normality result in deviations from this straight line. The plot allows researchers to identify exactly where in the
distribution of the residuals the deviations from normality are greatest. For the following example we use the same data as before. Specifically, we plot the expected residuals against the calculated ones, both from Table 6.3. The resulting normal probability plot for the Tanner Stage - Aggression residuals appears in Figure 6.8. The plot suggests that the coordinates of the expected and calculated residuals very nearly follow a straight line. This line is inserted in the figure as the regression line for the regression of the calculated residuals on the expected residuals.
Chapter 7

POLYNOMIAL REGRESSION

7.1 Basics
In many instances, relationships between predictors and criteria are known to be nonlinear. For instance, there is a nonlinear relationship between pitch and audibility, between activation level and performance (Yerkes & Dodson, 1908), between the amount of thrill and experienced pleasure, and between the speed driven and the fine handed out. Figure 7.1 depicts the Yerkes-Dodson Law. The law states that, for a given task difficulty, medium activation levels are most conducive to performance. The law suggests that performance increases with activation level; however, as soon as activation increases past a medium level, performance decreases again. Using one straight regression line, researchers will not be able to validly model this law. The regression line will be horizontal, like the thin line in the figure, and will not provide any information about the (strong) relationship between the predictor, Activation, and the criterion, Performance. There are two solutions to this problem. One is called nonlinear regression. This approach involves fitting a nonlinear function f(x; β₀, β₁) to the data, for instance, f(x; β₀, β₁) = 1 − exp(−β₁(x − β₀)). Because of the nonlinearity of these functions in the parameters, the procedures for parameter estimation, hypothesis testing, and the construction of
Figure 7.1: Yerkes-Dodson Law of the dependence of performance on activation.
confidence intervals do not carry over in a simple way to this situation. Parameter estimates are no longer given explicitly but must be determined iteratively, some distributional results hold only asymptotically, and so on. However, polynomial regression allows one to fit to the data all nonlinear functions in the predictor that can be expressed as polynomials, while retaining all that has been said about linear regression so far. Polynomials can be described, in general, as the product sum of parameters and x values raised to some power,

y = b₀ + b₁x + b₂x² + … + b_J x^J = Σ_{j=0}^{J} b_j x^j,

where the b_j are the polynomial parameters, and the vectors x contain the x values, termed polynomial coefficients. The reason for using polynomial coefficients is that, while we usually consider the response Y as a function
Figure 7.2: Forms of polynomials up to the fourth degree (linear, quadratic, cubic, quartic).
of the predictor X, when parameter estimation is concerned we consider the regression line as a function of the parameters. In this respect, the previous polynomial has the same form as a multiple linear regression equation. While polynomial regression does not offer all the freedom of fitting general nonlinear models, the approach is flexible enough to model a wide range of nonlinear relations without further technical complexities. The highest power to which an x value is raised determines the degree of the polynomial, also known as the order of the polynomial. For example, if the highest power to which an x value is raised is 4, the polynomial is called a fourth-degree or fourth-order polynomial. If the highest power is J, the polynomial is a Jth-order polynomial. The form of a polynomial depends on its degree. To give an impression of possible shapes of polynomials of degrees two, three, and four, consider Figure 7.2. First- and second-order polynomials do not have inflection points; that is, they do not change direction. "Changing direction" means that the curve changes from a curve to the right to a curve to the left, and vice versa. A look at Figure 7.2 suggests that neither the first-order polynomial nor the second-order polynomial changes direction in this sense. In contrast, third- and higher-order polynomials do have inflection points.
For example, the third-order polynomial changes direction at X = 4 and Y = 0. The size of the parameter of the third-order polynomial indicates how tight the curves of the polynomial are; large parameters correspond to tighter curves. Positive parameters indicate that the last "arm" of the polynomial goes upward. This is the case for all polynomials in Figure 7.2. Negative parameters indicate that the last "arm" goes downward. The fourth-order polynomial has two inflection points; its curve in Figure 7.2 changes direction at X = 2.8 and X = 5.2, both at Y = 0. The magnitude of the parameter of fourth-order polynomials indicates, as for the other polynomials of second and higher order, how tight the curves are. The sign of the parameter indicates the direction of the last arm, with positive signs corresponding to an upward direction. Consider again the example of the Yerkes-Dodson Law. Theory dictates that the relationship between Activation and Performance is inversely U-shaped. Figure 7.2 suggests that with only six coordinates one can create a reasonable rendering of what the Yerkes-Dodson Law predicts: At the extremes, Performance is weak; closer to the middle of the Activation continuum, Performance increases. The simplest polynomial with these characteristics is one of second degree, that is, y = b₀ + b₁x + b₂x².
To estimate the unknown parameters using OLS, we set up the design matrix

        | 1   x₁   x₁²  |
    X = | 1   x₂   x₂²  |                          (7.1)
        | ⋮   ⋮    ⋮   |
        | 1   x_n  x_n² |

and perform a multiple regression analysis. From R² it can be seen how well the model fits, and from the scatterplot of Y against X, including the fitted second-degree polynomial, it can be seen whether the selected polynomial gives a good description of the relation.
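This procedure is easy to carry out numerically. A minimal sketch (with illustrative data only, not the values from the study below):

```python
import numpy as np

def fit_polynomial(x, y, degree):
    """OLS fit of a polynomial of the given degree: builds the design
    matrix with columns 1, x, x^2, ..., x^degree and solves the least
    squares problem. Returns the coefficients b0, ..., b_degree and R^2."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    X = np.vander(x, degree + 1, increasing=True)
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    yhat = X @ b
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return b, 1.0 - ss_res / ss_tot
```

Raising the degree only appends further columns to the design matrix; everything else is ordinary multiple regression.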
Figure 7.3: Graphical representation of the data from Table 7.1.

In the following data example we investigate whether the performance measures, obtained at six equidistant levels of Activation in a Trivial Pursuit task, can be fit using a quadratic polynomial. Figure 7.3 displays the empirical values, and Table 7.1 presents the raw data. One look at Figure 7.3 makes clear that a linear regression model is not appropriate. Nevertheless, the results of this analysis are given for completeness in Table 7.2. The coefficient of determination is merely R² = 0.131, and the slope coefficient is far from significant. This means that a simple linear regression model explains virtually nothing.
Table 7.1: Raw Data for Test of Yerkes-Dodson Law Using Six Activation Levels Activation Level
3 6 6.5 5.4 3
Table 7.2: Results of Simple Linear Regression Analysis of Activation Data

      Coefficient   Std. Error   t Value   p Value
b0       3.047        1.821       1.673     0.170
b1       0.363        0.468       0.776     0.481
On the other hand, a standard OLS regression of Performance on a second-order polynomial in Activation level suggests a good model fit. Specifically, we calculate ŷ = −2.52 + 4.54x − 0.60x², where y denotes Performance, x the Activation level, and x² the squared Activation level. The results of the regression analysis are given in Table 7.3. The coefficient of determination is R² = 0.89. The one-sided t test for the quadratic term in the model is significant at the 5% level (p = 0.0105). We performed the one-sided test because we hypothesized an inversely U-shaped relation and therefore expected the regression coefficient for the squared Activation level to be negative, H₁: β₂ < 0. This test can be interpreted as a test of whether allowing for curvature in the regression equation improves the model fit above what would have been obtained using only a simple linear regression model. Note that the test for the linear term, b₁, has now become statistically significant. When selecting the polynomial to fit to a given data set, researchers need to attend to two criteria. The first is that the polynomial should, whenever possible, reflect theory. In other words, the specification of the shape of the polynomial and the number of inflection points should be based on theoretical
Table 7.3: Quadratic Polynomial Regression Analysis of Activation Data

      Coefficient   Std. Error   t Value   p Value
b0      -2.520        1.469      -1.715     0.185
b1       4.538        0.961       4.721     0.018
b2      -0.596        0.134      -4.437     0.021
Table 7.4: Cubic Polynomial Regression Analysis of Activation Data

      Coefficient   Std. Error   t Value   p Value
b0       0.700        2.505       0.279     0.806
b1       0.487        2.855       0.171     0.880
b2       0.745        0.914       0.816     0.500
b3      -0.128        0.086      -1.480     0.277
considerations or earlier results.¹

¹This does not mean that polynomial approximation is not applicable in exploratory research. However, testing whether one particular polynomial provides a satisfactory rendering of a given data set presupposes a theory-guided selection of the polynomial.

The second criterion concerns the order of the polynomial. Scientific parsimony requires that the polynomial be of the lowest possible order. There is, in addition, a formal constraint on the order of the polynomial to be fit to empirical data. This order cannot be greater than t − 1, where t is the number of different values of the predictor. In the last example we had six different predictor values, so we could have fitted a fifth-order polynomial. As the scatterplot of Performance and Activation with the regression curve is quite acceptable, we do not expect to improve the model fit by using a third- or even fourth-degree polynomial. Nevertheless, we fit a third-degree polynomial to see whether any improvement can be achieved; that is, we use the model ŷ = b₀ + b₁x + b₂x² + b₃x³. We simply append a column with the Activation level raised to the third power to the design matrix X and perform a multiple linear regression. It is common practice that if a kth-order term is included in the model, all lower-order terms should be included as well and should stay in the model regardless of whether the t tests for the corresponding coefficients are significant. Although there may be exceptions to this rule, we do not comment on it further; for a discussion of this point, see McCullagh and Nelder (1989, p. 69). The results are given in Table 7.4. The coefficient of determination is now R² = 0.945. It has increased
6% over the model without the third-order term. Table 7.4 shows some very interesting facts. First of all, while the increase in R² suggests that a third-order term may be valuable for the model, the t test of its coefficient is far from significant. As is well known from multiple regression analysis, R² always increases when predictors are added to the equation. What is perhaps the most striking feature of Table 7.4 is that the t tests for the quadratic and for the linear coefficient are now also far from significant. This contradicts the earlier finding that a quadratic term considerably improves model fit. The reason is that the estimated standard errors for these terms have dramatically increased, due to the high intercorrelations (or, more generally, the almost complete linear dependency) among the X values of the first-, second-, and third-order terms in the regression equation. The only reliable test is the test for the highest-order term. As
this test is not significant we conclude that a second-degree polynomial is adequate. For social science data it often suffices to fit a polynomial of second or third degree as there is usually
considerable variation in the data that does not allow one to determine the functional relation between Y and X more precisely. As in the example this can be done by fitting two or three different
regression models of various degrees and then using the t test to find the highest order term for which the model is adequate. But sometimes the situation is not so simple, that is, higher order
terms are needed to obtain an adequate model fit. In these cases the procedure suggested so far would be very laborious and time consuming and the fitting of many different regression models will
often lead very quickly to confusion concerning the results and their interpretation. In these cases orthogonal polynomials are a good alternative.
7.2 Orthogonal Polynomials
Orthogonal polynomials can be considered a technical device to overcome the problems mentioned in the last paragraph. As before, regression using orthogonal polynomials fits a polynomial regression curve to the data and obtains exactly the same R² as the polynomial regression approach of the last section. We will illustrate this using the example from the last section. As should be recalled, the main reason for
Table 7.5: Polynomial Coefficients for First-, Second-, Third-, and Fourth-Order Polynomials for Six Values of X

Polynomials (Order)
First   Second   Third   Fourth
 -5        5       -5        1
 -3       -1        7       -3
 -1       -4        4        2
  1       -4       -4        2
  3       -1       -7       -3
  5        5        5        1
the unreliability of the t tests other than that for the highest order term in the regression model is the high degree of linear dependency among the predictors. This phenomenon is also known as
multicollinearity (see Chapter 8). In the case where the predictor is random, this is equivalent to saying that the predictors are highly intercorrelated, thus containing nearly the same information.
Before going into more technical details we will analyze the foregoing example using orthogonal polynomials. If the values of the predictor are equally spaced and the number of observations belonging
to each predictor value is the same, the values of the orthogonal polynomials can simply be read off of a table. In our example the predictor values are equally spaced and to each predictor value
belongs a single observation. With six different predictor values the orthogonal polynomials up to the fourth degree are given in Table 7.5. Textbooks of analysis of variance contain tables with
polynomial coefficients for equally spaced predictors that typically cover polynomials up to fifth order and up to 10 different values of X (Fisher & Yates, 1963; Kirk, 1995). Instead of using the
design matrix X with the original predictor values in the second column (recall the column of ones in the design matrix) we replace the X values by the values of the column labeled "First" in Table
7.5. Likewise the squared predictor values in the third column of the design matrix are replaced by the column labeled "Second" in Table 7.5, and so on. This may seem quite surprising at first sight
since the values in the table obviously have nothing to do with the observed Activation levels. That this is nevertheless possible is due to the fact that the predictor values are equally spaced. Note that the values of the column labeled "First" in Table 7.5 are equally spaced
as well. Thus, these values can be considered a linear transformation of the original Activation levels. Recall that linear transformations will possibly change the magnitude of the corresponding
regression coefficient, but its significance remains unchanged. While the second-order column of Table 7.5 is not a linear transformation of the squared Activation levels, the second-degree polynomial regression model is equivalent to a model including a column of ones and the first- and second-order columns of Table 7.5. This carries over to higher-order polynomial models. To be more specific, the design matrix that is equivalent to the third-degree polynomial model of the last section is
        | 1  -5   5  -5 |
        | 1  -3  -1   7 |
    X̃ = | 1  -1  -4   4 |
        | 1   1  -4  -4 |
        | 1   3  -1  -7 |
        | 1   5   5   5 |
The tilde above the X indicates that it is different from the design matrix X used earlier. Now, we perform a multiple linear regression using this design matrix and obtain the results given in Table 7.6. First we note that the coefficient of determination is, as before, R² = 0.945. In addition, the t value for the third-order term, and therefore its p value, have not changed. This suggests that we are actually doing the
Table 7.6: Cubic Regression Analysis of Activation Data Using Orthogonal Polynomials

      Coefficient   Std. Error   t Value   p Value
b0       4.317        0.284      15.217     0.004
b1       0.181        0.083       2.185     0.161
b2      -0.398        0.076      -5.244     0.035
b3      -0.077        0.052      -1.480     0.277
same test as before, with the conclusion that this term adds nothing to the model. What has changed is the significance test of the second-order term: this test now suggests, as expected, that the term contributes significantly to the model fit. In addition, the linear term is also not significant. We have therefore reached the same conclusions as before, but we have fitted only a single regression model rather than three different models (a simple linear regression and a second- and a third-order polynomial regression model). Of course, after the appropriate model has been selected it is worthwhile to fit this model to obtain, for example, the corresponding R². This can be done by dropping the third-order column from X̃. From this analysis it can be seen that the use of orthogonal polynomials in regression analysis can considerably simplify the analysis by allowing all important conclusions to be drawn from a single analysis. This is particularly useful when the degree of the polynomial needed gets higher.
Before doing another analysis using orthogonal polynomials in which all the restrictions on the predictor values are abandoned (for example, the predictor values need no longer be equally spaced), we give an explanation as to why this simplification is achieved when using orthogonal polynomials. Recall from Appendix A that two vectors x₁ and x₂ are said to be orthogonal if their inner product x₁′x₂ is zero. As the name already suggests, this is the case for orthogonal polynomials. Indeed, every polynomial column in Table 7.5 is orthogonal to every other polynomial column in the table; this includes the column of ones usually added as the first column of the design matrix, and it can be checked by simply calculating the inner product of any two polynomial columns in Table 7.5. These columns enter into the design matrix used in multiple linear regression. Therefore the matrix X′X is considerably simplified, as it contains as its elements the inner products of all pairs of column vectors of the design matrix. In this case all off-diagonal elements of X′X are zero, and therefore the inverse of X′X is readily found when calculating the OLS estimators of the parameters.
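The orthogonality of the columns in Table 7.5, and hence the diagonality of X′X, is easy to verify numerically:

```python
import numpy as np

# Constant column and the first- through third-order orthogonal
# polynomial coefficients for six equally spaced X values (Table 7.5).
ones   = np.ones(6)
first  = np.array([-5, -3, -1,  1,  3,  5], dtype=float)
second = np.array([ 5, -1, -4, -4, -1,  5], dtype=float)
third  = np.array([-5,  7,  4, -4, -7,  5], dtype=float)

X = np.column_stack([ones, first, second, third])
XtX = X.T @ X   # each off-diagonal entry is the inner product of two
                # distinct columns, and every one of these is zero
```

Because X′X is diagonal, its inverse is obtained by simply inverting the diagonal entries, which is what makes the OLS estimator so easy to compute here.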
Recall that

b = (X′X)⁻¹X′y.

That OLS estimates can be easily calculated is important from a computational viewpoint. But there is another important application of the inverse of X′X. As outlined in Chapter 3 on multiple regression, the parameter estimates have a multivariate normal distribution with covariance matrix

V(b) = σ²(X′X)⁻¹.
As this matrix is diagonal, all covariances between two different elements of the b vector are zero; hence they are statistically independent. Therefore, when we add further columns with higher-order terms to the design matrix, the parameter estimates obtained so far will not change as long as the added columns are orthogonal to all columns already in the design matrix. This can be checked by dropping the fourth column of X̃ and fitting a multiple regression model with this new design matrix: the coefficients for the intercept and for the linear and second-order terms will not change. The general strategy when using orthogonal polynomials in regression analysis is therefore to fit a polynomial with a degree that is expected to describe the data a little too well and then look at the t values of the regression coefficients to see which degree is sufficient to describe the data. This is outlined in the following section.
7.3 Example of Non-Equidistant Predictors
Again consider the Yerkes-Dodson Law. While in the last example we had only six observations and the predictor variable was assumed to be under the control of the experimenter, we now have data available from 100 persons without control of the predictor X (Activation); that is, we observe 100 random values of the predictor. Of course these values are not equally spaced, and we may observe certain predictor values more often than others. The data are plotted in Figure 7.4. The plot is again inversely U-shaped, but between the Activation levels of 2 and 6 it suggests that Performance is nearly constant. If we fit a second-degree polynomial to the data, using orthogonal polynomials or the approach of the last section, and include the fitted values in the scatterplot, we see that a second-degree polynomial is obviously not a good choice for modeling these data. This can be seen from Figure 7.5.
Figure 7.4: Sample of 100 Activation and Performance pairs of scores, with both the Activation and Performance variables being random.

Figure 7.5: Quadratic polynomial for data in Figure 7.4.
Table 7.7: p Values for Polynomial Terms for Data in Figure 7.4

Coefficient   p Value
b0             0.000
b1             0.000
b2             0.000
b3             0.000
b4             0.000
b5             0.000
b6             0.000
b7             0.355
b8             0.269
For these data to be adequately described, a higher-degree polynomial is necessary. Assuming we have a computational device for obtaining orthogonal polynomials with these data, we decide to fit a model including all terms up to the eighth degree. The result of the analysis is given in Table 7.7. Because the actual values of the regression coefficients are not of interest, only the p values are given. From Table 7.7 it can be seen that a sixth-order polynomial should be selected. Figure 7.6 shows the fitted values within the scatterplot of Performance and Activation level. As the plot suggests, the model fit is excellent. The coefficient of determination using a sixth-order polynomial is R² = 0.986. If we had not used orthogonal polynomials, it would have been considerable work to find which polynomial would have yielded an adequate fit. What was left open thus far is the problem of obtaining the coefficients of the orthogonal polynomials in the general case. While for a moderate number of predictor values this could be done by hand (see, for example, Kirk, 1995, p. 761), it is not practical in the current example. For 100 observation points this would have required us to calculate polynomial coefficients to the eighth degree, that is, about 800 different values. Of course this work should be done using a computer. In S-Plus such a function is already implemented. If the statistical software includes a matrix language, like SAS-IML, one can obtain the polynomial coefficients by implementing the Gram-Schmidt orthogonalization algorithm.

Figure 7.6: Sixth-order polynomial for data in Figure 7.4.

A description of the algorithm can usually be found in elementary textbooks of linear algebra. One starts with a design matrix, X,
for instance, Formula (7.1). That is, the design matrix contains a column of ones as the first column, the original predictor values in the second column, the squared values in the third column, and so on. The Gram-Schmidt algorithm takes the second column vector, that is, the values of the predictor, and orthogonalizes it to the first. This is done by simply centering the predictor values. Let 1 denote a column vector of ones. The inner product of 1 and the centered predictor is

1′(x − x̄1) = Σ_{i=1}^{n} (x_i − x̄) = 0.

After the second column vector is made orthogonal to the first, the third column of the design matrix is orthogonalized to both columns preceding it, that is, to the first column and the just-obtained second column vector. This is a bit more complex than just centering the third column, but the procedure remains straightforward.
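A sketch of this Gram-Schmidt construction (an illustrative implementation, not the S-Plus or SAS-IML routine):

```python
import numpy as np

def orthogonal_poly_columns(x, degree):
    """Gram-Schmidt orthogonalization of the polynomial design matrix.
    Starts from the columns 1, x, x^2, ..., x^degree and makes each
    column orthogonal to all columns that precede it."""
    X = np.vander(np.asarray(x, float), degree + 1, increasing=True)
    Q = np.empty_like(X)
    for j in range(X.shape[1]):
        v = X[:, j].copy()
        for k in range(j):
            q = Q[:, k]
            v -= ((q @ X[:, j]) / (q @ q)) * q   # subtract projection onto earlier column
        Q[:, j] = v
    return Q
```

For equally spaced x values the resulting columns are proportional to the tabled coefficients; in particular, the second column is simply the centered predictor, as derived above.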
Chapter 8
MULTICOLLINEARITY

One of the most important components of the interpretation of regression parameters in multiple regression concerns the relative nature of weights. Each weight in a multiple regression equation can be interpreted as the number of steps on Y that follow one step on X, given all the other predictors in the equation. In other words, the presence of other predictors can change the parameter estimate for any given predictor. To illustrate this, we summarize parameter estimates from the three regression runs for the example from Section 3.4. In the example we predicted Breadth (Dimensionality) of Cognitive Complexity, CC1, from Depth of Cognitive Complexity, CC2, Overlap of Concepts (Categories), OVC, and Level of Education, EDUC. Table 8.1 presents the variable intercorrelations. All correlations in Table 8.1 are statistically significant (even after Bonferroni adjustment for experiment-wise error; n = 327).

Table 8.1: Intercorrelations of CC1, CC2, OVC, and EDUC

       CC2     EDUC     OVC
CC1   0.154   0.301   -0.784
CC2           0.233   -0.527

The correlations in Table 8.1 suggest that the Cognitive Complexity variables show high intercorrelations. In addition, Level of Education is substantially correlated with each of the Cognitive Complexity variables. The signs of the correlations are as one would expect:

1. The positive correlation between EDUC and CC1 suggests that individuals with more formal education display more Breadth of Cognitive Complexity.
2. The positive correlation between EDUC and CC2 suggests that individuals with more formal education have more Depth in their concepts.
3. The negative correlation between EDUC and OVC suggests that individuals with more formal education have crisper, that is, less overlapping, concepts.

As one can imagine, correlations between predictors can have effects on the size of regression parameter estimates. Thus, the presence or elimination of variables can have
substantial effects on the appraisal of any b. However, this is not necessarily the case. Table 8.2 contains examples of both. The variable whose b weight is most affected by the presence of the other predictors is EDUC. In the unconstrained model the b estimate is 0.39. In the second constrained model, which contains only the predictor EDUC, the estimate is 1.94. Suppose the significance tests in the second-to-last column of Table 8.2 had been performed in their two-tailed form. Then, whereas the unconstrained model would have suggested that EDUC does not significantly contribute to predicting CC1 (p = 0.057), the second constrained model would have suggested the opposite conclusion (p < 0.01). Largely unaffected, in spite of their high correlations with each other and with other predictors (see Table 8.1), are the variables CC2 and OVC. Table 8.2 suggests that the parameter estimates for CC2 and OVC, their standard errors, and the t values for these variables remain, within very tight limits, the same regardless of whether the predictor EDUC is in the equation. In general, one calls intercorrelated predictors multicollinear. When these correlations affect the estimation of regression parameters, a problem of multicollinearity exists. If this problem exists, answering the following questions can become problematic (see Neter et al., 1996, p. 285):
Table 8.2: Three Regression Runs for Prediction of Breadth of Cognitive Complexity

Variable     Coefficient   Std. Error   t Value   p Value (1-Tailed)
Unconstrained Model
Intercept       29.39         1.52       19.28       < 0.01
CC2             -0.21         0.02      -10.25       < 0.01
OVC            -22.40         0.86      -26.09       < 0.01
EDUC             0.39         0.21        1.91         0.03
Constrained Model 1
Intercept       31.40         1.10       28.45       < 0.01
CC2             -0.21         0.02      -10.10       < 0.01
OVC            -22.83         0.83      -27.44       < 0.01
Constrained Model 2
Intercept        1.96         1.67        1.18         0.12
EDUC             1.94         0.34        5.69       < 0.01
1. What is the relative effect that each predictor has?
2. What is the magnitude of each predictor's unique effect?
3. Can predictors be eliminated because their contribution is too little?
4. Should one include additional predictors in the equation?

Answering these questions is straightforward only if the predictors in the equation are uncorrelated and, in addition, do not correlate with other variables that could possibly be used as predictors. In this case, parameter estimates remain the same regardless of what other (uncorrelated) predictor is included in the equation. If, however, and this is typical of social science empirical data, predictors are correlated, problems may occur. While multicollinearity does not, in general, prevent us from estimating models that provide good fit (see Table 8.2), the interpretation of a parameter estimate becomes largely dependent upon which other predictors are part of the regression equation.
Diagnosing Multicollinearity
Among the most important indicators of multicollinearity are the following:

1. Correlations exist among predictors (see Table 8.1).
2. Large changes occur in parameter estimates when a variable is added or removed (see Table 8.2).
3. Predictors known to be important do not carry statistically significant prediction weights (see the two-tailed test of EDUC in the unconstrained model in Table 8.2).
4. The sign of a predictor is counterintuitive or even illogical.
5. Surprisingly wide confidence intervals exist for the parameter estimates of predictors known to be of importance.
6. There exists a large variance inflation factor (VIF).

While Indicators 3 and 4 are fueled by prior substantive knowledge and theory, all the others can be directly investigated. In the following we give a brief explanation of Indicator 6, the variance inflation factor. Consider an unconstrained multiple regression model with p predictors (p > 1). Regressing predictor m onto the remaining predictors can be performed using the following regression equation:

X_m = b_0 + Σ_{j ≠ m} b_j X_j + Residual,  for j ≠ m.
Let R²_m be the coefficient of multiple determination for this model. Then, the VIF for predictor m is defined as

VIF_m = 1 / (1 - R²_m).    (8.1)

The VIF has a range of 1 ≤ VIF < +∞. The VIF increases with the severity of multicollinearity. Only when R²_m = 0, that is, when predictors
Table 8.3: Regressing Predictors CC2, OVC, and EDUC onto Each Other

Variable      Coefficient   Std. Error   t Value   p Value (1-Tailed)

Dependent Variable CC2
Intercept       46.25         3.25        14.21       < 0.01
EDUC             0.68         0.56         1.21         0.11
OVC            -20.59         2.04       -10.11       < 0.01

Dependent Variable OVC
Intercept        1.24         0.068       18.86       < 0.01
EDUC            -0.06         0.01        -4.90       < 0.01
CC2             -0.01         0.001      -10.11       < 0.01

Dependent Variable EDUC
Intercept        5.14         0.30        17.30       < 0.01
CC2              0.01         0.01         1.21         0.11
OVC              1.10         0.22         5.00       < 0.01
are uncorrelated, does one obtain the minimum value, VIF = 1. It is important to note that the VIF is variable-specific. As was indicated in the example at the beginning of this chapter, not all variables are equally affected by multicollinearity. The VIF increases as the multiple correlation among predictors, predicting other predictors, increases. A rule of thumb is that when VIF ≥ 10, problems with multicollinearity are severe, that is, multicollinearity greatly influences the magnitude of parameter estimates. To illustrate the VIF we use the example again where we predict Breadth of Cognitive Complexity (CC1) from Depth of Cognitive Complexity (CC2), Overlap of Concepts (OVC), and Level of Education (EDUC). In the following we estimate the VIF for each of the three
predictors, CC2, OVC, and EDUC. To do this, we have to estimate the following three regression models:

CC2 = b0 + b1·OVC + b2·EDUC + Residual,
OVC = b0 + b1·CC2 + b2·EDUC + Residual,
EDUC = b0 + b1·CC2 + b2·OVC + Residual.

Results of these analyses appear in Table 8.3.
The squared multiple correlations (R²) for the three regressions are 0.281, 0.328, and 0.119, respectively. Inserting these values into Equation (8.1) yields

VIF_CC2 = 1 / (1 - 0.281) = 1.391,
VIF_OVC = 1 / (1 - 0.328) = 1.488,
VIF_EDUC = 1 / (1 - 0.119) = 1.135.

These values can be interpreted as the factor by which the expected sum of squared residuals in the OLS standardized regression coefficients is inflated due to multicollinearity. In other words, because the predictors are correlated, the expected sums of squared residuals are inflated by a factor of the VIF. For example, because the three predictors CC2, OVC, and EDUC are correlated, the expected residual sum of squares for CC2 is inflated by a factor of 1.391. In the present example, none of the VIF values comes even close to the critical value of 10. Therefore, we can conclude that analysis of the present data does not face major multicollinearity problems. Nevertheless, notice how the sign of the regression coefficient for CC2 changes in the presence of EDUC, and how the sign of the coefficient for EDUC changes depending on whether CC2 or OVC is in the equation (Table 8.3).
Countermeasures to Multicollinearity
There are a number of countermeasures one can take in the presence of multicollinearity. In the following paragraphs we give a brief overview of these countermeasures; examples and more detailed explanations of the effects of centering follow.

Centering Predictors Centering predictors often reduces multicollinearity substantially. For example, squaring the coefficients of a linear trend to assess a quadratic trend generates vectors that are strongly correlated with each other. Using orthogonal polynomial parameters reduces this correlation to zero. Orthogonal polynomial parameters are centered (see Section 7.2); an example of the effect created by centering predictors is given below.
Dropping Predictors When parameter estimates for certain predictors severely suffer from multicollinearity one may drop one or more of these predictors from the multiple regression equation. As a
result, inflation of standard errors of parameter estimates will be reduced. There are, however, several problems associated with this strategy. Most importantly, dropping predictors prevents one
from estimating the contribution that these predictors could have made to predicting the criterion. In addition, estimates for the predictors remaining in the model may be affected by the absence of
the eliminated predictors.
Ridge Regression This method creates biased estimates for β. However, these estimates tend to be more stable than the unbiased OLS estimates (for details see Neter et al., 1996, cf. Section 12.3).
Performing Regression with Principal Components This approach transforms the predictors into principal component vectors, which are then used as predictors for multiple regression. For details see Rousseeuw and Leroy (1987); for a critical discussion of this method see Hadi and Ling (1998).
In the following paragraphs we illustrate the effect created by centering predictors. For an example, consider the case in which a researcher asks whether there exists a quadratic trend in a data set
of five measures. To test this hypothesis, the researcher creates a coding variable that tests the linear trend, and a second coding variable that is supposed to test the quadratic trend. Table 8.4
displays the coding variables created by the researcher.
Table 8.4: Centering a Squared Variable

Lin Trend I   Lin Trend II   Quad Trend I   Quad Trend II
    A              B              F              G
    1             -2              1              4
    2             -1              4              1
    3              0              9              0
    4              1             16              1
    5              2             25              4
Table 8.4 displays four coding variables. Variable A was created to assess the linear trend; it shows a monotonic, linear increase. Variable B results from centering A, that is, from subtracting the mean of A, Ā = 3, from each value of A. Variable F was created to assess the expected quadratic trend; it results from squaring the values of A. Variable G was also created to assess the quadratic trend; it results from squaring B, the centered version of A. As a means to determine whether using variables from Table 8.4 could cause multicollinearity problems, we correlate all variables that could possibly serve as predictors in a multiple regression, that is, all variables in the table. The correlation matrix appears in Table 8.5.

Table 8.5: Correlation Matrix of Centered and Uncentered Predictors

        B       F       G
A     1.00    0.98    0.00
B             0.98    0.00

Table 8.5 shows a very interesting correlation pattern. As expected, Variable A correlates perfectly with Variable B. This illustrates that the Pearson correlation coefficient is invariant under linear transformations. Both Variables A and B correlate very highly with Variable F, that is, the square of A. This high correlation illustrates that squaring variables typically results in very high correlations between the original and the squared variables. However, the square of B, called G in Tables 8.4 and 8.5, correlates neither with A nor with B. It correlates only with F, the other squared variable.

Figure 8.1: Relationships between linear and quadratic predictors before and after centering.

Figure 8.1 displays the relationships between Variables A and G and between A and F. The results from Tables 8.4 and 8.5 and Figure 8.1 can be summarized as follows: When predictors are highly correlated with other predictors because they result from multiplying predictors with themselves or each other, centering before multiplication can greatly reduce variable intercorrelations. As the present example shows, variable intercorrelations can, under certain conditions, even be reduced to zero.
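The centering effect of Table 8.4 can be reproduced numerically in a few lines. This sketch constructs the five-point trend variables A, B, F, and G as described in the text and prints their correlations:

```python
import numpy as np

# The five-point linear trend from Table 8.4 and its centered version.
A = np.array([1., 2., 3., 4., 5.])
B = A - A.mean()          # centered: -2, -1, 0, 1, 2
F = A ** 2                # square of the uncentered variable
G = B ** 2                # square of the centered variable

def r(x, y):
    """Pearson correlation between two vectors."""
    return np.corrcoef(x, y)[0, 1]

print(r(A, B))  # 1.0  : centering is a linear transformation
print(r(A, F))  # ~0.98: a raw variable and its square are nearly collinear
print(r(A, G))  # 0.0  : centering before squaring removes the correlation
```

The last line illustrates the chapter's point: the quadratic coding built from the centered variable is orthogonal to the linear trend.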
Chapter 9

MULTIPLE CURVILINEAR REGRESSION

Chapter 7 introduced readers to simple curvilinear regression. This topic is taken up here again. Although researchers in the social sciences chiefly rely on
describing variable relationships using straight regression lines, it is generally acknowledged that there are many applications for curvilinear relationships. Examples of such relationships include
the Yerkes-Dodson Law (see Figure 7.1), learning curves, forgetting curves, position effects in learning, learning plateaus, item characteristics, and the concept of diminishing returns. In other
sciences, curvilinear relationships are quite common also. For example, in physics the relationship between temperature and vapor pressure is curvilinear, and so is the function relating heat
capacities and temperature to each other. Similarly, sone (loudness) scales are nonlinear. Parameters for many of these functions can be estimated using the least squares methods discussed in this
volume. The first two figures depict two scenarios of multiple regression. Figure 9.1 depicts standard linear multiple regression using the function Z = X + Y. The graph shows a straight plane, a regression hyperplane, sloped at a 45° angle. Figure 9.2 presents an example of a curvilinear regression hyperplane using the function Z = 0.5X² + 0.2Y³.

Figure 9.2: Regression hyperplane for Z = 0.5X² + 0.2Y³.
are as before. The minimization process produces

∂SSR(b)/∂b_i = 0,  for all b_i with i = 0, ..., p,

and with p + 1 denoting the number of vectors in X. The following are examples of curvilinear functions:

1. Exponential function: y = b0 + b1·exp(b2·x)

2. Polynomial: y = Σ_{j=0}^{p} b_j·x^j
3. Trigonometric function: y = b0 + b1·sin x + b2·cos x

4. Vapor pressure function: y = b0 + b1/x + b2·log x + b3·x + ..., where x is the vapor temperature (in Kelvin) and the dependent variable is y = log P, the logarithm of vapor pressure

5. Heat capacity function: y = b0 + b1·x + b3/x²

6. Negatively accelerated function of practice trials (Hilgard & Bower, 1975): p_n = 1 - (1 - p_1)(1 - θ)^(n-1), where n denotes the trials, θ is the flatness parameter indicating how steep the increase in response probability (p_n) is, and p_1 is the probability of the first response; Figure 9.3 gives three examples of such curves.

The following example uses data from the project by Spiel (1998) on the development of performance in elementary school. We analyze the variables Fluid Intelligence, F, Crystallized Intelligence, C, and Grades in German, G, in a sample of n = 93 second graders. The hypotheses for the analyses are:

1. Grades in German can be predicted from a linear combination of Fluid Intelligence, that is, training-dependent Intelligence, and Crystallized Intelligence;
2. The effects of Fluid Intelligence taper off, so that the relationship between Fluid Intelligence and Grades is nonlinear; and
3. The relationship between Crystallized Intelligence and Grades is linear.
Figure 9.3: Three examples of functions of practice trials (θ = 0.05, 0.15, 0.25).
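The practice-trial function in Example 6 can be evaluated directly. In this sketch, the starting probability p1 = 0.1 is an assumed value chosen for illustration, since the p1 underlying Figure 9.3 is not given in the text:

```python
import numpy as np

def practice_curve(p1, theta, trials):
    """Negatively accelerated learning curve (Hilgard & Bower, 1975):
    p_n = 1 - (1 - p1) * (1 - theta) ** (n - 1), for n = 1, ..., trials."""
    n = np.arange(1, trials + 1)
    return 1 - (1 - p1) * (1 - theta) ** (n - 1)

# Illustrative flatness parameters, as in Figure 9.3.
for theta in (0.05, 0.15, 0.25):
    p = practice_curve(p1=0.1, theta=theta, trials=26)
    print(theta, round(p[0], 3), round(p[-1], 3))
```

The printed values show the defining properties of the curve: it starts at p1, rises monotonically toward 1 without reaching it, and rises faster for larger θ.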
To test this set of hypotheses we perform multiple nonlinear regression as follows. We create a nonlinear version of the variable Fluid Intelligence: we square the raw scores and obtain F2. The second predictor is Crystallized Intelligence, which we use without transformation. From the data we estimate the following regression parameters:

G = 4.94 + 0.11·F2 + 0.02·C + Residual.

The multiple R² for this equation is 0.43. The statistical analyses indicate that whereas F2 makes a statistically significant contribution (t = 9.613; p < 0.01), C does not (t = 1.19; p = 0.239). Figure 9.4 displays the relationship between F2, C, and G.

The following second example analyzes data using polynomial regression. In an experiment on the effectiveness of an antidepressive drug, it was asked whether the proportion of patients that recovered from reactive depression could be predicted from the dosage of the drug. The researchers used five doses of the drug: 1/2 a unit, 1 unit, 3/2 units, 2 units, and 5/2 units. Figure 9.5 displays the proportions of patients that recovered after taking the drug for four weeks (line with triangles, PROP) and the curves of the first- (line with circles, P1), second- (line with × signs, P2), and third-order (line with + signs, P3) polynomials used to approximate the observed proportions. Proportions were
Figure 9.4: Multiple nonlinear relationships between the predictors, Crystallized Intelligence and Fluid Intelligence, and the criterion, Performance in German.

Figure 9.5: Raw scores of drug effects and indicators used to predict the raw data curve.
Table 9.1: Raw Scores and Polynomial Coefficients for Drug Efficiency Example

                               Polynomial Coefficients
Dose    Observed Proportion    Linear   Quadratic   Cubic
1/2            0.05              -2         2         -1
1              0.25              -1        -1          2
3/2            0.40               0        -2          0
2              0.45               1        -1         -2
5/2            0.43               2         2          1
calculated using independent samples. The raw scores and the polynomial coefficients used to smooth the polynomials¹ appear in Table 9.1. Using these values to estimate parameters for a multiple regression equation yields

Proportion = 0.32 + 0.096·P1 - 0.039·P2 + 0.002·P3 + Residual.

Whereas the parameters for the linear and the quadratic components of this equation are statistically significant, the parameter for the third-order polynomial is not. More specifically, we calculate the following t values and tail probabilities: for P1: t = 31.75, p = 0.02; for P2: t = -15.09, p = 0.04; and for P3: t = -0.661, p = 0.628. The multiple R² is 0.999. We thus conclude that the curve of proportions can be approximated almost perfectly from the first-, second-, and third-order orthogonal polynomials. Because the third-order polynomial did not contribute significantly to the model-data fit, we now consider a reduced model; specifically, we consider the model that only includes the first- and second-order polynomials. The polynomials included in the first model are orthogonal (see Section 7.2). Therefore, there is no need to recalculate the above equation to obtain parameter estimates for the reduced model. One can simply drop the term of the predictor not included in the reduced model. We

¹Smoothing in Figure 9.5 was performed using the spline smoother in SYSTAT's GRAPH module.
Figure 9.6: Second-order polynomial describing dosage effects (R² = 0.999).
obtain in the present example

Proportion = 0.32 + 0.096·P1 - 0.039·P2 + Residual.

The portion of variance accounted for by this model is still R² = 0.999. The test statistics for the slope parameters are: for P1: t = 37.45, p = 0.001; and for P2: t = -17.80, p = 0.003. We thus conclude that the reduced model also provides excellent fit. To illustrate how good the fit is, consider Figure 9.6, which displays the observed proportions and the fitted curve from the first- and second-order polynomials. Obviously, the fit is close to perfect; only for the 1 and 3/2 doses are there minor deviations. The figure also illustrates, however, that extrapolation can suggest very misleading conclusions. Administering a dose greater than 2 units of the drug is displayed as having a lesser effect than doses just around 2 units. While this trend may be plausible within a certain range, for larger doses this would mean that proportions are negative. Readers are invited to interpret negative proportions in this particular example.
Chapter 10

INTERACTION TERMS IN REGRESSION

Regression interaction is one of the most hotly and widely discussed topics of regression analysis. This chapter provides an overview of the following three topics: First, it presents a definition, along with examples, of interaction in regression. Second, it discusses the meaning of multiplicative terms in regression models. Third, it introduces the distinction between multiplicative terms and interaction terms in regression models and illustrates the use of interaction models.
To define interaction in regression, one needs at least three variables: two of these are predictors and the third is the criterion in multiple regression. Consider the case where researchers predict the criterion, Y, from the predictors, X1 and X2, using the following regression equation:

Y = β0 + β1X1 + β2X2 + ε.    (10.1)
In this model, the parameters β1 and β2 represent the regression main effects¹ of the predictors, X1 and X2. Interpretation of regression main effects was discussed in Section 2.3 and Section 3.2. In brief, β1 indicates how many steps on Y follow from one step on X1, given X2; this applies accordingly to β2. Application of Equation (10.1) implies that the effects of X1 and X2 are independent in the sense that neither β1 nor β2 changes with X2 or X1, respectively; that is, neither β1 nor β2 is functionally related to X2 or X1, respectively. In other words, suppose that the first predictor is changed by one unit; then the change of the criterion is exactly β1, regardless of the value that the second predictor currently has. An example of a regression hyperplane (also termed a response surface) for this situation is given in Figure 3.1. Noninteracting variables are also termed "additive". There are many instances, however, where the assumption of independent regression main effects does not apply. Consider, for example, the effects of drugs taken together with alcohol. Drug effects can vary dramatically depending on blood alcohol level. In technical terms, the regression slope of drug on behavior varies with blood alcohol level. The same may apply to the effects of alcohol.² Thus, regression interaction can be defined as follows: When the estimate of the slope parameter for one predictor depends on the value of another predictor, the two predictors are said to interact. There are two subtypes of regression interactions:

1. Symmetrical regression interaction: two predictors mutually affect each other's slopes;
2. Asymmetrical regression interaction: one predictor affects the slope of the other predictor, but its own slope is not affected.

When predictors interact, regression hyperplanes display curvature, indicating the changes in regression slopes. The curvature of regression surfaces

¹In the current context, we use the term "effect" in a very broad sense. When, in earlier and later chapters, we say a variable allows one to predict some other variable, we associate the same meaning as when saying a variable has an effect. We use the term "effect" here for brevity of notation and because of the association to analysis of variance.

²In analysis-of-variance contexts, where predictor levels are categorical or categorized, this type of interaction is termed treatment-contrast interaction (see Kirk, 1995).
Figure 10.1: Response surface and its contours when no interaction is present.
with interactions is of a certain kind: the contour lines³ are not parallel. In contrast, when there exists no interaction, contour lines are parallel. In order to give a visual impression of these three-dimensional surfaces, we suppose for the moment that the true regression coefficients are known. Three pairs of figures illustrate this relationship. The left panel of Figure 10.1 displays the hyperplane of the multiple regression function Y = 15 + 5X1 + 3X2. Interactions are not part of the model; therefore, the plane is without curvature. The contour lines for this hyperplane appear in the right panel. The lines are perfectly parallel. They indicate that the hyperplane slopes upward as the values of both X1 and X2 increase, and that this happens in a perfectly regular fashion.

In contrast, Figure 10.2 displays a response surface with an interaction. Specifically, the left panel of Figure 10.2 displays the surface for the model Y = 15 + 5X1 + 3X2 - 15X1X2, that is, the same model as in Figure 10.1, but with an added multiplicative term. The figure indicates that Y increases with X1 and X2, and that the increase is negatively accelerated, that is, becomes smaller. In addition, the response surface displays smaller values as the difference between X1 and X2 becomes smaller. The right panel of Figure 10.2 displays the contour lines for this regression model. No doubt, these lines are not parallel; this can be seen from the elevations indicated by the lines.

Figure 10.2: Response surface and its contours when multiplicative interaction is present.

It is important to notice that nonlinear response surfaces do not necessarily suggest the presence of interactions. This is illustrated in Figure 10.3. The left panel of this figure displays the response surface for the model Y = 70 - 2X1² - 3X2². The contour plot for this model appears in the right panel. Obviously, the contour lines are not linear, but they are parallel. We conclude from these examples that it may not always be obvious from the Y × X1 × X2 plot whether there exists an interaction; the contour plot can reveal the existence of interaction. When there are more than two predictors, it may not be possible to create a visualization of variable relationships. Therefore, researchers typically focus on using statistical and conceptual approaches to discuss possible interactions. The following section introduces readers to concepts of multiplicative terms and interactions.

³Contour lines are projections of elevation on the Z-axis onto the X-Y plane. Contour lines indicate where the elevation is the same.
Figure 10.3: Nonlinear response surface and its contours when no interaction is present.

Multiplicative Terms

This section explains and illustrates two situations in which a regression model contains a multiplicative term. First, this chapter addresses the problem of asymmetric regression interaction, that is, the case where one predictor in multiple regression determines the slope of the second. Second, the chapter addresses the case where one predictor determines both the intercept and the slope of a second predictor (for more in-depth coverage of these topics see Cohen, 1978; Fisher, 1988; Bryk & Raudenbush, 1992; Rovine & von Eye, 1996).
Predicting the Slope Parameter from a Second Predictor
Consider the case of a simple regression that relates a predictor, X1, and a criterion, Y, to each other, or

Y = β0 + β1X1 + ε.    (10.2)

If one adds a second predictor, X2, to this model, one can create a standard multiple regression equation, that is, Y = β0 + β1X1 + β2X2 + ε. In this case, one would assume that the predictors X1 and X2 have independent effects upon the criterion Y. In contrast, we now assume that X2 has an effect on the regression slope of X1, β1. This can be expressed using the following system of two equations:

Y = β0 + β1X1 + ε,
β1 = β2 + β3X2.    (10.3)
Note that Equation (10.3) describes a linear functional relationship between the regression slope of the first predictor and the values of the second predictor. This should not be confused with the usual regression equation that relates a criterion to a predictor, allowing for a residual term that is supposed to be stochastic. Substituting Equation (10.3) into Equation (10.2) yields the multiple regression equation

Y = β0 + (β2 + β3X2)X1 + ε,

which has the form of a simple regression of Y on X1, but the slope now depends on X2. Multiplying out yields

Y = β0 + β2X1 + β3X1X2 + ε.    (10.4)

This equation does include a multiplicative (or product) term, X1X2. The model described by this equation assumes that the slope of the regression of Y on X1 depends on the value of X2. More specifically, for a discrete subset of values of X2, this equation assumes that the slope β1 varies monotonically with X2; for example, it could be that the slope increases proportionally with X2. In addition, Equation (10.4) implies that the criterion, Y, does not vary with X2 when X1 = 0. Equation (10.4) can be estimated. If the parameter estimate b3 is statistically significant, one can assume that the effect of X1 on Y depends on X2, or is mediated by X2. In other words, if b3 ≠ 0, then the effect of X1 on Y is not constant across the values of X2. Equation (10.4) illustrates that the assumption that the regression of one variable onto a second depends on a second predictor yields a regression equation with one main effect and one product term. Treating the multiple regression model given in (10.4) as the unconstrained model and the simple regression model given in (10.2) as the constrained model, one can test whether including the product term results in a significant improvement over the simple model. If the interaction accounts for a
significant portion of variance, parameter estimate b3 will be different from zero, and one can conclude that the parameter estimate, b1, is not constant across the observed range of values of predictor X2. The following numerical examples use data from an experiment on recall of short narratives that differed in concreteness, TG (von Eye, Sörensen, & Wills, 1996). A sample of n = 327 adults, aged between 18 and 70, read two short narratives each. The instruction was to read and memorize the texts. Before reading the texts, each participant solved a task that allowed researchers to determine cognitive complexity, CC1. The dependent variable was the number of text propositions recalled, REC.

In the first example we illustrate the use of Formulas (10.2), (10.3), and (10.4). We regress recall performance, REC, onto text concreteness, TG, and obtain the following regression equation:

REC = 113.39 - 25.54·TG + Residual.

The negative sign for TG in this equation suggests that more concrete texts are better recalled than more abstract texts. The slope coefficient is statistically significant (t = -6.27; p < 0.01), thus suggesting that recall depends on text concreteness. The portion of criterion variance accounted for is R² = 0.11. For the following analysis we assume that the slope parameter for text concreteness is mediated by subject cognitive complexity. Specifically, we assume that subjects differing in cognitive complexity differ in the way they exploit the recall advantage of text concreteness. From this assumption we create an equation of the form given in (10.4) and obtain the following parameter estimates:

REC = 112.60 - 30.24·TG + 0.47·(TG·CC1) + Residual.

Both parameters in this equation are significant (t = -6.53, p < 0.01; t = 2.11, p = 0.04, respectively), thus indicating that the null hypothesis of no mediation can be rejected. The portion of variance accounted for by the model with the product term is R² = 0.12.
Figure 10.4: Relationship between the predictors, Text Concreteness and its interaction with Cognitive Complexity, and a contour plot.

The left panel of Figure 10.4 displays the response surface for the regression model that includes the interaction term.⁴ The surface suggests a bimodal distribution, with a higher peak for concrete texts than for abstract texts. (Intermediate texts were not used in this experiment.) This suggests that recall is better for concrete texts than for abstract texts. The figure also suggests that the variance of the (TG·CC1) values is greater for TG = 2; thus, there may be a problem with heteroscedasticity in these data. The right panel of Figure 10.4 displays the contour plot for these data. We now ask whether the model with the product term is statistically better
than the model without. To do this, we consider the model with the product term the unconstrained model; the model without the product term is the constrained one. Inserting into Formula (3.24) of Section 3.3.1 we obtain

F = [(0.120 - 0.108) / 1] / [(1 - 0.120) / (327 - 2 - 1)] = 4.418,

a value that is, within rounding, equal to the squared t value given above (t² = 2.11² = 4.45; this relationship between t and F always applies when df1 = 1 for the F test). For df1 = 1 and df2 = 324, this F value has a tail probability of p = 0.036. Being smaller than α = 0.05, this value allows us to conclude that the more complex model, the one with the multiplicative term, explains the data better than the more parsimonious model without the multiplicative term. Table 10.1 contains the correlations between the variables used for these analyses. Readers are invited to discuss possible multicollinearity problems present in these analyses.

Table 10.1: Intercorrelations of Text Concreteness (TG), Recall (REC), Cognitive Complexity (CC1), and the Product Term (TG·CC1)

           TG        REC      TG·CC1
CC1      -0.071     0.119     0.792
TG        1.000    -0.328     0.483
REC                 1.000    -0.062

⁴The surface was created using SYSTAT's kernel estimation option, available in the CONTOUR routine of the GRAPH module.
Both Slope and Intercept
Consider the case where not only the slope but also the intercept depends on the level of a second predictor. This case can be described by the following system of three equations:

Y = β0 + β1X1 + ε,    (10.5)
β1 = β2 + β3X2,    (10.6)
β0 = β4 + β5X2.    (10.7)

Inserting (10.6) and (10.7) into (10.5) yields

Y = β4 + β5X2 + β2X1 + β3X1X2 + ε.    (10.8)
Obviously, this is a multiple regression equation that includes both main effects and the multiplicative term. In this equation, the tests concerning the parameters β4, β5, β2, and β3 have a meaning different from the meaning usually associated with these tests.⁵ More specifically, the following tests are involved:

• Test of β5: is the intercept, β0, dependent on X2?
• Test of β3: is the slope, β1, dependent upon X2?

Results of these tests can be interpreted as follows:

1. If β5 is not zero, then the intercept of the regression of Y on X1 depends on X2;
2. If β5 = 0, then the relationship between β0 and β4 is constant, regardless of what value is assumed by X2, and the intercept, β0, remains the same for all values of X2;
3. If β3 is zero, the slope of the regression of Y on X1 is constant and unequal to zero across all values of X2;
4. If both β2 and β3 are zero, there is no relationship between Y and X1;
5. If β3 is not zero, then the slope of the regression of Y on X1 depends on X2.

The following example uses data from the von Eye et al. (1996) experiment again. We now regress the adults' recall on their educational background, EDUC. For the n = 327 participants we estimate

REC = 46.07 + 6.03·EDUC + Residual,

which indicates that subjects with a higher education (measured as number of years of formal school training) tend to have higher recall rates. While only accounting for R² = 0.020 of the criterion variance, this relationship is statistically significant (t = 2.60, p = 0.010). One may wonder whether the regression line and intercept of this relationship depend on the subjects' cognitive complexity. If this is the

⁵It should be noted that one of the main differences between the model given in (10.8) and the model given in (10.4) is that in (10.8) the criterion, Y, can depend on X2 even if X1 assumes the value 0.
10.2. MULTIPLICATIVE TERMS
Table 10.2: Significance Tests for Prediction of Intercept and Slope Parameters of Regression of Recall on Education

Variable     Parameter      Std. Error   t value   p value
Intercept    b4 = -10.78    26.24        -0.41     0.682
CC1          b5 = 5.33       2.06         2.58     0.010
EDUC         b2 = 16.60      5.50         3.02     0.003
CC1 · EDUC   b3 = -0.97      0.41        -2.35     0.019
case, the intercept and the slope themselves may vary with subjects' cognitive complexity. We estimate the following regression parameters:

REC = -10.78 + 5.34 · CC1 + 16.60 · EDUC - 0.97 · CC1 · EDUC + Residual.

While still only explaining 4.3% of the criterion variance, this equation contains only significant slope parameters. The intercept parameter is not significant. Table 10.2 gives an overview of the significance tests. These results can be interpreted as follows:

• The intercept of the regression of REC on EDUC is zero (p(b4) > 0.05);
• However, this intercept varies with CC1 (p(b5) < 0.05);
• The slope of the regression of REC on EDUC depends on CC1 (p(b2) < 0.05 and p(b3) < 0.05).
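A model of this slope-and-intercept type can be sketched numerically. The parameter values below are invented for illustration (they are not the recall data): a criterion is generated exactly according to Equation (10.8) and then refitted by least squares.

```python
import numpy as np

# Hypothetical "true" parameters for Y = b4 + b5*X2 + b2*X1 + b3*X1*X2,
# as in Equation (10.8); noise is omitted so the fit is exact.
b4, b5, b2, b3 = 2.0, 1.5, 0.8, -0.5

# A 5 x 5 grid of predictor values
x1, x2 = np.meshgrid(np.arange(1.0, 6.0), np.arange(1.0, 6.0))
x1, x2 = x1.ravel(), x2.ravel()
y = b4 + b5 * x2 + b2 * x1 + b3 * x1 * x2

# Design matrix: intercept, both main effects, and the product term
X = np.column_stack([np.ones_like(x1), x2, x1, x1 * x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # recovers 2.0, 1.5, 0.8, -0.5
```

With real data the fit would of course contain error, and significance tests such as those in Table 10.2 would be needed to judge the parameters.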
Caveats and Problems

There are two major issues that need to be considered when including multiplicative terms in multiple regression. The first issue concerns model specification; the second concerns characteristics of the multiplicative terms. The former is addressed in the following paragraphs, the latter in Section 10.3.
Model Specification

Section 10.2 illustrated that hypotheses concerning interactions between two predictors can lead to two different types of models. The first type of model involves one main effect
term and one product term. This model is asymmetric in nature in that it considers the effect of predictor P2 on the slope of the regression line for C on P1. Investigating the inverse effect, that
is, the effect of predictor P1 on the slope of the regression line of C on P2, implies a different model. This model involves the same multiplicative term as the first. However, it involves the main
effect of the other predictor. In contrast, investigating a multiple regression model of the type given in Formulas (10.5), (10.6) and (10.7) implies a symmetrical model in the sense that (10.8) does
not allow one to discriminate between the models that investigate 1. the effects that predictor P2 has on the slope and intercept of the regression line for C on P1, and 2. the effects that predictor
P1 has on the slope and intercept of the regression line for C on P2. Thus, researchers should be aware that always using one particular regression model to treat interaction hypotheses can lead to
misspecified models and, therefore, to wrong accounts of data characteristics. Researchers should also be aware that the models discussed in the last sections are only two of many possible models
that lead to a multiplicative term. Although these models can differ widely in meaning, they are often treated as equivalent in that the model that involves both main effects and the product term is
suggested for testing all interactions (see also Aiken & West, 1991).
10.3 Variable Characteristics
This section addresses three issues that arise when including multiplicative terms in multiple regression: 1. Multicollinearity
2. Leverage points
3. Specific problems concerning ANOVA-type interactions

10.3.1 Multicollinearity of Multiplicative Terms

Table 10.1 indicated that the intercorrelations between the variable TG · CC1 and its constituents were very high. Specifically, the correlation between TG · CC1 and TG was r = 0.483, and
the correlation between TG · CC1 and CC1 was r = 0.792. Another example appears in Table 10.3. In a fashion analogous to Table 10.1, the variable that resulted from element-wise multiplying two other variables with each other, EDUC · CC1, is highly correlated with each of its constituents, EDUC and CC1. These examples are indicative of a ubiquitous problem with multiplicative terms in regression analysis: Variables that result from element-wise multiplying two variables with each other tend to be highly correlated with their constituents. As a result, researchers face possibly severe multicollinearity problems. It is easy to imagine that models that involve both CC1 and EDUC · CC1 can suffer from severe multicollinearity problems. Readers may wish to go through Section 8.2 again for countermeasures to multicollinearity. A method often used since a paper by Cohen (1978) is termed "partialling regression terms." It involves the following three steps:

1. Calculating a multiple regression including only the main effect terms, and not the multiplicative term;
Table 10.3: Intercorrelations of Cognitive Complexity, CC1, Educational Level, EDUC, the Multiplicative Term, EDUC · CC1, and Recall, REC

             EDUC    EDUC · CC1   REC
CC1          0.301   0.941        0.119
EDUC         1.000   0.574        0.143
EDUC · CC1           1.000
2. Saving the residuals from Step 1; and
3. Calculating a simple regression including only the multiplicative term.

The reasoning that justifies these three steps is as follows: Step 1 allows one to estimate the contribution made by the predictors. There are no more than the usual multicollinearity problems, because the multiplicative term is not part of the analysis. Step 2 saves that portion of the criterion variance that was not accounted for by the main effect terms. That portion consists of two parts. The first is the portion that can be explained by the multiplicative term. The second is the portion that cannot be covered by the complete regression model. Step 3 attempts to cover that portion of criterion variance that the main effects cannot account for. Again, there is no multicollinearity problem, because the predictor main effect terms, highly correlated with the interaction term, are not part of the regression equation. Cohen's procedure allows one to consider the distinction between the regression interaction and the multiplicative or product term in the regression equation. The product term carries the regression interaction. However, because of its possible multicollinearity with the main effect terms, it must not be confounded with the regression interaction itself. It should be noted that residuals are always model-specific. This applies also to residuals as specified by Cohen's procedure. If some other model is specified, residuals from this model should be used to represent the interaction.
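Cohen's three steps can be sketched as follows. The data are hypothetical: a small grid with a built-in interaction of weight 0.5, chosen so that the product term is orthogonal to the main effects and the partialling recovers the interaction weight exactly.

```python
import numpy as np

# Hypothetical data: Y = 1 + X1 + X2 + 0.5*X1*X2 on a 3 x 3 grid
x1, x2 = np.meshgrid([-1.0, 0.0, 1.0], [-1.0, 0.0, 1.0])
x1, x2 = x1.ravel(), x2.ravel()
y = 1.0 + x1 + x2 + 0.5 * x1 * x2

# Step 1: multiple regression with the main effect terms only
X_main = np.column_stack([np.ones_like(x1), x1, x2])
b_main, *_ = np.linalg.lstsq(X_main, y, rcond=None)

# Step 2: save the residuals from Step 1
resid = y - X_main @ b_main

# Step 3: simple regression of the residuals on the product term
X_prod = np.column_stack([np.ones_like(x1), x1 * x2])
b_prod, *_ = np.linalg.lstsq(X_prod, resid, rcond=None)
print(b_prod[1])  # 0.5: the product term carries the interaction
```

Because the product term here is exactly orthogonal to the main effects, the residual regression recovers the interaction weight; with correlated predictors the estimate would differ, which is Cohen's point about keeping the product term and the regression interaction conceptually distinct.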
Leverage Points as Results of Multiplicative Regression Terms
In addition to multicollinearity, the presence of leverage points is often a problem in the analysis of multiple regression with product terms. Multiplying the elements of two predictors with each other can create the following problems concerning the size of predictors:

1. Scale values are created that cannot be interpreted. Consider the following example. A researcher multiplies the values from two 5-point scales with each other. The resulting products can range from 1 to 25. While numerically correct, values greater than 5 are often hard to interpret, because they exceed the range of the scale.

2. Leverage cases are created. This problem goes hand in hand with the first: the larger the products are, the more likely it is that there are leverage cases.

The following numerical example illustrates the problem with leverage cases. The example involves two predictors, X1 and X2, and criterion Y. It is assumed that X2 affects the slope of the regression of Y on X1. Data are available for five cases. Table 10.4 displays these data, along with estimates, residuals, leverage values, and studentized residuals. Regressing Y on X1 under the assumption that X2 affects the slope parameter yields

Y = 2.36 + 1.48 · X1 + 0.06 · X1X2 + Residual.

Because of high predictor intercorrelation and low power, none of the parameters is statistically significant. However, there are two leverage cases. These are
Cases 2 and 5 in Rows 2 and 5 of Table 10.4. In addition, there is one outlier, Case 4. Figure 10.5 displays the Estimate by Residual plot. Size of data points varies with leverage value. The figure
suggests that the two leverage points are located near the regression line. This is to be expected from the definition of leverage points (see Section 5.1). In the present example, the two leverage
points are located at the ends of the distribution of estimates, thus illustrating the two problems listed above. Section 5.2 presents remedial methods to deal with outlier problems.
Table 10.4: Raw Data and Results of Residual Analysis

 Y    X1   X2   X1X2   Estimate   Residual   Leverage   Student
 7                     10.61      -3.61      0.38       -0.51
 3                      2.36       0.64      0.98        0.53
 4                      8.50      -4.50      0.31       -0.65
19                     10.89       8.11      0.33        9.39
20                     20.64      -0.64      0.99       -1.50
Figure 10.5: Leverage cases in regression with product term.
Specific Problems Concerning ANOVA-Type Interactions

This section deals with specific problems with multiplicative interaction terms that can best be illustrated using ANOVA-like interactions (cf. Rovine & von Eye, 1996). In this type of interaction one does not assume that slope or intercept parameters vary as some monotonic or polynomial function of some other variable. Rather, one assumes that the regression relationship is constant over some range of variable values, is constant again but with different parameters over some other range of variable values, and so on. The ranges are non-overlapping but not necessarily exhaustive; that is, they do not always cover the entire range of observed criterion or predictor values. Ranges can result from two procedures. One is to categorize a variable. Examples of splits include the median (dichotomization) or the 33rd percentile splits. Another way to specify ranges is theory-guided. One can, for instance, define the range of geniuses on the IQ scale, the range of accident-prone car drivers, or the range of binge-drinking alcoholics. For the purposes of this chapter, we define a range as a segment of a variable over which we assume the regression relationship to be constant.
Regression interactions, as treated in the following paragraphs, refer to this type of range. While the statistical power of detecting a regression interaction is generally low, it can be shown (Rovine & von Eye, 1996) that it is, in addition, not equal across the ranges of a variable. Specifically, the power is particularly low when the location of an interaction is in the interior of the product distribution. It is proposed that transforming original predictor values into a set of effect coding vectors (as one would do in ANOVA interaction testing) gives one the best chance of showing an interaction when it indeed exists.
Example

Consider the situation displayed in Table 10.5. This table crosses the two predictors Cognitive Complexity, CC, and Educational Level, EL. CC is split into the three levels of low, medium, and high. EL is split into the three levels of no high school diploma, high school diploma, and higher. Both variables, Cognitive Complexity and Educational Level, are predictive of recall performance. The product of CC and EL forms a bivariate distribution. An ANOVA-like interaction would exist if for some combination of ranges of these two variables a higher (or lower) value on the criterion variable were measured than one would expect if there were a uniform, monotonic relationship. In the example, consider the range of subjects with high Cognitive Complexity and higher than high school education (lower right-hand cell in Table 10.5). If for these subjects recall rates are higher (or lower) than one would expect, then there exists an ANOVA-like interaction. An interaction term in regression would have to explain this unique portion of variance.
Table 10.5: Cross-Classification of the Categorized Variables Cognitive Complexity (CC) and Educational Level (EL)

             No high school   High school   Higher
Low CC
Medium CC
High CC
This interaction (and the one in the top left cell) occurs at the extreme of the bivariate distribution. It could, however, have occurred in other cells as well. Interactions can occur anywhere in the joint distribution of two predictors. If interactions occur in cells other than the extreme ones, they are said to occur within the interior of the product distribution. Table 10.5 displays possible locations. In any case, one can describe interactions by the portion of variance of a particular combination of predictor values that is not accounted for by the main effect regression terms. For the following considerations suppose that each of the variables that were crossed to form Table 10.5 has categories with values 1, 2, and 3. Table 10.6 displays the values that the multiplicative term of the two predictors, Educational Level and Cognitive Complexity, assumes for each cell. The values in the cells of Table 10.6 result from multiplying the values of the variable categories with each other. Table 10.6 illustrates two important points:

• Element-wise multiplication of predictor values increases the range of values. In the example the range increases from 1-3 to 1-9. As a result, the variance of the multiplicative variable typically is greater than the variance of its constituents.
• Cases that differ in predictor patterns can be indistinguishable in the value of the product variable. If product terms are created as was done in Table 10.6, cases in cells with inverted indexes have the same product variable value. In the example of Table 10.6, this applies to the cases in cell pairs 12 & 21, 13 & 31, and 23 & 32.
Table 10.6: Cross-Classification of the Categorized Variables Cognitive Complexity and Educational Level: Values of Product Term

                No high school (1)   High school (2)   Higher (3)
Low CC (1)      1                    2                 3
Medium CC (2)   2                    4                 6
High CC (3)     3                    6                 9
From this second characteristic of product variables a problem results: redundancy. Cells in the off-diagonal are redundant in the way described. Cells in the diagonal can be unique. The extreme cells at both ends of the diagonal are always unique. Diagonal cells between the extremes are not necessarily unique in their values.⁶ Because of uniqueness and leverage, cells with unique values tend to have particular influence on the size of the regression coefficient. Contributions made by interactions in the interior of the product distribution will be relatively smaller, that is, harder to detect.

Centering has often been proposed as a means to reduce problems with the interaction of multiplicative variables. Table 10.7 shows the values of the product term after centering the main effect variables. The values in the cells of Table 10.7 result from performing two steps. The first is centering the main effect variables. The second is creating the multiplicative variable's values by element-wise multiplying the centered main effect variables. Table 10.7 illustrates two characteristics of multiplicative interaction variables that are created from centered main effect variables:

• The rank order of the variable category values changes;
• The redundancies are more severe than before: there are only three different values in the product term, and five cells have the value 0.

Performing a regression analysis that simultaneously involves centered main effect variables and the resulting multiplicative interaction term typically yields the following results:

• The parameter estimates and the significance values for the centered and the noncentered multiplicative interaction variable will be the same;
• The parameters for the centered main effect variables tend to be larger.

⁶Readers are invited to demonstrate this using a 4 × 4 table with variable values ranging from 1 to 4. Only three of the four diagonal cells will have unique values.
Table 10.7: Cross-Classification of the Categorized Variables Cognitive Complexity and Educational Level: Values of Product Term After Centering

                 No high school (-1)   High school (0)   Higher (1)
Low CC (-1)       1                     0                -1
Medium CC (0)     0                     0                 0
High CC (1)      -1                     0                 1
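The cell values of Tables 10.6 and 10.7, and the two redundancy observations, can be reproduced directly:

```python
import numpy as np

raw = np.array([1, 2, 3])          # category values of both predictors
centered = raw - raw.mean()        # -1, 0, 1 after centering

raw_products = np.multiply.outer(raw, raw)                 # Table 10.6
centered_products = np.multiply.outer(centered, centered)  # Table 10.7

print(np.unique(raw_products))        # [1 2 3 4 6 9]: range grows to 1-9
print(np.unique(centered_products))   # three distinct values: -1, 0, 1
print((centered_products == 0).sum()) # 5: five cells collapse to zero
```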
Repeatedly proposed (Afifi & Clark, 1990; Cronbach, 1987; Rovine & von Eye, 1996), the partial interaction strategy models that part of the residual that is systematically related to the independent variables. The strategy involves the following three steps:

1. Categorizing each variable. This step can be performed in an exploratory or an explanatory fashion. The former searches for segments on both the dependent and the independent variable sides that allow one to establish (partial) relationships. This search may involve more than one set of splits. The latter derives splits from theory or earlier results. In either case, researchers should be aware that categorizing typically reduces the power to detect statistically significant relationships: the fewer the segments, the lower the power.

2. Creating a coding variable that identifies cases in cells. The typical coding variable assigns a 1 to each case in the cell assumed to be the location of a partial interaction and a -1 to each case in the other cells. Alternatively, specific assumptions can be modeled for particular cells.

3. Estimating regression parameters for the model that involves the residuals of the main effect regression model as criterion variable and the coding variables that specify the partial interactions as predictors.
Figure 10.6: Smooth response surface for the artificial data example of partial interaction.
Data Example

The following data example was created to illustrate the partial interaction strategy. The example involves the two predictors, P1 and P2, and a criterion, C, measured on eight cases. The analysis of the predictor-criterion relationship involves the following three steps:

Data Description. To obtain a first impression of the predictor-criterion relationship, we create a 3D representation of the data. Rather than plotting the raw scores, we smooth the response surface using the kernel estimator provided in SYSTAT's GRAPH module. The resulting rendering appears in Figure 10.6. The figure suggests that the criterion has two peaks. The first peak is located where P1 assumes low values and P2 assumes high values. This is the left peak in Figure 10.6. The second peak, on the right-hand side in Figure 10.6, is located where P1 is high and P2 is low. Between the two peaks, criterion values are small. Raw data appear in Table 10.8.

Estimation of a Main Effect Multiple Regression Model. This model
Table 10.8: Raw Data for Illustration of Partial Interaction Strategy

Case    C    P1   P2   I1   I2   Residual
1        5             1    0     7.35
2        6            -1    0    -4.75
3        3            -1    0    -4.90
4        4             1    0     0.97
5       33             0    1    -0.26
6       36             0   -1    -3.47
7       49             0   -1     2.64
8       55             0    1     2.43
involves only the two predictors P1 and P2. No interaction term is part of this model. For the data in Table 10.8 we estimate the following parameters:

C = 5.46 + 0.04 · P1 - 0.16 · P2 + Residual.

The linear response surface from this equation appears in Figure 10.7. The residuals from this equation are listed in the last column of Table 10.8. As is obvious from Figure 10.7, the linear regression model accounts for a large portion of the criterion variance (R² = 0.95). However, neither predictor has a significant slope parameter (t1 = 1.19, p1 = 0.29; t2 = -2.02, p2 = 0.10).

Specification of Partial Interaction Terms. The data were constructed so that there is a strong negative correlation between P1 and P2,⁷ and so that P1 and P2 each explain a large portion of the criterion variance. In addition, the data show nonlinear variation for values high in P1 and low in P2, and for values low in P1 and high in P2. The cutoff is after the fourth case in Table 10.8. Therefore, after extracting that portion of the criterion

⁷Readers are invited to determine whether there are multicollinearity problems in these data.
Figure 10.7: Multiple linear regression with no interaction terms for the data in Table 10.8.
variance that can be accounted for by the main effects of P1 and P2, we specified two interaction vectors, I1 and I2, each designed to explain nonlinear variation in the residuals. The first vector, I1, was designed to explain nonlinear variance for the P1-low and P2-high sector. The second vector, I2, was designed to explain nonlinear variance in the P1-high and P2-low sector. For the sake of simplicity, we designed both interaction vectors so that they represent quadratic trends. Both interaction vectors appear in Table 10.8. Regressing the residuals from the main effect model onto I1 and I2 yields the following equation:

Residual = 0.00 + 4.49 · I1 + 0.75 · I2 + Residual2.

The two interaction vectors explain R² = 0.66 of the variance of the residuals from the main effect model. Only the first of the interaction vectors has a significant regression slope (t = 3.05, p = 0.03). The second has a slope not different from zero (t = 0.51, p = 0.63).
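The residual regression above can be reproduced from the recovered columns of Table 10.8 (the residuals and the two interaction vectors):

```python
import numpy as np

# Residuals of the main effect model and the interaction vectors, Table 10.8
resid = np.array([7.35, -4.75, -4.90, 0.97, -0.26, -3.47, 2.64, 2.43])
i1 = np.array([1.0, -1.0, -1.0, 1.0, 0.0, 0.0, 0.0, 0.0])
i2 = np.array([0.0, 0.0, 0.0, 0.0, 1.0, -1.0, -1.0, 1.0])

# Regress the residuals onto I1 and I2 (with intercept)
X = np.column_stack([np.ones_like(i1), i1, i2])
b, *_ = np.linalg.lstsq(X, resid, rcond=None)

# Proportion of residual variance explained by the two vectors
fitted = X @ b
r2 = 1.0 - ((resid - fitted) ** 2).sum() / ((resid - resid.mean()) ** 2).sum()
print(b[1], b[2], round(r2, 2))  # slopes of about 4.49 and 0.75, R² of 0.66
```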
This example illustrates that custom-tailored specification of regression interaction terms allows one to explain substantial portions of variance that cannot be explained by the multiple regression main effect model. It is important to realize, however, that without guidance by theory, the search for interaction sectors can lead researchers into data fitting, that is, the description of sample-specific data characteristics. Such characteristics are often unique to samples. Therefore, unless theory dictates where and what type of interaction to expect, replications are strongly recommended before publishing results.
Chapter 11

Robust Regression
This chapter introduces readers to the concepts of robust regression. Specifically, after a brief description of the concept of robustness (Section 11.1) and after presenting ridge regression
(Section 11.2.1), Least Median of Squares (LMS) regression (Rousseeuw, 1984) (Section 11.2.2), and Least Trimmed Squares (LTS) regression (Section 11.2.3), we briefly describe M-estimators. Section
11.3 covers computational issues.
11.1 The Concept of Robustness
Robustness can be defined as "insensitivity to underlying assumptions - for example, about the shape of the distribution of measurements" (Hoaglin, Mosteller, & Tukey, 1983, p. 283). Similarly, Hartigan (1983, p. 119) calls a statistical procedure robust "if its behavior is not very sensitive to the assumptions which justify it." Most investigations of robustness focus on parametric or distributional assumptions. These are assumptions about a probability model for the observations under study and about a loss function connecting a statistical decision and an unknown parameter value.¹ Accordingly, a large portion of robustness investigations examine the robustness of statistical tests. A statistical test is considered robust if its "significance level ... and

¹Notice that in Bayesian statistics an additional prior distribution needs to be considered.
CHAPTER 11. ROBUST REGRESSION
power ... are insensitive to departures from the assumptions on which it is derived" (Ito, 1980, p. 199). Results of such investigations suggest, for example, that the F test used in regression
analysis and analysis of variance is remarkably robust against heterogeneity of variance and nonnormality, in particular when group sizes are equal, that is, in balanced designs. When groups differ
in size and variances differ across groups, robustness is clearly less pronounced (Ito, 1980). In a similar fashion, there have been investigations of the effects of other types of violations of
assumptions. For example, von Eye (1983) performed a simulation study on the effects of autocorrelation on the performance of the t test. Results suggested that positive autocorrelations lead to
inflated values of the t statistic, and negative autocorrelations lead to deflated values of the t statistic. These biases increase with the magnitude of correlation. More recent examples of
simulation studies concerning the robustness of estimators include the work by DeShon and Alexander (1996). The authors examined the following six tests of regression slope homogeneity: the F test, the χ² test, James's test, the normalized t test, the Welch-Aspin F* test, and Alexander's normalized t approximation. Various violations of the conditions for proper application of the standard F test were simulated, for instance, nonnormality of the dependent variable, Y, in one or both populations, heterogeneity of error variances, and nonorthogonal designs. Results suggest that none of the tests performs well under all conditions. However, when the ratios of Y variance to X variance are approximately equal, the χ² test seems to perform well. It has more power than the F test or any of the approximations. When the ratio of the largest group error variance to the smallest group error variance is smaller than 1.50, Alexander's normalized t approximation should be used (Alexander & Govern, 1994). Ito (1980) notes that, in general, robustness cannot be exhaustively investigated because there are more ways to violate assumptions than to satisfy them. Yet, it is important to know
effects of the most frequent and most important violations. One very important aspect of robustness has to do with outliers, specifically, leverage outliers. The question that arises in this context
concerns the extent to which estimation of regression parameters is affected by such outliers. It is well known that the OLS method is sensitive to such outliers. Consider the following example. A
newscaster tries to answer the
question whether the number of years a shooting guard has spent in the National Basketball Association allows one to predict the average number of points scored per game. The newscaster draws a random sample of n = 13 guards, at different points in their careers, from different cohorts. Table 11.1 shows the number of years a guard had spent in the league, and the average number of points scored in the last of these years. Regressing the number of points scored onto the number of years yields the following regression equation:

Points = 7.61 + 0.67 · Years + Residual.

The slope parameter estimate of this equation is not significant (t = 1.03; p = 0.33). Figure 11.1 shows the raw data and the thick regression line. The graph in Figure 11.1 suggests that the thick line is a poor representation of the relationship between career length and scoring performance. The main reason is that there is an outlier: one player, nine years in the league, had a scoring average of 29 points per game, far above the crowd. The thin regression line depicts the relationship between number of years in the league and points scored, under exclusion of the outlier. In contrast to the thick line, the slope of the thin line is negative, the regression equation being

Points = 11.61 - 0.55 · Years + Residual.

In addition, the relationship now is statistically significant (t = -2.46; p = 0.03). The comparison of these two results illustrates the effects an outlier can have on an OLS estimate of a regression slope. Estimators are robust
Table 11.1: Number of Years Spent in the Basketball League and Average Number of Points Scored in Last Year
Year Pts
Figure 11.1: Regression of number of points scored on number of years in the league before (thick line) and after (thin line) elimination of the outlier.
if they are insensitive to the presence of outliers or extreme values. This chapter focuses on approaches to creating robust estimates of regression slopes. The next section reviews sample models of robust linear regression. It includes the Least Median of Squares and the Least Trimmed Squares approaches in more detail, and a brief introduction to M-estimators.
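The sensitivity of the OLS slope to a single leverage outlier is easy to demonstrate. The numbers below are hypothetical (they are not the sample of Table 11.1): eleven players whose scoring declines by exactly half a point per year, plus one far-above-the-crowd case.

```python
import numpy as np

def ols_slope(x, y):
    """Slope of the simple OLS regression of y on x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()

# Hypothetical career data: an exact trend of -0.5 points per year
years = [1, 2, 3, 4, 5, 6, 7, 8, 10, 11, 12]
points = [12 - 0.5 * t for t in years]

slope_clean = ols_slope(years, points)                # exactly -0.5
slope_with = ols_slope(years + [9], points + [29.0])  # pulled sharply upward
print(slope_clean, slope_with)
```

A single high-scoring case is enough to flatten the estimated decline considerably; with a more extreme outlier the sign of the OLS slope can flip, as in the example in the text.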
11.2 Models of Robust Regression
This section provides a brief review of models for robust regression. Models are reviewed in two groups. The first includes ridge regression. The second includes so-called M-estimators for regression
parameters (Goodall, 1983; Hettmansperger & Sheather, 1992; Marazzi, 1980).
11.2.1 Ridge Regression

Ridge regression is typically discussed in the context of solving multicollinearity problems (e.g., Afifi & Clark, 1990). However, estimators from ridge regression tend to be robust. Therefore, we review ridge regression in the present context (see also Fennessey & D'Amico, 1980; Gruber, 1989).
To estimate parameters for ridge regression from standardized variables, one introduces a biasing constant c ≥ 0 as

bR = (R*xx)⁻¹ ryx,    (11.1)

with

R*xx = Rxx + cI,    (11.2)

or, more specifically,

         | 1+c   r12   ...   r1k |
R*xx =   | r21   1+c   ...   r2k |
         | ...   ...   ...   ... |
         | rk1   rk2   ...   1+c |

where Rxx is the matrix of predictor intercorrelations and ryx is the vector of correlations between the k predictors and the criterion.
The constant c represents the magnitude of bias introduced in ridge regression. When c = 0, the solution is OLS. When c > 0, there is a bias. However, solutions from c > 0 are usually more stable,
that is, robust, than OLS solutions. From an applied perspective most important is the determination of a value for the constant, c. The optimum value of c may be specific to data sets and,
therefore, needs to be found for each data set. One method to find the optimum value of c uses information from two sources: the Variance Inflation Factor (VIF; see Section 8.1) and the ridge trace.
The ridge trace is the curve of the values assumed by a regression slope estimate when c increases. Typically, after fluctuations, the estimate changes its value only slightly when c further
increases. Simultaneously, the VIF falls rapidly as c first moves away from 0, and changes only slightly when c increases further. The value of c finally chosen balances (1) the VIF, which must be
sufficiently small, (2) the slope coefficient(s) which must change only minimally when c is increased further, and (3) c itself, which, in order to avoid excessive bias, should be as small as
possible. Alternatively or in addition to the VIF, one inspects the mean squared error of the biased estimator, which is defined as the variance of the estimator plus the square of the bias.
The following two artificial data examples illustrate the use of the ridge trace (Afifi & Clark, 1990, pp. 240ff). Consider a criterion, Y, that is predicted from two independent variables, X1 and X2. Suppose all three variables are standardized. Then, the OLS estimator of the standardized regression coefficient for X1 is

b1 = (r1y - r12 r2y) / (1 - r12²),    (11.3)

and the OLS estimator of the standardized regression coefficient for X2 is

b2 = (r2y - r12 r1y) / (1 - r12²).    (11.4)

The ridge estimators for these two regression coefficients are

b1R = (r1y (1 + c) - r12 r2y) / ((1 + c)² - r12²)    (11.5)

and

b2R = (r2y (1 + c) - r12 r1y) / ((1 + c)² - r12²).    (11.6)
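Equations (11.5) and (11.6) are the closed-form solution of (Rxx + cI) b = ryx for two predictors; solving that system directly generalizes to any number of predictors. A sketch using the correlations of the first numerical example below (r12 = 0.8, r1y = 0.5, r2y = 0.1):

```python
import numpy as np

R_xx = np.array([[1.0, 0.8],
                 [0.8, 1.0]])   # predictor intercorrelations
r_yx = np.array([0.5, 0.1])     # predictor-criterion correlations

def ridge(c):
    """Ridge estimates b_R = (R_xx + cI)^(-1) r_yx for biasing constant c."""
    return np.linalg.solve(R_xx + c * np.eye(2), r_yx)

print(ridge(0.0))  # OLS solution: b1 = 1.17, b2 = -0.83
for c in (0.5, 1.0, 3.0, 5.0):
    print(c, ridge(c))  # the ridge trace: b1 shrinks, b2 rises toward zero
```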
For the next numerical example we proceed as follows: First, we specify values for the three variable intercorrelations. Second, we insert varying values of c into (11.5) and (11.6). For the variable intercorrelations we select the following values: r12 = 0.8, r1y = 0.5, and r2y = 0.1. For c we specify the range 0 ≤ c ≤ 8 and an increment of 0.1. Thus, we run 81 iterations. For each iteration we determine the values of the two regression coefficients, b1 and b2. The curve that depicts these values is the ridge trace. Figure 11.2 displays the ridge trace for the present correlations for the first 50 iterations. Before discussing the figure we insert the correlations into (11.3) and (11.4). We obtain the standard OLS estimates b1 = 1.17 and b2 = -0.83. These are also the first results in the iteration (c = 0). The figure suggests that with each increase of c the values of the regression coefficients change. Specifically, b2 increases and b1 decreases. In
Figure 11.2: Ridge traces for artificial data example.
this example, coefficient b2 assumes positive values after the 30th iteration. It keeps increasing until the 68th iteration. After that, it decreases again and both coefficients asymptotically
approach zero. When selecting a ridge constant one selects that value after which there is no longer any substantial change in the regression coefficients. In the present example, this seems to be
the case after c = 0.7. Notice that, even if the VIF and the mean squared error are also used for selection of c, the selection is still a matter of subjective decision making, because there are no
objective criteria that could be used to guide decisions. As a rule of thumb, one can say that values of c between 0 and 1 are often the most interesting and promising ones. Figure 11.2 presents a
graph that is typical of ridge traces. The shape of a trace, however, depends on the pattern of variable intercorrelations. This is exemplified in a second numerical example. This example uses the following correlations: r12 = 0.30, r1y = 0.25, and r2y = 0.10. The standardized regression coefficients for these correlations are b1 = 0.24 and b2 = 0.03. Figure 11.3 displays the ridge trace for the first 50 values of c, beginning with c = 0 and using an increment of 0.1. As in the first example, the two regression coefficients approach zero as c increases. After about c = 0.6, there are
no substantial changes in the coefficients when considering small to moderate changes in c. Thus,
11.2. MODELS OF ROBUST REGRESSION
Figure 11.3: Ridge traces for two positive slope coefficients.
we may select c = 0.6 as the ridge constant for this example. The following data example investigates two predictors, Breadth of Cognitive Complexity (CC1) and Depth of Cognitive Complexity (CC2), and a criterion, Text Recall (REC). In a sample of n = 66 adult females, these three variables correlated as shown in Table 11.2.²

Table 11.2: Intercorrelations of the Predictors, CC1 and CC2, and the Criterion, REC

        CC2      REC
CC1     0.066   -0.074
CC2              0.136
Table 11.2 suggests that the variable intercorrelations are low. Thus, we cannot expect to explain large portions of the criterion variance. The highest correlation is the one between the two predictors. Standard OLS

²The following calculations are performed using the correlation matrix method described in Section 11.3.1. This method estimates the slope parameters from a correlation matrix. Correlations do not contain information on the means of variables. Therefore, regression models from correlation matrices do not include the intercept parameter.
CHAPTER 11. ROBUST REGRESSION
Figure 11.4: Ridge traces for cognitive data.
regression of REC onto CC1 and CC2 explains no more than 2.5% of the variance, and the regression model fails to be statistically significant (F(2,64) = 0.83; p > 0.05). Accordingly, neither regression slope is significantly different from zero. The following OLS regression equation was calculated³:

REC = -0.08 · CC1 + 0.14 · CC2 + Residual.
To perform a ridge regression we first create a ridge trace. In this example, we use values for the constant that vary between 0 and 1 in steps of 0.1. Figure 11.4 displays the resulting ridge trace.
In addition to the ridge trace we consider the standard errors of the estimates. These standard errors are the same for the two predictors. Therefore, only one of them needs to be graphically displayed. Figure 11.5 contains the standard error for the 11 ridge regression runs.
³Notice that in the present context we estimate regression slopes that go through the origin (no intercept parameter is estimated). This procedure is put in context in the section on computational issues (Section 11.3).
Figure 11.5: Change in standard error of regression estimates in cognition data.
Figure 11.4 suggests that values greater than c = 0.4 do not lead to a substantial decrease in the parameter estimates. Figure 11.5 shows an almost linear decrease in standard error, with slightly bigger decreases for smaller c values than for larger c values. From these two sources we select a ridge constant of c = 0.4. The regression coefficients for this solution are b_R,CC1 = -0.057 and b_R,CC2 = 0.100. Both coefficients are smaller in absolute value than the original OLS coefficients, for which c = 0.
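A quick check (ours, in Python) confirms these values by inserting the Table 11.2 correlations into the ridge formulas (11.5) and (11.6); c = 0 reproduces the OLS slopes and c = 0.4 the selected ridge solution:

```python
# Table 11.2 correlations: r12 = r(CC1,CC2), r1y = r(CC1,REC), r2y = r(CC2,REC)
r12, r1y, r2y = 0.066, -0.074, 0.136

def ridge_pair(c):
    # Equations (11.5) and (11.6) for standardized two-predictor regression
    d = (1 + c) ** 2 - r12 ** 2
    return ((1 + c) * r1y - r12 * r2y) / d, ((1 + c) * r2y - r12 * r1y) / d

b1, b2 = ridge_pair(0.0)    # OLS slopes: about -0.08 and 0.14
b1r, b2r = ridge_pair(0.4)  # ridge slopes at c = 0.4: about -0.058 and 0.100
print(b1, b2, b1r, b2r)
```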
Problems with Ridge Regression

The following paragraphs review three problems with ridge regression. The first is the bias that comes with ridge regression solutions for c > 0. It is one of the virtues of ordinary least squares that its solutions are unbiased. Unless there are strong reasons why a (slightly) biased solution is preferable, this advantage should not be abandoned. Second, it has to be emphasized again that the decision for a constant, c, is a subjective one. There are still no methods for objectively determining a constant from a sample. The main argument supporting ridge regression is that ridge estimates will perform better than OLS estimates in the population or, more specifically, will provide better predictions in
the population. However, ridge regression estimates do not explain as much of the sample criterion variance as do OLS estimates. This can be illustrated using the previous example. As was indicated before, the standard OLS solution with CC1 and CC2 as predictors and REC as criterion has an R² of 0.025. The ridge regression solution only has an R² of 0.018. With increasing c, the portion of variance accounted for decreases even more. For instance, when c = 1, we calculate R² = 0.012. Third, there are no statistical significance tests for solutions from ridge regression. The tests printed out when simulating ridge regression using standard OLS regression programs must not be trusted (one reason why is explained in Section 11.3 on computational issues).
11.2.2 Least Median of Squares Regression
Introduced by Rousseeuw (1984), the method of Least Median of Squares (LMS) yields highly robust estimates of regression parameters (see also Rousseeuw & Leroy, 1987). Like the median, the LMS method has a breakdown point of 50%. That is, up to 50% of the data can be corrupt before the parameter estimates are substantially affected. To introduce readers to LMS regression we draw on the concepts of OLS regression. We start from the regression model

$$ y = X\beta + e, $$

where y is the vector of observed values, X is the matrix of predictor values (the design matrix), β is the vector of parameters, and e is the residual vector. To obtain an estimate b of β, OLS minimizes the sum of squared residuals,

$$ (y - Xb)'(y - Xb) \rightarrow \min, \qquad (11.7) $$

or, in other terms,

$$ \sum_i e_i^2 \rightarrow \min. $$
In contrast, Rousseeuw (1984) proposes minimizing the median of the
squared residuals rather than their sum,

$$ \mathrm{md}(e_i^2) \rightarrow \min, \qquad (11.8) $$

where md(e_i²) is the median of the squared residuals. Solutions for (11.7) are well known (see Section 3.1). These are closed-form solutions; that is, they can be calculated by the one-time application of a set of formulas. In contrast, there is no closed form that can be used to solve (11.8). Therefore, iterative procedures have been devised that typically proceed as follows (Rousseeuw & Leroy, 1987):

1. Select a subsample of size n_j and calculate the median of the squared residuals along with the regression parameter estimates; save the median and the parameter estimates;
2. Repeat step
(1) until the smallest median has been found. The following example uses data from the statistical software package S-Plus (Venables & Ripley, 1994). The data were first published by Atkinson (1986). The data describe the three variables Distance, Climb, and Time from Scottish hill races. Distance is the length of a race, Climb is the height difference that must be covered in a race (in feet), and Time is the record time for a race. In this example we regress Time on Climb. In order to be able to compare LMS regression with OLS regression we first perform a standard OLS regression. We obtain the regression equation

Time = 12.70 + 0.025 · Climb + Residual.

The slope estimate is significant (t = 7.80; p < 0.01). Figure 11.6 displays the raw data and the OLS regression line. The figure
suggests that, although there clearly is a trend that higher climbs require longer times to complete the race, there are exceptions. Some of the shorter races seem to take longer to complete than one
would predict from the regression relationship. In particular, one of the races that covers a climb of about 2000 feet is far more time consuming than one would expect. One may suspect that this data
point (and a few others) exert undue leverage on the regression slope. Therefore, we perform, in a second run,
Figure 11.6: OLS and two robust methods of regression solutions for hill data.

LMS regression. The slope estimated by LMS regression should not be affected by the presence of outliers. Up to 50% of the data points may be outliers before the LMS regression slope is affected. The LMS regression equation is

Time = 16.863 + 0.015 · Climb + Residual.

Obviously, the slope of the LMS regression line is less steep than the slope of the OLS regression line. There is no significance test for LMS regression. However, the program (see Chapter 5) identifies outliers. Six of the 35 race track data points are outliers. Figure 11.6 also presents the LMS regression line. This line is, in the present example, virtually identical to the LTS regression line (to be explained in the next section). Recently,
LMS regression has met with criticism (Hettmansperger & Sheather, 1992). The reason for this criticism is that whereas LMS regression is clearly one of the most robust methods available when it comes
to not being affected by bad outliers, it seems to be overly sensitive to bad inliers. These are data points that lie within the data cloud. They look unsuspicious and are hard to diagnose as problematic data points. Yet, they have the leverage to change the slope of LMS regression. Hettmansperger and Sheather (1992) present an artificial data set in which moving one of the bad inlier data points by a very small amount results in a change of the LMS regression slope by 90°.

11.2.3 Least Trimmed Squares Regression
In response to these criticisms, Venables and Ripley (1994) and Rousseeuw and Leroy (1987) recommend using Least Trimmed Squares (LTS) regression. This approach estimates parameters after trimming, that is, excluding observations at both tails of the residual distribution. The number of observations to be excluded is controlled by choosing q in the following criterion:

$$ \sum_{i=1}^{q} (e^2)_{i:n} \rightarrow \min, $$

where $(e^2)_{1:n} \le \cdots \le (e^2)_{n:n}$ are the ordered squared residuals.
LTS regression is more efficient than LMS regression. In addition, it has the same extreme resistance, that is, the same high breakdown point. There has been a discussion as to what value is best specified for q, that is, the number of observations to be included. Most frequently, one finds the following two definitions:

$$ q = \frac{n}{2} + 1 $$

and

$$ q = \left\lfloor \frac{n}{2} \right\rfloor + \left\lfloor \frac{p+1}{2} \right\rfloor. $$

In the examples in this book we use the first definition of q. The regression line estimated using LTS regression is

Time = 16.863 + 0.015 · Climb + Residual.

As for LMS regression, there is no significance test except for jackknife procedures. Figure 11.6 also displays the LTS regression line. Both the formula
and the figure suggest that the LMS and the LTS regression estimates are very close to each other. However, it still needs to be investigated in detail whether LTS regression is generally better than LMS regression, in particular with regard to bad inliers.
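The two criteria can be sketched with a crude random-subsampling search. The following Python illustration (ours, with invented data; it is not the S-Plus code used later in this chapter) fits lines through random pairs of points and keeps the one that minimizes the LMS or LTS criterion; the OLS slope is shown for contrast:

```python
import numpy as np

rng = np.random.default_rng(1)

# Clean linear data y = 2 + 0.5*x plus six gross outliers at the upper end
n = 30
x = np.linspace(0.0, 10.0, n)
y = 2.0 + 0.5 * x + rng.normal(0.0, 0.1, n)
y[-6:] += 20.0

def elemental_fit(x, y, criterion, n_sub=2000):
    """Minimize `criterion` over the squared residuals of lines through
    randomly chosen pairs of observations (a crude subsampling search)."""
    best_val, best = np.inf, None
    for _ in range(n_sub):
        i, j = rng.choice(len(x), size=2, replace=False)
        b = (y[j] - y[i]) / (x[j] - x[i])
        a = y[i] - b * x[i]
        val = criterion((y - a - b * x) ** 2)
        if val < best_val:
            best_val, best = val, (a, b)
    return best

q = n // 2 + 1                       # first definition of q in the text
a_lms, b_lms = elemental_fit(x, y, np.median)                          # LMS
a_lts, b_lts = elemental_fit(x, y, lambda e2: np.sort(e2)[:q].sum())   # LTS
b_ols = np.polyfit(x, y, 1)[0]                                         # OLS

print(b_ols, b_lms, b_lts)  # the OLS slope is dragged toward the outliers;
                            # the LMS and LTS slopes stay near 0.5
```

With 20% contamination, both robust slopes remain close to the true value while OLS does not, which is the breakdown behavior described above.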
When discussing M-estimators of location, Goodall (1983) defines the M-estimate $T_n(x_1, \ldots, x_n)$ for $t$, given some function $\rho(x; t)$ and sample $x_1, \ldots, x_n$, as that value of $t$ that minimizes the objective function

$$ \sum_{i=1}^{n} \rho(x_i; t), \qquad (11.9) $$

where the $x_i$ are the observed values and $t$ is the location estimate. The characteristics of $\rho$ determine the properties of the estimator. Let the first derivative of the objective function be called $\psi$. Then, the minimum of the function satisfies

$$ \sum_{i=1}^{n} \psi(x_i; t) = 0. $$

The best known M-estimate is the sample mean. For this estimate of location, estimated by least squares, $\rho$ is the square of the residual,

$$ \rho(x; t) = (x - t)^2. $$

The expression

$$ t = \frac{1}{n} \sum_i x_i $$
gives the minimum of (11.9). It is the sample mean. The ψ function of a robust M-estimator has a number of very desirable properties (Goodall, 1983), the most important of which is that the breakdown point of the estimator is very large. Here, the breakdown point of an estimator is the smallest portion of the data that can be replaced by arbitrary values in order to cause the estimator to move by an arbitrary amount.
Thus, the breakdown point is a measure of an estimator's robustness. The higher the breakdown point, the more robust an estimator is. The highest possible breakdown point is 50%. If more than 50% of the data are contaminated, one may wonder whether the remaining portion, which is less than 50%, is the contaminated one. The median is an M-estimator with a breakdown point of 50% for the location problem. The approach of Marazzi (1980, 1993) to describing M-estimators in regression is based on the standard regression model,

$$ y = X\beta + e, $$

where X is the design matrix and the residuals are normally distributed with expectation 0 and variance σ². Marazzi solves the following system of equations for β and σ:

$$ \sum_{i=1}^{n} \psi\!\left(\frac{r_i}{\sigma w_i}\right) w_i x_{ij} = 0, \quad \text{for } j = 1, \ldots, p, $$

$$ \sum_{i=1}^{n} \chi\!\left(\frac{r_i}{\sigma w_i}\right) w_i^2 = \text{constant}, $$

where $r_i$ is defined as

$$ r_i = y_i - x_i \beta, $$

and $x_i$ is the ith row of the design matrix X. χ and ψ are user-defined functions, the $w_i$ are given weights, and the constant is a given number. Depending on what values are chosen for the function parameters, special cases result. For example, for constant = 1 one obtains the approach of Huber (1981).
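A minimal sketch of an estimator in this family, simplified to the location problem with Huber's ψ, unit weights, and the scale fixed at the MAD, can be solved by iteratively reweighted means. This Python illustration is ours (it is not Marazzi's full regression system, and the tuning constant k = 1.345 is the conventional Huber choice, not a value from the text):

```python
import numpy as np

def huber_location(x, k=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimate of location: psi(u) = u for |u| <= k, k*sign(u) otherwise.
    Solved by iteratively reweighted means; scale fixed at the MAD."""
    x = np.asarray(x, dtype=float)
    s = 1.4826 * np.median(np.abs(x - np.median(x)))  # robust scale estimate
    t = np.median(x)                                  # robust starting value
    for _ in range(max_iter):
        u = np.abs(x - t) / s
        w = np.minimum(1.0, k / np.maximum(u, 1e-12))  # Huber weights
        t_new = np.sum(w * x) / np.sum(w)
        if abs(t_new - t) < tol:
            break
        t = t_new
    return t

data = np.array([1.0, 1.2, 0.8, 1.1, 0.9, 1.05, 50.0])  # one gross outlier
print(np.mean(data), huber_location(data))  # the mean is dragged to about 8;
                                            # the M-estimate stays near 1
```

Observations far from the current estimate receive downweighted influence, which is exactly the role of ψ in the system of equations above.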
11.3 Computational Issues
This chapter gives two computational examples of robust regression. The first example applies ridge regression. For the illustration of ridge regression we use SYSTAT for Windows, Release 5.02. The
second example illustrates M-estimators, specifically, LMS and LTS regression. It uses the statistical software package S-Plus (Venables & Ripley, 1994).
11.3.1 Ridge Regression
SYSTAT does not include a module for ridge regression. Therefore, we present two equivalent ways to estimate parameters in the absence of such a module. The first involves translating the procedures outlined by Afifi and Clark (1990) for the BMDP statistical software package⁴ into the SYSTAT environment. We term this method the Dummy Observation Method. The second option works with the correlation matrix of predictors and criteria. We term it the Correlation Matrix Method.
Afifi and Clark's Dummy Observation Method

The Dummy Observation Method can be employed for estimating ridge regression parameters using practically any computer program that performs ordinary least squares regression. The only requirement is that the program use raw data. To introduce the method, consider a multiple regression problem with p ≥ 1 predictors, X1, X2, ..., Xp, and criterion Y. The Dummy Observation Method involves the following two steps:

1. Standardization of All Variables. As a result, all variables have a mean of 0 and a standard deviation of 1; no intercept needs to be estimated.

2. Addition of Dummy Observations. For each of the p predictors, one dummy observation is appended to the standardized raw data. These observations are specified as follows: Y = 0, and the kth predictor of the jth dummy observation assumes the value √(c(n − 1)) if k = j, and 0 otherwise, where c is the ridge constant. In other words, the appended dummy observations are all zero for Y. For the p predictors, one appends a p × p matrix with (c(n − 1))^(1/2) in the main diagonal and 0 in the off-diagonal cells.

⁴A number of statistical software packages do provide modules for ridge regression. Examples include the BMDP4R and the SAS RIDGEREG modules.
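That these two steps reproduce ridge estimates can be verified with any OLS routine. The following Python sketch (our own check on simulated data, not part of the text) compares OLS without intercept on the augmented data against the direct ridge solution on the correlation scale:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, c = 40, 2, 0.3

# Arbitrary raw data, then z-standardized as in step 1 of the method
X = rng.normal(size=(n, p))
y = X @ np.array([0.5, -0.2]) + rng.normal(size=n)
Xs = (X - X.mean(0)) / X.std(0, ddof=1)
ys = (y - y.mean()) / y.std(ddof=1)

# Step 2: append p dummy rows, sqrt(c*(n-1)) on the diagonal, zero criterion
X_aug = np.vstack([Xs, np.sqrt(c * (n - 1)) * np.eye(p)])
y_aug = np.concatenate([ys, np.zeros(p)])

# OLS without intercept on the augmented data ...
b_dummy = np.linalg.lstsq(X_aug, y_aug, rcond=None)[0]

# ... equals the ridge solution (Rxx + c*I)^(-1) rxy on the correlation scale
Rxx = Xs.T @ Xs / (n - 1)
rxy = Xs.T @ ys / (n - 1)
b_ridge = np.linalg.solve(Rxx + c * np.eye(p), rxy)

print(np.allclose(b_dummy, b_ridge))  # True
```

The equivalence holds because the dummy rows add exactly c(n − 1) to the diagonal of X′X, which is (n − 1) times the ridge penalty on the correlation matrix.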
11.3. COMPUTATIONAL ISSUES
For an illustration consider a data set with the two predictors X1 and X2, criterion Y, and n subjects. After appending the p = 2 dummy observations, the regression equation appears as follows:

$$ \begin{pmatrix} y_1 \\ \vdots \\ y_n \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} x_{11} & x_{12} \\ \vdots & \vdots \\ x_{n1} & x_{n2} \\ \sqrt{c(n-1)} & 0 \\ 0 & \sqrt{c(n-1)} \end{pmatrix} \begin{pmatrix} b_1 \\ b_2 \end{pmatrix} + \begin{pmatrix} e_1 \\ \vdots \\ e_n \\ e_{n+1} \\ e_{n+2} \end{pmatrix} $$
Obviously, this equation involves p observations more than the original regression equation. Thus, estimating regression parameters from this approach artificially inflates the sample size by p. This is one reason why significance tests for ridge regression from this approach must not be trusted. Another reason is that appending dummy observations leads to predictors and criteria with nonmeasured values. Including these values distorts estimates of the portion of variance accounted for. The estimates from this approach are automatically the ridge regression parameter estimates. A ridge trace can be created by iterating through a series of constants, c (see Figure 11.4). The following sections illustrate this variant of ridge regression using the MGLH regression module in SYSTAT. We use the cognitive complexity data already employed for the example in Figure 11.4. The illustration involves three steps. In the first we append the dummy observations. In the second step we estimate regression slope parameters. In the third step we iterate using a series of constants, c. For the first step we issue the following commands. The sample size is n = 66. The value to be inserted in the main diagonal of the dummy observation predictor matrix is (c(66 − 1))^(1/2). For c = 0.1 this value is 2.55; for c = 1, this value is 8.062. Since we have two predictors, we have to append two dummy observations.
1. Use Memsort (CC1, CC2, REC): Reads the variables CC1, CC2, and REC from the file "Memsort.sys"; SYSTAT presents the list of variables on screen.
2. Click Window, Worksheet: The raw data are pulled into a window on the screen.
3. Click Editor, Standardize: Prepares the standardization of variables.
4. Highlight CC1, CC2, and REC; click Add each time a variable is highlighted: Specifies the variables to be standardized.
5. Click Ok: SYSTAT responds by asking for the name of a file where it saves the standardized variables.
6. Type "CCRECS": Specifies the file name; the data will be saved in file "CCRECS.SYS".
7. Click Ok: Standardizes and saves the selected variables.
8. Back in the Worksheet, hit the END key: Carries us one field past the last data entry.
9. Enter "0" for REC, "2.55" for CC1, and "0" for CC2: Specifies the values for the first dummy observation.
10. Enter "0" for REC, "0" for CC1, and "2.55" for CC2: Specifies the values for the second dummy observation.
11. Click File, Save: Saves the data with the appended dummy observations in file "CCRECS.SYS".
12. Click File, Close: Concludes data editing; carries us back to the SYSTAT command mode window.
After these operations we have a data file that differs from a conventional raw data file in two respects. First, the variables that we use in the ridge regression runs are standardized. Second, there are p = 2 appended dummy observations that are needed in order to obtain ridge regression parameter estimates from the standard OLS regression program. The dummy observations must not be part of the standardization. Therefore, standardization must be performed before appending the dummy observations. Using this new data set we now can
estimate a first set of ridge regression parameters. We do this with the following commands:

1. Use ccrecs: Reads file "CCRECS.SYS".
2. Click Stats, MGLH, Regression: Initiates the OLS regression module.
3. Highlight CC1 and CC2 and assign them to Independent; highlight REC and assign it to Dependent: Specifies the predictors and the criterion for the regression.
4. Click Include Constant: Results in a regression model with no constant (disinvokes the inclusion of the constant, which is the default).
5. Click Ok: Concludes the model specification, performs the estimation, and presents the results on screen.
The following output displays the results for this run:

>SELECT (sex=2) AND (age>25)
>MODEL REC = CC1+CC2
>ESTIMATE
Model contains no constant
Dep Var: REC   N: 67   Multiple R: 0.127   Squared multiple R: 0.016
Adjusted squared multiple R: 0.001   Standard error of estimate: 1.050
Effect   Coefficient   Std Error   t       P(2 Tail)
CC1      0.062         0.125       0.490   0.626
CC2      0.101         0.120       0.839   0.405
The output suggests that the sample size is n = 67. We know that this is incorrect, because we added two dummy observations. The correct sample size is 65. Therefore, with the exception of the slope parameter estimates, none of the statistical measures and tests provided in the output can be trusted. The regression equation is

REC = 0.062 · CC1 + 0.101 · CC2 + Residual.

In order to create the information for the ridge trace, we repeat this run for a total of 10 times, with increasing c. Table 11.3 contains the values that must be inserted as values for the dummy observations for 0 < c ≤ 1 in increments of c = 0.1 and a sample size of n = 65. The ridge trace for this example appears in Figure 11.4.
The Correlation Matrix Method

The Correlation Matrix Method uses the correlation matrix given in Table 11.2, that is, the correlation matrix that includes the biasing constant, c, as the summand for the diagonal elements of the predictor intercorrelation matrix (see Equation (11.2)). In analogy to the Dummy Observation Method, the Correlation Matrix Method performs the following two steps:

1. Calculation of the correlation matrix of all variables, which includes the predictor variables and the criterion variable (see below).
2. Addition of the ridge constant to each of the diagonal elements of R.
Table 11.3: Values for Dummy Observations for n = 65 and 0 < c ≤ 1

c     (65c)^(1/2)
0.1 2.5
0.2 3.6
0.3 4.4
0.4 5.1
0.5 5.7
0.6 6.2
0.7 6.7
0.8 7.2
0.9 7.6
1.0 8.0
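The entries of the table can be recomputed directly (the table rounds the square roots to one decimal place); a short Python check (ours):

```python
import math

# Values appended to the diagonal of the dummy-observation block: sqrt(65c)
for i in range(1, 11):
    c = i / 10
    print(f"c = {c:.1f}   sqrt(65c) = {math.sqrt(65 * c):.2f}")
```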
Unlike in the Dummy Observation Method, the ridge constant is added as is. No transformation is necessary. The following sections illustrate this variant of ridge regression using the CORRelation and the MGLH modules in SYSTAT. We use the same data as in the last section, that is, the data that relate two measures of cognitive complexity, CC1 and CC2, to prose recall, REC, in a sample of n = 65 adult females. The illustration involves the following steps: First, we estimate the predictor × predictor correlation matrix and the predictor × criterion intercorrelation vector. Second, we estimate parameters for OLS and ridge regression. First, we create and store the matrix of variable intercorrelations. We create the complete correlation matrix of the variables CC1, CC2, and REC, because the last row of the lower triangle of this matrix contains the vector of predictor-criterion intercorrelations. The following commands yield the correlation matrix:

1. Use Memsort (CC1, CC2, REC): Reads the variables CC1, CC2, and REC from the file "Memsort.sys"; SYSTAT presents the list of variables on screen.
2. Click Stats, Correlation, Pearson: Selects the Pearson correlation from the correlation module.
3. Highlight all three variables, click Add: Includes all three variables in the computation of the correlation matrix.
4. Click the Save File square and Ok: Indicates that the correlation matrix is to be stored; SYSTAT requests the name of a file.
5. Type "CORRCC": Specifies the file name for the correlation matrix.
6. Click Save: Performs the correlations; saves the correlation matrix in the file with double precision; presents the correlation matrix on screen; and indicates that the correlation matrix has been saved.
The following output reproduces the result of these operations:

>PEARSON CC1 CC2 REC
Pearson correlation matrix
        CC1     CC2     REC
CC1     1.000
CC2     0.132   1.000
REC     0.077   0.120   1.000
Number of observations:
To make sure the same results can be achieved from the correlation matrix as from the raw data, we issue the following commands:

1. Use Corrcc: Reads the triangular file "Corrcc.sys", which contains the correlation matrix of the variables CC1, CC2, and REC.
2. Click Stats, MGLH, and Regression: Opens the window for the specification of the regression model.
3. Assign CC1 and CC2 to Independent and REC to Dependent: Specifies which variables are independent and which is dependent.
4. Type "66" in the number-of-cases rectangle: Specifies the number of cases (required when analyzing correlation matrices).
5. Click Ok: Starts the regression run and presents the results on screen; notice that the program automatically calculates a model with no intercept when processing correlation matrices.
The following output reproduces the result of this regression run:

>REGRESS
>MODEL REC = CC1+CC2 /N=65
>ESTIMATE
Dep Var: REC   N: 65   Multiple R: 0.135   Squared multiple R: 0.018
Adjusted squared multiple R: 0.0   Standard error of estimate: 0.999

Effect   Coefficient   Std Error   t       P(2 Tail)
CC1      0.062         0.127       0.488   0.628
CC2      0.112         0.127       0.880   0.382

Analysis of Variance
Source       SS       df    MS
Regression   1.144    2     0.572
Residual     61.856   62    0.998
For reasons of comparison we also present the output from a standard regression run, that is, a run that uses raw data and includes an intercept term. Results from this run appear in the following output.

>SELECT (SEX=2) AND (age>25)
>MODEL REC = CONSTANT+CC1+CC2
>ESTIMATE
Dep Var: REC   N: 65   Multiple R: 0.135   Squared multiple R: 0.018
Adjusted squared multiple R: 0.0   Standard error of estimate: 41.805

Effect     Coefficient   Std Error   t       P(2 Tail)
CONSTANT   51.952        21.341      2.434   0.018
CC1        0.431         0.885       0.488   0.628
CC2        0.432         0.491       0.880   0.382

Analysis of Variance
Source       SS           df   MS         F       P
Regression   2003.206     2    1001.603   0.573   0.567
Residual     108355.779   62   1747.674
The comparison of the results from the two runs suggests that the approaches are equivalent. They result in the same R² = 0.018, the same standardized coefficients for the slope parameters, the same tolerance values, the same t values for the two slope parameters, and the same tail probabilities. Only the unstandardized coefficients, their standard errors, and the standard error of estimate are different, and so are the sum-of-squares values. The ANOVA results have the same degrees of freedom, F values, and tail probabilities. The next step involves inserting the ridge constant into the main diagonal of the correlation matrix. This can be performed using the following commands:
1. Use Corrcc: Reads the correlation matrix of the variables CC1, CC2, and REC from the file "Corrcc.sys".
2. Click Window, Worksheet: The correlation matrix is pulled into a window on the screen.
3. Replace the first diagonal element, which currently is equal to 1, with 1.1; proceed until all diagonal elements are replaced: Substitutes the correlations in the diagonal of the correlation matrix by 1 + ridge constant (see Table 11.2).
4. Click File, Save: The correlation matrix with the altered entries in the main diagonal is saved in file "CORRCC.SYS".
Using the altered correlation matrix we now can estimate the first set of ridge regression parameters. The ridge constant is c = 0.1. Parameters are estimated using the following commands:

1. Use Corrcc: Reads the correlation matrix of the variables CC1, CC2, and REC, in which the diagonal elements now contain the ridge constant.
2. Click Stats, MGLH, and Regression: Selects the regression module from MGLH.
3. Assign CC1 and CC2 to Independent and REC to Dependent: Specifies which variables are independent and which is dependent.
4. Type "66" in the rectangle for the number of cases: Specifies the sample size.
5. Click Ok: Starts the estimation of the ridge regression parameters (notice again that there is no provision for an intercept); the results of the regression run appear on screen.
The following output displays the results of this regression run:

>MODEL REC = CC1+CC2 /N=65
>ESTIMATE
Dep Var: REC   N: 65   Multiple R: 0.123   Squared multiple R: 0.015
Adjusted squared multiple R: 0.0   Standard error of estimate: 1.049

Effect   Coefficient   Std Error   t       P(2 Tail)
CC1      0.057         0.127       0.452   0.653
CC2      0.102         0.127       0.805   0.424

Analysis of Variance
Source       SS       df    MS
Regression   1.049    2     0.525
Residual     68.251   62    1.101
This output gives the results of the estimation of ridge regression parameters for a ridge constant of c = 0.1. These results are equivalent to the ones in the first output, for which we used Afifi and Clark's Dummy Observation Method. Specifically, the coefficients are virtually the same. All other results differ because, in order to obtain a correct parameter estimate from the Dummy Observation Method, we had to artificially increase the sample size. As a consequence, the R² and all significance tests, including the ANOVA, cannot be interpreted in the first output. It still needs to be determined whether they can be interpreted here. Using the Correlation Matrix Method one can also create a series of estimates and chart a ridge trace. All that needs to be done is to replace the diagonal elements in the correlation matrix. Readers are invited to perform these replacements using ridge constants 0.1 ≤ c ≤ 1 and to draw their own ridge trace for this example.
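The Correlation Matrix Method can also be sketched outside SYSTAT. The following Python fragment (ours) adds c to the diagonal of the predictor intercorrelation matrix and solves for the slopes, using the correlations from the Pearson output above:

```python
import numpy as np

# Predictor intercorrelation matrix and predictor-criterion vector taken from
# the Pearson output: r(CC1,CC2) = 0.132, r(CC1,REC) = 0.077, r(CC2,REC) = 0.120
Rxx = np.array([[1.0, 0.132],
                [0.132, 1.0]])
rxy = np.array([0.077, 0.120])

for c in np.arange(0.0, 1.05, 0.1):
    b = np.linalg.solve(Rxx + c * np.eye(2), rxy)   # (Rxx + c*I)^(-1) rxy
    print(f"c = {c:.1f}   b_CC1 = {b[0]:.3f}   b_CC2 = {b[1]:.3f}")
# c = 0.0 reproduces the OLS slopes (about 0.062 and 0.112); c = 0.1 gives,
# up to rounding, the ridge estimates in the output above (about 0.058 and 0.102)
```

Collecting the slopes over the grid of c values yields exactly the ridge trace the reader is invited to draw.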
11.3.2 Least Median of Squares and Least Trimmed Squares Regression

To illustrate the use of LMS and LTS regression we use the software package S-Plus (Venables & Ripley, 1994). SYSTAT does not contain a module that would easily lead to solutions
for these robust regression models. S-Plus is an object-oriented system that provides a wide array of modules for robust estimation. Two ways of applying LMS and LTS regression using S-Plus will be illustrated. First, we show how to estimate and print regression parameters. Second, we show how to estimate regression parameters and simultaneously draw the figure displayed in Figure 11.6. To estimate regression parameters we issue the following commands. Each command is typed at the ">" prompt and is followed by striking the ENTER key. It is important to note that S-Plus distinguishes between upper- and lowercase characters. Therefore, we use capitals only when needed.
1. library(MASS): Invokes the library MASS, which contains the data file hills that we are using.
2. attach(hills): Makes the variables in hills available by name.
3. fm1 <- summary(lm(climb ~ time)): Estimates the OLS regression; writes the results to fm1. Notice the difference in variable order: the lm module expects a model formula with the two variables separated by a tilde, whereas the lmsreg and ltsreg modules expect the two variables separated by a comma.
4. fm2 <- lmsreg(time, climb): Estimates the LMS regression parameters; writes the results to fm2.
5. fm3 <- ltsreg(time, climb): Estimates the LTS regression parameters; writes the results to fm3.
6. fm1, then Click File, Print (repeat as needed): Sends the contents of fm1 to the screen (the screen is Windows' Notepad; the Print command sends the screen content to the printer, page by page).
7. fm2, then Click File, Print (repeat as needed): Sends the contents of fm2 to the screen and then to the printer.
8. fm3, then Click File, Print (repeat as needed): Sends the contents of fm3 to the screen and then to the printer.
The following output displays the content of "FM1":

Call: lm(formula = climb ~ time)

Residuals:
   Min     1Q Median    3Q  Max
 -3227 -438.9 -76.41 549.6 1863

Coefficients:
               Value Std. Error t value Pr(>|t|)
(Intercept) 307.3712   253.9620  1.2103   0.2348
       time  26.0549     3.3398  7.8012   0.0000

Residual standard error: 974.5 on 33 degrees of freedom
Multiple R-Squared: 0.6484
F-statistic: 60.86 on 1 and 33 degrees of freedom, the p-value is 5.452e-009

Correlation of Coefficients:
11.3. COMPUTATIONAL ISSUES
     (Intercept)
time     -0.7611
The following output displays parts of the content of "FM2":

$coef:
 Intercept          time
       650 -2.458929e-015

$resid:
 [1]  ...
[11] 1450 1350 1550 -150  850
[16] 2350 1550 -300  350  -50
[21] -350  850 1550  250  -50
[26] 1350  150  300 1100 -150
[31] 3750  -50 4550  200 4350

$wt:
 [1] 1 0 1 1 0 0 0 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 0 1 0

$rsquared:
[1] 0.51
The following output displays parts of the content of "FM3":

$coefficients:
 (Intercept)        x
    841.0253 -4.33677

$residuals:
 [1] -121.28 1868.66  204.91  156.73 2499.01 2342.50
 [7] 7546.35  116.69   87.99  -18.64 2094.53 1345.67
[13] 1640.86 -149.63  775.78 2472.31 1785.79 -149.94
[19]  234.51  -99.79 -471.85  779.97 1565.55  136.75
[25] -160.00 1272.67  108.30  232.86 1127.98 -250.17
[31] 3930.13 -100.59 5097.31  130.84 4852.13
$fitted.values:
 [1] 771.28 631.34 695.09 643.27 570.99 523.50 -46.35
 [8] 683.31 712.01 668.64   5.47 654.33 559.14 649.63
[15] 724.22 527.69 414.21 499.94 765.49 699.79 771.85
[22] 720.03 634.45 763.25 760.00 727.33 691.70 717.14
[29] 622.02 750.17 469.87 700.59 102.69 719.16 147.87
The first of these three outputs (FM1) presents the OLS results that we created using the LM module. The protocol first shows an overview of the residual distribution: it gives the smallest and the largest residual, and the three quartile points. In the center of the protocol there is the usual regression-ANOVA-type table. It shows parameter estimates, their standard errors, the t values (parameter estimates divided by their standard errors), and the two-sided tail probabilities for the t values. This is followed by information on the residual standard error, the multiple R², and the F statistic for the ANOVA around the regression line. The last piece of information provided is the matrix of parameter intercorrelations. In the present example, this matrix contains no more than one correlation, for there are only two parameter estimates. The second of these outputs (FM2) first presents information on the regression equation: it provides estimates of the intercept and the slope coefficients. What follows are the residuals for each of the races, presented in scale units of the dependent variable. Most interesting is the string of weights that is printed below the residuals. A "1" indicates that a case can be considered relatively problem-free. In contrast, a "0" suggests that a case may be an outlier. The present example seems to contain six outliers. In analogous fashion, the third output (FM3) first presents the parameter estimates. These are, as we indicated before, very close to the estimates provided by LMS regression. Residuals follow, also in units of the dependent scale. Because the parameters are so similar, the residuals are very similar also. The next block of information contains the fitted values. The following example uses the same data and the same regression models. The goal is to show how to create a plot as displayed in Figure 11.6 using S-Plus. For the following commands to work, a Windows system and printer are required. Note again that S-Plus distinguishes between lower case and upper case letters.
First, a Windows graphics device is invoked; this presents a graph window. The remaining commands and their effects are:

Click inside the command window
    Carries you back to the command window
library(MASS)
    Makes files in library MASS available; these files contain the data file HILLS that we are using
attach(hills)
    Makes variables in data file HILLS available by name
plot(climb, time, main = "OLS, LMS, and LTS Regression Lines for Hill Race Data")
    Creates a scatterplot with variable CLIMB on the abscissa and variable TIME on the ordinate; the string in quotation marks after "main" is the title of the graph
abline(lm(time ~ climb), lwd = 1)
    Creates a line whose parameters are provided by "lm", the OLS linear regression module; predictor is CLIMB, criterion is TIME; lwd = 1 specifies the default line width
abline(lmsreg(climb, time), lwd = 1)
    Same for LMS regression
abline(ltsreg(climb, time), lwd = 1)
    Same for LTS regression
Click inside the graph window
    Displays graph on screen
Click File, Print, Ok
    Prints figure
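For readers without S-Plus, the idea behind lmsreg can be sketched in a few lines of Python. This is a didactic sketch only: S-Plus uses a more elaborate resampling algorithm, and the function name lms_line is our own.

```python
import random

def lms_line(x, y, n_trials=2000, seed=0):
    """Least median of squares: among candidate lines, keep the one
    whose squared residuals have the smallest median. Candidates are
    exact fits through random pairs of points (a didactic shortcut)."""
    rng = random.Random(seed)
    n = len(x)
    best = None  # (median squared residual, intercept, slope)
    for _ in range(n_trials):
        i, j = rng.sample(range(n), 2)
        if x[i] == x[j]:
            continue
        slope = (y[j] - y[i]) / (x[j] - x[i])
        intercept = y[i] - slope * x[i]
        r2 = sorted((yk - intercept - slope * xk) ** 2
                    for xk, yk in zip(x, y))
        med = r2[n // 2]
        if best is None or med < best[0]:
            best = (med, intercept, slope)
    return best[1], best[2]

# With 18 of 20 points exactly on y = 1 + 2x and two gross outliers,
# the LMS line recovers the clean structure that OLS would miss.
x = list(range(20))
y = [1 + 2 * xi for xi in x]
y[0], y[5] = 60.0, -40.0
intercept, slope = lms_line(x, y)
```

Because the median ignores the largest squared residuals, the two planted outliers receive no influence on the chosen line, which is exactly the robustness property the text exploits for the hill race data.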
Chapter 12

SYMMETRIC REGRESSION

This chapter presents various approaches to symmetric regression. This variant of regression has been discussed since Pearson (1901a) described a first symmetric regression model
in a paper entitled "On Lines and Planes of Closest Fit to Systems of Points in Space." Symmetric regression models are beneficial for the following reasons:

1. The problems with inverse regression listed in Section 12.1.1 do not exist. Specifically, inverse regression, that is, regression of X on Y that starts from a point estimated from regressing Y on X, yields the original starting point. The inverse regression problem is solved because there is only one regression line.

2. Symmetric regression accounts for errors in the predictors. While there are many approaches to considering errors in predictors (the so-called errors-in-predictors models), none of these approaches also solves the inverse regression problem.

Thus, symmetric regression is useful because it solves both of these problems simultaneously. In addition, symmetric regression allows researchers to both estimate and test when there is no natural classification of variables in predictors and criteria.
This chapter first presents the Pearson (1901a) orthogonal regression solution. This is followed by two alternative solutions that have aroused interest particularly in biology and astrophysics, the
bisector solution and the reduced major axis solution (see Isobe, Feigelson, Akritas, & Babu, 1990). A general model for OLS regression is presented next (von Eye & Rovine, 1995). The fourth section
in this chapter introduces robust symmetrical regression (von Eye & Rovine, 1993). The last section discusses computational issues and presents computer applications.
12.1 Pearson's Orthogonal Regression
In many applications researchers are interested in describing the relationship between two variables, X and Y, by a regression line. In most of these applications there is a natural grouping of X and
Y into predictor and criterion. For instance, when relating effort and outcome, effort is naturally the predictor and outcome is the criterion. However, in many other applications this grouping is
either not plausible or researchers wish to be able to predict both Y from X and X from Y. Consider the relationship between weight and height that is used in many textbooks as an example for
regression (e.g., Welkowitz, Ewen, & Cohen, 1990). It can be meaningful to predict weight from height; but it can also be meaningful to predict height from weight. Other examples include the
prediction of the price for a car from the number of worker-hours needed to produce a car. This is part of a manufacturer's cost calculation. It can be equally meaningful to estimate worker-hours
needed for production from the price of a car. This is part of the calculations performed by the competition. The problems with inverse regression render these estimations problematic. Predictors
measured with error make these problems even worse. Therefore, the problem that is addressed in this chapter is different than the standard regression problem addressed in the first chapters of this
book. Rather than estimating parameters for the functional relationship E(Y|X), we attempt to specify an intrinsic functional relationship between X and Y. As we pointed out before, this is important
particularly when the choice between X and Y as predictor is unclear, arbitrary, or ambiguous, and when both variables are measured with error. Pearson (1901a) proposed to still use ordinary least
squares methods
12.1. PEARSON'S ORTHOGONAL REGRESSION
Figure 12.1: Illustration of residuals expressed in units of Y (gray) and perpendicular to the regression line (black).
for parameter estimation, but to employ an optimization criterion that is different than the standard sum of squared residuals. More specifically, Pearson proposed minimizing

Σ_i p_i²,    (12.1)

where p_i is the ith case's perpendicular distance to the regression line, instead of the usual

Σ_i e_i².    (12.2)
To illustrate the difference between (12.1) and (12.2), consider the four X-Y coordinate pairs (i.e., data points) presented in Figure 12.1. The figure
displays the four coordinate pairs, an imaginary regression line, and the residuals, defined using (12.1) in black, and defined using (12.2) in gray. As is obvious from Figure 12.1, the black,
perpendicular residuals are always shorter than the gray residuals parallel to the Y-axis. When the regression slope is 0 < b_1 < ∞, there is no exception to this for residuals greater than zero. More important than this difference, however, is the difference in interpretation between the two
definitions. In order to measure the magnitude of residuals, ei, that are defined in units of Y, one only needs Y. In contrast, when measuring the magnitude of residuals pi from orthogonal
regression, one needs information about both X and Y. More specifically, residuals in standard, asymmetric regression are defined by (12.2). Residuals in Pearson's orthogonal regression are defined
as the perpendicular distance of point (x, y) to the regression line (see Pearson, 1901a, 1901b; von Eye & Rovine, 1995),

p_i = (y_i − tan θ (x_i − x̄) − ȳ) cos θ,

where θ is the angle of the regression line with the X-axis. The sum of the squared distances that is to be minimized is

Σ p_i² = Σ (y_i − tan θ (x_i − x̄) − ȳ)² cos² θ.    (12.3)

Setting the first partial derivative with respect to θ equal to 0 yields the following solution for θ:

tan 2θ = 2 s_xy / (s_x² − s_y²).    (12.4)
This is identical to the Pearson (1901a) solution (see (12.5)). Section 12.3 relates this solution to the standard, asymmetric OLS solution. As is obvious from intuition and Figure 12.1, the minimum
of (12.3) gives the smallest total of all possible distance lines. Another way of arriving at the solution given in (12.4) was proposed by Pearson (1901a). Pearson starts from considering an ellipse
of the contours of the correlation surface of a two-dimensional data cloud. Using the centroid of the ellipse as the origin, the ellipse can be described as follows:

x²/σ_x² + y²/σ_y² − 2 r_xy x y / (σ_x σ_y) = 1.
Pearson showed that the ellipse of the contours of the correlation surface
and the ellipse that describes the cloud of residuals have main axes that are orthogonal to each other. From this relation Pearson concluded that the best fitting line in the sense of (12.3)
coincides in direction with the major axis of the ellipse of the correlation surface. The tangent of the angle, 2θ, of this axis is given by

tan 2θ = 2 r_xy σ_x σ_y / (σ_x² − σ_y²).    (12.5)

This is identical to (12.4). The mean squared error of the residuals, p_i, follows by inserting this angle into (12.3) and dividing by the number of cases.
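The relation between (12.5) and the major axis of the covariance ellipse can be checked numerically; a short sketch (function names are ours, numpy assumed available):

```python
import math
import numpy as np

def orthogonal_slope(x, y):
    """Slope of Pearson's orthogonal regression line via
    tan 2*theta = 2*s_xy / (s_x^2 - s_y^2)."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    sxx, syy = x.var(ddof=1), y.var(ddof=1)
    sxy = np.cov(x, y)[0, 1]
    # atan2 resolves the quadrant so that theta points along the major axis
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)
    return math.tan(theta)

# Compare with the first principal component of the covariance matrix,
# which should have the same direction (simulated data).
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 0.6 * x + rng.normal(scale=0.4, size=500)
evals, evecs = np.linalg.eigh(np.cov(x, y))
major = evecs[:, np.argmax(evals)]
pca_slope = major[1] / major[0]
```

The two slopes agree to machine precision, illustrating the principal-component characterization of the orthogonal regression line discussed later in this section.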
The following data example (von Eye & Rovine, 1995) relates psychometric intelligence and performance in school to each other. In a sample of n = 7 children, the following IQ scores were observed: IQ
= (90, 92, 93, 95, 97, 98, 100). The performance scores are, in the same order, P = (39, 42, 36, 45, 39, 45, 42). The Pearson correlation between these two variables is r = 0.421. Regressing Performance on IQ yields the following OLS solution using (12.2) as the criterion to be minimized: Performance = 3.642 + 0.395 · IQ + Residual. Substituting predictor for criterion and criterion for predictor yields IQ = 76.538 + 0.449 · Performance + Residual. The data points and these two regression lines appear in Figure 12.2, where the regression of Performance on IQ is depicted by the
thicker line, and the regression of IQ on Performance is depicted by the thinner line. The following paragraphs illustrate problems with inverse regression that are typical of standard, asymmetric
regression where one estimates parameters for two regression lines.
Figure 12.2: Regression lines for Performance on IQ (thicker line) and IQ on Performance (thinner line).
12.1.1 Symmetry and Inverse Prediction
One of the most striking and counterintuitive results from employing two asymmetric regression lines for prediction and inverse prediction is that back-prediction does not carry one back to the point
where the predictions originated. This result is illustrated using the data in Figure 12.2. Consider the following sequence of predictions and inverse predictions:

x_i → ŷ_i, and ŷ_i → x̂_i.

This sequence describes symmetric prediction if x̂_i = x_i. This applies accordingly if predictions start from some value y_i. If, however, x̂_i ≠ x_i, this sequence describes asymmetric prediction. Table 12.1 presents point predictions and inverse point predictions from the above regression equations.

Table 12.1: Point Predictions and Inverse Point Predictions Using Asymmetric OLS Regression

IQ     Est. Performance   Back-Estimated IQ   Difference
70           31.29              90.59            20.59
80           35.24              92.36            12.36
90           39.12              94.14             4.14
100          43.12              95.90            -4.10
110          47.09              97.68           -12.32
120          51.04              99.46           -20.54
130          54.99             101.23           -28.77

Performance   Est. IQ   Back-Estimated Performance   Difference
24             87.31             38.13                 14.13
30             90.01             39.20                  9.20
36             92.70             40.26                  4.26
42             95.39             41.32                 -0.68
48             98.09             42.39                 -5.61
54            100.78             43.45                -10.55
60            103.48             44.52                -15.48

The top panel in Table 12.1 displays estimates of Performance (second column) as predicted from IQ (first column) using the above regression equation. The third column of this panel contains the back estimates of IQ. These estimates were calculated using the regression equation (12.6), with the estimates of Performance as starting values. The fourth column displays the differences between back estimates and starting values. The bottom panel of Table 12.1 displays results for the prediction of IQ from Performance (Columns 1 and 2). Column 3 of the bottom panel displays the back-estimated Performance scores, where the IQ estimates served as predictor scores. The last column contains the differences between back-estimated and original Performance scores. The results in Table 12.1 and other considerations suggest that:

1. The differences between starting values and back-estimated starting values increase with the distance from the predictor mean;

2. Differences increase as the correlation between predictor and criterion decreases; and

3. Differences increase as regression slopes become flatter.

When researchers only wish to predict Y values,
this asymmetry may not matter. However, there are many applications in which this characteristic can become a problem. Consider the following example. A teacher has established the number of learning
trials needed for a student to reach a criterion. A new student moves to town. This student performs below criterion. Using inverse regression the teacher estimates the number of make-up sessions
needed for this student to reach the criterion. Depending on the strength of the relationship, the inverse prediction may not only be far from the number of sessions the new student actually needs to reach the criterion. The estimate of lessons needed may also indicate a number of sessions that is impossible to implement and, thus, makes it very hard for the student to be considered an
adequate performer. On another note, it should be noticed that the above example includes discrepancies that are so large that, in other contexts, they would qualify as statistically significant.
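The first row of Table 12.1 can be reproduced directly from the two equations (values rounded as in the table):

```python
def predict_perf(iq):
    """Performance predicted from IQ (equation from the text)."""
    return 3.642 + 0.395 * iq

def predict_iq(perf):
    """IQ predicted from Performance (equation from the text)."""
    return 76.538 + 0.449 * perf

# Prediction followed by inverse prediction does not return to the
# starting value, illustrating the asymmetry of the two OLS lines.
iq_start = 70
perf_hat = predict_perf(iq_start)   # about 31.29
iq_back = predict_iq(perf_hat)      # about 90.59
difference = iq_back - iq_start     # about 20.59
```

As the table shows, the discrepancy shrinks toward the predictor mean (IQ = 95) and changes sign beyond it, consistent with points 1 through 3 above.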
Inserting the sample statistics into (12.4) creates an estimate for the slope parameter of Pearson's orthogonal regression. More specifically, we estimate the tangent of two times the angle θ of the orthogonal regression line as

tan 2θ̂ = (2 · 0.421 · 3.559 · 3.338) / (3.559² − 3.338²) = 6.54.

From this value we calculate 2θ̂ = 81.31° and θ̂ = 40.65°. Thus, the angle of the symmetrical regression line is 40.65°. As in standard, asymmetric OLS regression, the symmetric regression line goes through the centroid (center of gravity, the mean) of the data cloud. The centroid has for coordinates the arithmetic means of predictor and criterion. In the present example, we obtain for the centroid

(95.00, 41.14).
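The computation can be replayed from the rounded summary statistics; the results differ from the text in the second decimal only because the text works with unrounded moments:

```python
import math

# Summary statistics as reported in the text (rounded)
r, s_iq, s_p = 0.421, 3.559, 3.338

tan_2theta = 2 * r * s_iq * s_p / (s_iq**2 - s_p**2)   # about 6.56
theta_deg = 0.5 * math.degrees(math.atan(tan_2theta))  # about 40.7 degrees
slope = math.tan(math.radians(theta_deg))              # about 0.86
```

The resulting slope of about 0.86 lies, as it must, between the two asymmetric slopes of 0.395 (Performance on IQ) and 1/0.449 ≈ 2.23 (IQ on Performance, expressed in the same plane).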
Straight lines can be sufficiently described by a point and the angle of the slope. Thus, we have a complete solution. This solution has the following characteristics:

1. It is an OLS solution in that it minimizes the sum of the squared perpendicular distances of data points from the regression line.

2. It is symmetric in the sense specified in Equations (12.6) and (12.7).

3. As a result of 2, a prediction of Y from X, followed by an inverse prediction that originates from the predicted y value, carries one back to the original X value. This applies accordingly when the prediction starts from Y.

4. This method is applicable in particular when both predictor and criterion are measured with error.

5. This method is applicable in particular when there is no natural grouping of variables into predictors and criteria.

6. The Pearson orthogonal symmetrical regression and the OLS asymmetric regression lines coincide when |r| = 1.0.

7. The Pearson orthogonal symmetrical regression line is identical to the first principal component in principal component analysis (see, e.g., Pearson, 1901a, 1901b; Morrison, 1990). The second principal component has the same orientation as the main axis of the ellipse for the residuals, p_i, in orthogonal regression, and so forth.

Figure 12.3 displays the scatterplot of the IQ-Performance data, the two asymmetric regression lines, and the symmetric regression line (thick gray). Note that the symmetric regression line does not halve the angle between the two asymmetric regression lines.

12.1.3 Confidence Intervals in Orthogonal Regression
Based on the assumption that predictor and criterion are jointly normally distributed, Jolicoeur (1973) and Jolicoeur and Mosimann (1968) proposed an approximate method for estimating a confidence
interval around the
Figure 12.3: Three regression lines for Performance-IQ data.
slope coefficient of orthogonal regression. Let λ be the slope parameter, and let k_1 ≤ λ ≤ k_2 be the confidence interval around it. The limits k_1 and k_2 are functions of the sample variances, s_x² and s_y², of the covariance, s_xy, and of

Q = F_{1−α;1;n−2} (s_x² s_y² − s_xy²) / (n − 2),

where s_x and s_y are the variable standard deviations and s_xy is the covariance of X and Y. This approximation is reasonably close to the nominal α even if the
correlation is as low as ρ = 0.4 and the sample size is as small as n = 10. Notice, however, that the values of k_1 and k_2 can be imaginary if the expression under the square root in the numerator becomes negative. This is most likely when the correlation between X and Y is small. Notice in addition that, as is usual with confidence intervals, the values of k_1 and k_2 are always real and assume the same sign as the correlation if the correlation is statistically significant.
The following two problems are apparent for orthogonal regression:

1. Lack of invariance against specific types of linear transformation. When X and Y are subjected to linear transformations with different parameters, the parameter estimates for orthogonal regression change. This is the case if, for example, X is replaced by b_x X and Y is replaced by b_y Y, with b_x ≠ b_y. This is not the case in standard, asymmetric regression.

2. Dependence of estimates on variance and mean. Parameter estimates of orthogonal regression tend to be determined to a large extent by the variable with the larger variance and the greater mean.

Logarithmic transformation and standardization have been proposed as solutions for both problems (Jolicoeur, 1991) (see the reduced major axis solution in the following section).
12.2 Other Solutions
Fleury (1991) distinguishes between two forms of regression: Model I and Model II. Model I Regression of Y on X is asymmetric and minimizes (12.2), that is, the sum of the squared vertical
differences between data points and the regression line. If X is regressed on Y, Model I Regression minimizes the sum of the squared horizontal differences between data points and the regression line
(see Figure 12.4, below). Model II Regression is what we term symmetric regression. It has a number of variants, three of which are presented in this volume (for additional least squares
Figure 12.4: Illustration of goal functions of orthogonal regression and reduced major axis regression.
solutions see Isobe et al., 1990). The first is Pearson's orthogonal regression, also called major axis regression (see Section 12.1). The second is reduced major axis regression (Kermack & Haldane, 1950), also called impartial regression (see Strömberg, 1940). The third is bisector regression (Rubin, Burstein, & Thonnard, 1980), also called double regression (see Pierce & Tully, 1988). The
second and the third solutions are reviewed in this section. As was illustrated in Figure 12.1, Pearson's major axis solution minimizes the sum of the squared perpendicular distances between the data
points and the regression line. In contrast, the reduced major axis regression minimizes the sum of the areas of the right triangles created by the data points and the regression line. The two
approaches of major and reduced major regression are illustrated in Figure 12.4. Standard, asymmetric regressions of Y on X and X on Y are also illustrated in the figure. The black arrows in Figure
12.4 illustrate the standard residual definition for the regression of Y on X. The estimated y value (regression line) is subtracted from the observed y value (squares) (see (12.2)). The
sum of these squared differences is minimized. In an analogous fashion, the light gray arrows illustrate the regression of X on Y, where residuals are expressed in units of X. Pearson's orthogonal
regression minimizes the sum of the squared dark gray differences in Figure 12.4. The dark gray arrows are perpendicular to the regression line. Thus, they represent the shortest possible connection
between an observed data point and the regression line. The two asymmetric regression lines and the symmetric orthogonal regression line share in common that they minimize lines that connect data
points and the regression line. The reduced major axis solution reduces an area. Specifically, it reduces the area of the right triangle spanned by the black, vertical arrow; the light gray,
horizontal arrow; and the regression line. This triangle is a right triangle because the angle between the vertical and the horizontal arrow is a right angle. The equation for the reduced major axis
regression is
ŷ_i = ȳ + sgn(r) (s_y / s_x) (x_i − x̄),    (12.8)

where sgn(r) is the sign of the correlation between X and Y. (A definition of the area of the triangle is given in Section 12.5.1 on computational issues.) The slope parameter for the reduced major axis solution is

b_1 = sgn(s_xy) √(β̂_1 β̂_2),

where β̂_1 and β̂_2 are the estimates of the coefficients for the regression of Y on X and X on Y, respectively. The approximate limits of the confidence interval for the slope coefficient in (12.8) are

k_1 = sgn(r) (s_y / s_x) (√(B + 1) − √B) and k_2 = sgn(r) (s_y / s_x) (√(B + 1) + √B),
where

B = F_{1−α;1;n−2} (1 − r²) / (n − 2).
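The geometric-mean form of the reduced major axis slope is easy to verify numerically once β̂_2 is written, following the convention of Isobe et al. (1990), as the X-on-Y regression line solved for y. A sketch with simulated data (numpy assumed available):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 0.8 * x + rng.normal(scale=0.5, size=200)

sxx, syy = x.var(ddof=1), y.var(ddof=1)
sxy = np.cov(x, y)[0, 1]
r = sxy / np.sqrt(sxx * syy)

beta1 = sxy / sxx   # slope of the regression of Y on X
beta2 = syy / sxy   # X-on-Y regression line, expressed as y in terms of x

rma_direct = np.sign(r) * np.sqrt(syy / sxx)          # sgn(r) * s_y / s_x
rma_geometric = np.sign(sxy) * np.sqrt(beta1 * beta2)  # sgn(s_xy) * sqrt(b1*b2)
```

The two expressions coincide identically, since β̂_1 β̂_2 = s_y² / s_x² regardless of the data; this also makes the scale-independence of the reduced major axis solution explicit.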
The main advantage of the reduced major axis solution over the major axis solution is that it is scale-independent. Nevertheless, there are recommendations to use this method only if

1. the sample size is n > 20;

2. the correlation between X and Y is assumed to be strong, ρ ≥ 0.6; and

3. the joint distribution of X and Y is bivariate normal (Jolicoeur, 1991).

Isobe et al. (1990) urge similar cautions. These cautions, however, are not proposed specifically for the reduced major axis solution, but in general, for all regression solutions, including all asymmetric and symmetric solutions
included in this volume. The last method for symmetric regression to be mentioned here is called the bisector solution. The bisector solution proceeds in two steps. First, it estimates the parameters
for the two asymmetric regressions of Y on X and X on Y. Second, it halves the angle between these two regression lines; that is, it halves the area between the regression lines. The slope
coefficient for the bisector solution is
β̂_bis = (β̂_1 β̂_2 − 1 + √((1 + β̂_1²)(1 + β̂_2²))) / (β̂_1 + β̂_2),

where, as in (12.8), the β̂ are the estimates of the standard OLS asymmetric regression slope parameters.
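Because the bisector line halves the angle between the two regression lines, its slope must equal the tangent of the mean of the two regression angles; this gives a quick check of the formula (the function name is ours):

```python
import math

def bisector_slope(b1, b2):
    """Slope of the line bisecting the angle between two lines with
    slopes b1 and b2 (Isobe et al., 1990)."""
    return (b1 * b2 - 1 + math.sqrt((1 + b1**2) * (1 + b2**2))) / (b1 + b2)

# The bisector of the lines y = 0.5x and y = 2x is the 45-degree line.
b_symmetric = bisector_slope(0.5, 2.0)   # exactly 1.0

# General check: the slope equals the tangent of the averaged angles.
b1, b2 = 0.3, 5.0
check = math.tan((math.atan(b1) + math.atan(b2)) / 2)
```

The identity follows from the half-angle formula for the tangent, since (1 − b_1 b_2)² + (b_1 + b_2)² = (1 + b_1²)(1 + b_2²).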
Choice of Method for Symmetric Regression
Isobe et al. (1990) performed simulation runs to investigate the behavior of the two asymmetric regression solutions, the bisector solution, the reduced major axis solution, and the major axis
solution. They conclude that the bisector solution can be recommended because its standard deviations
are the smallest. The Pearson orthogonal regression solution displays, on average, the largest discrepancies between theoretical and estimated values. When applying regression, one goal is to create
an optimal solution. Minimal residuals and minimal standard deviations are examples of criteria for optimal solutions. In addition, significance testing and confidence intervals are of importance.
There are solutions for significance testing for the Pearson orthogonal regression and the reduced major axis solution. These solutions were described earlier in this chapter. Therefore, the
following recommendations can be given as to the selection of methods for symmetric regression: When estimation of scores is of interest rather than statistical significance of parameters, the
bisector solution may be the best, because it creates solutions with the smallest standard deviations. Examples of applications in which predominantly estimation is important include the estimation
of time, amount of work needed to reach a criterion, and distance. All this applies in particular when there is no obvious classification of variables in predictors and criteria, and when both
predictors and criteria are measured with error. When estimation of confidence intervals and significance testing are important, the pragmatic selection is either Pearson's orthogonal regression or
the reduced major axis solution. Either solution can be calculated using standard statistical software. Section 12.5 illustrates this using the statistical software package SYSTAT. These solutions
may not produce the smallest standard deviations. However, they do not require time-consuming bootstrapping or other procedures that yield estimates for standard errors of parameter estimates. Thus,
whenever researchers have available software packages that allow them to perform steps of the type illustrated in Section 12.5, they can create a complete solution that (1) differs from the solution
with the smallest standard deviation only minimally, (2) allows for significance testing and estimation, and (3) carries all the benefits of symmetric regression. The following data example uses data
from Finkelstein et al. (1994). A sample of n -- 70 adolescent boys and girls answered a questionnaire
that included, among others, the scales Aggressive Impulses, AI, and Aggression Inhibition, AIR.

Figure 12.5: Five regression solutions for aggression data.

An Aggressive Impulse is defined as the
urge to commit an aggressive act against another person. Aggression Inhibition is defined as a mental block that prevents one from committing aggressive acts. We analyze these two scales using the five regression models:

1. AIR on AI
2. AI on AIR
3. major axis (orthogonal)
4. reduced major axis
5. bisector

Figure 12.5 displays the data and the five regression lines. The correlation¹ between Aggressive Impulses and Aggression Inhibition is r = 0.205. The figure suggests accordingly that the angle between the regression line of AIR on AI and the regression line of AI on AIR

¹This correlation is statistically not significant (t = 1.727; p = 0.089). Thus, application of symmetric regression is counter to the recommendations given by several authors. However, one of the main purposes of the present example is to dramatize the differences between asymmetric and symmetric regression. There is no intention to make a substantive statement concerning the relationship between AI and AIR.
is wide. These are the most extreme regression lines in the figure, that is, the lines with the steepest and the flattest slopes. As a result, inverse regression that starts from a predicted value
can carry us far away from where the predictions originated. All symmetric regression solutions provide regression lines between the two asymmetric lines. The differences between the symmetric lines
are small. Each of the differences is smaller than any of the standard errors (for more technical detail see Isobe et al. (1990); for an application to astronomy data see Feigelson and Babu (1992)
and von Eye and Rovine (1993)). The slopes of the five solutions are as follows:

1. 0.276
2. 6.579
3. 0.307
4. 0.742
5. 0.889
All of these regression lines go through the center of gravity (centroid) of the bivariate distribution. Thus, the centroid and the slopes give a complete description of the regression lines.
12.3 A General Model for OLS Regression
This section presents a general model for OLS regression (Jolicoeur, 1991; von Eye & Rovine, 1995). This model unifies the approaches presented in the previous sections and in Figure 12.4. The two
asymmetric approaches and the major axis approach will be identified as special cases of this unified model. The nature of this section differs from the others in that it does not present a method
and its applications. Rather, it presents and explains this unified model. The benefit from reading this section is, therefore, that readers gain a deeper understanding of the methods of symmetric
regression. There is no immediate benefit in the sense that calculations become easier, faster, or possible. The General Model is presented using means of planimetry, that is, two-dimensional
geometry. Figure 12.6 gives a representation of the components of the model. Figure 12.6 provides a more general representation of the problem
Figure 12.6: Illustration of a general model for OLS regression.
already represented in Figure 12.4. The three solutions depicted in the figure share in common that they determine parameters of a regression line by minimizing the distances, d, of the residuals
from this regression line. Figures 12.4 and 12.6 show three examples of such distances. There is first the light gray, horizontal arrows that depict distances defined in units of the X-axis. This
distance or residual definition is used when regressing X on Y. Second, there are black, vertical arrows. This distance or residual definition is used when regressing Y on X. Both of these approaches
are asymmetric. Third, there are the dark gray arrows that are perpendicular to the regression line. This approach is used in Pearson's orthogonal regression solution. From the unified model
perspective, one can state a more general regression problem as follows (von Eye & Rovine, 1995): what is the regression line obtained by minimizing the sum of the squared distances between observed data points (x, y) and an arbitrary data point (x′, y′) at some angle α from the perpendicular intersection of the regression line? In Figure 12.6 a regression line is drawn that forms an angle, θ, with the X-axis. This regression line goes through the centroid of the bivariate distribution of X and Y. The angle α represents the deviation from the perpendicular of the distance to be minimized. The figure also shows the data point (x, y) and an arbitrary data point (x′, y′). It shows the angles α and θ and three sample definitions of the distance between (x, y) and the regression line: d for the perpendicular distance (dark gray arrow), d_horizontal for the horizontal distance (light gray arrow), and d_vertical for the vertical distance (black arrow). Distance a is an arbitrary distance. It lies within the triangle minimized by the reduced major axis solution. However, it does not coincide with any of the distances minimized for the two asymmetric and the major axis solutions. Now, to answer the above question, consider, as in Section 12.1.2, the regression line that, at angle θ, goes through the centroid,

$$\hat{y}_i - \bar{y} = \tan\theta\,(x_i - \bar{x}).$$
The distance to be minimized is denoted by the black solid line labeled a in Figure 12.6. When minimizing this distance, consider the triangle formed by the three lines a, w, and l, where l is a part of d_horizontal with l ≤ d_horizontal. The distance a can be expressed as

$$a = \sqrt{w^2 + l^2}.$$
The vertical distance from the point (x, y) to the regression line is

$$d_{i,\mathrm{vertical}} = y_i - \tan\theta\,(x_i - \bar{x}) - \bar{y}.$$
The perpendicular distance (dark gray line) of (x, y) to the regression line is

$$d_i = \bigl(y_i - \tan\theta\,(x_i - \bar{x}) - \bar{y}\bigr)\cos\theta.$$
Setting²

$$a = d\,\sec\alpha, \qquad l = a\cos(90 - \alpha - \theta), \qquad w = a\sin(90 - \alpha - \theta),$$

we can describe the coordinates of the point (x′, y′) in terms of a, α, and θ.

²In the equation, "sec" is short for secant. In a right triangle, the secant can be defined as the ratio of the hypotenuse to the side adjacent to a given angle.
Using these terms, we can give a more explicit specification of the distance to be minimized. It is

$$a_i^2 = \bigl(y_i - \tan\theta\,(x_i - \bar{x}) - \bar{y}\bigr)^2 \cos^2\theta\,\sec^2\alpha \cdot \bigl(\cos^2(90 - \alpha - \theta) + \sin^2(90 - \alpha - \theta)\bigr).$$

OLS minimizes the sum of the squared distances, that is,

$$\sum_i \bigl(y_i - \tan\theta\,(x_i - \bar{x}) - \bar{y}\bigr)^2 \cos^2\theta\,\sec^2\alpha. \qquad (12.9)$$
In the following paragraphs we show that the two asymmetric regression solutions and Pearson's orthogonal solution are special cases of Equation (12.9). First, we consider the regression of Y onto X. For this case, α = θ and the function to be minimized is

$$\sum_i \bigl(y_i - \tan\theta\,(x_i - \bar{x}) - \bar{y}\bigr)^2.$$

Taking the first partial derivative with respect to θ and setting it to 0 yields the solution for θ in the form of

$$\tan\theta = \frac{\sum_i (y_i - \bar{y})(x_i - \bar{x})}{\sum_i (x_i - \bar{x})^2}.$$
This is identical to the solution for the regression of Y onto X. Second, we consider the orthogonal, symmetrical solution, that is, the major axis solution. Here, α = 0, and the equation to be minimized is

$$\sum_i \bigl(y_i - \tan\theta\,(x_i - \bar{x}) - \bar{y}\bigr)^2 \cos^2\theta.$$

The first partial derivative of this expression with respect to θ, set to 0, yields

$$\tan 2\theta = \frac{2\sum_i (y_i - \bar{y})(x_i - \bar{x})}{\sum_i (x_i - \bar{x})^2 - \sum_i (y_i - \bar{y})^2}.$$
This is Pearson's solution. Third, we consider the regression of X onto Y. For this asymmetric regression α = 90° − θ, and the equation to be minimized is

$$\sum_i \bigl(y_i - \tan\theta\,(x_i - \bar{x}) - \bar{y}\bigr)^2 \cot^2\theta.$$

The first partial derivative of this expression with respect to θ, set to 0, is

$$\cot\theta = \frac{\sum_i (y_i - \bar{y})(x_i - \bar{x})}{\sum_i (y_i - \bar{y})^2}.$$
The derivations for these three results are given in von Eye and Rovine (1995).
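These three special cases can also be checked numerically. The sketch below is our own illustration, not from von Eye and Rovine; the data and the grid search are made up purely for the check. It minimizes the loss of Equation (12.9) over θ for each choice of α and recovers the corresponding slopes.

```python
import math

# Hypothetical bivariate sample (illustrative only).
X = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
Y = [1.2, 1.9, 3.4, 3.8, 5.3, 5.9]

def general_loss(theta, alpha, xs, ys):
    """Equation (12.9): sum_i (y_i - tan(theta)(x_i - xbar) - ybar)^2 cos^2(theta) sec^2(alpha)."""
    xb = sum(xs) / len(xs)
    yb = sum(ys) / len(ys)
    ss = sum((y - math.tan(theta) * (x - xb) - yb) ** 2 for x, y in zip(xs, ys))
    return ss * math.cos(theta) ** 2 / math.cos(alpha) ** 2

def best_slope(alpha_of_theta, xs, ys, grid=20000):
    # Crude grid search over theta in (-pi/2, pi/2); precise enough for a check.
    thetas = (-math.pi / 2 + (k + 0.5) * math.pi / grid for k in range(grid))
    theta = min(thetas, key=lambda t: general_loss(t, alpha_of_theta(t), xs, ys))
    return math.tan(theta)

slope_y_on_x = best_slope(lambda t: t, X, Y)                 # alpha = theta
slope_major = best_slope(lambda t: 0.0, X, Y)                # alpha = 0
slope_x_on_y = best_slope(lambda t: math.pi / 2 - t, X, Y)   # alpha = 90 degrees - theta
```

For these data the three grid minima agree, to grid precision, with the closed forms S_xy/S_xx (Y on X), the solution of tan 2θ = 2S_xy/(S_xx − S_yy) (major axis), and S_yy/S_xy (X on Y), with the major axis slope lying between the two asymmetric slopes.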
From a data analyst's perspective, the three solutions presented here can be viewed as different hypotheses about the nature of residuals. The solutions discussed here cover all regression lines between α = θ and α = 90° − θ. As the regression line sweeps from one extreme to the other, the degree to which X and Y contribute to the slope of the line changes: l moves from d_horizontal to 0, and w moves from 0 to d_vertical (see Figure 12.6). The concept of multiple symmetrical regression, also introduced by Pearson (1901a), is analogous to the concept of simple symmetrical regression. Consider the three variables A, B, and C. The data cloud, that is, the joint distribution of these variables, can be described by a three-dimensional ellipsoid. This ellipsoid has three axes. As in simple, that is, bivariate regression, the solution for three-dimensional symmetrical regression is a single regression line. In three-dimensional asymmetric regression the solution is no longer a line; it is a (hyper)plane. The best fitting symmetrical regression line goes through the centroid and has the same orientation, that is, slope, as the main axis of the ellipsoid. This applies accordingly to more than three dimensions (see Rollett, 1996).
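The main axis of the data ellipsoid is the first principal axis of the covariance matrix, so a sketch of multivariate symmetric regression (our own; the simulated data are purely illustrative) reduces to an eigendecomposition:

```python
import numpy as np

def principal_axis_line(data):
    """Return (centroid, unit direction) of the symmetric regression line in
    p dimensions: the line through the centroid along the eigenvector of the
    covariance matrix with the largest eigenvalue (main axis of the ellipsoid)."""
    centroid = data.mean(axis=0)
    cov = np.cov(data, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return centroid, eigvecs[:, -1]          # last column = main axis

# Simulated data: three variables scattered closely around a common line.
rng = np.random.default_rng(0)
t = rng.normal(size=500)
data = np.column_stack([2.0 * t, -1.0 * t, 0.5 * t]) + 0.05 * rng.normal(size=(500, 3))
centroid, direction = principal_axis_line(data)
direction = direction * np.sign(direction[0])   # eigenvectors are sign-ambiguous
```

For two variables this reduces exactly to Pearson's major axis solution.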
Robust Symmetrical Regression
Outliers tend to bias parameter estimates. This applies in particular to estimates from OLS. The symmetric regression methods above use OLS methods. Therefore, they are as sensitive to outliers as standard asymmetric regression methods that also use OLS methods. To create a regression solution that is both symmetric and robust, von Eye and Rovine (1993, 1995) proposed combining Pearson's orthogonal regression method with Rousseeuw's LMS regression. This method proceeds as follows:

1. Select a subsample of size n_j ≤ n and calculate the slope of the orthogonal regression line and the median of the squared residuals, where residuals are defined as in Pearson's solution; save the parameter estimates and the median.
2. Repeat Step 1 until the smallest median has been found.

The method thus described, robust symmetric orthogonal regression, shares all the virtues of both orthogonal and LMS regression. It also shares the shortcomings of both (see Section 12.1.4).
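A minimal sketch of the procedure (our own; the subsample size, the number of random trials, and the toy data are arbitrary choices, and the median is taken over the squared perpendicular residuals of the full sample, in the spirit of LMS):

```python
import math
import random
import statistics

def orthogonal_fit(xs, ys):
    """Pearson's major axis line, via tan(2*theta) = 2*Sxy / (Sxx - Syy)."""
    xb = sum(xs) / len(xs)
    yb = sum(ys) / len(ys)
    sxx = sum((x - xb) ** 2 for x in xs)
    syy = sum((y - yb) ** 2 for y in ys)
    sxy = sum((x - xb) * (y - yb) for x, y in zip(xs, ys))
    b1 = math.tan(0.5 * math.atan2(2.0 * sxy, sxx - syy))
    return yb - b1 * xb, b1                       # intercept, slope

def robust_symmetric_fit(xs, ys, subsample=5, trials=500, seed=1):
    random.seed(seed)
    best = None
    for _ in range(trials):
        idx = random.sample(range(len(xs)), subsample)
        b0, b1 = orthogonal_fit([xs[i] for i in idx], [ys[i] for i in idx])
        # Median of the squared perpendicular distances over *all* points.
        med = statistics.median([(y - b0 - b1 * x) ** 2 / (1.0 + b1 ** 2)
                                 for x, y in zip(xs, ys)])
        if best is None or med < best[0]:
            best = (med, b0, b1)
    return best[1], best[2]

# Toy data on the line y = 2 + 0.5x, contaminated with two gross outliers.
xs = [float(i) for i in range(20)]
ys = [2.0 + 0.5 * x for x in xs]
ys[3], ys[11] = 30.0, -20.0
b0_rob, b1_rob = robust_symmetric_fit(xs, ys)
```

Despite the two gross outliers, the recovered line is essentially y = 2 + 0.5x; an ordinary OLS fit, by contrast, would be pulled toward the outliers.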
Computational Issues
In spite of its obvious benefits and its recently increased use in biology and astrophysics, symmetric regression is not part of standard statistical software packages. However, for some of the
solutions discussed in this chapter there either are special programs available or solutions can be easily implemented using commercial software. For instance, Isobe and Feigelson (Isobe et al.,
1990) make available copies of a FORTRAN 77 program that calculates the five regression models discussed in their 1990 paper. The present section illustrates the calculation of the major axis and the
reduced major axis solutions using SYSTAT (Fleury, 1991).
As in Section 12.5 we assume the reader is familiar with standard data and file manipulation routines in SYSTAT. Therefore, we focus on the methods of symmetric regression, using a data file already
in existence as a SYSTAT system file. Specifically, we use file AGGR.SYS, which contains the data for the aggression data example (see Figure 12.5). First, we show how to create a major axis
solution. Then, we show how to create a reduced major axis solution (Section 12.2.1). Both SYSTAT solutions are based on Fleury (1991).

Computing a Major Axis Solution
The NONLIN module in SYSTAT allows one to estimate parameters in nonlinear estimation problems. It is important for the present context that this module allows one to specify both the function to be minimized (loss function) and the minimization criterion. The default criterion is least squares. Therefore, we only need to specify the loss function. For the major axis solution the loss function is

$$\mathrm{LOSS} = \frac{(y - (b_0 + b_1 x))^2}{1 + b_1^2}.$$

Parameter estimates for b0 and b1 can be obtained as follows.
Use Aggr (AI, AIR)
    Reads variables AI and AIR from file "Aggr.sys"; SYSTAT presents the list of variables on screen.
Click Stats, Nonlin
    Invokes the NONLIN module in SYSTAT.
Click Loss Function
    Opens the window in which one can specify the loss function.
Type into the field for the loss function the expression "(AI87 - (B0 + B1*AIR87))^2/(1 + B1^2)"
    Specifies the loss function for the major axis solution; the parameters to be estimated are B0 (intercept) and B1 (slope).
Click OK, Stats, Nonlin, Model
    Invokes the NONLIN module again; opens the window in which we specify the regression model.
Type in the field for the regression model "AI87 = B0 + B1*AIR87"
    Specifies the regression model and its parameters.
Click OK
    Starts iterations; SYSTAT presents an overview of the iteration process on screen, followed by the parameter estimates; carries us back to the SYSTAT command mode window.
After these operations we have the iteration protocol and the result of the iterations on screen. Highlighting all this and invoking the Print command of the pull-down File menu gives us the following printout:

LOSS = (ai87-(b0+b1*air87))^2/(1+b1^2)
>ESTIMATE

Iteration No.   Loss          B0            B1
0               .177607D+04   .111633D+02   .152146D+00
1               .173618D+04   .642059D+01   .300026D+00
2               .173611D+04   .620045D+01   .306890D+00
3               .173611D+04   .619920D+01   .306929D+00
4               .173611D+04   .619920D+01   .306929D+00

Dependent variable is AI87
Final value of loss function is 1736.106
Zero weights, missing data or estimates reduced degrees of freedom

                                             Wald 95% Conf. Interval
Parameter   Estimate   A.S.E.   Param/ASE    Lower      Upper
B0          6.199      4.152    1.493        -2.085     14.484
B1          0.307      0.128    2.399         0.052      0.562
The first line in this output contains the specification of the loss function. The second line shows the command "estimate." What follows is an overview of results from the iteration process. The
program lists the number of iterations, the value assumed by the loss function, and the parameter values calculated at each iteration step. The values given for Iteration 0 are starting values for
the iteration process. It should be noticed that the numerical accuracy of the program goes beyond the six decimals printed. This can be concluded from the last three iteration steps. Although the first
six decimals of the loss function do not change, the parameter values do change from the third to the fourth iteration step. After this protocol the program names the dependent variable (which, in
some applications of symmetrical regression, may be a misnomer) and the final value of the loss function. This value is comparable to the residual sum of squares. The following line indicates that
cases had been eliminated due to missing data. The final element of the output is the listing of the parameter estimates. The slope parameter estimate is the same as the one listed at the end of
Section 12.2.1.
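Readers without SYSTAT can minimize the same loss function directly. In the sketch below (ours; the data are invented, since the AGGR data are not reproduced here) the intercept is concentrated out via b0 = ȳ − b1·x̄, and the slope is found by a golden-section search, which assumes the profiled loss is unimodal on the chosen bracket:

```python
# Profile out the intercept (b0 = ybar - b1*xbar is optimal for any fixed b1)
# and search the slope of the major axis loss  sum (y - b0 - b1*x)^2 / (1 + b1^2).
def major_axis_loss(b1, xs, ys):
    xb = sum(xs) / len(xs)
    yb = sum(ys) / len(ys)
    b0 = yb - b1 * xb
    ss = sum((y - b0 - b1 * x) ** 2 for x, y in zip(xs, ys))
    return ss / (1.0 + b1 ** 2), b0

def fit_major_axis(xs, ys, lo=0.0, hi=10.0, iters=80):
    # Golden-section search; assumes unimodality on [lo, hi]
    # (here the slope is known to be positive; widen or inspect otherwise).
    g = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c = b - g * (b - a)
        d = a + g * (b - a)
        if major_axis_loss(c, xs, ys)[0] < major_axis_loss(d, xs, ys)[0]:
            b = d
        else:
            a = c
    b1 = (a + b) / 2
    loss, b0 = major_axis_loss(b1, xs, ys)
    return b0, b1, loss

xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [1.2, 1.9, 3.4, 3.8, 5.3, 5.9]
b0_ma, b1_ma, loss_ma = fit_major_axis(xs, ys)
```

For these data the search reproduces the closed-form major axis slope from tan 2θ = 2S_xy/(S_xx − S_yy).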
Computing a Reduced Major Axis Solution
The major axis and the reduced major axis OLS solutions differ only in their loss function. The loss function for the reduced major axis solution is

$$\mathrm{LOSS} = \frac{(y - (b_0 + b_1 x))^2}{|b_1|}.$$
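Unlike the major axis solution, the reduced major axis solution needs no iteration at all: with the intercept profiled out, the minimizer of this loss is b1 = sign(S_xy)·√(S_yy/S_xx), the geometric mean of the two asymmetric slopes. A sketch (ours, with invented data):

```python
import math

def reduced_major_axis(xs, ys):
    n = len(xs)
    xb = sum(xs) / n
    yb = sum(ys) / n
    sxx = sum((x - xb) ** 2 for x in xs)
    syy = sum((y - yb) ** 2 for y in ys)
    sxy = sum((x - xb) * (y - yb) for x, y in zip(xs, ys))
    b1 = math.copysign(math.sqrt(syy / sxx), sxy)   # sign taken from the covariance
    return yb - b1 * xb, b1

xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [1.2, 1.9, 3.4, 3.8, 5.3, 5.9]
b0_rma, b1_rma = reduced_major_axis(xs, ys)
# b1_rma equals the geometric mean of the Y-on-X slope (Sxy/Sxx) and the
# X-on-Y slope (Syy/Sxy), because (Sxy/Sxx)*(Syy/Sxy) = Syy/Sxx.
```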
The following commands yield parameter estimates for the reduced major axis solution.
Use Aggr (AI, AIR)
    Reads variables AI and AIR from file "Aggr.sys"; SYSTAT presents the list of variables on screen.
Click Stats, Nonlin
    Invokes the NONLIN module in SYSTAT.
Click Loss Function
    Opens the window in which one can specify the loss function.
Type into the field for the loss function the expression "(AI87 - (B0 + B1*AIR87))^2/abs(B1)"
    Specifies the loss function for the reduced major axis solution; the parameters to be estimated are B0 (intercept) and B1 (slope).
Click OK, Stats, Nonlin, Model
    Invokes the NONLIN module again; opens the window in which we specify the regression model.
Type in the field for the regression model "AI87 = B0 + B1*AIR87"
    Specifies the regression model and its parameters.
Click OK
    Starts iterations; SYSTAT presents an overview of the iteration process on screen, followed by the parameter estimates; carries us back to the SYSTAT command mode window.
The following output presents the protocol created from these commands:

LOSS = (ai87-(b0+b1*air87))^2/abs(b1)
>ESTIMATE

Iteration No.   Loss          B0             B1
0               .618924D+04    .619920D+01   .306929D+00
1               .481618D+04    .211888D+01   .434155D+00
2               .422637D+04   -.246146D+01   .576972D+00
3               .407632D+04   -.612368D+01   .691162D+00
4               .406342D+04   -.759789D+01   .737128D+00
5               .406330D+04   -.776169D+01   .742235D+00
6               .406330D+04   -.776341D+01   .742289D+00
7               .406330D+04   -.776341D+01   .742289D+00

Dependent variable is AI87
Final value of loss function is 4063.300
Zero weights, missing data or estimates reduced degrees of freedom

                                             Wald 95% Conf. Interval
Parameter   Estimate   A.S.E.   Param/ASE    Lower      Upper
B0          -7.763     3.726    -2.083       -15.199    -0.328
B1           0.742     0.114     6.540         0.516     0.969
This protocol has the same form as the one for the major axis solution. One obvious difference is that the iteration took longer to find the minimum of the loss function than for the major axis solution. Parameter estimates are b0 = -7.763 and b1 = 0.742. These values are identical to the ones reported at the end of Section 12.2.1. It should be noticed that whereas the value assumed by the
loss function is not invariant against linear transformations, the parameter estimates are. This can be illustrated by dividing the above loss function by 2 (Fleury, 1991). This transformation yields
a final value of the loss function that is half that in the above output. The parameter estimates, however, remain unchanged. To enable readers to compare the symmetric regression solutions with the
asymmetric ones, we include the protocols from the two asymmetric regression runs in the following output:

>MODEL AI87 = CONSTANT+AIR87
>ESTIMATE
44 case(s) deleted due to missing data.

Dep Var: AI87   N: 70   Multiple R: 0.205   Squared multiple R: 0.042
Adjusted squared multiple R: 0.028   Standard error of estimate: 5.169

Effect      Coefficient   Std Error   t       P(2 Tail)
CONSTANT    11.163        2.892       3.860   0.000
AIR87        0.152        0.088       1.727   0.089

Dep Var: AIR87   N: 70   Multiple R: 0.205   Squared multiple R: 0.042
Adjusted squared multiple R: 0.028   Standard error of estimate: 6.964

Effect      Coefficient   Std Error   t        P(2 Tail)
CONSTANT    27.641        2.697       10.249   0.000
AI87         0.276        0.160        1.727   0.089
Chapter 13

VARIABLE SELECTION TECHNIQUES

In this chapter we deal with the question of how to select from a pool of independent variables a subset which explains or predicts the dependent variable well enough so that the contribution of the variables not selected can be neglected or perhaps considered pure error. This topic is also known as "subset selection techniques". The two aims, explanation and
prediction, are distinct in that an obtained regression equation which gives a good prediction might, from a theoretical viewpoint, not be very plausible. As the techniques used for variable selection
and prediction are the same, we will not further consider this distinction. The terms predictor and explanatory variable are therefore used interchangeably. However, the following remark should be
kept in mind: If prediction is the focus, one can base variable selection predominantly on statistical arguments. In contrast, if explanation is the focus, theoretical arguments guide the variable
selection process. The reason for this is that the so-called F-to-enter statistic, testing whether a particular regression coefficient is zero, does not have an F distribution if the entered variable is selected according to some optimality criterion (Draper, Guttman, & Kanemasu, 1971; Pope & Webster, 1972). That variable selection poses problems is obvious if the regression model is used for explanatory purposes, as predictors are virtually always intercorrelated. Therefore, values of parameter estimates change when including or eliminating predictors. As a consequence, the interpretation of the regression model can change. Yet, there is
another argument for not fitting a regression model with more variables than are actually needed. To see this, we first have to establish notation. Let

$$E(y) = \beta_0 + \beta_1 x_1 + \dots + \beta_a x_a + \dots + \beta_{a+b} x_{a+b}$$

be the regression equation for a single observation, where the set of a + b predictors is divided into two non-overlapping subsets, A and B, which contain the indices from 1 to a and from a + 1 to a + b, respectively. The OLS estimate for the β vector is given in matrix notation by

$$\hat{\beta} = (X'X)^{-1}X'y,$$

where X is the design matrix of all a + b predictors, including a vector for the intercept, and y now denotes the vector of observations of the dependent variable. The reason for subdividing the predictors into two sets is that we can now write the design matrix as X = (X_A, X_B) and can express the OLS estimate of the regression coefficients after selecting a subset of predictors for the regression as

$$\hat{\beta}_A = (X_A' X_A)^{-1} X_A' y,$$

tacitly assuming, of course, that the selected variables are reordered so that they are contained in A. With this notation the argument for not fitting a regression model with too many variables is validated, since it can be shown that the following inequality holds (for a proof see Miller, 1990):

$$\operatorname{var}(x'\hat{\beta}) \geq \operatorname{var}(x_A'\hat{\beta}_A).$$

In words, this inequality states that if we base our prediction on a subset of all available predictors, the variability of the predicted value, ŷ_A = x_A′β̂_A, is generally reduced compared to the variability of a prediction from the complete set, ŷ = x′β̂. Hence, the precision of our prediction is increased. In particular, the variance of each regression coefficient estimate in A is reduced. This can be illustrated by choosing x as a vector of zeros with a one in the pth place, p ≤ a, and x_A identical to x except that the last b elements of x are eliminated. Such a vector merely selects the pth element of β̂ as well as of β̂_A.
However, including too few predictors results in what is known as omission bias. Suppose now that at least one predictor in set B is nonredundant. Having only selected the predictors in set A we could calculate E(β̂_A) as

$$E(\hat{\beta}_A) = \beta_A + (X_A' X_A)^{-1} X_A' X_B \beta_B,$$

which results in biased prediction. The second term in the above equation gives the amount of shift between the true value of β_A and the expected value of its estimator. The bias, that is, the difference between the true value and the expected value of our prediction, is

$$\operatorname{bias}(\hat{y}_A) = \operatorname{bias}(x_A'\hat{\beta}_A) = \bigl(x_B' - x_A'(X_A' X_A)^{-1} X_A' X_B\bigr)\beta_B.$$
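The expectation formula above is easy to check by simulation (our own sketch; the sample size, the coefficients, and the correlation between the retained and the omitted predictor are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
x1 = rng.normal(size=n)
x2 = 0.7 * x1 + rng.normal(scale=0.5, size=n)   # omitted predictor, correlated with x1

XA = np.column_stack([np.ones(n), x1])          # retained design: intercept + x1
XB = x2[:, None]                                # omitted design
beta_A = np.array([1.0, 2.0])
beta_B = np.array([1.5])

# E(beta_hat_A) = beta_A + (XA' XA)^{-1} XA' XB beta_B
expected = beta_A + np.linalg.solve(XA.T @ XA, XA.T @ (XB @ beta_B))

# Average the reduced-model OLS estimates over many noise draws.
reps = 2000
acc = np.zeros(2)
for _ in range(reps):
    y = XA @ beta_A + XB @ beta_B + rng.normal(size=n)
    acc += np.linalg.lstsq(XA, y, rcond=None)[0]
mean_estimate = acc / reps
```

The slope on x1 comes out near 2 + 1.5·0.7 ≈ 3.05 rather than 2: the retained predictor absorbs part of the omitted predictor's effect, exactly as the formula predicts.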
A derivation of these results can be found in Miller (1990). A bias in an estimator is not a problem as long as it is not 'too' large. For example, the usual estimate of the standard deviation s, where

$$s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2,$$

is biased as well. For an unbiased estimator of the standard deviation see, for example, Arnold (1990, p. 266). (However, s² is an unbiased estimator of the population variance.) There are techniques available for detecting and reducing the omission bias (Miller, 1990). The crucial point here is to see that the aim of variable selection lies in selecting just enough variables so that the omission bias is small while, at the same time, increasing the variance of the prediction or, equivalently, of the regression coefficients no more than necessary. In Miller's words, "we are trading off reduced bias against increased variance" (1990, p. 6). A few words about the variables that enter into the regression equation are in order. Of course, the
dependent variable should, perhaps after a suitably chosen transformation, be approximately normally distributed. If there is evidence that the relation between the dependent and an independent
variable is curved, quadratic or even higher order terms should be included in the set of all predictors. The same holds true for any interaction terms, like XpXq, between any predictors. Note also
that it is usually not meaningful to include, for example, a quadratic term, say x_p², in the equation without x_p, or an interaction term without at least one
of the variables contained in the interaction. All of the techniques from regression diagnostics can be used to make sure that the usual regression assumptions are at least approximately satisfied.
It should be noted that this examination is, although recommended, incomplete, because the final, and hopefully adequate, regression model is not yet determined. Model checking is very useful after a
final model has been selected. Further, it should be noted that a regression model can only be fit to data if there are at least as many observations as there are predictors. If we have more
predictors than observations some of the variable selection techniques discussed in the following will not work, for instance, backward elimination. Often researchers wish to include a variable
merely on theoretical, rather than statistical, grounds. Or, it may make sense to include or exclude an entire set of variables, as is often the case with dummy variables used for coding a factor.
These options should be kept in mind when interpreting results of a variable selection technique. Variable selection techniques can be divided into "cheap" ones and others. The first group enters or
removes variables only one at a time. Therefore they can be performed with large numbers of independent variables. However, they often miss good subsets of predictors. On the other hand, the other
techniques virtually guarantee to find the "best" subsets for each number of predictors but can be performed only if the number of predictors is not too large. We discuss both groups in different
sections, starting with the best subset regression technique. But first, we present an example in order to be able to illustrate the formulas given below.
A Data Example
The data are taken from a study by von Eye et al. (1996) which investigated the dependence of recall on a number of cognitive as well as demographic variables. We use a subset of the data. This
subset contains 183 observations, 10 predictors, and the dependent variable. The data are given in Appendix E.1. Each subject in the study was required to read two texts. Recall performance for each
text was measured as the number of correctly recalled text propositions. The two recall measures were added to yield a single performance measure. There were two types of text in the experiment,
concrete texts and abstract texts. The texts had
Table 13.1: List of Variables Used to Illustrate Best Subset Selection Techniques

1. AGE: subject age in years
2. EG1: dummy variable for experimental group
3. SEX: dummy variable for sex
4. HEALTH: 4-level rating scale from very good to poor
5. READ: reading habits in hours per week
6. EDUC: 7-level rating scale indicating highest degree of formal schooling completed
7. CC1: measure of cognitive complexity: breadth of concepts
8. CC2: measure of cognitive complexity: depth of concepts
9. OVC: measure of cognitive overlap of concepts
10. TG: dummy variable for type of text: abstract vs. concrete
11. REC: dependent variable: recall performance
been created to tap cohort-specific memories. For instance, it is assumed that cohorts have good memories of music fashionable when cohort members were in their teens. Later, many cohort members
spend less time listening to music. Therefore, the music of their teens stays prominently with them as cohort-specific memory content. The same applies to such abstract concepts as heroes and
educational goals, and to such concrete concepts as clothing styles. Consider the following example. Individuals that were middle-aged around 1985 grew up listening to Elvis Presley music.
Individuals twenty years older grew up listening to Frank Sinatra music, and individuals twenty years younger grew up listening to Bruce Springsteen music. The texts had been constructed to be
identical in grammatical structure and length, but differed in the name of musician mentioned. This was done in an analogous fashion for the other topics. The hypothesis for this part of the
experiment was that at least a part of the ubiquitous age differences in memory performance can be explained when differential, cohort-specific familiarity with contents is considered. The 10
explanatory variables and the dependent variable REC are given in Table 13.1. First of all, we draw a histogram of the dependent variable, which
Figure 13.1: Histogram of Recall, REC, in the cohort memory experiment.

is shown in Figure 13.1. This plot clearly shows some skewness in the distribution of the variable REC. The normal probability
plot, Figure 13.2, shows a clear departure from normality. Therefore, a transformation of REC may be worthwhile. As this variable is a count, and counts often follow a Poisson distribution, a square root transformation is recommended to stabilize the variance and to obtain an approximately normally distributed variable. Let Y be defined as Y = √REC. After this transformation, the
normal probability plot for Y now shows no systematic departures of Y from normality (see Figure 13.3). For the following analyses we use Y as the dependent variable. All scatterplots of each
potential explanatory variable against Y (not shown here) show wide scatters with no curvature that would require higher order terms of the explanatory variables. No correlation between
Figure 13.2: Normal probability plot of raw frequencies of Recall.
Figure 13.3: Normal probability plot of Recall rates after square root transformation.
Y and one of the continuous predictors is higher in absolute value than r = 0.25. For the three dummy variables in the data set the means of Y are calculated for the two groups belonging to each
dummy variable, showing a considerable shift in the mean for TG (8.04 vs. 6.36) and EG1 (8.68 vs. 6.84) but only a small difference for SEX (7.24 vs. 7.12). From this information we should probably
select TG and EG1 as predictors but it is not clear which of the continuous predictors to select in order to improve the model fit. Now we will show how variable selection techniques can be used as a
guide to answer this question.
Best Subset Regression
In principle, best subset regression is straightforward. One "merely" has to compute the regression equation for each possible subset of the, say k, available predictors and then use some
goodness-of-fit criterion, for instance, R², to decide which set of predictors yields a good or possibly best fit. Before discussing how "best" could be defined we should note how many possible
subsets can be produced from k predictors. The answer is 2^k, as there are $\binom{k}{0} = 1$ possible ways to select no predictor (fitting only the intercept term), $\binom{k}{1} = k$ possible ways to select one predictor, $\binom{k}{2} = k(k-1)/2$ ways to select two predictors, and so on. If one sums all the possibilities, that is,

$$\sum_{j=0}^{k} \binom{k}{j},$$

the result is 2^k. With 10 predictors there are 1024 possible regression models and we could easily spend a whole day fitting all the models, and with 20 predictors fitting all models will take (at
least) a whole lifetime, 1,048,576 regression runs. Thus, speedy algorithms are needed to reduce the amount of work. There are two types of algorithms; the first type calculates all possible
regressions (see Schatzoff, Tsao, & Fienberg, 1968) whereas the second type only gives a few of the best subsets for each number of independent variables (see Furnival & Wilson, 1974). There has
been considerable work in this area and today some of these algorithms are implemented in
standard computer packages, for example, SAS and S-Plus. (See reference manuals for details.) How these algorithms work need not concern us here. We next explain how to judge the fit of a regression
model. There are several criteria in use for deciding how well a regression model fits the data. Therefore, when speaking of a "best" subset of predictors this always refers to the optimality
criterion chosen. We discuss the following criteria: R², adjusted R², and Mallows' Cp.
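As a concrete sketch (our own, on simulated data with k = 5 predictors, hence 2^5 = 32 subsets), an exhaustive search that scores every subset by the three criteria might look as follows; the formulas in the code anticipate the definitions discussed below:

```python
import itertools
import numpy as np

rng = np.random.default_rng(7)
n, k = 100, 5
Xfull = rng.normal(size=(n, k))
# Only predictors 0 and 2 matter in this simulation.
y = 1.0 + 2.0 * Xfull[:, 0] - 1.0 * Xfull[:, 2] + rng.normal(size=n)

ones = np.ones((n, 1))

def resid_ss(X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

ss_rm = float(((y - y.mean()) ** 2).sum())                 # total SS corrected for the mean
sigma2 = resid_ss(np.hstack([ones, Xfull])) / (n - k - 1)  # full-model error variance

results = []
for size in range(k + 1):
    for subset in itertools.combinations(range(k), size):
        X = np.hstack([ones, Xfull[:, list(subset)]]) if subset else ones
        sse = resid_ss(X)
        p = len(subset) + 1                                # parameters incl. intercept
        r2 = 1.0 - sse / ss_rm
        adj_r2 = 1.0 - (sse / (n - p)) / (ss_rm / (n - 1))
        cp = sse / sigma2 - (n - 2 * p)                    # Mallows' Cp
        results.append((subset, r2, adj_r2, cp))

best_r2 = max(results, key=lambda r: r[1])[0]              # R^2 always picks the full set
best_adj = max(results, key=lambda r: r[2])[0]
best_cp = min(results, key=lambda r: r[3])[0]
```

By construction the R² criterion selects the full set, while the penalized criteria recover subsets containing the two truly relevant predictors.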
First recall the definition of R², the square of the multiple correlation coefficient,

$$R^2 = \frac{SS_{\mathrm{Reg}}}{SS_{\mathrm{Rm}}}.$$

This is just the sum of squares that can be explained by the regression model, SSReg, divided by the sum of squares that can be explained when only the intercept is fitted, SSRm; that is, no
sample information is used to explain the dependent variable. (SSRm is also known as the total sum of squares corrected for the mean.) As SSRm is a constant for given data, R² and SSReg, as well as SSRes = SSRm − SSReg, are entirely equivalent ways of expressing model fit. Maximizing R² corresponds to minimizing SSRes, the residual sum of squares. All the following explanations are given in terms of R². It could be argued that the higher R², the better the fit of the model. But this argument leads directly to selecting the model with all the predictors included, as R² can never decrease with the addition of a further predictor. Formally, R²(Model I) ≤ R²(Model II) if Model II contains all the predictors of Model I plus one additional predictor. For this reason, we do not concentrate on the highest R² value but on large changes in R² between models with different numbers of predictors. A program for best subset selection will print out the model or perhaps a few models with the highest R² values for each number of independent variables if the R² measure is selected as the goodness-of-fit criterion. From this output it can be seen when further addition of predictors will not increase R² considerably. If one is after explanation rather than prediction, one should always request (if possible) a few of the best models for each number of predictors, as there are usually some models with nearly identical fit, in particular if the number of predictors is high. To save space, Table
13.2 gives only the three best subsets of one to six predictors (taken from Table 13.1) in the equation according to the R² criterion for the present sample data. If we enter all 10 predictors in the equation we have R² = 45.16%, that is, we can explain about half of the variation of Y by our explanatory variables. It is hard to say whether this indicates that some important variables have not been recorded, since a relatively low R² does not imply a poor model fit. The low R² was expected anyway, as the correlations found between the continuous predictors and Y are very low. One can see from the table the outstanding role of the variables TG (10) and EG1 (2). TG accounts for 21% and EG1 for 16% of the variation in Y, and together they account for 37% if the model is fitted with both variables in the equation. When there are three or more variables in the equation, all models in the table contain TG as well as EG1. First, this shows that the experimental condition applied had a considerable effect. Also as expected, text group (abstract vs. concrete) has a high influence on recall. Looking at the models with three predictors we see that the cognitive measures enter the model next, but the increase in R² is relatively low and the model fits are very similar. The correlations of OVC with CC1 and CC2 are -0.78 and -0.54, respectively, indicating that there is some overlap between these measures. The correlation between CC1 and CC2 is, although significant at the 5% level, virtually zero, r = 0.15. As the difference between the best model with three variables and the best model with five or six variables in the equation is only about 3%, OVC is a reasonable choice, although prediction could be slightly improved when more variables are added. Note also that from three variables in the equation onward there are no big differences between the model fits. Thus, theoretical considerations may have to guide researchers when selecting one of the other models. In some sense the other two measures, the adjusted R² and Mallows' Cp, penalize the addition of further variables to the model, that is, with
Table 13.2: Best Three Subsets for One to Six Predictors According to the R², Adj R², and Cp Criteria. R² and Adj R² are given in percent. The first column gives the number of predictors in the equation in addition to the intercept.

No. of Var.   Subset^a            R²      Adj R²    Cp
1             (10)                20.99   20.55      68.83
1             (2)                 15.92   15.46      84.72
1                                  5.45    4.93     117.56
2             (2,10)              37.01   36.31      20.58
2             (5,10)              25.08   24.25      57.98
2             (9,10)              24.20   23.36      60.76
3             (2,9,10)            41.26   40.27       9.25
3             (2,8,10)            40.35   39.35      12.11
3             (2,7,10)            39.14   38.12      15.91
4             (1,2,9,10)          42.71   41.42       6.69
4             (2,5,9,10)          42.35   41.05       7.83
4             (2,8,9,10)          41.97   40.66       9.03
5             (1,2,5,9,10)        44.19   42.61       4.05
5             (1,2,6,9,10)        43.64   42.05       5.78
5             (1,2,7,8,10)        43.33   41.72       6.76
6             (1,2,5,8,9,10)      44.59   42.70       4.80
6             (1,2,5,6,9,10)      44.56   42.67       4.90
6             (1,2,5,7,8,10)      44.51   42.62       5.06

^a From Table 13.1
these criteria the model fit can decrease when adding variables to the model.
Adjusted Squared Multiple Correlation
The adjusted R² is defined as

    Adj R² = 1 − MSRes / s_y²,

where s_y² is just the sample variance of the dependent variable. As this is a constant for a given set of data, Adj R² and MSRes are again equivalent ways of expressing this criterion. Note that Adj R² accounts for the number of predictors through MSRes, as this is just SSRes/(n − p), where n is the number of observations and p the number of predictors in the equation, including the intercept. If a predictor is added that explains nothing, it cannot decrease SSRes, but the calculation of MSRes has lost one degree of freedom; that is, the denominator of MSRes is reduced by one. Thus, MSRes increases. The model with the highest Adj R² should therefore be judged as the best model regardless of the number of predictors in the model. But this statement should be taken with a grain of salt, as it can be shown that the Adj R² criterion used in this strict sense has the tendency to include too many variables.
The strategy in model selection should therefore be the same as with the R² criterion. Using a computer program, obtain a few of the best subsets for each distinct number of predictors and select a model considering (1) the relative increase of Adj R² as more variables are entered into the model and (2) theoretical arguments.
The second-to-last column of Table 13.2 shows the best three subsets of one to six variables in the model using the Adj R² criterion. Indeed, it is no accident that the best three subsets for a given value of p are the same whether the R² or the Adj R² criterion is used. This also generalizes to the Cp criterion, to be discussed below, as it can be shown that, for a given value of p, the R², Adj R², and Cp criteria all induce the same ranking of the models. Formally, this is

    R²_p(Model I) ≤ R²_p(Model II)
    if and only if  Adj R²_p(Model I) ≤ Adj R²_p(Model II)
    if and only if  Cp(Model I) ≥ Cp(Model II),

where Model I and Model II both contain a subset of p variables but the two subsets are different. By looking at Table 13.2 we come to the same conclusions as before. It is of interest that, whereas the Adj R² criterion penalizes the addition of further variables, Adj R² is slightly increased when adding a fourth and a fifth variable to the model. But recall that the Adj R² criterion has the tendency to include more variables in the model than actually needed.
Mallow's Cp
The derivation of Mallow's Cp can be found, for example, in Christensen (1996) or Neter et al. (1996). It is assumed that the model with all predictors is the correct model, and thus we can estimate the true residual variance σ² by

    σ̂² = SSRes(k) / (n − k).

Recall that k is the number of available predictors and n denotes the number of observations. SSRes(k) is the residual sum of squares with all the predictors in the model, which by assumption is the true model as well. Let SSRes(p) be the residual sum of squares with only p of the k predictors in the model. Mallow's Cp is then given as

    Cp = SSRes(p) / σ̂² − (n − 2p).
To better understand Mallow's Cp, consider that, from a statistical viewpoint, it is desirable to minimize the expression

    (1/σ²) E[(ŷ(p) − μ)′(ŷ(p) − μ)],

where σ², the residual variance, is just a scale parameter, ŷ(p) denotes the prediction using only p predictors, and μ denotes the true but unknown mean response. Thus we evaluate the fit of a regression model by looking at the expected squared distance between the true value and the prediction given by some model. The formula is an expected value of a quadratic form involving population parameters, and thus it is typically unknown to us. Mallow's Cp is an estimator of this expression. If p predictors are sufficient to provide a good description of our data, then Mallow's Cp is as small as the distance between ŷ(p) and μ is small. For this reason we are interested in finding regression models with small Cp values. If a subset of p predictors can explain the dependent variable very well, the expected value of Cp can be shown to be

    E(Cp) = p + 2(k − p) / (n − k − 2).

If the sample is large relative to the number of predictors needed for a good description of the data, that is, n >> k and n >> p, the second term in the above equation will be small, as n is in the denominator, and E(Cp) ≈ p. Hence, a good model yields a small Cp value that is near p. The Cp values for our example are again given in Table 13.2 for the three best subsets from one to six
predictors. First note that the differences in Cp within each subset size are larger than the differences of the other criteria. If one is willing to select a model with only three predictors in it, Cp suggests using the OVC variable instead of the other two cognitive measures, CC1 and CC2. With Cp, we should select models for which Cp ≈ p, so a model with only three variables in the equation may not be good enough. After five variables are entered, that is, p = 6 (remember the intercept), Cp is close to six, so each of the models given in the table may be a reasonable choice. These models contain at least one of the three cognitive complexity variables. With six variables entered into the equation, Cp increases slightly compared to the best model with only five variables in the equation, and the value of Cp is slightly too low. Thus, according to Cp, we select one of the five-variable models. So, what we have obtained from best subset regression is not "the" best model, but we have identified a few good models, which leaves it to us to decide on theoretical grounds which model we finally select. We should now investigate with regression diagnostic techniques whether the variance of the residuals is approximately constant, whether the residuals are normally distributed, whether there are influential observations, and so on.
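The Cp computation itself is short. A sketch on synthetic data (names illustrative; the full-model residual variance is estimated with the intercept counted among the fitted parameters):

```python
# Mallow's Cp: sigma^2 is estimated from the full model with all k
# predictors, then Cp = SSRes(p)/sigma_hat^2 - (n - 2p), where p counts
# the intercept. A good p-predictor model has Cp close to p.
import numpy as np

rng = np.random.default_rng(1)
n, k = 80, 4
X = rng.normal(size=(n, k))
y = 0.5 + 1.8 * X[:, 0] + rng.normal(size=n)

def ss_res(cols):
    Xd = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    r = y - Xd @ beta
    return r @ r

# full model: k predictors plus intercept, so n - (k + 1) residual df
sigma2_hat = ss_res(range(k)) / (n - (k + 1))

def mallows_cp(cols):
    p = len(cols) + 1                    # predictors incl. intercept
    return ss_res(cols) / sigma2_hat - (n - 2 * p)

print(mallows_cp([0]))   # true predictor only: Cp should be near p = 2
print(mallows_cp([1]))   # omits the real predictor: Cp is very large
```

Note that a model omitting a genuinely important predictor inflates SSRes(p) and hence Cp far above p, which is exactly the signal the criterion exploits.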
For illustrative purposes we select the third best model with five predictors plus the intercept. For this model we calculate Cp = 6.76. This is a relatively parsimonious model because apart from the
indicators for text group and experimental condition it contains CC1 and CC2, the two nearly uncorrelated measures of Depth and Breadth of Cognitive Complexity, and Age, which is reasonable as the
experimental conditions varied the cohort-specific content of the texts. From the plot of the standardized residuals against the fitted values in Figure 13.4 the variance looks reasonably stable.
While the bulk of the observations lie in the interval between -2 and 2, there is at least one extreme residual with a value of about 4. With this observation something unusual has happened. The
model had predicted a low value for Recall but the observation has a high
Figure 13.4: Plot of fitted values against standardized residuals in cohort memory experiment.
positive residual, indicating that this person performed much better than expected from his or her predictor values. The normal probability plot given in Figure 13.5 shows that the standardized
residuals are approximately normally distributed, but the tails of the distribution are heavier than those of a normal distribution. While best subset regression can be easily determined if the
number of predictors is not too large, say, less than 40, other methods are needed that are computationally less intensive if the number of predictors is considerably higher. For this reason, these
are usually referred to as the "cheap" methods.
Stepwise Regression
There are three different methods that enter or delete variables to or from the model one at a time. These are forward selection, backward elimination, and the Efroymson algorithm. Often the name
"stepwise regression" is used for the algorithm proposed by Efroymson (1960).
CHAPTER 13. VARIABLE SELECTION TECHNIQUES

Figure 13.5: Normal probability plot in cohort memory experiment.
The idea behind forward selection is quite simple. Having k predictors, we calculate k simple linear regression models, one for each predictor. The model with the highest F value should be a reasonable choice for selecting a variable at this stage of the variable selection process. Recall that the F value for a simple linear regression is defined as

    F1(Xr) = MSReg(Xr) / MSRes(Xr) = [SSReg(Xr)/1] / MSRes(Xr),

where the subscript 1 on F means that this is the F value for deciding upon the first variable, MSRes(Xr) is the residual mean square with only the rth predictor (and of course the intercept) in the equation, and MSReg(Xr) denotes the sum of squares accounted for by the rth variable divided by its degrees of freedom. Since the numerator df = 1, MSReg(Xr) = SSReg(Xr). The variable with the highest F1 value is the variable selected by this procedure. Now, this procedure is repeated, where the variables selected at earlier stages are always part of the model,
that is, once a variable is selected by forward selection it remains in the equation until the procedure stops. To repeat the procedure means each of the k − 1 remaining variables, for instance Xs, is entered into the equation, the increase in SSReg is observed, that is, SSReg(Xr, Xs) − SSReg(Xr), and related to MSRes(Xr, Xs). This means that for each of the remaining variables F2 is now calculated as

    F2(Xs) = [SSReg(Xr, Xs) − SSReg(Xr)] / MSRes(Xr, Xs).

The variable with the highest F2 value is selected by the procedure. Again, this variable stays in the model until the variable selection process terminates. Having selected two variables, we select the third out of the remaining k − 2 by finding that variable, say Xt, with the largest F3, where

    F3(Xt) = [SSReg(Xr, Xs, Xt) − SSReg(Xr, Xs)] / MSRes(Xr, Xs, Xt).
Of course, the procedure needs a stopping rule that terminates the variable selection process if none of the remaining variables can improve the model fit considerably. Typically, the highest F value is compared to a predetermined value, which can usually be specified as an option in computer programs. This is known as the F-to-enter test. As long as the maximal F value calculated at any stage is higher than the critical value, a variable is selected. If no variable can fulfill this condition, the procedure stops. There are a few equivalent criteria for terminating forward selection. These include using the highest partial correlation, the highest increase in R², or the highest t statistics for the coefficients instead of the highest F values to decide which variable, if any, should be entered next. These procedures all yield the same results, assuming the termination criteria are modified accordingly. Before applying the forward selection method to our sample data we have to specify a critical F value. As we have 183 observations, the critical value for a regular F test at the 5% level would be about F(0.95; 1, 183) = 3.9. Thus, for ease of presentation we take a critical value of Fc = 4.0 to decide whether a variable should be entered or not. Table 13.3 gives all
Table 13.3: Results from the Forward Selection Procedure

Step   Var. Entered   SSReg    MSRes   F
1      TG             128.67   2.67    48.08
2      EG1            226.89   2.14    45.89
3      OVC            252.94   2.01    12.96
4      AGE            261.84   1.97     4.52
5      READ           270.91   1.93     4.70
the relevant information. Having entered these five variables, the next highest obtainable F value is 1.26, for variable CC2. The F values in the table can easily be recalculated. For instance, for variable OVC we have F = (252.94 − 226.89)/2.01 = 12.96. The result of this variable selection corresponds to the best subsets containing one to five variables, as can be seen from Table 13.3. The model selected is one of the most promising obtained by best subset selection. Indeed, this is the model with the lowest Cp value. But it should also be noted that, because the procedure ends with one final model, we get no information about whether there are other models with virtually the same fit. This is also true for the other two stepwise procedures, which are discussed below.
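The forward selection loop with an F-to-enter rule can be sketched as below. Synthetic data stand in for the study's variables; the critical value 4.0 mirrors the example:

```python
# Forward selection with an F-to-enter rule: at each step enter the
# candidate whose partial F value is largest, stopping when no candidate
# reaches the critical value.
import numpy as np

rng = np.random.default_rng(2)
n, k = 150, 6
X = rng.normal(size=(n, k))
y = 1.0 + 2.0 * X[:, 0] + 1.0 * X[:, 3] + rng.normal(size=n)

def ss_reg_and_mse(cols):
    Xd = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    fit = Xd @ beta
    ss_reg = ((fit - y.mean()) ** 2).sum()
    ms_res = ((y - fit) ** 2).sum() / (n - Xd.shape[1])
    return ss_reg, ms_res

def forward_select(f_enter=4.0):
    selected, remaining = [], set(range(k))
    ss_reg_cur = 0.0
    while remaining:
        scores = {}
        for c in remaining:
            ss_reg_new, ms_res = ss_reg_and_mse(selected + [c])
            scores[c] = (ss_reg_new - ss_reg_cur) / ms_res   # partial F
        best = max(scores, key=scores.get)
        if scores[best] < f_enter:
            break                      # F-to-enter test fails: stop
        selected.append(best)
        ss_reg_cur, _ = ss_reg_and_mse(selected)
        remaining.discard(best)
    return selected

print(forward_select())
```

As in the text, once a variable is entered it is never reconsidered, which is exactly what distinguishes pure forward selection from the Efroymson algorithm.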
Backward Elimination
Backward elimination is the opposite of forward selection. It starts with the full model, that is, all variables are entered in the equation (assuming that there are more observations than variables). A reason for a variable, say Xr, to be eliminated is that there is only a small loss in model fit after that variable is removed. Hence, the Fk value, that is, the test of whether Xr should be removed when all k variables are in the model, is small if Xr adds close to nothing to the model. For each of the variables in the full model Fk is calculated, and the variable with the smallest Fk value is removed from the model. Formally, Fk is

    Fk(Xr) = [SSReg(X1, ..., Xk) − SSReg(X1, ..., Xr−1, Xr+1, ..., Xk)] / MSRes(X1, ..., Xk).
The procedure is repeated after elimination of the first variable. Once a variable is removed from the model, it is not reevaluated to see whether it could possibly improve the model fit at a later stage. Having deleted, say, Xr, we check whether other variables should be deleted from the reduced model by calculating, for each remaining Xs,

    Fk−1(Xs | Xr) = [SSReg(X1, ..., Xr−1, Xr+1, ..., Xk) − SSReg(X1, ..., Xr−1, Xr+1, ..., Xs−1, Xs+1, ..., Xk)] / MSRes(X1, ..., Xr−1, Xr+1, ..., Xk).

While the notation rapidly gets messy, the idea behind it is quite simple. Again, we need a stopping rule and, as before, a critical F value is chosen and the minimal empirical F value at any stage is compared to the critical one. This comparison is known as the F-to-delete or F-to-remove test. As long as we can find variables satisfying this condition the procedure continues to eliminate variables, and stops otherwise. The critical F value can usually be specified by the user of a computer program. Now we apply the backward elimination procedure to our data. Again, we use 4.0 as our critical F value, meaning that we remove variables from the full model as long as we can find a variable with an F value smaller than 4.0. The results are given in Table 13.4. The F values are again easily obtained. For example, in step four we calculate F = (275.50 − 273.35)/1.92 = 1.120. For the first F value we calculate F = (276.88 − 276.59)/1.95 = 0.149. After these five variables have been removed from the full model, a further removal of any variable would yield an F value above the critical value.
Table 13.4: Results of the Backward Elimination Procedure. In the full model SSReg = 276.88, MSRes = 1.95

Step   Var. Removed   SSReg    MSRes   F
1      CC1            276.59   1.94    0.149
2      SEX            276.06   1.94    0.273
3      HEALTH         275.50   1.92    0.289
4      EDUC           273.35   1.93    1.120
5      CC2            270.91   1.93    1.264
The critical values for the F-to-enter and F-to-delete tests are of heuristic value only and should not be interpreted in a probabilistic manner. Typically, computer programs provide default values for the critical F values, for example, 2.0 for the F-to-enter value. The idea behind this value is that the sum of squares accounted for by a variable should be at least twice as large as the corresponding mean squared error. Another option is to select critical F values guided by the F distribution. If, for instance, 50 observations are available, a critical F value of 4 might be appropriate, since F(0.95; 1, 50) = 4.03. The F value for the F-to-delete test should always be less than the value for the F-to-enter test. Otherwise, Efroymson's procedure might not come to an end.

Discussion
Variable selection techniques should be considered as tools of exploratory data analysis (EDA) (Tukey, 1977) since hypothesis tests and confidence intervals obtained from the same data on which the
variable selection process was based are invalidated because of the bias in the estimates. Typically the models fit the data better than they fit in the population. This can be explained as follows.
If one imagines a replication study, the parameter estimates for the same model as selected from the first data set will almost certainly be smaller for the replication data. If there are many
observations available it is possible to overcome this problem by dividing all observations into two different sets. The first set is then used for model selection, and parameter estimates are
obtained from the second set. Because stepwise regression techniques end up with a final model, it is tempting to think of the final model as "the" best model, which could be far from being true. As
can be seen from best subset regression there is usually a number of models with a nearly identical fit. As long as the amount of data permits one to use best subset regression techniques it should
be done. This is the case if there are not considerably more than 40 variables available. With more than 40 variables the stepwise procedures must be used. Another problem with variable selection has
to do with influential observations. Recall that observations might appear to be influential because one or a few important variables have been omitted from the model.
If these variables are added, the observations are possibly no longer influential. Selecting a smaller model and deleting the influential observations can alter the model fit considerably, because the deleted observations are by definition influential. As long as the problems of variable selection techniques are kept in mind and they are seen as tools for doing EDA, they can give insights into the subject at hand, especially if no alternative data analysis procedure can be recommended.
Chapter 14

REGRESSION FOR LONGITUDINAL DATA

There are situations where we have not only one measurement of a personal characteristic, but have repeatedly observed a sample of individuals over time. Thus, we
have several measurements for each individual. While it is usually reasonable to assume that measurements made on different individuals are independent and hence uncorrelated, this assumption is
generally not accepted if an individual is measured on the same characteristic several times. The measurements within an individual are usually positively correlated over time. Often one observes
that the correlation decreases as the time interval between measurements increases; that is, measurements made close together in time are more related to each other than measurements farther apart.
But there are situations where it would be more sensible to think of the correlations between measurements within a person as constant. We will deal with this topic later. While measurements within a
person are usually correlated, those between individuals are thought of as being independent. Note that in this book we are discussing models for normally distributed data so the assumption of
correlation between observations is equivalent to the assumption of dependent measurements. When we are interested in relating observations to other (independent or explanatory) variables, the
familiar regression approach can be used.
Thus, in longitudinal data analysis the goal of analysis is the same as throughout the whole book, that is, to identify those explanatory variables that can explain the dependent variable and to estimate the magnitude of the effect each independent variable has. The analysis is complicated by the correlated observations within a person. This characteristic of repeated measures regression can be seen in contrast to two other well-known techniques for data analysis. In time series analysis we usually have only one or possibly a few long series of observations, and the focus is on describing the relatedness of the observations to each other over time. In multivariate statistics there is a sample of individuals, and for each individual we measure several variables. These observations per individual are typically not thought of as being related over time but cover various aspects of the individuals, e.g., miscellaneous mental abilities. The interdependence of the data for each individual is, therefore, not as highly structured as in longitudinal data analysis. Approaches to the analysis of longitudinal data differ from each other, among other aspects, in the allowable time pattern with which data are collected. If the repeated observations are made for each individual at the same points in time, or the intervals between repeated observations are the same for all individuals, analysis is considerably simplified. In the following we describe how to analyze longitudinal data when the measurements are made at the same points in time for each individual but the intervals between measurements are not necessarily equal. For an overview of approaches to analyzing longitudinal data, see Ware (1985).
Within Subject Correlation
Let us first consider the measurements made on one individual. Let y_ij be the observation on the ith individual, i = 1, ..., m, made at the jth point in time, j = 1, ..., n. Note that this notation implies that the pattern of observation points in time is the same for every individual. Associated with each y_ij is a (p × 1) vector of explanatory variables x_ij = (x_ij1, x_ij2, ..., x_ijp)′, and we assume that y_ij can be written as a linear model with correlated errors. The problem of correlations within a subject is well known from the analysis of variance. Suppose that subjects are measured repeatedly over time in a designed experiment. In a designed experiment the values of the covariates are the same for each observation in a treatment group. For example, the productivity of workers is observed under three different illumination conditions. Such an experiment is usually referred to as a repeated measurement design. Of course, this can also be seen as a mixed model, treating the illumination conditions as a fixed factor and the workers as the random factor. The model used in the analysis of variance is usually

    y_ij = μ_j + τ_i + ε_ij,

where μ_j represents the effect of the jth illumination condition on y, the ε_ij are the residual error terms, which are supposed to be independently normally distributed with constant variance, that is, ε_ij ~ N(0, σ_ε²), and the τ_i are also independent, normally distributed random variables, τ_i ~ N(0, σ_τ²). The τ_i index the workers. Both random terms are assumed to be mutually independent. Note that e_ij = τ_i + ε_ij, where the e_ij are the correlated errors in the linear model formulation from above. In the example, the random term for workers means that some workers are more productive than others,
varying with τ_i, and that this is independent of the errors under the different illumination conditions. Generally speaking, there are high scorers and low scorers. We now derive the variance of a single observation under the above model and the given assumptions:

    Var(y_ij) = Var(μ_j + τ_i + ε_ij) = Var(τ_i) + Var(ε_ij) = σ_τ² + σ_ε².

For this derivation it is essential that the two random terms be independent of each other. Next we look at the covariance of two observations within a subject, that is, Cov(y_ij, y_ij′) for j ≠ j′:

    Cov(y_ij, y_ij′) = E[(y_ij − μ_j)(y_ij′ − μ_j′)]
                     = E[(τ_i + ε_ij)(τ_i + ε_ij′)]
                     = E(τ_i²) + E(τ_i ε_ij′) + E(τ_i ε_ij) + E(ε_ij ε_ij′)
                     = σ_τ²,

since all cross-product expectations vanish by independence.
We see that under this model all within-subject correlations are equal. This is an essential assumption for the above analysis of variance models. The covariance matrix of the repeated observations of a single subject, Σ_s, therefore has the following form:

          | σ_ε² + σ_τ²   σ_τ²          ...   σ_τ²        |
    Σ_s = | σ_τ²          σ_ε² + σ_τ²   ...   σ_τ²        |
          | ...           ...           ...   ...         |
          | σ_τ²          σ_τ²          ...   σ_ε² + σ_τ² |

The correlation matrix, R_s, is obtained from Σ_s by dividing each element by σ_y² = σ_ε² + σ_τ²:

          | 1    ρ    ...   ρ |
    R_s = | ρ    1    ...   ρ |
          | ...  ...  ...  ... |
          | ρ    ρ    ...   1 |

with ρ = σ_τ² / (σ_ε² + σ_τ²).
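This uniform (compound symmetry) structure is easy to verify numerically. A sketch with illustrative variance components, not taken from any real data set:

```python
# Uniform-correlation ("compound symmetry") structure: y_ij = mu_j + tau_i
# + eps_ij with independent tau_i ~ N(0, s_tau^2) and eps_ij ~ N(0, s_eps^2)
# gives Var(y_ij) = s_eps^2 + s_tau^2 and Cov(y_ij, y_ij') = s_tau^2.
import numpy as np

rng = np.random.default_rng(4)
m, n_t = 20000, 4                        # subjects, time points
s_tau, s_eps = 1.5, 1.0                  # illustrative standard deviations

tau = rng.normal(0.0, s_tau, size=(m, 1))     # subject effect, shared over time
eps = rng.normal(0.0, s_eps, size=(m, n_t))   # occasion-specific noise
y = tau + eps                                 # mu_j = 0 for simplicity

cov = np.cov(y, rowvar=False)                 # empirical 4x4 covariance
rho_theory = s_tau**2 / (s_tau**2 + s_eps**2)
print(np.round(cov, 2))
print(round(rho_theory, 3))
```

The empirical covariance matrix shows a constant diagonal near s_eps² + s_tau² and constant off-diagonal entries near s_tau², regardless of how far apart in time the two occasions are.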
It follows that observations on different subjects have zero covariance, so that the correlation matrix of all observations is of block diagonal form, with uniform correlations as off-diagonal elements in each block. The uniform correlation model supposes that the correlation between observations does not decay with time. This situation is quite different for another prominent way of building correlation into repeated observations, known as the first-order autoregressive process (also called the first-order Markov process). Here the correlations decay with time. This model is only appealing for equidistant observations. The observation of individual i at time point j, y_ij, is thought of as having a fixed part μ_ij, which is usually a linear form in the predictors x_ijk, that is, μ_ij = β_1 x_ij1 + ... + β_p x_ijp, and an additive error ε_ij. In particular, when the process is started we have ε_i1 ~ N(0, σ²). At the second point in time the error ε_i2 = ρ ε_i1 + e_i2 depends on the error ε_i1 through a given weight ρ and a new error component e_i2, which is independent of ε_i1 and normally distributed, that is, e_i2 ~ N(0, σ_e²). As the notation suggests, ρ is the correlation between ε_i1 and ε_i2; this will be the result of the following arguments. Note that the variance of ε_i2 is different from
the variance of ε_i1. Now the variance of ε_i2 is given as ρ²σ² + σ_e². If one can assume constant variance of repeated observations, σ_e² should be (1 − ρ²)σ². We assume therefore e_i2 ~ N(0, (1 − ρ²)σ²). Having defined repeated observations in this way, we can calculate the covariance between ε_i1 and ε_i2:

    Cov(ε_i1, ε_i2) = E(ε_i1 (ρ ε_i1 + e_i2))
                    = E(ρ ε_i1²) + E(ε_i1 e_i2)
                    = ρ σ².

Now the correlation is obtained by simply dividing Cov(ε_i1, ε_i2) by the respective standard deviations (which are of course equal), and we get Cor(ε_i1, ε_i2) = ρ. We can see that the weight by which the
error at time two is influenced by the error at time one is just the correlation between the two error terms, assuming, of course, constant variance of the observations. We can generalize this idea to an arbitrary number of repeated observations. Let the error at time j be ε_ij = ρ ε_ij−1 + e_ij, with the e_ij independently N(0, (1 − ρ²)σ²) distributed. Notice that this distribution does not depend on the time point j. The correlation between time j and time j − k is then given, by repeatedly using the arguments from above, as ρ^k, k = 1, ..., j − 1. In matrix notation the correlations between the observations of the ith individual can therefore be written as

          | 1        ρ        ρ²       ...   ρ^(n−1) |
    R_s = | ρ        1        ρ        ...   ρ^(n−2) |
          | ...      ...      ...      ...   ...     |
          | ρ^(n−1)  ρ^(n−2)  ρ^(n−3)  ...   1       |
It is interesting to note that in both models for building correlation into the within-subject observations, the whole correlation matrix is determined by only one parameter, ρ. There are many other models to explain this kind of correlation. For instance, one can build more sophisticated models by using more than one parameter, or one can build a mixture of the two models explained above. With parsimoniously parameterized models, one can hope to get good parameter estimates. But this, of course, is only true if the correlational structure is correctly specified. In practice
this structure is rarely known. If one is not willing to assume a certain model it is possible to estimate the whole covariance structure. It is then necessary to estimate all elements of the
covariance matrix and not only one or possibly a few parameters according to the model for the covariance structure used. This technique is only practical when there are only a few repeated or
correlated observations, because then the number of distinct elements in the covariance structure is small. If there are n repeated observations it is necessary to estimate n(n + 1)/2 distinct
entries in the matrix because the covariance matrix is symmetric. As this number increases quadratically with n this approach is only feasible when n is not too large. What "too large" exactly means
is somewhat unclear as the amount of information in the data to estimate all n(n + 1)/2 covariances depends on the number of replications as, by assumption, the covariance structure is the same for
each replication. In brief, the estimation approach is useful when there are only a few repeated observations relative to the total number of replications. The nature of the problem can be further illustrated by considering a simple example from Dunlop (1994), where inferences based on ordinary least squares estimation can lead to incorrect conclusions if the correlation between observations is ignored. Suppose we have a sample of m individuals, half male and half female. Each individual is observed twice; y_ij represents the observation of individual i at time j, j = 0, 1. If we are interested in a possible group effect between males and females, we could write down a linear model as

    y_ij = β_0 + β_1 x_i + e_ij,

where x_i is a dummy variable taking the value 0 for males and 1 for females. Interest lies in whether β_1 is significantly different from zero. The two observations of individual i are distributed with mean μ_i = β_0 + β_1 x_i and covariance matrix σ²R_s, where the correlation matrix R_s for the ith individual has the simple form

    R_s = | 1   ρ |
          | ρ   1 |

The covariance matrix Σ_s is therefore given by σ²R_s. Note that the variance of the two observations is assumed to be constant. As before,
Σ and R without subscripts refer to the covariance and correlation matrices of all observations, respectively, and have block diagonal form. Knowing the true Σ, weighted least squares would result in the BLUE, that is, the best linear unbiased estimate, of β = (β_0, β_1)′. Suppose for the moment the true covariance matrix Σ is known. Although the ordinary least squares estimate is not the best estimate we can possibly get, it is well known that it is unbiased regardless of the true correlation structure. This is not the case for the variance estimate of β_1. Because the OLS estimate is β̂ = (X′X)⁻¹X′y, the variance can be calculated by linear means as

    Var(β̂) = σ² (X′X)⁻¹ X′RX (X′X)⁻¹,

which would only be identical with the covariance matrix of β̂ using OLS if R = I, with I being the identity matrix. Exploiting the simple structure of X, one obtains Var(β̂_1) explicitly as

    Var(β̂_1) = 2σ²(1 + ρ)/m.

Ignoring the correlation between the two repeated measures is equivalent to setting ρ = 0. The variance would then incorrectly be estimated as Var(β̂_1) = 2σ²/m. With a positive correlation (ρ > 0), which is typical for longitudinal data, the variance estimate of the interesting parameter β_1 will therefore be too small, resulting in progressive decisions concerning the group effect; that is, the null hypothesis of no group effect will be rejected too often. A similar example, which models y_ij as dependent on time, leads to false variance estimates of the parameters as well, as was pointed out by Dunlop (1994). This example underscores the point that the covariance structure of the observations has to be taken into account in order to perform a correct statistical analysis concerning inferences about the parameter vector β.
Robust Modeling of Longitudinal Data
This approach was developed by Diggle, Liang, and Zeger (1994). Before describing this approach, let us first recall some facts about weighted least
squares estimation. The WLS estimator, β̂, is defined as the value of β that minimizes the quadratic form

    (y − Xβ)′ W (y − Xβ),

and the solution is given as

    β̂ = (X′WX)⁻¹ X′Wy.

The expected value of β̂ is easily calculated using E(y) = Xβ to give E(β̂) = β; that is, the WLS estimator is unbiased for β whatever the choice of the weight matrix W. The variance of β̂ is given as

    Var(β̂) = (X′WX)⁻¹ X′WΣWX (X′WX)⁻¹,

where Σ = Var(y) is the covariance matrix. The best linear unbiased estimator of β (which is also the maximum likelihood estimator), that is, the estimator with the smallest variance, is obtained by using W = Σ⁻¹. The formulas for calculating the estimator of β and its variance are then given by

    β̂ = (X′Σ⁻¹X)⁻¹ X′Σ⁻¹y   and   Var(β̂) = (X′Σ⁻¹X)⁻¹.

It is conceptually important for the following procedure to distinguish the weight matrix W from the covariance matrix Σ. Because the true covariance matrix Σ is
usually unknown in practice, the question arises whether using a suboptimal weight matrix would dramatically change the variance estimates of β̂. This is often not the case, as can be seen using the above example from Dunlop (1994). Although Σ is usually unknown, for the following argument we keep Σ in the formula, replacing only the weight matrix W. This is done to investigate the change in the variance of β̂ if the optimal weight matrix is not used. We could interpret the Dunlop example as having used the identity matrix as the weight matrix in the formula that gives the variance of the OLS estimator. It could be asked how much the variance would change if we used the optimal weight matrix Σ⁻¹ instead of the identity matrix. Interestingly, the variance of β̂₁ does not change at all. Therefore, we say that there is no loss in efficiency if we use a "false" weight matrix. This, unfortunately, is not always the case, especially when the weight matrix used totally misses the structure of the optimal weight matrix Σ⁻¹.
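The estimator and its variance for an arbitrary weight matrix can be sketched in a few lines (Python/NumPy; the function and variable names are mine, not the book's):

```python
import numpy as np

def wls_robust(X, y, W, Sigma_hat):
    """Weighted least squares with a 'sandwich' covariance:
    beta_hat = (X'WX)^{-1} X'W y
    V        = (X'WX)^{-1} X'W Sigma_hat W X (X'WX)^{-1}
    With W = Sigma_hat^{-1}, V collapses to (X' Sigma_hat^{-1} X)^{-1}.
    """
    A = np.linalg.inv(X.T @ W @ X)
    beta_hat = A @ (X.T @ W @ y)
    V = A @ X.T @ W @ Sigma_hat @ W @ X @ A
    return beta_hat, V
```

Passing W = I reproduces the OLS estimate with a correlation-corrected variance; passing W = Σ̂⁻¹ gives the optimally weighted formulas above.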
The main idea of the robust estimation approach is given in the following paragraph. By using a weight matrix which is not optimal but reasonably close to the true Σ⁻¹, our inferences about the β vector are valid. In fact, it can be shown that as long as we substitute a consistent estimate for the covariance matrix Σ, the validity of our conclusions about β does not depend on the weight matrix W. Choosing a suboptimal weight matrix affects only the efficiency of our inferences about β; that is, hypothesis tests and confidence intervals for the regression coefficients will be asymptotically correct. A consistent estimate of the true covariance matrix Σ means that as the number of observations increases, that is, as the amount of information from the sample about the unknown parameters increases, the probability that the sample estimates of the elements of Σ are close to the true but unknown values approaches one. In brief, having a consistent estimate for Σ and a reasonable weight matrix W, one does a WLS analysis and proceeds as if β̂ ~ N(β, V̂_W), where

V̂_W = (X'WX)⁻¹X'WΣ̂WX(X'WX)⁻¹,

which is just the formula from above for var(β̂) with Σ replaced by the consistent estimate Σ̂. The subscript W indicates that the estimate depends on the specific choice of the weight matrix. Having come so far, the question that remains to be dealt with is how a consistent estimate for Σ can be obtained. Generally, this is referred to as variance component estimation. Two prominent methods for doing this are maximum likelihood (ML) and restricted maximum likelihood
(REML) estimation. The derivation of the equations for these methods is beyond the scope of this book. A derivation of the equations that need to be solved for calculating the ML or the REML estimate of Σ can be found in Diggle et al. (1994). Normally these equations can only be solved numerically, but for the case of a designed experiment the computation simplifies considerably. Recall that in a designed experiment the values of the covariates are the same for each observation in a treatment group. This is typically not the case for observational data, where there is no control over these values. Accounting for the various treatment groups, the notation is as follows. There are g treatment groups. In each group there are m_h individuals, each of which was repeatedly observed at n points in time. The complete set of measurements can be represented by y_hij, where h = 1, ..., g; i = 1, ..., m_h; j = 1, ..., n; and m = Σ_{h=1}^g m_h. The consistent REML estimate of Σ can then be obtained by first calculating the means over the observations within each of the g treatment groups for each observation point, that is, ȳ_hj = (1/m_h) Σ_{i=1}^{m_h} y_hij, for j = 1, ..., n. Let ȳ_h = (ȳ_h1, ȳ_h2, ..., ȳ_hn)' be the mean response vector for group h and y_hi the response vector for the ith individual in treatment group h. Σ̂ is then obtained by calculating (y_hi − ȳ_h)(y_hi − ȳ_h)' for each individual i. Note that this outer vector product results in a matrix. Summing all these matrices together and dividing the sum by m − g gives Σ̂. Formally, this is

Σ̂ = 1/(m − g) Σ_{h=1}^g Σ_{i=1}^{m_h} (y_hi − ȳ_h)(y_hi − ȳ_h)'.
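A minimal NumPy sketch of this pooled estimator (the function name is mine; rows of Y are individuals, columns are the n time points):

```python
import numpy as np

def pooled_within_cov(Y, groups):
    """Consistent estimate of the within-person covariance matrix:
    Sigma_hat = 1/(m - g) * sum_h sum_i (y_hi - ybar_h)(y_hi - ybar_h)'
    Y:      (m, n) array, one row per individual, n repeated measures
    groups: length-m array of group labels (g distinct values)
    """
    Y = np.asarray(Y, dtype=float)
    groups = np.asarray(groups)
    labels = np.unique(groups)
    m, n = Y.shape
    S = np.zeros((n, n))
    for h in labels:
        block = Y[groups == h]
        resid = block - block.mean(axis=0)   # y_hi - ybar_h per group
        S += resid.T @ resid                 # sum of outer products
    return S / (m - len(labels))
```

With a single group this reduces to the usual sample covariance matrix with denominator m − 1.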
Σ̂ can be obtained, for instance, by matrix calculations using S-Plus or the matrix language of the standard statistical packages, for example, SAS-IML. This matrix can also be obtained from standard statistical packages as the error covariance matrix of a multivariate analysis of variance, treating the repeated observations from each individual as a multivariate response. To calculate Σ̂ this way, the observations y_hij have to be arranged as a matrix and not as a vector as described above. Note also that (y_hi − ȳ_h) is just the vector of residuals for the repeated observations of individual i from group h. Having obtained the consistent estimate of Σ, a weight matrix W must be determined for the weighted least squares analysis. Recall that Σ is just a block diagonal matrix with the within-person Σ̂ as the blocks. This is demonstrated by the following data example. We use a simulated data set. We are then able to evaluate the efficiency of our estimates obtained from the robust approach.
A Data Example
The data should be thought of as being obtained from 100 individuals at five different points in time. The sample is divided into two groups. The first 50 cases are females and the other 50 males. The time interval between repeated observations is the same. The correlation for the multivariate normal distribution of every individual follows a first-order Markov process with ρ = 0.9; that is, the correlation of the errors of observations separated by one, two, three, and four time intervals is 0.9, 0.9² = 0.81, 0.9³ = 0.729, and 0.9⁴ ≈ 0.656, respectively. The correlation is high but decreases with time. The mean response profiles, which are added to the error, are increasing functions of time, with the difference between the groups getting smaller. Finally, the data are rounded to the nearest integer. The data set is presented in Appendix E.2 to allow the reader to reproduce the results. Such data are fairly typical for social science data. An example would be that students fill out the same questionnaire in weekly intervals starting five weeks before an important examination to measure state anxiety. The research interest would then be (1) whether there are gender differences in state anxiety and (2) whether these differences change over time in different ways for the two groups; that is, whether there is an interaction between time and gender. First of all, we can plot each individual's response profile (see Figure 14.1). This plot is too busy to be useful. A useful plot presents the mean response profiles (see Figure 14.2). We see that state anxiety is steadily
increasing and that the scores are higher for females than for males, but that the difference decreases with time. Now we first estimate Σ̂, because looking at this matrix could give us an idea what a reasonable choice for the weight matrix W would be. Remember that the optimal weight matrix is W = Σ⁻¹, so that if we look at the estimate of Σ we could possibly infer what a likely covariance
Figure 14.1: Development of state anxiety in females and males in the five weeks before an examination.
structure will be and then invert it to get W. Recall also that one useful characteristic of this procedure is that only the efficiency of the inferences about β is decreased; the validity of the inferences is not affected by a misspecification of W. Perhaps the easiest way to calculate Σ̂ by matrix operations is to reorganize the y vector into a Y matrix with individuals as rows and repeated observations as columns. In the present example this results in a 100 × 5 matrix. We then have to build a design matrix X according to a linear model including the group effect. (If the groups were defined by some factorial design, the design matrix should contain all main effects and interactions of the corresponding analysis of variance.) For this example the design matrix is

X = ( 1 0
      ⋮ ⋮
      1 0
      1 1
      ⋮ ⋮
      1 1 ),
CHAPTER 14. REGRESSION FOR LONGITUDINAL DATA
Figure 14.2: Average development of state anxiety in the two gender groups.
where the first column is the usual intercept and the second column dummy codes the two groups. It consists of 50 zeros for the 50 males and 50 ones for the 50 females. The estimated covariance matrix is then given by the matrix expression

Σ̂ = Y'(I − X(X'X)⁻¹X')Y / (m − g),

that is, by the cross-products of the within-group residuals divided by m − g.
For these data one obtains (because the matrix is symmetric, only the diagonal and the lower diagonal elements will be given)
25.191
22.515  25.606
20.673  22.817  25.224
19.194  21.029  23.051  25.367
16.127  17.668  20.072  22.608  …
This reflects quite well the true covariance matrix, which is known to be

25.000
22.500  25.000
20.250  22.500  25.000
18.225  20.250  22.500  25.000
16.402  18.225  20.250  22.500  25.000
since the data were simulated. Hence the optimal weight matrix can be determined as the inverse of the true covariance matrix Σ. The form of W is quite interesting. It is well known that a first-order Markov process, according to which the data were simulated, has the property that, given the present, the past and the future are independent. This conditional independence of observations that are more than one time point apart is reflected by the zero elements in the inverse of the covariance matrix; for details see Whittaker (1990). The next step is to model the mean response profiles for the two groups. This can be done by polynomial regression. As can be seen from the plot, although there is some curvature in the response profile for the male group, a linear regression describes the mean responses over time quite well. If there were more time points, polynomial regression could be used. Now we set up the models for the males and females. They are
E(y_Mj) = β₀ + β₁t_j,
E(y_Fj) = β₀ + β₁t_j + τ + γt_j.

Of course, the model can be rewritten as

E(y_Fj) = (β₀ + τ) + (β₁ + γ)t_j,
E(y_Mj) = β₀ + β₁t_j,
which shows that we are actually fitting two separate regression lines, one for each of the two groups. (The values of t_j were assumed for simplicity to be 1, ..., 5.) The design matrix X for doing the regression has 500 rows and 4 columns (intercept, time, group, and group × time). It has the following form, where the first 5 rows correspond to the five repeated observations of the first male. The second complete block of rows, which are rows 251–255 in the complete matrix, corresponds to the five repeated observations of the first female. These two blocks of five repeated observations are repeated 50 times for the male and 50 times for the female group, respectively, and one obtains

X = ( 1 1 0 0
      1 2 0 0
      1 3 0 0
      1 4 0 0
      1 5 0 0
      ⋮
      1 1 1 1
      1 2 1 2
      1 3 1 3
      1 4 1 4
      1 5 1 5
      ⋮ ).
W and Σ̂ are both block diagonal matrices and have 500 rows and 500 columns. If we use the optimal weights and the consistent estimate of Σ to fit the model and calculate standard errors by weighted least squares, the estimated regression coefficients (with standard errors in parentheses) are β̂₀ = 5.362 (0.788), β̂₁ = 4.340 (0.153), τ̂ = 12.635 (1.114), and γ̂ = −1.928 (0.217). The p values for the four hypotheses are often of interest. Tests for the hypotheses H₀: β₀ = 0; H₀: β₁ = 0; H₀: τ = 0; and H₀: γ = 0 can be derived from the assumption that β̂ ~ N(β, V̂_W). In the example the β vector is given by β = (β₀, β₁, τ, γ). As the true covariance matrix is typically unknown, tests derived from the above assumption will only be asymptotically correct. If, for instance, H₀: β₀ = 0 is true, then β̂₀/σ̂(β̂₀) will be approximately standard normally distributed. The same holds for the other three hypothesis tests. For hypothesis tests involving two or more parameters simultaneously, see Diggle et al. (1994, p. 71). We calculate the observed z values, which should be compared to the standard normal distribution, as z(β̂₀) = 5.362/0.788 = 6.80, z(β̂₁) = 4.340/0.153 = 28.37, z(τ̂) = 12.635/1.114 = 11.34, and z(γ̂) = −1.928/0.217 = −8.88.
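These z values, and the corresponding two-sided p values under the standard normal approximation, can be reproduced in a few lines (Python; the parameter labels are mine, the numbers are those reported above):

```python
import math

# Wald z tests: estimate divided by its standard error, compared to
# the standard normal distribution.
est = {"b0": 5.362, "b1": 4.340, "tau": 12.635, "gamma": -1.928}
se  = {"b0": 0.788, "b1": 0.153, "tau": 1.114,  "gamma": 0.217}

for name in est:
    z = est[name] / se[name]
    p = math.erfc(abs(z) / math.sqrt(2))   # two-sided normal p value
    print(f"{name}: z = {z:.2f}, p = {p:.2e}")
```

All four p values are vanishingly small, matching the statement that the observed z values lie in the extreme tails of the standard normal distribution.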
All observed z values lie in the extreme tails of the standard normal distribution, so that all calculated p values for one- or two-sided tests are virtually zero. The conclusions reached by this analysis are that, as can be seen from the plot, anxiety increases as the examination comes nearer (β̂₁ > 0) and the anxiety level for females is generally higher than that for males (τ̂ > 0). But there is also an interaction with time (γ̂ < 0), showing that the difference between males and females decreases with time. The effect of τ should only be interpreted with great caution, as one has to take the differential effect of time, γ, into account. One benefit of using simulated data with known characteristics is that one can compare results with the OLS analysis of the data, that is, W = I, keeping the estimate of Σ in the formula to estimate the standard errors. This analysis yields the following values: β̂₀ = 7.142 (0.803), β̂₁ = 3.918 (0.153), τ̂ = 11.089 (1.135), and γ̂ = −1.626 (0.216). While the coefficients are slightly different, the obtained standard errors are nearly the same. Hypothesis tests follow as before, with the result that all four parameter estimates are highly significant. If we look at the two plots in Figures 14.3 and 14.4 for the mean response profiles, including the estimated regression lines for the optimal and the OLS analyses, we can see that both analyses yield very similar results. The conclusions of the analysis would surely not change. We see that the estimated standard errors are quite robust against a misspecification of W. As the estimated covariance matrix is quite close to the true one, we would not expect the results to change as much for W = Σ̂⁻¹ as for W = I. But this cannot be recommended generally. Normally we use a guess for W which we hope captures the important characteristics. This is done by looking at the estimated covariance matrix and investigating how the covariances change with time. For the estimated covariance matrix it is seen that the ratio of the covariances of two consecutive points in time is about 0.9. From this we could build a covariance matrix (which is, in this instance, the true correlation matrix) and invert it to obtain our guess for W. Note that the results of the analysis do not change if we multiply W by an arbitrary constant. Thus it is only necessary to obtain a guess for the true covariance matrix up to an arbitrary constant, taking the inverse of that guess as the guess for W.
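As a concrete sketch of this recipe (NumPy; σ² = 25 and ρ = 0.9 as in the simulation), one can build the first-order autoregressive covariance matrix and invert it to obtain the weight matrix, whose tridiagonal form exhibits the conditional independence structure mentioned earlier:

```python
import numpy as np

# First-order Markov (AR(1)) covariance for n = 5 time points:
# Sigma[j, k] = sigma^2 * rho^{|j - k|}
sigma2, rho, n = 25.0, 0.9, 5
Sigma = sigma2 * rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))

print(np.round(Sigma, 3))   # compare with the true covariance matrix given above

# The optimal weight matrix W = Sigma^{-1} is tridiagonal: all entries more
# than one step off the diagonal vanish (conditional independence of
# observations more than one time point apart).
W = np.linalg.inv(Sigma)
print(np.max(np.abs(np.triu(W, 2))))   # essentially zero
```

Rescaling W by any constant leaves the resulting WLS estimates unchanged, so working with the correlation matrix instead of the covariance matrix makes no difference, as noted above.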
Figure 14.3: Mean response profiles and regression lines for both groups calculated using the optimal weight matrix.
Figure 14.4: Mean response profiles and regression lines for both groups using the identity matrix as the weight matrix.
Chapter 15
PIECEWISE REGRESSION

Thus far, we have assumed that the regression line or regression hyperplane is uniformly valid across the entire range of observed or admissible values. In many instances,
however, this assumption may be hard to justify. For instance, one often reads in the newspapers that cigarettes damage organisms only if consumption goes beyond a certain minimum. A regression
analysis testing this hypothesis would have to consider two slopes: one for the number of cigarettes smoked without damaging the organism, and one for the higher number of cigarettes capable of
damaging the organism. Figure 15.1 displays an example of a regression analysis with two slopes that meet at some cutoff point on X. The left-hand slope is horizontal; the right-hand slope is
positive. This chapter presents two cases of piecewise regression (Neter et al., 1996; Wilkinson, Blank, & Gruber, 1996). The first is piecewise regression where regression lines meet at the cutoff
(Continuous Piecewise Regression; see Figure 15.1). The second is piecewise regression with a gap between the regression lines at the cutoff (Discontinuous Piecewise Regression). These two cases share in common that the cutoff point is defined on the predictor, X. Cutoff points on Y would lead to two or more regression
Figure 15.1: Continuous piecewise regression with one cutoff point (Damage plotted against Number of Cigarettes smoked).
lines that each cover the entire range of X values. The examples given in the following sections also share in common that there is only one cutoff point. The methods introduced for parameter
estimation can be applied in an analogous fashion for estimation of parameters for problems with multiple cutoff points.
Continuous Piecewise Regression
We first consider the case where (1) the piecewise regression is continuous, and (2) the cutoff point is known. Let x_c be the cutoff point on X. Then, the model of Simple Continuous Piecewise Regression can be described as

Y = b₀ + b₁X + b₂(X − x_c)X₂ + Residual,    (15.1)
where X is the regular predictor variable, and X₂ is a coding variable that assumes the following two values:

X₂ = 1 if X > x_c,
     0 otherwise.

The effect of the coding variable is that when b₂ is estimated, only values greater than x_c are considered. The design matrix for this type of piecewise regression contains three vectors, specifically, the constant vector, the vector of predictor values, and a vector that results from multiplying X₂ with the difference (X − x_c). The following design matrix presents an example. The matrix contains data for six cases, the second, the third, and the fourth of which have values greater than x_c. For each case, there is a constant, a value for X, and the value that results from
X₂(X − x_c):

X = ( 1  x_11  0
      1  x_21  (x_21 − x_c)
      1  x_31  (x_31 − x_c)
      1  x_41  (x_41 − x_c)
      1  x_51  0
      1  x_61  0 ).
The zeros in the last column of X result from multiplying (x_i1 − x_c) with X₂ = 0, where i indexes subjects and j indexes variables. The following data example analyzes data from the von Eye et al. (1996) study. Specifically, we regress the cognitive complexity variables, Breadth (CC1) and Overlap (OVC), for n = 29 young adults of the experimental group. Figure 15.2 displays the Breadth by Overlap scatterplot and the OLS regression line. The regression function estimated for these data is OVC = 0.90 − 0.03·CC1 + Residual. This equation explains R² = 0.60 of the criterion variance and has a significant slope parameter (t = −6.40, p < 0.01). Yet, the figure shows that there is no sector of the predictor, CC1, where this regression line provides a particularly good approximation of the data.
Figure 15.2: Regression of Overlap of Cognitive Complexity, OVC, on Breadth of Cognitive Complexity, CC1.
Therefore, we test the hypothesis that the steep slope of the regression of OVC on CC1 that characterizes the relationship for values CC1 ≤ 15 is followed by a much flatter slope for values CC1 > 15. To test this hypothesis we perform the following steps:

1. Create the variable X₂ with values as follows:

   X₂ = 0 if CC1 ≤ 15,
        1 otherwise.

2. Create the product (CC1 − 15)·X₂.

3. Estimate parameters for the regression equation OVC = b₀ + b₁·CC1 + b₂·(CC1 − 15)·X₂ + Residual.

The following parameter estimates result: OVC = 1.130 + 0.056·CC1 + 0.002·(CC1 − 15)·X₂ + Residual.
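The three steps can be sketched in a few lines (NumPy; because the von Eye et al. data appear in the appendix rather than here, the sketch uses artificial data with a known kink, not the CC1/OVC values):

```python
import numpy as np

def piecewise_design(x, xc):
    """Design matrix for continuous piecewise regression, model (15.1):
    columns = constant, x, (x - xc) * X2 with X2 = 1 if x > xc, else 0."""
    x = np.asarray(x, dtype=float)
    x2 = (x > xc).astype(float)              # coding variable X2
    return np.column_stack([np.ones_like(x), x, (x - xc) * x2])

# Artificial data with a kink at xc = 10: slope 2 below, slope 0.5 above.
x = np.arange(1.0, 21.0)
y = 1.0 + 2.0 * x - 1.5 * np.clip(x - 10.0, 0.0, None)

# OLS on the piecewise design recovers b0 = 1, b1 = 2, b2 = -1.5 exactly,
# because the data contain no noise.
b, *_ = np.linalg.lstsq(piecewise_design(x, 10.0), y, rcond=None)
print(np.round(b, 6))
```

The same design-matrix construction, applied to the CC1/OVC data with x_c = 15, yields the parameter estimates reported above.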
Figure 15.3: Piecewise regression of Overlap on Breadth with known cutoff point.

All of the parameters are significant, with the following three t values for b₀, b₁, and b₂, respectively: 14.98, 4.35, and −8.40. A portion of R² = 0.769 of the criterion variance is explained. Figure 15.3 displays the CC1 by OVC scatterplot with the piecewise regression line. There can be no doubt that the data are much better represented by the piecewise regression than by the straight-line regression in Figure 15.2.
Discontinuous Piecewise Regression

In this section we (1) illustrate discontinuous piecewise regression and (2) show how curvilinear regression can be applied in piecewise regression. While we show this using one example, curvilinear and discontinuous piecewise regression do not necessarily go hand in hand. There are many
examples of curvilinear continuous piecewise regression (Wilkinson, Hill, Welna, & Birkenbeuel, 1992, Ch. 9.6), and there are approaches to linear piecewise regression that are discontinuous (Neter
et al., 1996, Chap. 11). Formula (15.1) presents a regression equation with two linear, additive components that can differ in slope. Specifically, these components are b₁X and b₂(X − x_c)X₂. This section illustrates how one (or both) of these linear components can be replaced by a curvilinear component. The regression model used in this section has the form

Y = b₀ + Σ_j b_j φ_j + Residual,    (15.3)

where the φ_j are regression functions that can be linear or curvilinear. Formula (15.1) is a special case of (15.3) in that the sum goes over two linear regression functions. The following example uses
the data from Figure 15.3 again. The figure suggests that there are segments of the predictor, Breadth, that are not particularly well covered. For example, there are four cases with values of Breadth < 7 and Overlap > 0.8 that are poorly represented by the left-hand part of the regression line. Because changing the parameters of the linear, continuous piecewise regression line would lead to poor representations in other segments of the predictor, we select nonlinear regression lines. In addition, we allow these lines to be discontinuous, that is, unconnected at the cutoff. To optimize the fit for the regression of Overlap on Breadth of Cognitive Complexity we chose the function

Overlap = b₀ + b₁·exp(b₂·(−Breadth)) + b₃·Breadth² + Residual,    (15.4)

where the cutoff between the two components of the piecewise regression is, as before, at Breadth = 15. The first component of the curvilinear piecewise regression is an exponential function of Breadth. The second component is a quadratic function of Breadth. Technically, the function given in (15.4) can be estimated using computer programs that perform nonlinear OLS regression, for instance, SPSS, SAS, BMDP, and SYSTAT. The following steps are required:
1. Create an indicator variable, X₂, that discriminates between predictor values before and after the cutoff;

2. Multiply the indicator variable, X₂, with Breadth squared (see (15.4));

3. Estimate parameters for (15.4).

In many instances, even standard computer programs for OLS regression can be used, when all terms can be transformed such that a linear regression model can be estimated where all exponents of parameters equal 1 and parameters do not appear in exponents. For the present example we estimate the parameters

Overlap = 0.29 + 2.22·exp(0.23·(−Breadth)) + 0.00015·Breadth² + Residual,

where the second part of the regression model, that is, the part with Breadth², applies to all cases with values of Breadth ≥ 15. The scatterplot of Breadth by Overlap of Cognitive Complexity appears in Figure 15.4, along with the curvilinear, discontinuous piecewise regression line. The portion of variance accounted for by this model is R² = 0.918, clearly higher than the R² = 0.769 explained
by the model that only used linear regression lines. The figure shows that the two regression lines do not meet at the cutoff. When linear regression lines are pieced together, as was illustrated in the first data example (see Figure 15.3), one needs a separate parameter that determines the magnitude of the leap from one regression line to the other. In the present example with curvilinear lines, the curves do not meet at the cutoff and, thus, create a leap.

Extensions. Piecewise regression can be extended in various ways, two of which will be mentioned in this section. A first and obvious extension concerns the number of cutoffs. Consider a researcher who investigates the Weber-Fechner Law. This law proposes that differences in such physical characteristics of objects as weight or brightness can be perceived only if they are greater than a minimum proportion. This proportion is assumed to be constant: ΔR/R = constant.
Figure 15.4: Curvilinear, discontinuous piecewise regression of Overlap on Breadth of Cognitive Complexity.
It is well known that this law is valid only in the middle of scales, not at the extremes. Therefore, a researcher investigating this law may consider two cut-offs rather than one. The first cutoff
separates the lower extreme end of the scale from the rest. The second cutoff separates the upper extreme end from the rest. The second extension concerns the use of repeated observations. Consider a
researcher interested in the stability of effects of psychotherapy. This researcher may wish to model the observations made during the therapy using a first type of function, for example, a
negatively decelerated curve as in the left-hand side of Figure 15.4. For the time after completion of therapy, this researcher may wish to use some other model.
Caveat. It is tempting and fun to improve fit using more and more refined piecewise regression models. There are not too many data sets that a data analyst who masters the art of nonlinear and piecewise estimation will not be able to depict very well. "Nonlinear estimation is an art... is rococo" (Wilkinson et al., 1992, p. 428). However, the artist needs guidance from theory. If there is no such guidance, models may provide
impressive fit. However, that fit often reflects sample specificities rather than results that can be replicated or generalized.
Chapter 16
DICHOTOMOUS CRITERION VARIABLES

This chapter presents a simple solution for the case where both the predictor and the criterion are dichotomous, that is, only have two categories. Throughout
this chapter, we assume that the predictor can be scaled at the nominal level. Alternatively, the predictor can be scaled at any higher level, but was categorized to reduce the number of levels. As far as the criterion is concerned, scaling determines the sign of the slope parameter and its interpretation. If both the predictor variable and the criterion variable are scaled at least at the ordinal level, a sign can be meaningfully interpreted. For nominal-level predictors or criteria, signs of regression slopes are arbitrary. It should be noted that, in many contexts, the distinction between scale levels does not have any consequences when variables are dichotomous. In the present context, scaling does make a difference. More specifically, when the criterion variable is scaled at the ordinal level (or higher), the sign can be interpreted as follows:

1. A positive sign suggests that the "low" predictor category allows one to predict the "low" criterion category and the "high" predictor
Table 16.1: Scheme of a 2 × 2 Table

                     Predictor Categories
Criterion Values        1        2
        1               a        b
        2               c        d
category allows one to predict the "high" criterion category.

2. A negative sign suggests that the "low-high" and the "high-low" category combinations go hand in hand.

The theoretical background for the method presented here is the well-known result that the φ coefficient of association between categorical variables can be shown to be a special case of Pearson's correlation coefficient, r. As is also well known, the relationship between r and the slope coefficient, b₁, is

b₁ = r·(s_y/s_x),

where s_x is the standard deviation of X, and s_y is the standard deviation of Y. Thus, one can ask what the regression of one categorical variable onto another is. In this chapter we focus on dichotomous variables. Consider Table 16.1. The cells of this 2 × 2 table, denoted by a, b, c, and d, contain the numbers of cases that display a certain pattern. For instance, cell a contains all those cases that display predictor category 1 and criterion value 1. From this table, the regression slope coefficient can be estimated as follows:

b₁ = (ad − bc) / [(a + b)(c + d)].    (16.1)

The intercept can be estimated using (16.1) as follows:

b₀ = (a + c)/n − b₁·(a + b)/n.    (16.2)
Table 16.2: Handedness and Victory Pattern in Tennis Players

                 Handedness
Victories        l        r
    1           41       38
    2            4       17
In the following numerical example we predict the number of victories in professional tennis, V, from handedness, H. We assign the following values: V = 1 for a below-average number of victories and V = 2 for an above-average number of victories, and H = l for left-handers and H = r for right-handers. A total of n = 100 tennis players is involved in the study. The frequency table appears in Table 16.2. Inserting into (16.1) yields

b₁ = (41·17 − 4·38) / [(41 + 38)(4 + 17)] = 0.329.

Inserting into (16.2) yields

b₀ = (41 + 4)/100 − 0.329·(41 + 38)/100 = 0.45 − 0.26 = 0.19.
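Formulas (16.1) and (16.2) are easy to check against this example (Python; the function name is mine):

```python
# Slope and intercept for regression from a 2x2 table, per (16.1)-(16.2),
# checked with the tennis example (a=41, b=38, c=4, d=17, n=100).
def two_by_two_regression(a, b, c, d):
    n = a + b + c + d
    b1 = (a * d - b * c) / ((a + b) * (c + d))   # (16.1)
    b0 = (a + c) / n - b1 * (a + b) / n          # (16.2)
    return b0, b1

b0, b1 = two_by_two_regression(41, 38, 4, 17)
print(round(b1, 3), round(b0, 2))   # 0.329 and 0.19, as in the text
```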
Taken together, we obtain the following regression equation: Victories = 0.19 + 0.329·Handedness + Residual. The interpretation of regression parameters from 2 × 2 tables differs in some respects from the interpretation of standard regression parameters. We illustrate this by inserting into the last equation. For H = r = 1 we obtain ŷ = 0.19 + 0.329·1 = 0.519, and for H = l = 0 (notice that one predictor category must be given the numerical value 0, and the other must be given the value 1; this is a variant of dummy coding) we obtain ŷ = 0.19 + 0.329·0 = 0.19. Obviously, these values are not frequencies. Nor are they values that the criterion can assume. In general, the interpretation of regression parameters and estimated values in 2 × 2 regression is as follows:

1. The regression parameter β₀ reflects the conditional probability that the first criterion category is observed, given the predictor category with the value 0 or, in more technical terms, β₀ = prob(Y = 1 | X = 0).

2. The regression slope parameter β₁ suggests how much the probability that Y = 1, given X = 1, differs from the probability that Y = 1, given X = 0.

Using these concepts we can interpret the results of our data example as follows:

• The intercept parameter estimate, b₀ = 0.19, suggests that the probability that left-handers score a below-average number of victories is p = 0.19.

• The slope parameter estimate, b₁ = 0.329, indicates that the probability that right-handers score a below-average number of victories is p = 0.329 higher than the probability for left-handers.
Chapter 17
COMPUTATIONAL ISSUES

This chapter illustrates the application of regression analysis using statistics software packages for PCs. The package used for illustration is SYSTAT for Windows, Releases 5.02 and 7.0. Rather than giving sample runs for each case of regression analysis covered in this volume, this chapter exemplifies a number of typical cases. One sample set of data is used throughout this chapter. The file contains six cases and three variables. This chapter first covers data input. Later sections give illustrations of regression runs.
Creating a S Y S T A T S y s t e m File
SYSTAT can be used to input data. However, the more typical case is that the data analyst receives data on a file in, for instance, ASCII code. In this chapter we illustrate how to create a SYSTAT system file using an existing ASCII code data file. The file used appears below:

1 3 2 5 3 9 4 11
One of the main characteristics of this file is that data are separated by spaces. This way, SYSTAT can read the data without format statements. The first of the columns contains the values for Predictor1, the second column contains the values for Criterion, and the third column contains the values for Predictor2. For the following purposes we assume that the raw data are in the file "Raw.dat." SYSTAT expects an ASCII raw data file to have the suffix "dat." To create a SYSTAT system file from this raw data ASCII file, one issues the following commands:

Command | Effect
Data | Invokes SYSTAT's Data module
Save Compdat | Tells SYSTAT to save data in file "Compdat.sys," where the suffix "sys" indicates that this is a system file (recent versions of SYSTAT use the suffix "syd")
Input Predictor1, Criterion, Predictor2 | Conveys variable names to the program
Get Raw | Reads raw data from file "Raw.dat"
Run | Saves raw data in system file "Compdat.sys"; the system file also contains the variable names
After these operations there exists a SYSTAT system file. This file is located in the directory that the user has specified for Save commands. For the following illustrations we assume that all files that we create and use are located in the default directory, that is, C:\SYSTATW5\. Before analyzing these data we create a graphical representation of the data, and we also calculate a correlation matrix. Both allow us to get a first image of the data and their characteristics. Figure 17.1 displays the response surface of the data in file "Compdat.sys."
Figure 17.1: Response surface for sample data.

This response surface can be created by issuing the following commands:

Command | Effect
Use Compdat | Reads file "Compdat.sys"
click Graph, Options, Global | Provides print options for graphics
click Thickness = 3, Decimals to Print = 1, and select from the font pull-down the British font | Specifies output characteristics
OK | Concludes selection of output characteristics
click Graph, 3-D, 3-D | Opens the menu for 3-D graphs
assign Predictor1 to X, Predictor2 to Y, and Criterion to Z | Assigns variables to axes in the 3-D representation
click Axes and insert in the Label fields: for X, Predictor1; for Y, Predictor2; and for Z, Criterion; OK | Labels axes
click Surface and select the Kernel Smoother, OK | Specifies method for creating the surface
OK | Starts creation of graph on screen
click File in Graph window and select Print | Initiates printing process
increase Enlarge to 150%; OK | Increases print output to 150% (results in output 6 inches in height; width depends on length of title)
select OK from the following window | Starts printing
Table 17.1 gives the variable intercorrelations and the Bonferroni significance tests for these intercorrelations.

Table 17.1: Matrix of Intercorrelations (Upper Triangle) and Bonferroni Significance Values (Lower Triangle) for Predictor1, Criterion, and Predictor2

             Predictor1   Criterion   Predictor2
Predictor1     1.000        0.878       0.832
Criterion      0.064        1.000       0.775
Predictor2     0.121        0.211       1.000
The correlation table can be created by issuing the following commands:

Command | Effect
Use Compdat | Reads file "Compdat.sys"
click Stats, Corr, Pearson | Selects method for statistical analysis
click Bonferroni | Specifies that Bonferroni significance tests be performed; variable selection is not necessary if all variables are involved in the correlation matrix
OK | Initiates computation and sends results to screen
highlight desired print output; click File, Print, OK | Sends highlighted selection to printer
Both the response surface and the correlation table suggest that predictors and criteria are strongly intercorrelated. After Bonferroni adjustment, the correlations fail to reach significance. However, the portions of variance shared in common are high for all three variable pairs. The following sections illustrate the application of regression analysis using SYSTAT when regressing the variable Criterion onto Predictor1, Predictor2, or both.
17.2 Simple Regression
This section illustrates the application of simple linear regression. We regress Criterion onto Predictor1. To do this, we issue the following commands:
Command | Effect
Use Compdat | Reads file "Compdat.sys"
click Stats, MGLH | Initiates statistical analysis; selects the MGLH (Multivariate General Linear Hypotheses) module
click Regression | Selects the regression module
assign Predictor1 to Independent; assign Criterion to Dependent | Specifies predictor and criterion in the regression
OK | Starts calculations; sends results to screen
To print results we proceed as before. The following output results for the present example:

USE 'A:\COMPDAT.SYS'
SYSTAT Rectangular file A:\COMPDAT.SYS,
created Mon Nov 17, 1997 at 19:26:32, contains variables:
P1 C P2
>MODEL C = CONSTANT+P1
>ESTIMATE
Dep Var: C   N: 6
Multiple R: 0.878   Squared multiple R: 0.771
Adjusted squared multiple R: 0.714   Standard error of estimate: 1.757

Effect      Coefficient   Std Error   Tol.      t      P(2 Tail)
CONSTANT       2.600        1.635      .      1.590      0.187
P1             1.543        0.420    1.000    3.674      0.021
Reading from the top, the output can be interpreted as follows:
- The top four lines tell us what file was opened, give the date and time, and name the variables in the file.
- The result display begins with (1) the dependent variable, (C)riterion, and gives (2) the sample size, 6, (3) the multiple R, and (4) the squared multiple R. When there is only one predictor, as in the present example, the multiple R and the Pearson correlation, r, are identical (see Table 17.1).
- The next line gives the adjusted squared multiple R, which is calculated as

R²(adj) = 1 - (1 - R²)(N - 1)/(N - m - 1),

where m is the number of predictors and N is the sample size. The adjusted R² estimates the portion of variance accounted for in a new sample from the same population. At the end of the same line we find the standard error of the estimate, defined as the square root of the residual mean square.
- After a table header we find the parameter estimates and their standard errors; the standardized coefficients ("betas" in SPSS); the tolerance value (see the section on multiple regression); the t value; and the two-sided tail probability for the t's.

To depict the relationship between Predictor1 and Criterion we create a graph with the following commands:
Command | Effect
Use Compdat | Reads file "Compdat.sys"
click Graph, Options, Global | Provides print options for graphics
click Thickness = 3, Decimals to Print = 1, and select from the font pull-down the British font | Specifies output characteristics
OK | Concludes selection of output characteristics
click Graph, Plot, Plot | Opens the menu for 2-D plots
assign Predictor1 to X and Criterion to Y | Assigns variables to axes
click Axes and insert in the Label fields: for X, Predictor1, and for Y, Criterion; OK | Labels axes
click Smooth and OK | Specifies type of regression line to be inserted in the graph; in the present example there is no need to make a decision, because a linear regression line is the default
click Symbol, select symbol to represent data points (we take the star), and select size for data points (we take size 2) | Specifies print options
OK | Starts creation of graph on screen
click File in Graph window and select Print | Initiates printing process
increase Enlarge to 150%; OK | Increases print output to 150% (results in output that is 6 inches in height; width depends on length of title)
The resulting graph appears in Figure 17.2. Readers are invited to create the same regression output and the same figure using Criterion and Predictor2.
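The quantities in such an output can also be reproduced by hand. The following Python sketch computes the slope, intercept, R², adjusted R², standard error of estimate, and the t value for the slope of a simple regression with N = 6 and m = 1 predictor; the data are made up for illustration, since the chapter's raw data file is not reproduced in full here:

```python
import math

# Hypothetical data with N = 6, as in the chapter's example; the
# actual raw values from "Compdat.sys" are not fully reproduced here.
x = [1, 2, 3, 4, 5, 6]
y = [3, 4, 4, 6, 5, 8]

n, m = len(x), 1                 # sample size and number of predictors
mx, my = sum(x) / n, sum(y) / n
sxx = sum((v - mx) ** 2 for v in x)
syy = sum((v - my) ** 2 for v in y)
sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))

b1 = sxy / sxx                   # slope
b0 = my - b1 * mx                # intercept
r2 = sxy ** 2 / (sxx * syy)      # squared multiple R
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - m - 1)
ss_res = syy * (1 - r2)
se_est = math.sqrt(ss_res / (n - m - 1))   # sqrt of residual mean square
t_b1 = b1 / (se_est / math.sqrt(sxx))      # t value for the slope

print(round(b0, 3), round(b1, 3))        # → 2.0 0.857
print(round(r2, 3), round(adj_r2, 3))    # → 0.804 0.754
print(round(se_est, 3), round(t_b1, 3))  # → 0.886 4.045
```

Note how the adjusted R² (0.754) is smaller than R² (0.804), mirroring the shrinkage visible in the SYSTAT output above.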
17.3 Curvilinear Regression
For the following runs suppose the researchers analyzing the data in file "Compdat.sys" derive from theory that a curvilinear regression slope may validly describe the slope of Criterion. Specifically, the researchers assume that there may be a quadratic slope.

Figure 17.2: Simple linear regression of Criterion on Predictor1 of sample data.

To test this assumption the researchers perform the following two steps:
1. Create a variable that reflects a quadratic slope for six observation points.
2. Predict Criterion from this variable.
Let us call the variable that gives the quadratic slope Quad. Then, one performs the following steps in SYSTAT to test the assumption of a quadratic fit. Step 1 is to create a coding variable for a quadratic slope:
Command | Effect
Use Compdat | Reads data file "Compdat.sys"
click Window, Worksheet | Opens a worksheet window that displays the data
move cursor into the top line, that is, the line that contains the variable names, and move to the right, one cell past the last existing name | Opens an array (column) for new entries
type "Quad" | Creates variable Quad
move cursor one step down, type "5," move cursor one step down, type "-1," and repeat until the last value for variable Quad is inserted | Inserts the values for variable Quad (values are taken from Kirk (1995, Table E10)); the full set of polynomial coefficients is 5, -1, -4, -4, -1, 5
click File, select Save | Saves the new file under the same name, that is, under "Compdat.sys"
click File, select Close | Closes the worksheet; reads the new data; readies data for analysis
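The coefficients entered here are the standard quadratic orthogonal polynomial contrast for six equally spaced points; together with the linear contrast used later in the chapter, their defining properties can be checked in a few lines (a sketch, not part of the SYSTAT workflow):

```python
# Linear and quadratic orthogonal polynomial contrasts for six equally
# spaced observation points (as tabulated, e.g., in Kirk, 1995).
lin = [-5, -3, -1, 1, 3, 5]
quad = [5, -1, -4, -4, -1, 5]

# Each contrast sums to zero ...
assert sum(lin) == 0 and sum(quad) == 0
# ... and the two are mutually orthogonal (zero dot product), which is
# why their Tolerance values equal 1 later in the chapter.
assert sum(a * b for a, b in zip(lin, quad)) == 0
print("contrasts are orthogonal")
```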
We now have a new data file. It contains the original variables, Predictor1, Predictor2, and Criterion, and the new variable, Quad, which carries the coefficients of a quadratic polynomial for six observation points. In Step 2, we predict Criterion from Quad. Step 2 consists of testing the assumption that Criterion has a quadratic slope:

Command | Effect
Use Compdat | Reads data file "Compdat.sys"
click Stats, MGLH, and Regression | Specifies that OLS regression will be employed for data analysis
assign Criterion to Y and Quad to X | Defines predictor and criterion
click OK | Starts analysis and sends results to screen
These steps yield the results displayed in the following output:

>MODEL C = CONSTANT+QUAD
>ESTIMATE
Dep Var: C   N: 6
Multiple R: 0.356   Squared multiple R: 0.127
Adjusted squared multiple R: 0.0   Standard error of estimate: 3.433

Effect      Coefficient   Std Error      t      P(2 Tail)
CONSTANT       8.000        1.402       5.708     0.005
QUAD          -0.286        0.375      -0.763     0.488
Reading from the top of the regression table, we find the same structure of output as before. Substantively, we notice that the quadratic variable gives a poorer representation of the data than Predictor1. The variable explains no more than R² = 0.127 of the criterion variance; Predictor1 was able to explain R² = 0.771. Asking why this is the case, we create a graph that displays the observed values and the values estimated by the regression model. To create this graph, we need the estimates created by the regression model with the quadratic predictor. The following commands yield a file with these estimates.
Command | Effect
Use Compdat | Reads data file "Compdat.sys"
click Stats, MGLH, and Regression | Selects regression analysis
assign Criterion to Y and Quad to X | Specifies regression predictor and criterion
select Save Residuals | Initiates saving of residuals in a new file; the program asks for a file name and for a specification of the type of information to save
type file name for residuals (we type "Quadres"), click OK | Specifies the file name and readies the module for calculations; there is no need for further selections because the default choice saves the information we need; starts calculations, sends residuals and diagnostic values to file "Quadres.sys," and sends regression results to screen
type "Data" | Invokes SYSTAT's Data module
type "Use Compdat Quadres" | Reads files "Compdat.sys" and "Quadres.sys" simultaneously; makes all variables in both files simultaneously available
type "Save Quadres2" | Creates file "Quadres2.sys" in which we save information from both files
type "Run" | Merges files "Compdat.sys" and "Quadres.sys" and saves all information in file "Quadres2.sys"
After issuing these commands, we have a file that contains the original variables, Predictor1, Predictor2, and Criterion; the variable Quad; and the estimates, residuals, and various residual diagnostic values (see Chapter 6). This is the information we need to create the graph that plots observed versus estimated criterion values. The following commands yield this graph:
Command | Effect
Use Quadres2 | Reads file "Quadres2.sys"
click Graph, Options, Global | Provides print options for graphics
click Thickness = 3, Decimals to Print = 1, and select from the font pull-down the British font | Specifies output characteristics
OK | Concludes selection of output characteristics
click Graph, Plot, Plot | Opens the menu for 2-D plots
assign Predictor1 to X, Criterion to Y, and Estimate also to Y | Assigns variables to axes
click Axes and insert in the Label fields: for X, Predictor1, and for Y, Criterion; OK | Labels axes
click Symbol, select symbols to represent data points (we take the star), and select size for data points (we take size 2) | Specifies print options
OK | Starts creation of graph on screen
click File in Graph window and select Print | Initiates printing process
increase Enlarge to 150%; OK | Increases print output to 150% (results in output that is 6 inches in height; width depends on length of title)
The resulting graph appears in Figure 17.3. The figure indicates why the fit provided by the quadratic polynomial is so poor. When comparing the stars (observed data points) and the circles (expected data points) above the markers on the Predictor1 axis, we find that, with the exceptions of the third and the fifth data points, not one is well represented by the quadratic curve. Major discrepancies are apparent for data points 1, 2, and 7. What one would need for an improved fit is a quadratic curve tilted toward the upper right corner of the graph. The next section, on multiple regression, addresses this issue.
Figure 17.3: Observed data points (stars) and expected data points (circles) in quadratic regression.
17.4 Multiple Regression
The curvilinear analysis performed in Section 17.3 suggested a poor fit. The main reason identified for this result was that the quadratic curve needed a positive linear trend that would lift its right-hand side. To create a regression line that has this characteristic we now fit a complete quadratic orthogonal polynomial, that is, a polynomial that involves both a linear term and the quadratic term used in Section 17.3. To create a coding variable that carries the coefficients of the linear orthogonal polynomial we proceed as in Section 17.3, where we created a coding variable that carried the coefficients for the quadratic polynomial. Readers are invited to do this with their own computers. Let the name of the new coding variable be Lin. It has the values -5, -3, -1, 1, 3, and 5. We store the data set, now enriched by one variable, in file "Quadres2.sys." Using the linear and the quadratic orthogonal polynomials we can (1) calculate a multiple regression and (2) create a graph that allows us to compare the results from the previous section with the results from this
section. The following commands yield a multiple regression:

Command | Effect
Use Quadres2 | Reads file "Quadres2.sys"
click Stats, MGLH | Initiates statistical analysis; selects the MGLH (Multivariate General Linear Hypotheses) module
click Regression | Selects the Regression module
assign Lin and Quad to Independent; assign Criterion to Dependent | Specifies predictors and criterion in the regression
OK | Starts calculations; sends results to screen
The resulting printout appears below.

>MODEL C = CONSTANT+LIN+QUAD
>ESTIMATE
Dep Var: C   N: 6
Multiple R: 0.948   Squared multiple R: 0.898
Adjusted squared multiple R: 0.831   Standard error of estimate: 1.352

Effect      Coefficient   Std Error   Tol.       t      P(2 Tail)
CONSTANT       8.000        0.552      .      14.491      0.001
LIN            0.771        0.162     1.00     4.773      0.017
QUAD          -0.286        0.148     1.00    -1.936      0.148

Analysis of Variance
Source        Sum-of-Squares   df   Mean-Square
Regression        48.514        2      24.257
Residual           5.486        3       1.829
The output for multiple regression follows the same scheme as for simple regression. Yet, there are two characteristics of the results that are worth mentioning. The first is the Tolerance value. Tolerance is an indicator of the magnitude of the predictor intercorrelations (see the VIF measure in Section 8.1). It is defined as Tolerance_j = 1 - R²_j, where R²_j is the squared multiple correlation between predictor j and all other predictors. It should be noticed that the criterion is not part of these calculations. Tolerance values can be calculated for each predictor in the equation. When predictors are completely independent, as in an orthogonal design or when using orthogonal polynomials, the Tolerance assumes the value Tolerance = 1 - 0 = 1. This is the case in the present example. Tolerance values decrease with predictor intercorrelation.

The second characteristic that is new in this printout is that the ANOVA table is no longer redundant. The ANOVA results indicate whether the regression model, as a whole, makes a significant contribution to explaining the criterion. This can be the case even when none of the single predictors makes a significant contribution. The F ratio shown is no longer the square of any of the t values in the regression table. Substantively, we note that the regression model with the complete quadratic polynomial explains R² = 0.898 of the criterion variance. This is much more than the quadratic polynomial alone was able to explain. The linear polynomial makes a significant contribution, and so does the entire regression model. To compare the results from the previous section with these results we want to draw a figure that displays two curves, that from using only the quadratic term and that from using the complete quadratic polynomial. We issue
the following commands:
Command | Effect
Use Quadres2 | Reads file "Quadres2.sys"
click Graph, Options, Global | Provides print options for graphics
click Thickness = 3, Decimals to Print = 1, and select from the font pull-down the British font (or whatever font pleases you) | Specifies output characteristics
OK | Concludes selection of output characteristics
click Graph, Plot, Plot | Opens the menu for 2-D plots
assign Estimate and Predictor1 to X, and Criterion to Y | Assigns variables to axes
click Axes and insert in the Label fields: for X, Predictor1, and for Y, Criterion; OK | Labels axes
click Symbol, select symbols to represent data points (we take the diamond and the star, that is, numbers 1 and 18), and select size for data points (we take size 2) | Specifies print options
click Smooth, select Quadratic | Specifies type of regression line
OK, OK | Starts drawing on screen
click File in Graph window and select Print | Initiates printing process
increase Enlarge to 150%; OK | Increases size of print output and starts the printing
Figure 17.4: Comparing two quadratic regression lines.

Figure 17.4 displays the resulting graph. The steeper curve in the figure represents the regression line for the
complete quadratic polynomial. The curve that connects the circles represents the regression line from the last section. Quite obviously, the new regression line gives a much closer representation of
the original data points than the earlier regression line. Adding the linear component to the polynomial has provided a substantial increase in fit over the quadratic component alone.
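The Tolerance computation described above can be sketched numerically. The following Python fragment is an illustration, not SYSTAT's implementation: it regresses one predictor on the remaining ones and reports 1 - R². For the orthogonal Lin and Quad vectors the value is 1, as in the output above, while a predictor built from both contrasts has a lower Tolerance:

```python
import numpy as np

def tolerance(X, j):
    """Tolerance of column j: 1 - R^2 from regressing column j on all
    remaining columns (the criterion plays no role here)."""
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(X)), others])
    coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
    resid = X[:, j] - A @ coef
    ss_tot = np.sum((X[:, j] - X[:, j].mean()) ** 2)
    return float(np.sum(resid ** 2) / ss_tot)

lin = np.array([-5, -3, -1, 1, 3, 5], dtype=float)
quad = np.array([5, -1, -4, -4, -1, 5], dtype=float)

# Orthogonal polynomials: each predictor is unrelated to the other,
# so both Tolerance values equal 1.
X = np.column_stack([lin, quad])
print(round(tolerance(X, 0), 3), round(tolerance(X, 1), 3))  # → 1.0 1.0

# A predictor built from both contrasts is intercorrelated with them,
# and its Tolerance drops below 1.
X2 = np.column_stack([lin, lin + quad])
print(round(tolerance(X2, 0), 3))  # → 0.545
```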
17.5 Regression Interaction
This section illustrates how to estimate regression interaction models. We use file "Compdat.sys" with the variables Predictor1, Predictor2, and Criterion. The sample model analyzed here proposes that the slope of the line that regresses Criterion on Predictor1 depends on Predictor2. More specifically, we investigate the model Criterion = Constant + Predictor1 + Predictor1 · Predictor2 (see Chapter 10).
In order to be able to include the multiplicative term in a model that can be analyzed using SYSTAT's Regression module, one must create a new variable. This variable results from element-wise multiplication of Predictor1 with Predictor2. However, there is a simpler option. It involves using the more general MGLH General Linear Model module. This module allows one to have the program create the multiplicative variable. The following commands need to be issued:

Command | Effect
Use Compdat | Reads data file "Compdat.sys"
click Stats, MGLH, and General linear model | Selects the GLM module
assign Criterion to Dependent and Predictor1 to Independent | Specifies criterion and predictor of the regression
highlight both Predictor1 and Predictor2 and click Cross under the Independent variable box | The multiplicative term Predictor1 · Predictor2 appears as a new independent variable, thus completing the specification of the regression model
click OK | Starts data analysis; sends output to screen
The following output results from these commands:

>MODEL C = CONSTANT+P1+P2+P1*P2
>ESTIMATE
Dep Var: C   N: 6
Multiple R: 0.904   Squared multiple R: 0.817
Adjusted squared multiple R: 0.542   Standard error of estimate: 2.224

Effect      Coefficient   Std Error   Tol.       t      P(2 Tail)
CONSTANT       0.048        4.190      .       0.011      0.992
P1             2.125        1.544    0.119     1.376      0.303
P2             1.256        1.897    0.137     0.662      0.576
P1*P2         -0.264        0.404    0.056    -0.653      0.581
The output shows that including the multiplicative term adds only 6/1000 of explained variance to the model that includes only Predictor1. In addition, due to the high correlation between Predictor1 and the multiplicative term, and because of the low statistical power, none of the parameters for the predictor terms is statistically significant. The same applies to the entire model. The Tolerance values are particularly low, indicating high variable intercorrelations.
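Why the Tolerance values collapse can be seen by building the product term directly. The sketch below uses hypothetical predictor values (the chapter's full raw data are not reproduced here) and shows that a raw element-wise product is strongly correlated with its component:

```python
import math

# Hypothetical predictor values; the chapter's raw data are not
# reproduced in full here.
p1 = [1, 2, 3, 4, 5, 6]
p2 = [2, 1, 3, 2, 4, 3]
p1_x_p2 = [a * b for a, b in zip(p1, p2)]  # element-wise product term

def corr(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    suv = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    suu = sum((a - mu) ** 2 for a in u)
    svv = sum((b - mv) ** 2 for b in v)
    return suv / math.sqrt(suu * svv)

# The raw product term is strongly correlated with its component,
# which is what drives the low Tolerance values in the output above.
print(round(corr(p1, p1_x_p2), 3))  # → 0.923
```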
17.6 Regression with Categorical Predictors
Regression with categorical predictors is a frequently addressed issue in the area of applied statistics (Nichols, 1995a, 1995b). Chapter 4 focused on the interpretation of slope parameters for categorical predictors. The present section focuses on creating categorical predictors and estimating parameters using SYSTAT. To illustrate both, we use the data in file "Compdat.sys." First, we illustrate the creation of categorical (also termed subinterval) predictors from continuous predictors. Consider variable Predictor1 in the data file (for the raw data see Section 17.1). For the present purposes we assume that Predictor1 operates at the interval level. To categorize this variable, we specify, for example, two cutoff points. Let the first cutoff point be at Predictor1 = 2.5 and the second at Predictor1 = 4.5. From the two cutoff points we obtain a three-category variable. This variable must be transformed such that pairs of categories result that can be compared using standard OLS regression. Using SYSTAT, there are several options for performing this transformation. The easiest for very small numbers of cases is to punch in the values of one or two new variables. The following routine is more useful when
samples assume more realistic sizes. More specifically, we use the Recode option in SYSTAT's worksheet to create new variables that reflect the contrasts we are interested in. For example, we create the following two contrasts:
- c1 = (-0.5, -0.5, 1). This contrast compares the combined first two categories with the third.
- c2 = (-1, 1, 0). This contrast compares the first two categories with each other.
Command | Effect
Use Compdat | Reads data file "Compdat.sys"
click Window, Worksheet | Presents data in the form of a worksheet
click any variable name and move cursor to the right to the first free field for variable names | Starts a new variable
type variable name, e.g., "Cat2" | Labels the first categorical variable Cat2
move one field to the right, type "Cat3" | Initiates a new variable; labels it Cat3
to assign values to Cat2 so that they reflect the first contrast, fill the variable and operation boxes as follows: If Predictor1 < 4.5 Then Let Cat2 = -0.5; If Predictor1 > 4.5 Then Let Cat2 = 1; OK | Notice that the value 4.5 does not appear for variable Predictor1; had it appeared, this operation would have been equivalent to setting values of 4.5 equal to 1
to assign values to Cat3 so that they reflect the second contrast, fill the variable and operation boxes under Recode as follows: If Predictor1 < 2.5 Then Let Cat3 = -1; If Predictor1 > 2.5 Then Let Cat3 = 1; If Predictor1 > 4.5 Then Let Cat3 = 0; OK | Assigns values for the second contrast
click File, Save | Saves data in file "Compdat.sys"
click Close | Closes the worksheet window and readies file "Compdat.sys" for analysis
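The two worksheet recodes can be sketched in a few lines of Python; the Predictor1 values 1 through 6 are an assumption made only for illustration:

```python
# Hypothetical Predictor1 values 1 through 6; cutoffs at 2.5 and 4.5
# define the three categories used in the recodes above.
p1 = [1, 2, 3, 4, 5, 6]

# Recode for contrast c1 = (-0.5, -0.5, 1).
cat2 = [1.0 if v > 4.5 else -0.5 for v in p1]
# Recode for contrast c2 = (-1, 1, 0); conditions applied as in the
# worksheet: > 4.5 overrides > 2.5.
cat3 = [0.0 if v > 4.5 else (1.0 if v > 2.5 else -1.0) for v in p1]

print(cat2)  # → [-0.5, -0.5, -0.5, -0.5, 1.0, 1.0]
print(cat3)  # → [-1.0, -1.0, 1.0, 1.0, 0.0, 0.0]

# Both contrasts sum to zero and are mutually orthogonal, which is
# what produces the Tolerance values of 1 in the output below.
assert sum(cat2) == 0 and sum(cat3) == 0
assert sum(a * b for a, b in zip(cat2, cat3)) == 0
```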
After these operations, file "Compdat.sys" contains two additional variables, Cat2 and Cat3, which reflect the two contrasts specified above. These variables can be used as predictors in regular multiple regression. The following commands are needed to perform the regression of Criterion on Cat2 and Cat3:

Command | Effect
Use Compdat | Reads data file "Compdat.sys"
click Stats, MGLH, and Regression | Selects regression for data analysis
assign Criterion to Dependent and Cat2 and Cat3 to Independent | Specifies predictors and criterion for the multiple regression
click OK | Starts data analysis; sends results to screen
The results from this multiple regression appear in the following output:

USE 'A:\COMPDAT.SYS'
SYSTAT Rectangular file A:\COMPDAT.SYS,
created Mon Nov 17, 1997 at 18:20:40, contains variables:
P1 C P2 QUAD LIN CAT2 CAT3
>MODEL C = CONSTANT+CAT2+CAT3
>ESTIMATE
Dep Var: C   N: 6
Multiple R: 0.943   Squared multiple R: 0.889
Adjusted squared multiple R: 0.815   Standard error of estimate: 1.414

Effect      Coefficient   Std Error   Tol.       t      P(2 Tail)
CONSTANT       8.000        0.577      .      13.856      0.001
CAT2           2.000        0.816     1.0      2.449      0.092
CAT3           3.000        0.707     1.0      4.243      0.024
The output suggests that the two categorical variables allow one to explain R² = 0.89 of the criterion variance. In addition, the Tolerance values show that the two categorical variables were specified to be orthogonal. They do not share any variance in common and, thus, cover independent portions of the criterion variance. The following output illustrates this last result. It displays the results from two simple regression runs, each performed using only one of the categorical predictors.

>MODEL C = CONSTANT+CAT2
>ESTIMATE
Dep Var: C   N: 6
Multiple R: 0.471   Squared multiple R: 0.222
Adjusted squared multiple R: 0.028   Standard error of estimate: 3.240

Effect      Coefficient   Std Error   Tol.      t      P(2 Tail)
CONSTANT       8.000        1.323      .      6.047      0.004
CAT2           2.000        1.871     1.00    1.069      0.345

>MODEL C = CONSTANT+CAT3
>ESTIMATE
Dep Var: C   N: 6
Multiple R: 0.816   Squared multiple R: 0.667
Adjusted squared multiple R: 0.583   Standard error of estimate: 2.121

Effect      Coefficient   Std Error      t      P(2 Tail)
CONSTANT       8.000        0.866       9.238     0.001
CAT3           3.000        1.061       2.828     0.047
This output indicates that the parameter estimates from the two simple regression runs are the same as those from the multiple regression run. In addition, the two R² values add up to the value obtained from the multiple regression. Independence of results occurs systematically only for orthogonal predictors. The examples given in this chapter thus far illustrate regression using categorized continuous predictors. When predictors are naturally categorical as, for instance, for the variables gender, make of car, or type of event, the same methods can be applied as illustrated. When the number of categories is two, predictors can be used for regression analysis as they are. When there are three or more categories, one must create contrast variables that compare two sets of variable categories each.
17.7 The Partial Interaction Strategy
Using the means from Chapter 10 (Regression Interaction) and Section 17.6 (Categorizing Variables) we now can analyze hypotheses concerning partial interactions. To evaluate a hypothesis concerning a partial interaction one selects a joint segment of two (or more) variables and tests whether, across this segment, criterion values are higher (or lower) than estimated using the predictor variables' main effects. For each hypothesis of this type there is one parameter. For each parameter there is one coding vector in the design matrix X of the General Linear Model, y = Xb + e. The coding vectors in X are created as was illustrated in Section 17.6.

Consider the following example. A researcher assumes that criterion values y_i are smaller than expected from the main effects of two predictors, A and B, if the predictors assume values a1 < a <= a2 and b1 < b <= b2. To test this hypothesis, we create a coding vector that assumes the value -1 within this segment and the value 1 elsewhere.

To illustrate this procedure we use our sample data file "Compdat" and first create a categorical variable of the type described in Section 17.6. Let this variable be named Cat1. We define this variable as follows:
- Cat1 = -1 if
  1. Predictor1 is less than 2.5 and Predictor2 is either 1 or 3, or
  2. Predictor1 is greater than 4.5 and Predictor2 is either 3 or 5;
- Cat1 = 1 otherwise.
Creating a variable according to these specifications proceeds as follows:
Command | Effect
Use Compdat | Reads data file "Compdat.sys"
click Window, Worksheet | Presents data in worksheet form
move cursor to the top line that contains variable names, move to the right to the first free field, type "Cat1" | Creates variable Cat1
move cursor one field down, key "-1," move down, key "-1," move down, key "1," until all values are keyed in | Specifies values for variable Cat1; the values are -1, -1, 1, 1, -1, -1
click File, Save | Saves new and original data in file "Compdat.sys"
click File, Close | Closes the Worksheet window and reads file "Compdat.sys"
These commands create the data set that contains the new variable Cat1.
Issuing the following commands, we calculate a regression analysis with a partial interaction:

Command | Effect
Use Compdat | Reads data file "Compdat.sys"
click Stats, MGLH, Regression | Selects regression analysis
assign Criterion to Dependent and Predictor1, Predictor2, and Cat1 to Independent | Specifies predictors and criterion for the multiple regression
click OK | Starts calculations and sends results to screen
The results from this multiple regression with partial interaction appear in the following output:

>MODEL C = CONSTANT+P1+P2+CAT1
>ESTIMATE
Dep Var: C   N: 6
Multiple R: 1.000   Squared multiple R: 1.000
Adjusted squared multiple R: 1.000   Standard error of estimate: 0.0

Effect      Coefficient   Std Error   Tol.
CONSTANT       2.833         0.0       .
P1             1.333         0.0      0.309
P2             0.333         0.0      0.309
CAT1           1.500         0.0
To be able to assess the effects of including the partial interaction variable Cat1, we also calculate a multiple regression that involves only the two main effects of Predictor1 and Predictor2. The results from this analysis appear in the following output:

>MODEL C = CONSTANT+P1+P2
>ESTIMATE
Dep Var: C   N: 6
Multiple R: 0.882   Squared multiple R: 0.778
Adjusted squared multiple R: 0.630   Standard error of estimate: 2.000

Effect      Coefficient   Std Error   Tol.      t      P(2 Tail)
CONSTANT       2.333        2.073      .      1.126      0.342
P1             1.333        0.861    0.309    1.549      0.219
P2             0.333        1.139    0.309    0.293      0.789
These two outputs indicate that the portion of criterion variance accounted for increased from R² = 0.78 for the main effects model to R² = 1.00 for the partial interaction model. Thus, the partial interaction variable allowed us to explain all of the variance that the two main effect variables were unable to explain. Notice the degree to which Predictor1 and Predictor2 are dependent upon each other (low Tolerance values). The first output does not include any statistical significance tests. The reason is that whenever no residual variance is left, there is nothing to test against. The F tests (and t tests) test the portion of variance accounted for against the portion of variance unaccounted for by the model. If nothing is left unaccounted for, there is no test. It is important to note that, in the present example, lack of degrees of freedom is not the reason why there is no statistical test. Including the partial interaction variable Cat1 implies spending one degree of freedom. The last output suggests that there are three degrees of freedom for the residual term and two for the model. Spending one of the residual degrees of freedom results in a model with three degrees of freedom and a residual term with two degrees of freedom. Such a model is testable. However, a necessary condition is that residual variance is left.
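The R² comparison can be sketched with numpy. The data values used below are an assumption: they are reconstructed from the estimates, residuals, and Cat1 codes printed in this chapter, not from a data listing in this section.

```python
import numpy as np

# Hypothetical data, reconstructed from the chapter's printouts (assumption).
p1   = np.array([1, 2, 3, 4, 5, 6], dtype=float)
p2   = np.array([1, 3, 2, 4, 3, 5], dtype=float)
cat1 = np.array([-1, -1, 1, 1, -1, -1], dtype=float)   # partial interaction variable
crit = np.array([3, 5, 9, 11, 9, 11], dtype=float)

def r_squared(X, y):
    """Fit OLS via least squares and return the squared multiple R."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    return 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()

ones = np.ones_like(crit)
r2_main = r_squared(np.column_stack([ones, p1, p2]), crit)        # main effects only
r2_full = r_squared(np.column_stack([ones, p1, p2, cat1]), crit)  # plus Cat1

print(round(r2_main, 3))   # ~0.778
print(round(r2_full, 3))   # ~1.0
```

With the partial interaction variable included, the residual sum of squares vanishes, so R² reaches 1.0 up to floating-point precision.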
17.8 Residual Analysis
Analysis of residuals from regression analysis requires the following three steps: 1. performing the regression analysis; 2. calculating and saving residuals and estimates of regression diagnostics; 3. interpreting the estimates of regression diagnostics. In this section, we use the results from the analysis performed in Section 17.2, where we regressed the variable Criterion onto Predictor1. Results of this regression appeared in the first output in Section 17.2. The following commands reproduce this output and also make the program save estimates, residuals, and indicators for regression diagnostics in a new file:

Command                                  Effect
Use Compdat                              Reads file "Compdat.sys"
click Stats, MGLH                        Initiates statistical analysis; selects the MGLH module
click Regression                         Selects Regression module
assign Predictor1 to Independent;        Specifies predictor and criterion in regression
  assign Criterion to Dependent
click Save Residuals                     Initiates saving of residuals in user-specified file;
                                         program responds by prompting a name for the file
CHAPTER 17. COMPUTATIONAL ISSUES

type "Compres"                           We name the residual file "Compres" (any other name that
                                         meets DOS name specifications would have done); notice
                                         that the suffix does not need to be typed; SYSTAT
                                         automatically appends ".sys"
click OK                                 Starts calculations; sends results to screen and saves
                                         residuals
The output that results from these commands contains the same information as the earlier output. Therefore, it will not be copied here again. The file "Compres.sys" contains the estimated values, the residuals, and the estimates for regression diagnostics. To be able to compare estimates and residuals with predictor values, we need to merge the files "Compdat.sys" and "Compres.sys." To do this we issue the following commands (to be able to perform them, we need to invoke SYSTAT's command prompt):

Command                Effect
...                    Carries us into the data module
New                    Clears workspace for new data set(s)
Save Compres2          Specifies name for the file that will contain the merged files
                       "Compdat.sys" and "Compres.sys"
Use Compdat Compres    Reads the files to be merged; program responds by listing all
                       variables available in the two files; saves merged data in file
                       "Compres2.sys"; program responds by giving the numbers of cases
                       and variables processed and the name of the merge file
This completes the second step of regression residual analysis, that is, the calculation and saving of residuals, estimates, and regression diagnostics. When interpreting these results, we first inspect the predictor × residual and estimate × residual plots. Second, we check whether any of the regression diagnostics suggest that leverage or distance outliers exist. To create the plots we first specify output characteristics for the graphs. We issue the following commands:

Command                                          Effect
Use Compres2                                     Reads merged file "Compres2.sys"
click Graph, Options, Global                     Opens window for design specifications
click Thickness=3, Decimals to Print = 1, and    Specifies that lines be printed thick, that one
  select from the Graph Font pull down British     decimal be printed in the axes labels, and that
  (or any other font you please); OK               the British typeface be used
click Graph, Plot, Plot                          Initiates the two-dimensional scatterplot module
assign Predictor1 to X and Criterion to Y        Predictor1 will be on the X axis and Criterion
                                                   will be on the Y axis
click Symbol and select 2x for size of data      Specifies size and form of data points in print
  points to print, and the asterisk (or any
  other symbol you please)
click Smooth and OK                              Creates a linear regression line that coincides
                                                   with the X axis
click OK                                         Sends graph to screen
The graph that results from these commands appears in the left panel of Figure 17.5. Issuing analogous commands we create the right panel of Figure 17.5, which displays the Estimate × Residual plot. As is obvious from comparing the two panels of Figure 17.5, the Estimate × Residual plot is very similar to the Predictor × Residual plot.
Figure 17.5: Predictor1 × Residual and Estimate × Residual plots for sample data.

The pattern of data points in the graphs is almost exactly the same. The reason for this great similarity is that
the patterns emphasized in both graphs are those that were left unexplained by the regression model. (Figure 17.2 displays the regression of Criterion onto Predictor1. This relationship is statistically significant and allows one to explain 77% of the criterion variance.) The reason why researchers create Estimate × Residual plots or Predictor × Residual plots is that these plots allow one to perform visual searches for systematic patterns of residuals that are associated with the predictor (in simple regression) or the estimates (in simple or multiple regression). While in small samples such patterns may be hard to detect, in the present example it seems that relatively large positive residuals are associated with medium predictor values and with medium estimates. Relatively small negative residuals are associated with the extremes of the predictor and estimate scales. The following printout displays the file "Compres.sys" that SYSTAT created for the regression of
Criterion onto Predictor1:

ESTIMATE   RESIDUAL   LEVERAGE   COOK   STUDENT   SEPRED
 4.14      -1.14      0.52       0.48   -0.92     1.27
 5.68      -0.68      0.29       0.04   -0.41     0.95
 7.22       1.77      0.18       0.13    1.16     0.74
 8.77       2.22      0.18       0.21    1.70     0.74
10.31      -1.31      0.29       0.16   -0.86     0.95
11.85      -0.85      0.52       0.27   -0.65     1.27
The output suggests that SYSTAT saves the following variables in its default option: estimate, residual, leverage, the Cook D statistic, Student's t, and sepred, that is, the standard error of predicted values. The critical value for the t statistic is t(0.05, df = 4) = 2.776. The critical value for the D statistic is F(0.05, 1, 5) = 230. None of the values in the output is greater than the critical value. Therefore, there is no reason to assume the existence of outliers in this sample data set. Readers are invited to calculate whether cases 6 and 1 exert too much leverage. The following commands created this output:

Command                     Effect
Use Compres                 Reads file "Compres.sys"; program responds by listing names of
                            variables in the file
click Window, Worksheet     Transfers data into the worksheet
click File, Print           Initiates printing of data file
click OK                    Sends entire data file to printer
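The diagnostics in the "Compres.sys" printout can be recomputed directly from the hat matrix. The sketch below assumes data reconstructed from the ESTIMATE and RESIDUAL columns printed above (an assumption), and takes STUDENT to be the externally studentized residual, which agrees with the printed values up to rounding.

```python
import numpy as np

# Simple regression of Criterion on Predictor1; data are an assumption,
# reconstructed from the ESTIMATE and RESIDUAL columns printed above.
x = np.array([1, 2, 3, 4, 5, 6], dtype=float)
y = np.array([3, 5, 9, 11, 9, 11], dtype=float)

X = np.column_stack([np.ones_like(x), x])
n, p = X.shape
b, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ b                                    # residuals
h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)    # leverage values
s2 = (e @ e) / (n - p)                           # residual variance

r_int = e / np.sqrt(s2 * (1 - h))                          # internally studentized
s2_del = ((n - p) * s2 - e**2 / (1 - h)) / (n - p - 1)     # leave-one-out variance
student = e / np.sqrt(s2_del * (1 - h))          # externally studentized (STUDENT)
cook = r_int**2 * h / (p * (1 - h))              # Cook's D (COOK)
sepred = np.sqrt(s2 * h)                         # std. error of prediction (SEPRED)

for name, v in [("LEVERAGE", h), ("COOK", cook),
                ("STUDENT", student), ("SEPRED", sepred)]:
    print(name, np.round(v, 2))
```

The printed rows should match the listing above up to rounding in the last decimal.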
17.9 Missing Data Estimation
As of this writing, SYSTAT does not provide a separate module for missing data estimation. Only in the time series programs can missing values be interpolated, using distance-weighted least squares methods. There are modules for missing data estimation in BMDP and EQS. The EQS module provides options to estimate and impute variable means, group means, and values estimated by multiple regression. In this section we focus on estimating and imputing missing data using regression methods from SYSTAT's MGLH regression module. We focus on data with missing criterion values. Data with missing predictor values can be treated accordingly.
This module allows one to estimate missing data from a regression model. Estimation and imputation of missing values requires three steps: 1. estimate the regression model; 2. save regression estimates, residuals, and diagnostics using the default option (Section 17.6); 3. impute estimates for the missing values into the original data file; the residuals file contains these estimates. To be able to use the data in file "Compdat.sys," we need to add at least one case with a missing criterion value. We add the values Predictor1 = 12 and Predictor2 = 14. To do this in SYSTAT, we perform the following steps:

Command                                       Effect
Use Compdat                                   Reads data file "Compdat.sys"
click Window, Worksheet                       Transfers data file into worksheet window
move cursor down to first line below last     Specifies Predictor1 value for new case
  data line, move to Predictor1 column,
  key in "12"
move to Predictor2 column, key in "14"        Specifies Predictor2 value for new case
click File, Save As                           Initiates saving of completed data in new file
type "Compdat2"; OK                           Saves completed data in file "Compdat2.sys"; leaves
                                              all other values for the new case missing; missing
                                              values are indicated by a period
click File, Close                             Carries us back to command mode where we can
                                              perform statistical analyses
For the following analyses we use only the three variables Predictor1, Predictor2, and Criterion. The values assumed by these three variables appear below. The output displays the values the three variables assume for the original six cases. In addition, the values Predictor1 = 12 and Predictor2 = 14 appear, along with the missing value for Criterion. This selective output was created within SYSTAT's command mode, issuing the following commands (these commands assume that file "Compdat.sys" is the active file; if it is not, it must be opened before issuing the following commands):

Command                                 Effect
Caselist Pred1, Pred2, Crit             Initiates listing of values of variables Pred1, Pred2,
                                        and Crit; all cases will be displayed
Run                                     Displays all cases on screen
highlight desired parts of display,     Sends highlighted parts of what is displayed on screen
  click File, Print, OK                   to printer
Using file "Compdat2.sys" we now estimate the parameters for the multiple regression of Criterion onto Predictor1 and Predictor2. The following output displays results from this analysis. The commands for this run are analogous to the commands for regression with the saving of the residual file and are not given here.

MODEL C = CONSTANT+P1+P2
>ESTIMATE
1 case(s) deleted due to missing data.
Dep Var: C   N: 6
Multiple R: 0.882   Squared multiple R: 0.778
Adjusted squared multiple R: 0.630   Standard error of estimate: 2.000

Effect      Coefficient   Std Error   t       P(2 Tail)
CONSTANT    2.333         2.073       1.126   0.342
P1          1.333         0.861       1.549   0.219
P2          0.333         1.139       0.293   0.789
With only three exceptions, this output is identical to the output on p. 316. For the present purposes it is most important that the case with the missing criterion value was excluded from the analysis. This is indicated in the line before the numerical results block. The last line of the output (not given here) indicates that the residuals have been saved. File "Cd2res.sys" contains results from the residual analysis. Using the same commands as before, we print this file. It appears in the following output:

ESTIMATE    4      6    7    9    10    12    23
RESIDUAL   -1     -1    2    2    -1    -1     .
LEVERAGE    0.58
COOK        0.28
STUDENT    -0.70
            0.33   0.33
This output displays the same type of information as before. In addition, it contains information of importance for the purposes of estimating and imputing missing data. For the last case, that is, the case with the missing Criterion value, this file shows an estimate, a leverage value, and the standard error of the predicted value. It is the estimate that we impute into the original file. To perform this we "Use" data file "Compdat2.sys," transfer the data to the worksheet window, and impute 23.0 for the missing Criterion value. Using the now completed data we recalculate the regression analysis.
Results of this analysis appear in the following output:

MODEL C = CONSTANT+P1+P2
>ESTIMATE
Dep Var: C   N: 7
Multiple R: 0.975   Squared multiple R: 0.951
Adjusted squared multiple R: 0.927   Standard error of estimate: 1.732

Effect      Coefficient   Std Error   t       P(2 Tail)
CONSTANT    2.333         1.224       1.906   0.129
P1          1.333         0.686       1.944   0.124

Analysis of Variance
Source       Sum-of-Squares   df   Mean-Square   F        P
Regression   234.857           2   117.429       39.143   0.002
Residual      12.000           4     3.000
This output shows a number of interesting results. First, the number of cases processed is now n = 7 rather than n = 6 as in the earlier regression output. No case was eliminated because of missing data. Second, the three parameter estimates are exactly as before. The reason is that the estimated value sits exactly on the regression hyperplane. Therefore, the new data point changes neither the elevation nor the angles of the hyperplane. Also because the new data point sits exactly on the regression plane, the overall amount of variance that is unaccounted for remains the same. Thus, considering the larger number of data points processed, the portion of variance accounted for is greater than without the imputed data point. This can be seen in the analysis of variance panel. As a result, the standard errors of the parameters are smaller than before, and so are the tail probabilities of the t statistics. Overall, the regression model now accounts for a statistically significant portion of the criterion variance. In general, one can conclude that good estimation and imputation of missing values tends to increase statistical power. The main reasons for this increase include the increase in sample size that results from including cases with imputed data and the placing of OLS estimates on the regression hyperplane.
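The three imputation steps can be sketched in numpy. The data are again an assumption, reconstructed from this chapter's printouts; the seventh case carries Predictor1 = 12 and Predictor2 = 14 with a missing criterion value.

```python
import numpy as np

# Data reconstructed from the chapter's printouts (assumption); the seventh
# case has a missing Criterion value, coded as NaN.
p1 = np.array([1, 2, 3, 4, 5, 6, 12], dtype=float)
p2 = np.array([1, 3, 2, 4, 3, 5, 14], dtype=float)
crit = np.array([3, 5, 9, 11, 9, 11, np.nan])

X = np.column_stack([np.ones_like(p1), p1, p2])
obs = ~np.isnan(crit)

# Step 1: estimate the regression model from the complete cases only.
b, *_ = np.linalg.lstsq(X[obs], crit[obs], rcond=None)

# Step 2: impute the regression estimate for the missing criterion value.
crit_full = crit.copy()
crit_full[~obs] = X[~obs] @ b
print(round(crit_full[-1], 3))        # the imputed estimate (cf. 23 above)

# Step 3: refit with the completed data; the coefficients do not change,
# because the imputed point lies exactly on the regression plane.
b_full, *_ = np.linalg.lstsq(X, crit_full, rcond=None)
print(np.round(b, 3), np.round(b_full, 3))
```

Because the imputed point lies on the fitted plane, the refit reproduces the original coefficients while the residual sum of squares stays fixed, which is exactly why R² and the significance tests improve.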
17.10 Piecewise Regression
SYSTAT provides two options for estimating continuous piecewise regression models. The first is to regress a criterion variable onto vectors from a user-specified design matrix of the form given in (15.2) in Section 15.1. The second is to estimate a model from a user-specified form. SYSTAT's Nonlin module allows one to perform the second type of analysis. This section focuses on the second option. To illustrate piecewise regression we use the file "Compdat.sys" again. We analyze the relationship between the variables Predictor1 and Criterion. Figure 17.2 suggests that there may be a change in the relationship between Predictor1 and Criterion at Predictor1 = 3.5. Therefore, we estimate a piecewise regression model of the following form (cf. Section 15.1): Criterion = Constant + b1 * Predictor1 + b2 * (Predictor1 - 3.5) + Residual. Using SYSTAT, this model can be estimated by issuing the following commands:
Command                                       Effect
Use Compdat                                   Reads file "Compdat.sys"
click Stats, Nonlin                           Invokes Nonlin module for nonlinear estimation of
                                              regression models
click Model                                   Opens window for model specification; the Loss
                                              Function option does not need to be invoked unless
                                              one wishes to specify a goal function other than
                                              least squares
type into the model window                    Specifies piecewise regression function to be fit
  "crit = constant + b1*pred1 +
  b2*(pred1-3.5)"
click OK                                      Starts estimation; sends results to screen
The following output displays results from the estimation process:

MODEL c = constant+b1*p1+b2*(p1-3.5)
>ESTIMATE / QUASI

Iteration No.   Loss       CONSTANT   B1        B2
0               86.985     1.011      1.020      1.030
1               37.150     1.240      1.724      0.931
2               12.343     1.617      1.825     -0.287
3               12.343     1.615      1.824     -0.281

Dependent variable is C
Source                   Sum-of-Squares   df   Mean-Square
Regression               425.657           3   141.886
Residual                  12.343           3     4.114
Total (mean corrected)    54.000

Raw R-square (1-Residual/Total)                = 0.972
Mean corrected R-square (1-Residual/Corrected) = 0.771
R(observed vs predicted) square                = 0.771

                                    Wald Conf. Interval
Parameter   Estimate   A.S.E.       Lower < 95% >   Upper
CONSTANT    1.615      51479        -163828.536     163831.766
B1          1.824      14708         -46806.790      46810.439
B2         -0.281      14708         -46808.896      46808.333
The first line of this output shows the model specification. The second line shows the Estimate command and, after the slash, the word QUASI. This word indicates that the program employs a quasi-Newton algorithm for estimation. What follows is a report of the iteration process. The information provided includes the number of the iteration, the value of the loss function, and the parameter values at each iteration step. The report suggests that convergence was achieved after the fourth iteration step. The last part of the output displays results of the statistical analysis of the regression model, the raw R² and the corrected R², and the parameter estimates. Dividing the regression mean square by the residual mean square yields the F ratio for the regression model. For the present example we obtain F = 141.886/4.114 = 34.489. This value is, for df1 = 3 and df2 = 3, greater than the critical value F(0.05, 3, 3) = 9.28. We thus conclude that the regression model accounts for a statistically significant portion of the criterion variance. Wilkinson et al. (1992, p. 429) suggest you "never trust the output of an iterative nonlinear estimation procedure until you have plotted estimates against predictors ..." Therefore, we now create a plot of the data from file "Compdat.sys" in which we lay the piecewise regression line along with the standard linear regression line. Figure 17.6 displays this plot. The straight line in this plot represents the standard linear regression line for the regression of Criterion onto Predictor1. The second line represents the piecewise regression line. The two lines coincide for the segment where Predictor1 is greater than 3.5. For the segment where Predictor1 is less than or equal to 3.5, the piecewise regression line has a steeper slope, thus going almost exactly through the two data points at Predictor1 = 1 and Predictor1 = 2. The following commands create this plot:
Command                                          Effect
Use Compdat                                      Reads data file "Compdat.sys"
click Graph, Options, Global                     Opens window for specification of general graph
                                                 characteristics
select Thickness = 3, Decimals to Print = 1,     Specifies thickness of lines, number of decimals,
  and Graph Font = British; click OK               and typeface; carries us back to command mode
click Graph, Function                            Opens window that allows one to specify the
                                                 function to be plotted
type "y = 0.83 + 2.049*x - 0.506*(x - 3.5)       Specifies function to be plotted
  *(x > 3.5)"
click Axes; insert xmin = 0, ymin = 2,           Opens window for specification of axes'
  xmax = 7, ymax = 12, xlabel = Pred1, and         characteristics; specifies minimum and maximum
  ylabel = Crit; click OK                          axes values and labels the axes
click OK, OK                                     Sends graph to screen
click within the Graph Window the Window,        Tells the program that a second graph will follow
  Graph Placement, and Overlay Graph options       that is to be laid over the first
click Graph, Plot, Plot                          Invokes module for plotting data in 2D
assign Predictor1 to the X axis and
  Criterion to the Y axis
click Symbol and select *, Size = 2, OK          Determines size and type of symbol to use for
                                                 depicting data points
click Smooth, OK                                 Tells the program to insert a linear regression
                                                 line into the data plot; linear smoothing is the
                                                 default
click Axes and specify xmin = 0, ymin = 2,       Specifies min and max values for axes and axes
  xmax = 7, ymax = 12, xlabel = Pred1, and         labels; this is needed for the same reason as
  ylabel = Crit; OK                                the title
OK                                               Redraws and overlays both graphs; sends picture
                                                 to screen
Figure 17.6: Comparing straight-line and piecewise regression.
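A continuous piecewise ("hinge") specification with the knot at 3.5 can also be fit directly by ordinary least squares, in the design-matrix spirit of Section 15.1. The sketch below uses data reconstructed from this chapter's printouts (an assumption); its estimates are not expected to match the Nonlin run above, whose model statement, as printed, omits the (x > 3.5) indicator.

```python
import numpy as np

# Reconstructed data (assumption): Predictor1 and Criterion.
x = np.array([1, 2, 3, 4, 5, 6], dtype=float)
y = np.array([3, 5, 9, 11, 9, 11], dtype=float)
knot = 3.5

# Hinge regressor: (x - knot) for x > knot, else 0 -- keeps the fit continuous.
hinge = np.where(x > knot, x - knot, 0.0)
X = np.column_stack([np.ones_like(x), x, hinge])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

resid = y - X @ b
loss = resid @ resid                  # least-squares loss
print(np.round(b, 3), round(loss, 3))
```

The slope below the knot is b[1]; above the knot it is b[1] + b[2]. Allowing the slope to change at the knot reduces the residual sum of squares below the straight-line fit's value of 12.343.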
Appendix A

ELEMENTS OF MATRIX ALGEBRA

This excursus provides a brief introduction to elements of matrix algebra. More detailed introductions that also cover more ground can be found in such textbooks as Ayres (1962), Searle (1982), and Graybill (1983). This excursus covers the following elements of matrix algebra: definition of a matrix, types of matrices, transposing a matrix, addition and subtraction with matrices, multiplication with matrices, the rank of a matrix, the inverse of a matrix, and the determinant of a matrix.
Definition of a Matrix

Definition: A matrix is a rectangular array of numbers. These numbers are arranged in rows and columns. Consider matrix A with m rows and n columns, that is, the m × n matrix A. The numbers in this matrix are arranged as

    A = ( a11  a12  ...  a1n
          a21  a22  ...  a2n
          ...
          am1  am2  ...  amn ),                                (A.1)

with i = 1, ..., m and j = 1, ..., n. The presentation of a matrix such as given in (A.1) meets the following conventions: 1. Matrices are denoted using capital letters, for example, A. 2. The aij are the numbers or elements of matrix A. Each number is placed in a cell of the matrix; subscripts denote cell indexes. 3. The subscripts, ij, give first the row number, i, and second the column number, j. For example, subscript 21 indexes the cell that is in the second row in the first column. 4. Matrix elements, that is, cells, are denoted using subscripted lowercase letters, for example, aij. The following example presents a 2 × 3 matrix with its elements:

    A = ( a11  a12  a13
          a21  a22  a23 ).

Matrices can be considered mathematical objects just as, for instance, real-valued numbers. Therefore, it is natural to ask how one can define such operations as
addition or multiplication. Are there neutral and inverse elements for addition? Are there neutral and inverse elements for multiplication? Generally, the answer is yes as far as addition is concerned. However, when it comes to multiplication, an inverse element does not always exist. This may not be intuitively plausible, since for real numbers there is, for every nonzero number a, an element 1/a that is the inverse of a. It is also true that addition and multiplication are not well-defined for arbitrary matrices. This differs from real numbers: usually, any two real numbers can be added or multiplied. Matrix multiplication is generally not even commutative; that is, if A and B are two matrices that can be multiplied with each other, then generally AB ≠ BA. However, the situation is not as discouraging as it may seem at first glance. One rapidly becomes familiar with matrix operations. The following sections introduce the matrix definitions and operations needed in subsequent chapters.
Types of Matrices
This section presents the following types of matrices: square matrix, column vector, row vector, diagonal matrix, identity matrix, and triangular forms of matrices.
1. Square matrices. Matrices are square if the number of rows equals the number of columns. For example, matrices with dimensions 2 × 2 or 5 × 5 are square. The following is an example of a 2 × 2 matrix:

    B = (  12.31  -4.45
          -36.02   0.71 ).
2. Column vectors. A matrix with only one column is termed a column vector. For example, the following are sample dimensions of column vectors: 7 × 1, 3 × 1, and 2 × 1. The following is an example of a 3 × 1 column vector:

    a = ( a1
          a2
          a3 ).

Since there is only one column, a subscript indicating the column is redundant and therefore omitted.
3. Row vectors. A matrix with only one row is termed a row vector. The following are sample dimensions of row vectors: 1 × 2, 1 × 45, and 1 × 3. The following is an example of a row vector:

    a′ = ( ...  -1.9 ).

For the remainder of this volume we adopt the following convention: whenever we speak of vectors we always mean column vectors. Row vectors will be denoted by a prime.¹ For example, if vector a is a row vector, we express it as a′.
4. Diagonal matrices. A square matrix with numbers in its diagonal cells and zeros in its off-diagonal cells is termed a diagonal matrix.

¹See the definition of a transposed matrix below.

The elements a11, a22, ... describe the main diagonal of matrix A; that is, the main diagonal of a matrix is constituted by the cells with equal indexes. For example, the diagonal cells of a 3 × 3 matrix have indexes 11, 22, and 33. These are the cells that go from the upper left corner to the lower right corner of a matrix. When a matrix is referred to as diagonal, reference is made to a matrix with a main diagonal. The following is an example of a 3 × 3 diagonal matrix:

    A = ( a11  0    0
          0    a22  0
          0    0    a33 ).

Usually, diagonal matrices are written as

    A = diag( a11, a22, a33 ).

If the elements aii are matrices themselves, A is termed a block diagonal matrix.

5. Identity matrices. A diagonal matrix with diagonal elements that are equal to some constant value k, that is, a11 = a22 = ... = ann = k, is called a scalar matrix. If, in addition, the constant k = 1, the matrix is termed an identity matrix. The symbol for an n × n identity matrix is In; for example,

    I3 = ( 1  0  0
           0  1  0
           0  0  1 ).
6. Triangular matrices. A square matrix with elements aij = 0 for j < i is termed an upper triangular matrix. The following is an example of a 2 × 2 upper triangular matrix:

    ( a11  a12
      0    a22 ).

A square matrix with elements aij = 0 for j > i is termed a lower triangular matrix. The following is an example of a 3 × 3 lower triangular matrix:

    ( a11  0    0
      a21  a22  0
      a31  a32  a33 ).

Diagonal matrices are both upper and lower triangular.
Transposing Matrices

Consider the m × n matrix A. Interchanging rows and columns of A yields A′, the transpose² of A. By transposing A, one moves cell ij to cell ji. The following example transposes the 2 × 3 matrix A:

    A = ( a11  a12  a13        A′ = ( a11  a21
          a21  a22  a23 ),            a12  a22
                                      a13  a23 ).              (A.2)

As (A.2) shows, after transposition, what used to be rows are columns, and what used to be columns are rows. Transposing a transposed matrix yields the original matrix; more specifically, (A′)′ = A. Matrices for which A′ = A are termed symmetric.
Adding Matrices

Adding matrices is only possible if the dimensions of the matrices are the same. For instance, adding A with dimensions m and n and B with dimensions p and q can be performed only if both m = p and n = q. This is the case in the following example, in which A has first-row elements 1 and 10 and B has first-row elements 4 and 2. The sum of these two matrices is calculated as

    A + B = ( 1+4   10+2   ... ) = ( 5   12   ... ).           (A.3)

Rather than giving a formal definition of matrix addition, this example shows that the corresponding elements of the two matrices are simply added to create the elements of the new matrix A + B. In contrast, matrix A with dimensions 3 × 6 and matrix B with dimensions 3 × 3 cannot be added to each other. In a fashion analogous to (A.3), one can subtract matrices from each other if they have the same dimensions.

²Instead of the prime symbol, one also finds in the literature the superscript T to denote a transpose.
Multiplying Matrices

This excursus covers three aspects of matrix multiplication: 1. multiplication of a matrix with a scalar; 2. multiplication of two matrices with each other; and 3. multiplication of two vectors. Multiplication of a matrix with a scalar is performed by multiplying each of its elements with the scalar. For example, multiplication of the scalar k = 3 with matrix A from Section A.4 yields kA = 3A; every element of A is multiplied by 3. It should be noted that multiplication of a matrix with a scalar is commutative, that is, kA = Ak holds.
Multiplication of two matrices

Matrices must possess one specific characteristic to be multipliable with each other. Consider the two matrices A and B. Researchers wish to multiply them to calculate the matrix product AB. This is possible only if the number of columns of A is equal to the number of rows of B. One calls A, the first of the two multiplied matrices, postmultiplied by B, and B, the second of the two multiplied matrices, premultiplied by A. In other words, two matrices can be multiplied with each other if the number of columns of the postmultiplied matrix is equal to the number of rows of the premultiplied matrix. When multiplying two matrices one follows this procedure: one multiplies row by column, and each element of the row is multiplied by the corresponding element of the column. The resulting products are summed. This sum of products is one of the elements of the resulting matrix, with the row index carried over from the row of the postmultiplied matrix and the column index carried over from the column of the premultiplied matrix. Again, the number of columns of the postmultiplied matrix must be equal to the number of rows of the premultiplied matrix. For example, to be able to postmultiply A with B, A must have dimensions m × n and B must have dimensions n × p. The resulting matrix has the number of rows of the postmultiplied matrix and the number of columns of the premultiplied matrix. In the present example, the product AB has dimensions m × p. The same applies accordingly when multiplying vectors with each other or when multiplying matrices with vectors (see below). Consider the following example. A researcher wishes to postmultiply matrix A with matrix B (which is the same as saying the researcher wishes to premultiply matrix B with matrix A). Matrix A has dimensions m × n. Matrix B has dimensions p × q. A can be postmultiplied with B only if n = p. Suppose matrix A has dimensions 3 × 2 and matrix B has dimensions 2 × 2. Then, the product AB of the two matrices can be calculated using the following multiplication procedure. The matrices are
    A = ( a11  a12           B = ( b11  b12
          a21  a22                 b21  b22 ).
          a31  a32 ),

The matrix product AB is given in general form by

    AB = ( a11b11 + a12b21    a11b12 + a12b22
           a21b11 + a22b21    a21b12 + a22b22
           a31b11 + a32b21    a31b12 + a32b22 ).
For a numerical example consider the following two matrices:

    A = ( 2  3           B = ( 3  4
          4  1                 7  8 ).
          3  5 ),

Their product yields

    AB = ( 2·3 + 3·7    2·4 + 3·8        ( 27  32
           4·3 + 1·7    4·4 + 1·8     =    19  24
           3·3 + 5·7    3·4 + 5·8 )        44  52 ).
As vectors are defined as matrices having only one column, there is nothing special in writing, for instance, the product of a matrix and a vector, Ab (yielding a column vector), or the product of a row vector with a matrix, b′A (yielding a row vector). As a matter of course, matrix A and vector b must have appropriate dimensions for these operations, or, in more technical terms, A and b must be conformable to multiplication.
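The numerical product above can be checked with numpy, which also enforces conformability:

```python
import numpy as np

# The 3x2 and 2x2 matrices from the numerical example above.
A = np.array([[2, 3],
              [4, 1],
              [3, 5]])
B = np.array([[3, 4],
              [7, 8]])

AB = A @ B                 # row-by-column sums of products; result is 3x2
print(AB)

# Reversing the order is not conformable here: B (2x2) cannot be
# postmultiplied by A (3x2), since 2 != 3.
try:
    B @ A
except ValueError:
    print("not conformable")
```

The `@` operator raises an error for non-conformable shapes, mirroring the column/row rule stated above.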
Multiplication of two vectors with each other
Everything that was said concerning the multiplication of matrices carries over to the multiplication of two vectors a and b, with no change, so no extra rules need to be memorized. While the multiplication of two column vectors and the multiplication of two row vectors are impossible according to the definition of matrix multiplication, the product of a row vector with a column vector and the product of a column vector with a row vector are possible. These two products are given special names. The product of a row vector with a column vector, that is, a′b, is called the inner product. Alternatively, because the result is a scalar, this product is also termed the scalar product. The scalar product is sometimes denoted by <a, b>. In order to calculate the inner product, both vectors must have the same number of elements. Consider the following example of the two
three-element vectors a and b. Multiplication of

    a′ = ( a11  a12  a13 )   and   b = ( b11
                                         b21
                                         b31 )

yields the inner product

    a′b = a11b11 + a12b21 + a13b31 = Σi a1i bi1.

For a numerical example consider the vectors

    a′ = ( 3  6  1 )   and   b′ = ( 2  7  9 ).

The inner product is then calculated as

    a′b = 3·2 + 6·7 + 1·9 = 57.
Most important for the discussion of multicollinearity in multiple regression (and analysis of variance) is the concept of orthogonality of vectors. Two vectors are orthogonal if their inner product is zero. Consider the two vectors a′ and b. These two vectors are orthogonal if

    Σi ai bi = a′b = 0.
The product of a column vector with a row vector, ab', is called the outer product and yields a matrix with a number of rows equal to the number of elements of a and a number of columns equal to the number of elements of b. Transposing both a' and b yields a and b'; a has dimensions 3 x 1 and b' has dimensions 1 x 3. Multiplying

    ( a11 )
a = ( a21 )   and   b' = ( b11  b12  b13 )
    ( a31 )
yields the outer product

ab' = ( a11 b11   a11 b12   a11 b13 )
      ( a21 b11   a21 b12   a21 b13 )
      ( a31 b11   a31 b12   a31 b13 ).

Using the numbers of the last numeric example, a' = (3, 6, 1) and b' = (2, 7, 9), yields

ab' = ( 3*2  3*7  3*9 )   (  6  21  27 )
      ( 6*2  6*7  6*9 ) = ( 12  42  54 )
      ( 1*2  1*7  1*9 )   (  2   7   9 ).
Obviously, the order of factors is significant when multiplying vectors with each other. In the present context, the inner product is more important.
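The outer product can be sketched the same way; note how reversing the order of the factors turns a scalar (the inner product) into a 3 x 3 matrix. The helper name below is our own:

```python
# Outer product ab' of a column vector a and a row vector b'.
# Uses the numeric example from the text: a' = (3, 6, 1), b' = (2, 7, 9).

def outer(a, b):
    """Return the len(a) x len(b) matrix with entries a_i * b_j."""
    return [[ai * bj for bj in b] for ai in a]

a = [3, 6, 1]
b = [2, 7, 9]
for row in outer(a, b):
    print(row)
# [6, 21, 27]
# [12, 42, 54]
# [2, 7, 9]
```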
The Rank of a Matrix
Before introducing readers to the concept of the rank of a matrix, we need to introduce the concept of linear dependency of rows or columns of a matrix. Linear dependency can be defined as follows: The columns of a matrix A are linearly dependent if there exists a vector x such that Ax yields a vector containing only zeros. In other words, if the equation Ax = 0 can be solved for x, then the columns of A are linearly dependent. Because this question would be trivial if we allowed x to be a vector of zeros, we exclude this possibility. Whether such a vector exists can be checked by solving the equation Ax = 0 using, for instance, the Gauss algorithm. If an x other than the trivial solution x = 0 exists, the columns are said to be linearly dependent. Consider, for example, the following 3 x 3 matrix:
    (  1   2   3 )
A = ( -2  -4  -6 )
    ( -3  -6   5 ).

If we multiply A by the column vector x = (2, -1, 0)', we obtain Ax = (0, 0, 0)'. Vector x is not an array of zeros. Therefore the columns of A are said to be linearly dependent. A relatively
simple method for determining whether rows or columns
of a matrix are linearly independent involves application of such linear operations as addition/subtraction and multiplication/division. Consider the following example of a 3 x 3 matrix:
(  1  -2  -3 )
( -4   2   0 )
(  5  -3  -1 ).

The columns of this matrix are linearly dependent. The following operations yield a column vector with only zero elements:

1. Multiply the second column by two; this yields (-4, 4, -6)'.

2. Add the result of Step 1 to the first column; this yields (-3, 0, -1)'.
3. Subtract the third column from the result obtained in the second step; this yields a column vector of zeros. Thus, the columns are linearly dependent. If the columns of a matrix are linearly
independent, the rows are independent also, and vice versa. Indeed, it is one of the interesting results of matrix algebra that the number of linearly independent columns of a matrix always equals
the number of linearly independent rows of the matrix. We now turn to the question of how many linear dependencies there are among the columns of a matrix. This topic is closely related to the rank
of a matrix. To introduce the concept of rank, consider matrix A with dimensions m x n. The rank of a matrix is defined as the number of linearly independent rows of this matrix. If the rank of a matrix equals the number of columns, that is, rank(A) = n, the matrix has full column
rank. Accordingly, if rank(A) = m, the matrix has full row rank. If a square matrix has full column rank (and, therefore, full row rank as well), it is said to be nonsingular. In this case, the inverse of this matrix exists (see the following section). For numerical characteristics of the rank of a matrix we refer readers to textbooks on matrix algebra (e.g., Ayres, 1962).
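The Gauss-algorithm check for linear dependence described above can be sketched in pure Python. The `rank` function and the tolerance are our own illustrative choices, and the two matrices are the 3 x 3 examples as we have reconstructed them from the text:

```python
# Rank of a matrix via Gaussian elimination. A tolerance guards against
# floating-point noise; this is a sketch for small matrices only.

def rank(matrix, tol=1e-10):
    m = [row[:] for row in matrix]          # work on a copy
    rows, cols = len(m), len(m[0])
    r = 0                                   # current pivot row
    for c in range(cols):
        # find the largest pivot in column c at or below row r
        pivot = max(range(r, rows), key=lambda i: abs(m[i][c]), default=None)
        if pivot is None or abs(m[pivot][c]) < tol:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        # eliminate column c from all rows below the pivot
        for i in range(r + 1, rows):
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

A1 = [[1, 2, 3], [-2, -4, -6], [-3, -6, 5]]
A2 = [[1, -2, -3], [-4, 2, 0], [5, -3, -1]]

# A1 x with x = (2, -1, 0)' yields the zero vector, so A1's columns
# are linearly dependent and neither matrix has full rank:
x = [2, -1, 0]
print([sum(a * b for a, b in zip(row, x)) for row in A1])   # [0, 0, 0]
print(rank(A1), rank(A2))                                   # 2 2
```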
The Inverse of a Matrix
There is no direct way of performing division of matrices. One uses inverses of matrices instead. To explain the concept of an inverse, consider the two matrices A and B. Suppose we postmultiply A with B and obtain AB = I, where I is the identity matrix. Then we call B the inverse of A. Usually, inverse matrices are identified by the superscript "-1." Therefore, we can rewrite the present example as follows: B = A^-1. It also holds that both pre- and postmultiplication result in I, that is,

AA^-1 = A^-1 A = I.        (A.4)
For an inverse to exist, a matrix must be nonsingular, square (although not all square matrices have an inverse), and of full rank. The inverse of a matrix can be viewed similarly to the reciprocal
of a number in ordinary linear algebra. Consider the number 2. Its reciprocal is 1/2. Multiplying 2 by its reciprocal, 1/2, gives 1. In general, a number multiplied with its reciprocal always equals
1,

x (1/x) = 1.

Reexpressing this in a fashion parallel to (A.4), we can write x x^-1 = 1.
Calculating an inverse for a matrix can require considerable amounts of computing. Therefore, we do not provide the specific procedural steps for calculating an inverse in general. All major
statistical software packages include modules that calculate inverses of matrices. However, we do
give examples for two special cases, for which inverses are easily calculated. These examples are the inverses of diagonal matrices and the inverses of 2 x 2 matrices. The inverse of a diagonal matrix is determined by calculating the reciprocal values of its diagonal elements. Consider the 3 x 3 diagonal matrix

    ( a11   0     0  )
A = (  0   a22    0  )
    (  0    0    a33 ).

The inverse of this matrix is

       ( 1/a11    0       0   )
A^-1 = (   0    1/a22     0   )
       (   0      0     1/a33 ).
This can be verified by multiplying A with A^-1. The result will be the identity matrix I3. Consider the following numerical example of a 2 x 2 diagonal matrix A and its inverse A^-1:

A = ( 3  0 ),   A^-1 = ( 1/3   0  ).        (A.5)
    ( 0  6 )           (  0   1/6 )

Multiplying A with A^-1 results in

AA^-1 = ( 3*(1/3) + 0*0    3*0 + 0*(1/6) ) = ( 1  0 ) = I2,
        ( 0*(1/3) + 6*0    0*0 + 6*(1/6) )   ( 0  1 )
which illustrates that multiplying a matrix by its inverse yields an identity matrix. The inverse of a 2 x 2 matrix, A^-1, can be calculated as

A^-1 = 1/(a11 a22 - a12 a21) * (  a22  -a12 )
                               ( -a21   a11 )

or, after setting D = a11 a22 - a12 a21, we obtain

A^-1 = (1/D) (  a22  -a12 ).        (A.6)
             ( -a21   a11 )

To illustrate this way of calculating the inverse of a matrix we use the same matrix as in (A.5). We calculate D = 3*6 - 0*0 = 18. Inserting into (A.6) yields

A^-1 = (1/18) ( 6  0 ) = ( 1/3   0  ).        (A.7)
              ( 0  3 )   (  0   1/6 )
Obviously, (A.5) is identical to (A.7). The inverses of matrices are needed to replace algebraic division. This is most important when solving equations. Consider the following example. We have the matrix equation A = BY that we wish to solve for Y. Suppose that the inverse of B exists. We perform the following steps:

1. Premultiply both sides of the equation with B^-1. We then obtain the expression B^-1 A = B^-1 BY.

2. Because B^-1 BY = IY = Y, we have a solution for the equation. It is Y = B^-1 A.
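The 2 x 2 inverse formula and the equation-solving recipe can be sketched as follows. The helper names are our own, and exact rational arithmetic avoids rounding error; the matrix B is a made-up example:

```python
from fractions import Fraction

# Inverse of a 2 x 2 matrix via D = a11*a22 - a12*a21, and its use for
# "division": solving A = BY for Y as Y = B^-1 A.

def inv2(m):
    (a11, a12), (a21, a22) = m
    D = a11 * a22 - a12 * a21
    if D == 0:
        raise ValueError("matrix is singular; no inverse exists")
    f = Fraction(1, D)
    return [[ f * a22, -f * a12],
            [-f * a21,  f * a11]]

def matmul(p, q):
    return [[sum(p[i][k] * q[k][j] for k in range(len(q)))
             for j in range(len(q[0]))] for i in range(len(p))]

A = [[3, 0], [0, 6]]
I2 = [[1, 0], [0, 1]]
print(inv2(A)[0][0], inv2(A)[1][1])   # diagonal elements 1/3 and 1/6
print(matmul(A, inv2(A)) == I2)       # True: A times its inverse is I2

# Solving A = BY: premultiply by B^-1 to get Y = B^-1 A.
B = [[2, 1], [1, 1]]                  # illustrative nonsingular matrix
Y = matmul(inv2(B), A)
print(matmul(B, Y) == A)              # True: BY reproduces A
```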
The Determinant of a Matrix

The determinant of a matrix is defined as a function that assigns a real-valued number to this matrix. Determinants are defined only for square matrices. Therefore, we consider only square matrices in this section. Rather than going into technical details concerning the calculation of determinants, we list five characteristics of determinants:

1. A matrix A is nonsingular if and only if its determinant, abbreviated |A| or det(A), is different from 0.

2. If a matrix A is nonsingular, the following holds: det(A^-1) = 1/det(A).

3. The determinant of a diagonal or triangular matrix equals the product of its diagonal elements. Let A be, for instance, a diagonal matrix with diagonal elements a11, a22, ..., ann; then det(A) = a11 a22 ... ann.
4. The determinant of a product of two matrices equals the product of the two determinants, det(AB) = det(A) det(B).

5. The determinant of the transpose of a matrix equals the determinant of the original matrix, det(A') = det(A).
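These characteristics can be spot-checked for 2 x 2 matrices. The helpers below and the two test matrices are our own illustrative choices:

```python
from fractions import Fraction

# Spot-checking the determinant characteristics for 2 x 2 matrices.

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def t2(m):
    """Transpose of a 2 x 2 matrix."""
    return [[m[0][0], m[1][0]], [m[0][1], m[1][1]]]

def matmul2(p, q):
    return [[p[0][0]*q[0][0] + p[0][1]*q[1][0], p[0][0]*q[0][1] + p[0][1]*q[1][1]],
            [p[1][0]*q[0][0] + p[1][1]*q[1][0], p[1][0]*q[0][1] + p[1][1]*q[1][1]]]

A = [[3, 1], [2, 4]]
B = [[1, 2], [0, 5]]

print(det2(A))                                   # 3*4 - 1*2 = 10
print(det2(matmul2(A, B)) == det2(A) * det2(B))  # characteristic 4: True
print(det2(t2(A)) == det2(A))                    # characteristic 5: True
print(det2([[3, 0], [0, 6]]) == 3 * 6)           # characteristic 3: True
print(Fraction(1, det2(A)))                      # characteristic 2: det(A^-1) = 1/10
```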
Rules for Operations with Matrices
In this section we present a selection of operations that can be performed with vectors and matrices. We do not provide examples or proofs. Readers are invited to create examples for each of these
operations. Operations include the simpler operations of addition and subtraction as well as determinants and the more complex operations of multiplication and inversion. It is tacitly assumed that
all matrices A, B, ..., have admissible dimensions for the corresponding operations.

1. When adding matrices the order is unimportant. One says that the addition of matrices is commutative: A + B = B + A.

2. Here is a simple extension of the previous rule: (A + B) + C = A + (B + C).

3. When multiplying three matrices the result does not depend on which two matrices are multiplied first, as long as the order of multiplication is preserved: (AB)C = A(BC).

4. Factoring out a product term in matrix addition can be performed just as in ordinary algebra: C(A + B) = CA + CB.

5. Here is an example of factoring out a scalar as product term: k(A + B) = kA + kB, where k is a scalar.

6. Multiplying first and then transposing is equal to transposing first and then multiplying, but in reversed order: (AB)' = B'A'.

7. The transpose of a transposed matrix equals the original matrix: (A')' = A.

8. Adding first and then transposing is equal to transposing first and then adding: (A + B)' = A' + B'.

9. Multiplying first and then inverting is equal to inverting first and then multiplying, but in reversed order: (AB)^-1 = B^-1 A^-1.
Exercises
10. The inverse of an inverted matrix equals the original matrix: (A^-1)^-1 = A.

11. Transposing first and then inverting is equal to inverting first and then transposing: (A')^-1 = (A^-1)'.
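A few of the rules above can likewise be spot-checked for 2 x 2 matrices; the small helpers and the two example matrices are ours, not the book's:

```python
from fractions import Fraction

# Spot-checking rules 6, 7, and 9 for 2 x 2 matrices.

def t(m):
    """Transpose."""
    return [list(r) for r in zip(*m)]

def mm(p, q):
    """2 x 2 matrix product."""
    return [[sum(p[i][k] * q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(m):
    D = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    f = Fraction(1, D)
    return [[f * m[1][1], -f * m[0][1]], [-f * m[1][0], f * m[0][0]]]

A = [[2, 1], [1, 1]]
B = [[1, 3], [2, 7]]

print(t(mm(A, B)) == mm(t(B), t(A)))        # rule 6: True
print(t(t(A)) == A)                         # rule 7: True
print(inv(mm(A, B)) == mm(inv(B), inv(A)))  # rule 9: True
```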
1. Consider the following three matrices, A, B, and C. Create the transpose for each of these matrices.
(130) A-
-1 2
15 -8
, C-
34 -6
2. Consider, again, the above matrices, A, B, and C. Transpose B and add it to A. Transpose A and add it to B. Transpose one of the resulting matrices and compare it to the other result of addition.
Perform the same steps with subtraction. Explain why A and B cannot be added to each other as they are. Explain why C cannot be added to either A or B, even if it is transposed. 3. Consider the
matrix, A, and the vector, v, below: -2 15 -1
4 ) 1 , v3
12 -2 )"
Postmultiply A by v. 4. Consider the matrix, X, below
Multiply the matrix, X, by the scalar -3. 5. Consider the matrices, A and B, below
Multiply these matrices with each other. (Hint: postmultiply A with B.)
6. Consider the two vectors, v and w, below:
4 ,
2 -3
Perform the necessary operations to create (1) the inner product and (2) the outer product of these two vectors. Add these two vectors to each other. Subtract w from v.

7. Consider the diagonal matrix D = diag[34, 43, -12, 17, 2]. Calculate the inverse of this matrix. Multiply D with D^-1. Discuss the result.

8. Consider the following matrix:
(43) 3 3
Create the inverse of this matrix. 9. Find the 2 x 3 matrix whose elements are its cell indices.
Appendix B
BASICS OF DIFFERENTIATION

This excursus reviews basics of differentiation. Consider a function f(x). Let x be a real number and f(x) be a real-valued function. For this function we can determine the slope of the tangent, that is, the line that touches f(x) at a given point, x0. In Figure B.1 the horizontal straight line is the tangent for the smallest value of f(x). The arrow indicates where the tangent touches the square function. The tangent of a curve at a specific point x0 is determined by knowing the slope of the curve at that point. The slope at x0 can be obtained by inserting x0 into the first derivative, df(x)/dx, of the function. In other words,

(d/dx) f(x0)

yields the slope of the curve at x0 and thus the slope of the tangent of f(x) at x0. Often the first derivative is denoted by f'(x). At an extremum the slope of f(x) is zero. Therefore, differentiation is a tool for determining locations where a possible extremum might occur. This is done by setting
Figure B.1: Minimum of square curve.
the first derivative of f(x) to zero and solving for x. This yields the points where an extremum is possible. Therefore, knowledge of how derivatives are created is crucial for an understanding of function minimization. We now review six rules of differentiation.

1. The first derivative of a constant function, f(x) = k, is zero: (d/dx) k = 0.

2. The first derivative of the function f(x) = x is 1: (d/dx) x = 1.

3. Creating the first derivative of a function, f(x), that has the form f(x) = x^n proceeds in two steps: subtract 1 from the exponent, and place the original exponent as a factor of x in the function. Applying these two steps yields as the derivative of x^n

(d/dx) x^n = n x^(n-1).

Notice that Rule 2 can be derived from Rule 3. We obtain (d/dx) x = 1 * x^0 = 1.

4. If we want to differentiate a function that can be written as k f(x), where k is a constant, then we can write this derivative in general form as

(d/dx) k f(x) = k (d/dx) f(x).

5. The first derivative of the sum of two functions, f1(x) and f2(x), equals the sum of the first derivatives of the two functions,

(d/dx) (f1(x) + f2(x)) = (d/dx) f1(x) + (d/dx) f2(x),

or, more generally, (d/dx) sum_i f_i(x) = sum_i (d/dx) f_i(x). In words, summation and differentiation can be interchanged.

6. The first derivative of a function f(y) that takes as its argument another function g(x) = y is given by the chain rule:

(d/dx) f(g(x)) = (d/dy) f(y) * (d/dx) g(x).

For example, let f(y) = y^2 and y = g(x) = (2 - x), that is, f(x) = (2 - x)^2. Now, the chain rule says that we should multiply the derivative of f with respect to y, which in the example is 2y, with the derivative of g with respect to x, which is -1. We obtain

(d/dx) f(g(x)) = 2y(-1) = 2(2 - x)(-1) = 2x - 4,
after substituting (2 - x) for y. So far we have only considered functions that take a single argument, usually denoted as x, and return a real number, denoted as f(x). To obtain the least squares estimates we have to be able to handle functions that, while still returning a real number, take two arguments, that is, f(x, y). These functions are defined over the x-y plane and represent a surface in three-dimensional space. Typically, these functions are differentiable as well. We will not try to explain differential calculus for real-valued functions in n-space. All we need is the notion of partial differentiation, and this is quite easy. We merely look at f(x, y) as if it were only a function of x, treating y as a constant, and vice versa. In other words, we pretend that f(x, y) is a function taking a single argument. Therefore, all the above rules for differentiation apply. Just the notation changes slightly; the partial derivative is denoted as (d/dx) f(x, y) if we treat y as a constant and as (d/dy) f(x, y) if x is treated as a constant. Again, what we are after is finding extrema. A result from calculus states that a necessary condition for an extremum at point (x0, y0) in the x-y plane is that the two partial derivatives are zero at that point, that is,

(d/dx) f(x0, y0) = 0   and   (d/dy) f(x0, y0) = 0.
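The chain-rule example can be checked numerically. The central-difference approximation and the step size below are our own illustrative choices, not part of the text:

```python
# Numerical check of the chain-rule example: for f(x) = (2 - x)^2 the
# first derivative is f'(x) = 2x - 4, and the slope is zero at the
# extremum x = 2.

def derivative(f, x0, h=1e-6):
    """Central-difference approximation of f'(x0)."""
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

f = lambda x: (2 - x) ** 2

for x0 in (0.0, 1.0, 2.0, 3.5):
    approx = derivative(f, x0)
    exact = 2 * x0 - 4
    print(x0, round(approx, 6), exact)
```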
Appendix C
BASICS OF VECTOR DIFFERENTIATION

Consider the inner product a'x, where a and x are vectors each having p elements; a is considered a vector of constants and x is a vector containing p variables. We can write f(x1, x2, ..., xp) = a'x, thus considering the inner product a'x a real-valued function of the p variables. Now, the partial derivatives of f with respect to each xj are easily obtained as

(d/dxj) sum_{i=1}^{p} a_i x_i = a_j.

In words, this says that each partial derivative of a'x with respect to xj just yields the corresponding constant a_j. We now define the derivative of f with respect to the vector x as the column vector of partial derivatives,

( (d/dx1) f(x), ..., (d/dxp) f(x) )',

and for the special case that f(x) = a'x we obtain as the derivative with respect to x the vector (a1, a2, ..., ap)' = a, that is, we obtain the vector a. We formulate this as Rule 1.

Rule 1: (d/dx) a'x = a.
The following two rules follow accordingly.

Rule 2: (d/dx) x'a = a.

Rule 3: (d/dx) a = 0,
where 0 denotes a vector of zeros and a is a vector or a real-valued number. In both cases the result will be the vector 0. In words, Rule 3 says that each partial derivative of a constant vector or a constant is zero. Combining all these partial derivatives into a vector yields 0. It should be noted that these rules are very similar to the corresponding rules of differentiation for real-valued functions that take a single argument. This holds as well for the following:

Rule 4: (d/dx) x'x = 2x.
Combining the partial derivatives of the inner product of a vector with itself thus yields the same vector with each element doubled.

Rule 5: (d/dx) x'Ax = (A + A')x.

If A is symmetric, functions of the form x'Ax are called quadratic forms. In this case, A = A' and Rule 5 simplifies to (d/dx) x'Ax = 2Ax.
With these rules we are in the position to derive the general OLS solution very easily. For further details on vector differentiation see, for instance, Mardia, Kent, and Bibby (1979) or Morrison.
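Rules 1, 4, and 5 can be verified numerically with finite differences. The helper names and the test vectors and matrix below are ours, chosen for illustration:

```python
# Gradient checks for the vector-differentiation rules: the gradient of
# a'x is a (Rule 1), of x'x is 2x (Rule 4), and of x'Ax is (A + A')x (Rule 5).

def grad(f, x, h=1e-6):
    """Central-difference gradient of f at the point x (a list)."""
    g = []
    for j in range(len(x)):
        xp = x[:]; xp[j] += h
        xm = x[:]; xm[j] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

def quad(A, x):
    """The quadratic form x'Ax for a square matrix A."""
    n = len(x)
    return sum(x[i] * A[i][j] * x[j] for i in range(n) for j in range(n))

a = [2.0, -1.0, 4.0]
x = [1.0, 3.0, -2.0]
A = [[1.0, 2.0], [0.0, 3.0]]      # not symmetric, to exercise (A + A')x
x2 = [1.0, 2.0]

print([round(g, 4) for g in grad(lambda v: inner(a, v), x)])   # Rule 1: a
print([round(g, 4) for g in grad(lambda v: inner(v, v), x)])   # Rule 4: 2x
expect = [sum((A[i][j] + A[j][i]) * x2[j] for j in range(2)) for i in range(2)]
print([round(g, 4) for g in grad(lambda v: quad(A, v), x2)], expect)  # Rule 5
```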
Appendix D
POLYNOMIALS

The following excursus presents a brief introduction to polynomials and systems of polynomials (see, for instance, Abramowitz & Stegun, 1972). Readers familiar with these topics can skip the excursus. Functions that (1) are defined for all real-valued x and (2) do not involve any divisions are called polynomials, of the form given in (D.1):

y = b0 + b1 x + b2 x^2 + ... + bJ x^J = sum_{j=0}^{J} bj x^j,        (D.1)
where the real-valued bj are the polynomial parameters. The vectors x contain x values, termed polynomial coefficients. These coefficients describe polynomials given in standard form. It is important
to note that the formula given in (D.1) is linear in its parameters. The x values are raised to powers of two and greater. Therefore, using polynomials to fit data can still be accomplished within
the framework of the General Linear Model. The highest power to which an x value is raised determines the degree of the polynomial, also termed the order of the polynomial. For example, if the
highest power to which an x value is raised is 4, the polynomial is called a fourth-degree polynomial or fourth-order polynomial. If the highest power is J, the polynomial is a Jth-order polynomial.
Figure 7.2 presents four examples of polynomials: one first-, one second-, one third-, and one fourth-order polynomial, all for seven observation points. The polynomial coefficients used to create Figure 7.2 appear in Table D.1. Textbooks of analysis of variance contain tables with polynomial coefficients that typically cover polynomials up to fifth order and up to 10 values of X; see Fisher and Yates (1963, Table 23) or Kirk (1995, Table E10).
Table D.1: Polynomial Coefficients for First-, Second-, Third-, and Fourth-Order Polynomials for Seven Values of X

                            Polynomials (Order)
Predictor Values    First    Second    Third    Fourth
       1             -3        5        -1        3
       2             -2        0         1       -7
       3             -1       -3         1        1
       4              0       -4         0        6
       5              1       -3        -1        1
       6              2        0        -1       -7
       7              3        5         1        3
The straight line in Figure 7.2 displays the coefficients in the column for the first-order polynomial in Table D.1. The graph of the second-order polynomial is U-shaped. Quadratic polynomials have two parameters that can be interpreted. Specifically, these are the slope parameter from the linear polynomial and the parameter for the curvature.(1) The sign of this parameter indicates what type of extremum the curve has.

(1) In the following sections we apply the Hierarchy Principle. This principle requires that a polynomial of degree J contain all lower order terms, that is, the (J-1)th term, the (J-2)nd term, ..., and the constant. The reason for adhering to the Hierarchy Principle is that it is rarely justifiable to use only a specific polynomial term. For instance, in simple regression, one practically always uses both b0 and b1. Omitting b0 forces the intercept to be zero, which is reasonable only if the dependent variable, Y, is centered (or when regression analysis is based on a correlation matrix).

A positive
parameter indicates that the curve is U-shaped, that is, has a minimum. A negative parameter indicates that the curve has a maximum; such a curve is inversely U-shaped. The size of the parameter indicates how flat the curve is: small parameters suggest flat curves, large parameters suggest tight curves. First- and second-order polynomials do not have inflection points, that is, they do not change direction. "Changing direction" means that the curve changes from a curve to the right to a curve to the left and vice versa. A look at Figure 7.2 suggests that neither the first-order polynomial nor the second-order polynomial changes direction in this sense. In contrast, the third- and the fourth-order polynomials do have inflection points. For example, the line of the third-order polynomial changes direction at X = 4 and Y = 0. The size of the parameter of the third-order polynomial indicates how tight the curves of the polynomial are. Large parameters correspond with tighter curves. Positive parameters indicate that the last "arm" of the polynomial goes upward. This is the case for all four polynomials in Figure 7.2. Negative parameters indicate that the last "arm" goes downward. The fourth-order polynomial has two inflection points. The curve in Figure 7.2 has its two inflection points at X = 2.8 and X = 5.2, both at Y = 0. The magnitude of the parameter of fourth-order polynomials indicates, as for the other polynomials of second and higher order, how tight the curves are. The sign of the parameter indicates the direction of the last arm, with positive signs corresponding to an upward direction of the last arm.
Systems of Orthogonal Polynomials
This section introduces readers to a special group of polynomials, orthogonal polynomials. These polynomials are equivalent to other types of polynomials in many important characteristics; however,
they have unique characteristics that make them particularly useful for regression analysis. Most importantly, orthogonal polynomials are "independent" of each other. Thus, polynomials of different
degrees can be added or eliminated without affecting the magnitude of the parameters already in the equation.
Consider a researcher who has estimated a third-order polynomial to smooth the data collected in an experiment on the Yerkes-Dodson Law (see Chapter 7). After depicting the data, the researcher realizes that a third-order polynomial may not be necessary, and a quadratic curve may be sufficient. If the researcher has fit an orthogonal third-order polynomial, all that needs to be done to switch to a second-order polynomial is to drop the third-order term. Parameters for the lower order terms will remain unaffected. To introduce systems of orthogonal polynomials, consider the
expression

y = b0 p0 + b1 p1 + b2 p2 + ... ,        (D.2)

where the pi, with i = 0, 1, ..., denote polynomials of ith order. Formula (D.2) describes a system of polynomials. In order to describe a system of orthogonal polynomials, any two different polynomials in the system must fulfill the orthogonality condition

sum_i y_ij y_ik = 0,  for j != k,
where j and k index polynomials, and i indexes cases, for instance, subjects. This condition requires the inner product of any two different vectors of polynomial coefficients to be zero. Consider,
for example, the orthogonal polynomial coefficients in Table D.1. The inner product of the coefficients for the third- and the fourth-order polynomials is (-1)*3 + 1*(-7) + 1*1 + 0*6 + (-1)*1 + (-1)*(-7) + 1*3 = 0. Readers are invited to calculate the inner products for the other pairs of vectors in Table D.1 and to decide whether the vectors in this table stem from a system of orthogonal polynomials. It should be noted that this condition cannot always be met by centering. While not part of the orthogonality condition, the following condition is also often imposed, for scaling purposes:

sum_i y_i = 0.
In words, this condition requires that the sum of y values, that is, the sum of polynomial coefficients, equals zero. This can be obtained for any array of real-valued measures by, for example, centering them. Consider the following example. The array y' = (1, 2, 3) is centered to y_c' = (-1, 0, 1). Whereas the sum of the components of y equals 6, the sum of the components of y_c equals 0.
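The orthogonality and scaling conditions can be checked directly for the coefficients in Table D.1; the vector names below are our own labels:

```python
# Verifying that the coefficient vectors of Table D.1 form a system of
# orthogonal polynomials: every pair has inner product zero, and every
# vector sums to zero.

first  = [-3, -2, -1,  0,  1,  2, 3]
second = [ 5,  0, -3, -4, -3,  0, 5]
third  = [-1,  1,  1,  0, -1, -1, 1]
fourth = [ 3, -7,  1,  6,  1, -7, 3]

polys = [first, second, third, fourth]

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

# orthogonality condition: inner product zero for every pair j != k
for j in range(4):
    for k in range(j + 1, 4):
        print(j + 1, k + 1, inner(polys[j], polys[k]))   # all zero

# scaling condition: each coefficient vector sums to zero
print([sum(p) for p in polys])    # [0, 0, 0, 0]
```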
Equation (D.2) contains the pi, for i >= 0, as placeholders for polynomials. The following formula provides an explicit example of a system of two polynomials, one first- and one second-order polynomial:

Y = beta1 (alpha10 + alpha11 x) + beta2 (alpha20 + alpha21 x + alpha22 x^2).

In this equation, the betaj, for j = 1, 2, are the parameters (weights) for the polynomials, and the alphajl, for j = 1, 2, and l = 0, 1, 2, are the parameters within the polynomials. As we indicated before, the parameter estimates for orthogonal polynomials are independent of each other. Most importantly, the betaj remain unchanged when one adds or eliminates any other parameter estimate for a polynomial of different degree.
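This decoupling is easy to see in code: with orthogonal predictor columns, each least squares weight reduces to b_j = (x_j'y) / (x_j'x_j), so it does not depend on which other terms are in the model. The y values below are made up for illustration, not taken from the book:

```python
# With orthogonal polynomial coefficients as predictors, the joint least
# squares solution decouples into per-term weights: dropping or adding a
# term of another degree leaves the remaining weights unchanged.

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

def ols_weight(xj, y):
    """Least squares weight for one orthogonal predictor column."""
    return inner(xj, y) / inner(xj, xj)

linear    = [-3, -2, -1,  0,  1,  2, 3]
quadratic = [ 5,  0, -3, -4, -3,  0, 5]
cubic     = [-1,  1,  1,  0, -1, -1, 1]

y = [2.0, 3.5, 4.1, 4.8, 5.2, 6.9, 8.0]   # illustrative data

# each weight is computed from its own column alone; because the columns
# are mutually orthogonal, these are the same values a full multiple
# regression with all three terms would produce
b1 = ols_weight(linear, y)
b2 = ols_weight(quadratic, y)
b3 = ols_weight(cubic, y)
print(round(b1, 4), round(b2, 4), round(b3, 4))
```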
Smoothing Series of Measures
Polynomials can be used to smooth any series of measures. If there are J measures, the polynomial of degree J - 1 will always go exactly through all measures. For reasons of scientific parsimony,
researchers typically strive for polynomials of degree lower than J - 1. The polynomial of degree J - 1 has J parameters, that is, as many parameters as there are data points. Thus, there is no data
reduction. The polynomial of degree J - 2 has J - 1 parameters. It may not go exactly through all data points. However, considering the ubiquitous measurement error, one may be in a position where
the polynomial describes the series very well nevertheless. However, there are some problems with polynomial approximation. Two of these problems will be briefly reviewed here. First, while one can
approximate any series of measures as closely as needed, this may not always be possible using polynomials of low order. This applies in particular to cyclical processes such as seasonal changes,
rhythms, and circadian changes. It also applies to processes that approach an asymptote
or taper off. Examples of the former include learning curves, and examples of the latter include forgetting curves. Thus, one may approximate these types of processes using functions other than
polynomials. Second, as is obvious from Figure 7.2, polynomials assume increasingly extreme values when x increases (or decreases). Thus, using polynomial functions for purposes of extrapolation only
makes sense if one assumes that the process under study also assumes these extreme values. This applies accordingly to interpolation (see the sections on extrapolation and interpolation in Section
Appendix E

DATA SETS

E.1 Recall Performance Data
This is the data set used to illustrate the variable selection techniques. Variables: AGE, EG1, SEX, HEALTH, READ, EDUC, CC1, CC2, OVC.

[Raw data listing omitted.]
E.2 Examination and State Anxiety Data

This is the data set that was used to illustrate robust modeling of longitudinal data. Variables: t1-t5 (females) and t1-t5 (males).

[Raw data listing omitted.]
References Abramowitzm, M., & Stegun, I. (1972). Handbook of mathematical functions. New York: Dover. Afifi, A. A., & Clark, V. (1990). Computer-aided multivariate analysis (2nd ed.). New York: von
Nostrand Reinhold. Aiken, L. S., & West, S. G. (1991). Multiple regression: Testing and interpreting interactions. Newbury Park: Sage. Alexander, R. A., & Govern, D. M. (1994). A new and simpler
approximation for ANOVA under variance heterogeneity. Journal of Educational Statistics, 1, 91-101. Arnold, S. F. (1990). Mathematical statistics. Englewood Cliffs, New Jersey: Prentice-Hall.
Atkinson, A. C. (1986). Comment: Aspects of diagnostic regression analysis. Statistical Science, 1, 397-402. Ayres, F. J. (1962). McGraw-Hill.
Theory and problems of matrices.
New York:
Box, G. E. P., & Cox, D. R. (1964). An analysis of transformations (with discussion). Journal of the Royal Statistical Society, B, 26, 211-246. Bryk, A. S., & Raudenbush, S. W. (1992). Hierarchical
linear models. Newbury Park, CA: Sage. Christensen, R. (1996). Plane answers to complex questions (2nd ed.). New York: Springer-Verlag. Cohen, J. (1978). Partialed products are interactions;
partialed powers are curved components. Psychological Bulletin, 85, 856-866. 373
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates. Cox, D. R. (1958). Planning of experiments. New York: John Wiley &
Cronbach, L. J. (1987). Statistical tests for moderator variables: Flaws in analysis recently proposed. Psychological Bulletin, 102, 414-417.
Darlington, R. B. (1990). Regression and linear models. New York: McGraw-Hill.
de Shon, R. P., & Alexander, R. A. (1996). Alternative procedures for testing regression slope homogeneity when group error variances are unequal. Psychological Methods, 1, 261-277.
Diggle, P. J., Liang, K., & Zeger, S. L. (1994). Analysis of longitudinal data. Oxford: Clarendon Press.
Dobson, A. J. (1990). An introduction to generalized linear models (2nd ed.). London: Chapman and Hall.
Draper, N. R., Guttman, I., & Kanemasu, H. (1971). The distribution of certain regression statistics. Biometrika, 58, 295-298.
Dunlop, D. D. (1994). Regression for longitudinal data: A bridge from least squares regression. The American Statistician, 48(4), 299-303.
Efroymson, M. A. (1960). Multiple regression analysis. In A. Ralston & H. S. Wilf (Eds.), Mathematical methods for digital computers (pp. 191-203). New York: John Wiley & Sons.
Feigelson, E. D., & Babu, G. J. (Eds.). (1992). Statistical challenges in astronomy. New York: Springer-Verlag.
Fennessey, J., & D'Amico, R. (1980). Colinearity, ridge regression, and investigator judgment. Social Methods and Research, 8, 309-340.
Finkelstein, J. W., von Eye, A., & Preece, M. A. (1994). The relationship between aggressive behavior and puberty in normal adolescents: A longitudinal study. Journal of Adolescent Health, 15, 319-326.
Fisher, G. A. (1988). Problems in the use and interpretation of product variables. In J. S. Long (Ed.), Common problems/proper solutions (pp. 84-107). Newbury Park, CA: Sage.
Fisher, R. A., & Yates, R. (1963). Statistical tables for biological, agricultural and medical research (5th ed.). Edinburgh: Oliver-Boyd.
Fleury, P. (1991). Model II regression. Sysnet, 8, 2-3.
Furnival, G. M., & Wilson, R. W. (1974). Regression by leaps and bounds. Technometrics, 16, 499-511.
Games, P. A. (1983). Curvilinear transformations of the dependent variable. Psychological Bulletin, 93, 382-387.
Games, P. A. (1984). Data transformations, power, and skew: A rebuttal to Levine and Dunlap. Psychological Bulletin, 95, 345-347.
Goodall, C. (1983). M-estimators of location: An outline of the theory. In D. C. Hoaglin, F. Mosteller, & J. W. Tukey (Eds.), Understanding robust and exploratory data analysis (pp. 339-403). New York: John Wiley & Sons.
Graybill, F. A. (1976). Theory and application of the linear model. North Scituate, MA: Duxbury Press.
Graybill, F. A. (1983). Matrices with applications in statistics (2nd ed.). Belmont, CA: Wadsworth.
Gruber, M. (1989). Regression estimators: A comparative study. San Diego, CA: Academic Press.
Hadi, A. S., & Ling, R. F. (1998). Some cautionary notes on the use of principal components regression. The American Statistician, 52, 15-19.
Hartigan, J. A. (1983). Bayes theory. New York: Springer-Verlag.
Hettmansperger, T. P., & Sheather, S. J. (1992). A cautionary note on the method of least median squares. The American Statistician, 46, 79-83.
Hilgard, E. R., & Bower, G. H. (1975). Theories of learning. Englewood Cliffs, NJ: Prentice Hall.
Hoaglin, D. C., Mosteller, F., & Tukey, J. W. (1983). Introduction to more refined estimators. In D. C. Hoaglin, F. Mosteller, & J. W. Tukey (Eds.), Understanding robust and exploratory data analysis (pp. 283-296). New York: John Wiley & Sons.
Hocking, R. R. (1996). Methods and applications of linear models. New York: John Wiley & Sons.
Huber, P. J. (1981). Robust statistics. New York: John Wiley & Sons.
Huberty, C. J. (1994). A note on interpreting an R² value. Journal of Educational and Behavioral Statistics, 19, 351-356.
Isobe, T., Feigelson, E. D., Akritas, M. G., & Babu, G. J. (1990). Linear regression in astronomy I. The Astrophysical Journal, 364, 104-113.
Ito, P. K. (1980). Robustness of ANOVA and MANOVA test procedures. In P. R. Krishnaiah (Ed.), Handbook of Statistics, Vol. I, Analysis of Variance (pp. 198-236). Amsterdam: North Holland.
Jolicoeur, P. (1973). Imaginary confidence limits of the slope of the major axis of a bivariate normal distribution: A sampling experiment. Journal of the American Statistical Association, 68, 866-871.
Jolicoeur, P. (1991). Introduction à la biométrie. Montreal: Décarie.
Jolicoeur, P., & Mosimann, J. E. (1968). Intervalles de confiance pour la pente de l'axe majeur d'une distribution normale bidimensionnelle. Biométrie-Praximétrie, 9, 121-140.
Kaskey, G., Koleman, B., Krishnaiah, P. R., & Steinberg, L. (1980). Transformations to normality. In P. R. Krishnaiah (Ed.), Handbook of Statistics Vol. I, Analysis of Variance (pp. 321-341). Amsterdam: North Holland.
Kermack, K. A., & Haldane, J. B. S. (1950). Organic correlation and allometry. Biometrika, 37, 30-41.
Kirk, R. E. (1995). Experimental design: Procedures for the behavioral sciences (3rd ed.). Monterey: Brooks/Cole Publishing.
Klambauer, G. (1986). Aspects of calculus. New York: Springer-Verlag.
Levine, D. W., & Dunlap, W. P. (1982). Power of the F test with skewed data: Should one transform or not? Psychological Bulletin, 92, 272-280.
Levine, D. W., & Dunlap, W. P. (1983). Data transformation, power, and skew: A rejoinder to Games. Psychological Bulletin, 93, 596-599.
Marazzi, A. (1980). ROBETH: A subroutine library for robust statistical procedures. In M. M. Bearritt & D. Wishart (Eds.), Compstat 1980, proceedings in computational statistics (pp. 577-583). Wien: Physica.
Marazzi, A. (1993). Algorithms, routines, and S functions for robust statistics. Pacific Grove: Wadsworth.
Mardia, K. V., Kent, J. T., & Bibby, J. M. (1979). Multivariate analysis. London: Academic Press.
McCullagh, P., & Nelder, J. A. (1989). Generalized linear models (2nd ed.). London: Chapman and Hall.
Meumann, E. (1912). Ökonomie und Technik des Gedächtnisses. Leipzig: Julius Klinkhardt.
Miller, A. J. (1990). Subset selection in regression. London: Chapman and Hall.
Morrison, D. F. (1990). Multivariate statistical methods (3rd ed.). New York: McGraw-Hill.
Neter, J., Kutner, M. H., Nachtsheim, C. J., & Wasserman, W. (1996). Applied linear statistical models (4th ed.). Chicago: Irwin.
Nichols, D. (1995a). Using categorical variables in regression. Keywords, 56, 10-12.
Nichols, D. (1995b). Using polytomous predictors in regression. Keywords, 57, 10-11.
Olkin, I., & Pratt, J. W. (1958). Unbiased estimation of certain correlation coefficients. Annals of Mathematical Statistics, 29, 201-211.
Pearson, K. (1901a). On lines and planes of closest fit to systems of points in space. Philosophical Magazine, 2, 559-572.
Pearson, K. (1901b). On the systematic fitting of curves to observations and measurements. Biometrika, 1, 265-303.
Pierce, M. J., & Tully, R. B. (1988). Distances to the Virgo and Ursa Major clusters and a determination of H0. The Astrophysical Journal, 330, 579-595.
Pope, P. T., & Webster, J. T. (1972). The use of an F-statistic in stepwise regression procedures. Technometrics, 14, 327-340.
Rollett, W. (1996). Klassische, orthogonale, symmetrische und robuste symmetrische Regressionsverfahren im Vergleich. Unpublished master's thesis, Department of Psychology, University of Potsdam.
Rousseeuw, P. J. (1984). Least median of squares regression. Journal of the American Statistical Association, 79, 871-880.
Rousseeuw, P. J., & Leroy, A. M. (1987). Robust regression and outlier detection. New York: John Wiley & Sons.
Rovine, M. J., & von Eye, A. (1996). Correlation and categorization under a matching hypothesis. In A. von Eye & C. C. Clogg (Eds.), Analysis of categorical variables in developmental research (pp. 233-248). San Diego, CA: Academic Press.
Rubin, V. C., Burstein, D., & Thonnard, N. (1980). A new relation for estimating the intrinsic luminosities of spiral galaxies. The Astrophysical Journal, 242, 149-152.
Ryan, T. P. (1997). Modern regression analysis. New York: John Wiley & Sons.
Schatzoff, M., Tsao, T., & Fienberg, S. (1968). Efficient calculation of all possible regressions. Technometrics, 10, 769-779.
Searle, S. R. (1971). Linear models. New York: John Wiley & Sons.
Searle, S. R. (1982). Matrix algebra useful for statistics. New York: John Wiley & Sons.
Snijders, T. A. B. (1996). What to do with the upward bias in R²: A comment on Huberty. Journal of Educational and Behavioral Statistics, 21, 283-287.
Spiel, C. (1998). Langzeiteffekte von Risiken in der frühen Kindheit [Long-term effects of risks in early childhood]. Bern: Huber.
Strömberg, G. (1940). Accidental and systematic errors in spectroscopic absolute magnitudes for dwarf G0-K2 stars. The Astrophysical Journal, 92, 156-160.
Stuart, A., & Ord, J. K. (1991). Kendall's advanced theory of statistics (Vol. 2, 5th ed.). New York: Oxford University Press.
Tsutakawa, R. K., & Hewett, J. E. (1978). Comparison of two regression lines over a finite interval. Biometrics, 34, 391-398.
Tukey, J. W. (1977). Exploratory data analysis. Reading, MA: Addison-Wesley.
Venables, W. N., & Ripley, B. D. (1994). Modern applied statistics with S-Plus. New York: Springer-Verlag.
von Eye, A. (1983). t tests for single means of autocorrelated data. Biometrical Journal, 25, 801-805.
von Eye, A., & Rovine, M. J. (1993). Robust symmetrical autoregression in observational astronomy. In O. Lessi (Ed.), Applications of time series analysis in astronomy and meteorology (pp. 267-272). Padova: Università di Padova, Dipartimento di Scienze Statistiche.
von Eye, A., & Rovine, M. J. (1995). Symmetrical regression. The Methodology Center Technical Report Series, 1, The Pennsylvania State University.
von Eye, A., Sörensen, S., & Wills, S. D. (1996). Kohortenspezifisches Wissen als Ursache für Altersunterschiede im Gedächtnis für Texte und Sätze. In C. Spiel, U. Kastner-Koller, & P. Deimann (Eds.), Motivation und Lernen aus der Perspektive lebenslanger Entwicklung (pp. 103-119). Münster: Waxmann.
Ware, J. H. (1985). Linear models for the analysis of longitudinal studies. The American Statistician, 39, 95-101.
Welkowitz, J., Ewen, R. B., & Cohen, J. (1990). Introductory statistics for the behavioral sciences (4th ed.). San Diego, CA: Harcourt Brace Jovanovich.
Whittaker, J. (1990). Graphical models in applied multivariate statistics. New York: John Wiley & Sons.
Wilkinson, L., Blank, G., & Gruber, C. (1996). Desktop data analysis with SYSTAT. Upper Saddle River: Prentice Hall.
Wilkinson, L., Hill, M., Welna, J. P., & Birkenbeuel, G. K. (1992). SYSTAT statistics. Evanston, IL: Systat Inc.
Yerkes, R. M., & Dodson, J. D. (1908). The relation of strength of stimulus to rapidity of habit formation. Journal of Comparative and Neurological Psychology, 8, 459-482.
Yung, Y. F. (1996). Comments on Huberty's test of the squared multiple correlation coefficient. Journal of Educational and Behavioral Statistics, 21, 288-298.
Index

additive effects of predictors, 49
adjusted R², 245, 247-248
Alexander's normalized t approximation, 176
analysis of variance, 6, 38, 62, 65, 74, 176, 262, 263, 271
autocorrelation, 176
autoregressive, 263
backward elimination, 240, 254-256
best linear unbiased estimate, 266
best subset regression, 240, 244-251
bias, 15, 57, 70, 139, 176, 179, 185, 230, 239, 257
  constant, 196
  omission, 50, 239
biasing constant, 180-186
block diagonal covariance matrix, 261
BLUE, see best linear unbiased estimate
BMDP, 192, 282, 323
Bonferroni adjustment, 133, 295
centroid, 216
  of bivariate distribution, 226
chain rule, 20, 353
characteristics of least squares estimators, 17
characteristics of the regression model, 18
coding, 65, 70
  dummy, 65, 240, 290
  effect, 64, 68, 167
coefficient of determination, 26
  multiple determination, 54
Cohen's procedure, 164
confidence intervals, 5, 19, 30, 40-41, 118, 136, 217, 221
constrained model, 58
contour lines, 153
contrast, 62, 70, 73-77, 311
  orthogonal, 70-72
Cook's distance, 88-89
covariance between residuals, 18
covariance matrix for estimated regression coefficients, 49
curvilinear functions
  exponential, 145
  heat capacity, 146
  negatively accelerated, 146
  polynomial, 145
  trigonometric, 146
  vapor pressure, 146
degrees of freedom, 32, 37, 39, 53, 55, 59, 200, 248
design matrix, 45
differentiation, 351-354
distance between two points, 16
distance between two points in n-space, 16
distance outlier, see outlier, distance
dummy coding, see coding, dummy
effect coding, see coding, effect
Efroymson's algorithm, 256-257
ellipse, 217
ellipse of contours, 212
ellipsoid, 229
EQS, 323
estimation
  least squares, 15-22, 44, 144, 210, 265
  maximum likelihood, 267, 268
  restricted maximum likelihood, 268
  weighted least squares, 94-98, 266, 267
expected frequencies, 112
expected value of R², 58
exploratory data analysis, 257
extrapolation, 28, 150
F test, 176
  for slope in simple linear regression, 32-33
  in multiple linear regression, 58-62
F-to-delete, 255
F-to-enter statistic, 253
Fisher's iris data, 50
fixed factor, 262
forward selection, 252-254
full column rank, 30, 47, 66
Gauss algorithm, 342
generalized linear models, 1
Gram-Schmidt algorithm, 131
hat matrix, 81, 82
heteroscedasticity, 95, 99, 105, 158
hyperplane curvature, 152
imputation of missing values, 324
inflection points, 119
interaction, 49, 78, 151-174, 239, 271, 308
  ANOVA-type, 163, 166
  asymmetric regression, 152
  symmetric regression, 152
  treatment-contrast, 152
interaction vectors, 173
intercept parameter, 10
interpolation, 28
inverse prediction, 214-216
inversely U-shaped relation, 120, 122
least median of squares regression, 186-189, 203-207
least trimmed squares regression, 189-190, 203-207
leverage outlier, see outlier, leverage
linear functions, 3, 7-9, 28
linear in parameters, 18
linear in predictors, 18
linear model, 6
linear relationship, see linear functions
linearly dependent columns, 66, 342
linearly dependent variables, 74
M-estimator, 190-191
main effects, 62, 74
Mallow's Cp, 249-251
matrix
  determinant, 346-347
  full rank, 344
  inverse of, 344-346, 349
  nonsingular, 344, 346
  rank of, 342, 344
matrix operations
  addition, 337-338, 347
  multiplication, 338-340
  subtraction, 347
  transposing, 337
maximum likelihood, see estimation, maximum likelihood
mean squared error, 33, 50
measurement level of criterion variable, 30
median, 84, 191
median split, 166
mediated effect, 156
misspecification, 271
misspecified models, 162
mixed model, 262
monotonic relationship, 167
MSE, see mean squared error
multicollinearity, 70, 80, 125, 133-141, 163-164, 178
multiple correlation, 25, 53-58, 137, 245-247, 306
  adjusted, 247-248
multiple predictors, 55
multiplicative terms, 151, 154-167, 309
Newton algorithm, 330
nonlinear regression, 117
normal equations, 46
normal probability plot, 116, 242, 251
ordinary least squares, see estimation, least squares
orthogonal regression, see regression, major axis
outlier, 15, 81, 102, 109, 176, 206, 230
  distance, 85-88
  leverage, 81-89
r coefficient, 288
parallel shift of graph, 8
partial derivation, 354
partial derivatives, 20, 44
partial interaction strategy, 170, 171, 315
partial regression coefficient, 48
Poisson distribution, 242
polynomial
  coefficients, 118, 125, 130, 149, 360, 362
  parameters, 118, 359
polynomials, 118
  first order, 361
  fourth order, 361
  higher degree, 106
  orthogonal, 125, 127, 130, 139, 361-363
  second order, 361
  third order, 361
power analysis, 30
power of F and t test, 33
power of the χ² test, 176
predictor
  categorical, 63-80
  centered, 23
  centering, see variable, centering
  dropping, 139
  fixed, 5, 23
  highly correlated, 141
  multicollinear, 134
  random, 5, 18
  uncorrelated, 137
principal components, 139, 217
principal components analysis, 217
product terms, see multiplicative terms
proportionate reduction in error, 26
Pythagorean theorem, 15
quadratic form, 144, 249, 267
random factor, 262
range of multiple correlation, 26
raw residuals, see residuals, raw
reduced model, 149, 255
regression
  asymmetric, 212, 213, 217, 219, 224, 228, 235
  bisector, 220
  curvilinear, 89, 90, 106, 143-150, 281, 298
  discontinuous piecewise, 281
  double, 220
  impartial, 220
  inverse, 209, 213
  L1, 15
  major axis, 220, 225, 231-233
  multiple, 43-62
  piecewise, 277-285, 328
  polynomial, 117-131
  reduced major axis, 220
  robust, 178-207
  stepwise, see stepwise regression
  symmetric, 209
regression hyperplane, see surface, regression
regression sum of squares, 330
reparameterization, 30, 66, 69
repeated measurement design, 38
residual sum of squares, 92, 233, 330
residuals, 10-18, 30, 82, 88, 99-116, 164, 319-323
  distribution of, 109, 110, 113, 116
  expected, 108, 113
  median of squared, 187, 230
  normally distributed, 30, 109, 113
  perpendicular, 211
  raw, 112
  size of, 95
  standardized, 112
  studentized, 85, 88, 113, 165
  sum of squared, 33, 45, 46, 54, 138, 144
  variance of, 26, 95, 97, 250
response surface, see surface
restricted maximum likelihood, see estimation, restricted maximum likelihood
robust modeling of longitudinal data, 268-270
robust regression, see regression, robust
robustness, 175
S-Plus, 130, 191, 203, 206, 245
saddle point, 22
SAS, 33, 245, 282
  GLM, 62
  IML, 130
  RIDGEREG, 192
significance level, 32
simultaneously centered main effects, 169
simultaneously testing of correlations, 57
slope parameter, 10
smoothing the response surface, 171
solution
  bisector, 210, 222
  major axis, 222, 227
  reduced major axis, 210, 221-223, 233-236
SPSS, 115, 297
standardized regression coefficients, 138, 181
standardized residuals, see residuals, standardized
statistical power, 310
statistical power of regression interaction, 167
statistically significant, 41, 59, 170
stepwise regression, 251
studentized residuals, see residuals, studentized
subset selection techniques, 237
surface
  correlation, 212
  function, 22
  nonlinear, 154
  regression, 47, 152
SYSTAT, 33, 115, 191, 203, 230, 282, 291, 323, 328
  CORR, 197
  GRAPH, 149, 158, 171
  MGLH, 62, 193, 197
t test
  for differences in regression slopes, 36-38
  for intercept in simple linear regression, 39
  for slope in simple linear regression, 33-36
  for slopes in multiple linear regression, 49-50
  normalized, 176
test
  χ², 176
  James's, 176
  Welch-Aspin, 176
time series, 260
tolerance, 200, 297, 306, 313
transformation, 23, 90, 98, 106
  Box-Cox, 90, 92
  inverse sine, 93
  linear, 26, 27, 53, 126, 140, 219, 235
  logarithmic, 92, 219
  power, 90
  reciprocal, 93
  sine, 107
  square root, 92, 242
  trigonometric, 93
Tshebysheff minimization, 15
unconstrained model, 58
variable
  centering, 22, 27, 40, 53, 131, 138-141, 169, 360, 362
  dependent, 1
  independent, 1
  intercorrelation, 125, 134, 141, 165, 306, 310
variables, standardized, 179
variance component estimation, 268
variance inflation factor, 136, 180
vector differentiation, 355-357
vectors
  inner product, 340
  outer product, 341
VIF, see variance inflation factor
within-subject correlation, 260-266
Yerkes-Dodson law, 117, 120
What is the impact of FSI in sloshing phenomena? | SolidWorks Assignment Help
What is the impact of FSI in sloshing phenomena? After being given an honest review of the paper in this column by M. Aronhauer, S. Röber, S. Meyer and S. Maier, “Spoliation Behavior and the
Underlying Effects of FSI in the Tumor Microenvironment”; https://doi.org/10.1016/j.spol.2020.029 12. INTRODUCTION One of its most prominent features is the high extinction rate at high frequency, of
which some models are only satisfactory in order to understand, respectively, why it is more difficult to detect, and in what way? One of the main requirements of science is to understand different
aspects of non-linear dynamics because many factors such as temperature, conductivity and oxidation current and rate may combine to influence the system behavior. Knowledge of these factors, and
therefore how they interact, is important in our society. Many important factors with relevance in the study of non-linear dynamics can be integrated in the work of science, and their influence can
even emerge by fitting known nonlinear models to certain experimental data. In this post, the authors write out what they mean by FSI. They consider different situations, where each is a mixture of
fissionable/active particles, where the particle dynamics and its properties vary as a function of temperature, conductivity and oxidation pressure. They then conduct their view of how these factors
interact. Consider how if we place an axially deformed piece of solid with such properties: 1. Röber et al. 2014; 2. Mezçan et al. and Linares (2011); 3.
Makous and Klug (2013); 4. Barracini et al. 2011; 5. Casteliani et al. 2011; 6. Bellmann et al. 2011; 7. Schatz et al. 2013; 8. Shekulska et al. 2011. If they are looking at fission units, we
would expect too many fissionable, and of course many non-fissionable, particles. Now this is especially in light of the above results and our theory; as it turns out, if we considered only the
non-fissionate ones — that is, they obtained non-fission by fission, all non-fissionate particles are transformed into fission to give rise to one fission with degenerate degrees of freedom — that
is, fission and non-fission. They still say that fission could be caused by some temperature or conductivity of the matter, but FSI itself is a result of this. In later parts of this work, the
authors are going to consider a system with some thermal expansion coefficient, which is of the order of 10$^2kT$, which is another equation to formulate for non-linear dynamics, but which is of
order of fission. The key ingredient of this paper is the formulation of fission and non-fission as functions of temperature/conductivity (for the fission-fission transition, see S. Miyake et al.).
The paper introduces a different mathematical framework, where the heat capacity and the conductivity are functions of a variable. In detail, a heat capacity of fluid fluid consists in summing the
fission and non-fission. This is then called the flux, and a non-fission is written as a sum of non-fission and fission minus fission.
In the case of non-linear dynamics, the fission rate is an output of the fission rate, which is denoted by $D\rho/2$, not by the measure factor. \ Formulae C1 and C2 we shall present. The idea of
introducing fission is to imagine this system in a regime that would always be different: the thermal expansion coefficient $D$, and the heat capacity of fluid (if it is chosen) in a corresponding
regime, in which the thermal expansion coefficient is much smaller, than heat capacity. From the main point of view of the theory of probability processes, it turns out that in this regime, the
fission rate is a form of survival probability, that any probability is a function of the nature of the system to the event that the FSI occurs and the value of fission rate in this regime cannot be
negative in this case. We consider that there are actually three types of fission to this spectrum, that is, a fission which happens at the temperature $T\gg T_0$ of the system whose mean-free-path
parameter is given by fission length ${\cal L}=:\mu_0kT_0$, and a fission which happens at the lower temperature $Tclick site FSI in sloshing phenomena? by Michael M. Schmid, published 30 March 2006
By Michael J. Schmid, PhD Dissertation, Department of Philosophy, University of Vienna and author of *Sloshing in Science.* It has been a long-standing tradition of science and philosophy that in
this period as in all the previous centuries the use of subjectivity and discrit differences is usually relegated to the foreground, as a means of informing the debate on FSI in the history of
science. But does this mean that because it replaces disc.S (for abstract concepts), it is very often taken to have a role only in this setting, i.e., as a precondition, since a conceptual or
objectivist way of being capable of discritical or objectivist thought and that does not come through in a separate conceptual system, i.e., a separate logical way of being being capable of
discritual thought, is required? I am wondering why in my view, while interpreting the paper as there was a substantial discussion of the role of disc.S in its introduction, it is not a thing of the
past. It came to be seen as reflecting a disc or objective nature of the physicalism, such as something is still able to be said, only in the abstract sense. As such it could be argued, it is an
attribute of the mind, which is what I am describing; rather, I am arguing here that it has been used in this sense, but in this sense is not understood as containing a conceptual proposition,
regardless of its being correct, but that the connection between the brain and the mind is very much dependent on how a body is doing in terms of its disc. It is an attribute of the mind is, then, no
alternative theoretical account of its roles. One might propose that in the light of this discussion it is only an attribute of the mind, which cannot be taken to imply anything else, much less a
property of a mind-stuff, since it is no longer “minds”, it is a mechanical system, and the mind is the cause of the specific attributes, such as the body, which have changed. Today I am wondering if
it is not true that the brain was the cause of knowledge (not merely on account of neurophysiology), but if that brain and mind are the equivalent properties of a physical component of matter only in
the abstract sense, so in my view it is no longer a thing of brain, and when the relation to mind is regarded as more complex than we can be anything of our human bodies, it is no longer an attribute
of the mind.
Is this a wrong question? If you will excuse me, I am going to show you a moment of clarity of thought. I have just walked away from the issue, and by a few lines I can give a brief summary of this
question. Let me suggest that this is what the presentation ofWhat is the impact of FSI in sloshing phenomena? Most of us do not know about FSI. A lot of our theories do not really describe the
behaviour of the FSI which, by definition, affects all of the phenomena of a finite field. The theoretical ability to describe many of these phenomena has been very limited, perhaps by lack of the
necessary tools. These have gone unwillingly inexhaustively over the years. From the beginning I believe that the idea of combining the two disciplines has succeeded. It is no question of how to make
the two sets of theories complete, but how? I think the work to really help to understand FSI is what I was told in a recent book “The FSI Fuzzy Computation Book”. What I’m in the minority, I think,
is that the concept of the Fuzzy Computing Book which is about FSI not really showing us anything that’s of any real value over the ordinary standard learning. But, with some help, I can now go for a
more advanced version of the Fuzzy Computing Book. There’s also a better title for every single PPM book. My main complaint with the title ‘Fuzzy Computation’ feels a bit extreme and doesn’t do
justice, but I can’t help but disagree (or maybe I’m wrong) a little. The main section (‘Why’ section) of the book is composed of an eclectic number of discussions concerning the use of non-linear
post-processing techniques in a number of different domain scenarios. There are also a few talks on the two- and two-point functions, which will likely be relevant for any of the issues raised in the
book. First up is ‘Why SIC?’. When I started my career on linear analysis in high school I looked up the standard classical version of the Fuzzy Computation (in English). It has the following
definition for Fuzzy Computation: Any function is of the form (x1 + y1 - n x y1 + y1 x2 + xy), where x, y, and n are integers with the same sign. There is also a meaning to say that if a
function can be written as the sum of a FST of x and y then all elements in the result of the FST will be counted. And the interesting thing is that this definition only uses the n-dimensional
Newton-Hölder symbols – they themselves are just symbols which represent FST in two different ways. Thus, if we write a function as x/n + y, for example – note how the symbol μ(x) converts to its
ordinary double multiplication symbol. That’s all there is to it.
Then the fun is to scale every element in the result of the FST. The number of those elements is bounded by n and the Euclidean distance is n, where x is the point where the product is
FOUN046 Mathematics for Science Assignment Term 3 2020 Total marks: 60
You must answer the questions in your own words to demonstrate your understanding of the statistical ideas. Answers must be presented in paragraphs, or sentences, as prescribed in Academic English I
and make use of relevant statistical language.
Also each question must be clearly referenced. (6 marks)
1. Define briefly (at most two sentences) what is meant by each of the following three terms in Statistics:
   a) a sample
   b) a sample design
   c) secondary data. (9 marks)
2. a) Define briefly (at most three sentences) what is meant by bias in Statistics.
   b) List two factors that might introduce statistical bias into a sample.
   c) Give a statistical example of when using a biased sample could be an advantage in scientific research. (8 marks)
3. a) List five different factors you must be careful about when designing a survey.
   b) Briefly describe (at most two sentences) two factors from your list in a). (9 marks)
4. In this question you are going to create a sample from a database. Follow the instructions below to help you create your sample.

   a) Select a large set of data which has a science, or health-related, theme. Provide a reference for the data. If it is not easily available online, print or scan the set of data and attach it to the assignment.

   We recommend http://www.gapminder.org/data/ as a good source to begin your search for a suitable theme and data. Choose a science, or health-related, theme that is of interest to you. State why you have chosen this set of data. Check with your teacher if you are unsure about your choice. (2 marks)
systematic sampling; random sampling using a calculator; stratified sampling; cluster sampling
   b) Choose two suitable statistical sampling methods from the box above that could be used to create your sample. Write a detailed description of each. Your description must include all of the steps you need to do to choose a sample from a population. (8 marks)
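As a rough, non-authoritative sketch (not part of the assignment brief), the first two methods in the box, systematic sampling and random sampling, can be illustrated in Python. The population of 200 numbered records and the sample size of 20 are invented purely for illustration:

```python
import random

population = list(range(1, 201))  # hypothetical population of 200 numbered records
n = 20                            # illustrative sample size

# Systematic sampling: choose a random starting point, then take every
# k-th record, where k = population size / sample size.
k = len(population) // n
start = random.randrange(k)
systematic_sample = population[start::k][:n]

# Simple random sampling: every record has an equal chance of selection,
# mirroring what a calculator's random-number key would give you.
random_sample = random.sample(population, n)

print(len(systematic_sample), len(random_sample))
```

Stratified and cluster sampling follow the same pattern: partition the population first (by stratum or by cluster), then sample within or among the groups.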
1. c) Describe in detail how you would create a sample from your chosen set of data. Your answer must include the following relevant details
2. your choice of one of your two methods described in part b) to create your sample and state why you have chosen to use this method. (2 marks)
3. the size of the sample you wish to create. Justify, statistically, why you have chosen that size. (2 marks)
iii. all the steps you need to calculate to create your sample. Each step should include a description and statistical calculations. (8 marks)
1. your final sample. This should be detailed and presented in table form or as a list. You may wish to use Excel. (6 marks)
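For illustration, the two simplest methods in the box above can be sketched in a few lines of Python. The rows and sample size below are made up for the sketch and are not part of the assignment data:

```python
import random

def systematic_sample(rows, n):
    """Systematic sampling: every k-th row after a random start, k = len(rows) // n."""
    k = len(rows) // n
    start = random.randrange(k)
    return rows[start::k][:n]

def simple_random_sample(rows, n):
    """Simple random sampling: n rows chosen uniformly, without replacement."""
    return random.sample(rows, n)

population = list(range(1, 101))   # stand-in for 100 rows of data
sys_sample = systematic_sample(population, 10)
rand_sample = simple_random_sample(population, 10)
print(sys_sample)
print(rand_sample)
```

A real answer would, of course, apply these steps to the chosen data set and justify the sample size statistically.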
FOUN046: Mathematics for Science
Statistical Assignment: Sampling
(10% towards the final grade).
This assignment is a literature- and website-based assignment. You have eight references available to answer your questions. These references are available in the FOUN046 Teams folder, under Files, in the Statistics assignment folder. No other research is needed.
The four resources in the book are a selection of material from the following references.
Power Point notes from a website: Why Sample?
Modular Mathematics for GCSE—chapter 19: Sampling, Surveys and Questionnaires
Statistics AEB—chapter 2: Data Collection
Advanced Level Statistics — chapter 9: Sampling and estimation
Another four resources from the internet are also provided. The reference is on the document.
Your Student Book 2 may also be useful in helping you answer the questions.
Reference each question.
Assignment resources:
For the book, use the titles as given above when referencing each question in your assignment.
For the other resources, use the reference in the document as given.
Using Other References from the internet: You may use other references to help you answer the questions. Please reference these resources correctly (as taught in FOUN001) for each question. Be
careful about the time you spend researching other resources and make sure they are reliable academic sources (as taught in FOUN001).
You are required to hand in this assignment on or before Tuesday 19 January 2021, tutorial time (4 pm NZ time). If you are an on-campus student, hand it to your teacher in class, or send it by email, for marking and comments. If you are an online student, send it by email for marking and comments.
Emails must be sent to Una’s email address. Use your university email when sending the assignment. Emails from personal addresses with attachments may not arrive as they will be treated as junk mail
by the university system. It is your responsibility to ensure that your emailed assignment arrives to the right place and on time.
Do not use One Drive to send your assignment.
Your assignment must be a Word document. Put your name and class in the file name of your Word document. Keep a copy somewhere safe.
Your declaration must be completed and handed in with your assignment. Assignments will not be marked unless a completed declaration is handed in.
If you have any queries, please ask, or message, me before you start. Good luck!
Please Read very carefully before you start your assignment.
Declaration of Work – Academic Integrity
Refer to the Student Assessment Guide book for full details.
Student Name (full name, including “English” name): ____________ ID Number (from your ID card): ____________
Paper: FOUN046 Maths for Science   Assignment Name: Sampling in Statistics   Teacher: ____________
If you have any doubts or questions, please see your teacher before you hand in your work.
STUDENT DECLARATION: you must sign this declaration of academic integrity as part of your assignment.
I confirm and promise
• that this assignment is entirely my own work and is original work
• that I have not committed plagiarism when completing the attached piece of work
• that source materials have been fully referenced
• that this assignment, or substantial parts of it, have not been submitted for assessment
in any other programme, any other semester or paper
• that I have not copied from another student or allowed my work to be copied
• that I have not submitted the same or a very similar assignment to that of another student
• that I have not worked on an assignment with other students
• that I have not assisted someone else to commit academic misconduct by writing their assignment or by sharing files
• that I have not made up data or information
• that I have not committed any form of academic misconduct
• that I have read and am aware of academic integrity and the academic misconduct details in my Student Assessment Guide.
Do not forget to complete this declaration. Assignments will not be marked without it.
FOUN046 Mathematics for Science Assignment Term 3 2020 Total marks 60
University of Essex Physical Science & Math Undergraduate Programs
Situated in Essex, England, the University of Essex is a well-known public research university. Since its founding in 1964, it has maintained a high ranking among the best universities in the UK. Renowned for its dedication to academic achievement, innovation, and a lively campus life, the university provides a range of undergraduate and graduate programs across numerous fields. With its emphasis on research, the university has made substantial contributions to advances in science, the humanities, and the social sciences. Students eager to learn the ideas underpinning mathematics and the physical world can pursue an engaging, comprehensive education through the University of Essex's undergraduate programs in physical science and math.
University of Essex
These programs span areas including physics, mathematics, and astrophysics. They build a strong understanding of both the theoretical and applied components of mathematics and the physical sciences, while also fostering critical thinking and analytical skills. Knowledgeable faculty members, including accomplished researchers and experts, guide students through engaging coursework, laboratory experiments, and problem-solving exercises. Through its dedication to cutting-edge research, the University of Essex ensures that students in these programs are exposed to the most recent developments in mathematics and the physical sciences; its undergraduate programs in math and physical science are prime examples of this commitment.
List of University of Essex Physical Science & Math Undergraduate Programs
| Course Name | Specialization | Discipline | Degree Level | Duration (Years) | Intake Month | Fee Per Year |
|---|---|---|---|---|---|---|
| Bachelor of Economics & Mathematics | Economics & Mathematics | Physical Science & Math | bachelors | 3 | October | 19530 |
| Bachelor of Economics & Mathematics with Placement Year | Economics & Mathematics | Physical Science & Math | bachelors | 4 | October | 19530 |
| Bachelor of Economics & Mathematics with Year Abroad | Economics & Mathematics | Physical Science & Math | bachelors | 4 | October | 19530 |
| Bachelor of Economics & Mathematics | Economics & Mathematics | Physical Science & Math | bachelors | 3 | October | 19530 |
| Bachelor of Economics & Mathematics with Placement Year | Economics & Mathematics | Physical Science & Math | bachelors | 4 | October | 19530 |
| Bachelor of Economics & Mathematics with Year Abroad | Economics & Mathematics | Physical Science & Math | bachelors | 4 | October | 19530 |
| Bachelor of Statistics | Statistics | Physical Science & Math | bachelors | 3 | October | 19530 |
| Bachelor of Statistics with Placement Year | Statistics | Physical Science & Math | bachelors | 4 | October | 19530 |
| Bachelor of Statistics with Year Abroad | Statistics | Physical Science & Math | bachelors | 4 | October | 19530 |
| Bachelor+Master of Science in Mathematics & Data Science | Mathematics & Data Science | Physical Science & Math | bachelors | 4 | October | 19530 |
| Bachelor of Mathematics | Mathematics | Physical Science & Math | bachelors | 3 | October | 19530 |
| Bachelor of Mathematics with Placement Year | Mathematics | Physical Science & Math | bachelors | 4 | October | 19530 |
| Bachelor of Mathematics with Study Abroad Year | Mathematics | Physical Science & Math | bachelors | 4 | October | 19530 |
| Bachelor+Master of Mathematics | Mathematics | Physical Science & Math | bachelors | 4 | October | 19530 |
| Bachelor of Mathematics with Physics | Mathematics with Physics | Physical Science & Math | bachelors | 3 | October | 19530 |
| Bachelor of Mathematics with Physics with Placement Year | Mathematics with Physics | Physical Science & Math | bachelors | 4 | October | 19530 |
| Bachelor of Mathematics with Physics with Study Abroad Year | Mathematics with Physics | Physical Science & Math | bachelors | 4 | October | 19530 |
| Bachelor+Master of Science in Mathematics & Data Science | Mathematics & Data Science | Physical Science & Math | bachelors | 4 | October | 19530 |
| Bachelor+Master of Science in Mathematics & Data Science with Placement Year | Mathematics & Data Science | Physical Science & Math | bachelors | 5 | October | 19530 |
| Bachelor+Master of Science in Mathematics & Data Science with Study Abroad Year | Mathematics & Data Science | Physical Science & Math | bachelors | 5 | October | 19530 |
| Bachelor of Statistics | Statistics | Physical Science & Math | bachelors | 3 | October | 19530 |
| Bachelor of Statistics with Placement Year | Statistics | Physical Science & Math | bachelors | 4 | October | 19530 |
| Bachelor of Statistics with Study Abroad Year | Statistics | Physical Science & Math | bachelors | 4 | October | 19530 |
Posterior probability - (Exoplanetary Science) - Vocab, Definition, Explanations | Fiveable
Posterior probability is the probability of a hypothesis being true after considering new evidence. It combines prior knowledge with the likelihood of observing the evidence under that hypothesis,
playing a crucial role in statistical inference and decision-making in various fields, including exoplanet research.
5 Must Know Facts For Your Next Test
1. Posterior probability allows researchers to refine their beliefs about a hypothesis after observing new data or evidence.
2. In exoplanet research, posterior probabilities can help determine the likelihood of a planet being habitable based on observed characteristics like size and distance from its star.
3. Calculating posterior probabilities often involves computational methods such as Markov Chain Monte Carlo (MCMC) to manage complex models and large datasets.
4. The concept is integral to Bayesian statistics, which emphasizes updating beliefs with new information rather than adhering to fixed probabilities.
5. Understanding posterior probabilities can lead to improved decision-making regarding mission designs and resource allocation in exoplanet exploration.
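As a concrete illustration of the definition above, here is a single Bayesian update in Python. The numbers are hypothetical, chosen only to show the mechanics, and are not taken from any real exoplanet survey:

```python
# H = "the planet is habitable"; E = some new observation.
prior = 0.2                  # P(H): belief before the observation
p_e_given_h = 0.9            # P(E | H): chance of the evidence if H is true
p_e_given_not_h = 0.3        # P(E | not H)

# Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E)
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / p_e
print(round(posterior, 3))   # → 0.429
```

Because the evidence is more likely under H than under not-H, the posterior (about 0.43) is higher than the prior (0.2); evidence that favored not-H would push the update the other way.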
Review Questions
• How does posterior probability improve our understanding of exoplanet characteristics when new data is available?
□ Posterior probability enhances our understanding by allowing researchers to adjust their beliefs about an exoplanet's characteristics in light of new data. For instance, if initial
observations suggest a planet is likely habitable, but subsequent data shows it has an inhospitable atmosphere, the posterior probability will reflect this updated understanding. This
continuous refinement process helps scientists make more informed predictions about the planet's potential for life.
• Discuss the role of Bayes' Theorem in calculating posterior probabilities and its application in exoplanet research.
□ Bayes' Theorem serves as a foundational tool for calculating posterior probabilities by combining prior probabilities and likelihood functions. In exoplanet research, it allows scientists to
update their hypotheses about the presence of certain features based on newly acquired observational data. For example, if astronomers initially estimate the likelihood of finding water on an
exoplanet, they can use Bayes' Theorem to adjust this estimate as they gather more information from telescopic observations or mission data.
• Evaluate how advancements in computational techniques impact the calculation and utilization of posterior probabilities in the study of exoplanets.
□ Advancements in computational techniques, such as Markov Chain Monte Carlo (MCMC), have significantly impacted the calculation and utilization of posterior probabilities in exoplanet studies.
These methods enable researchers to efficiently sample from complex probability distributions and incorporate vast amounts of observational data into their models. This leads to more accurate
estimates of posterior probabilities, which inform decision-making about exoplanet missions, improve the assessment of habitability criteria, and enhance our overall understanding of
planetary systems.
Guess The Word - TutorialCup
Guess The Word is an interactive problem. An interactive problem means the data given to us is not predetermined: we can print values or call a specific function to interact and get more information about the solution. After each step, we also need to FLUSH the output buffer to get the correct answer. Now let's see what exactly the Guess The Word problem is. We are given N words, all of the same length X. If we print a word, the interactor gives us an integer Y, which is the number of character positions where our word matches the secret guess word. The guess word always matches exactly one of the N words. We may only ask a limited number of questions of the interactor by printing words. We have to find the guess word and print it without exceeding the question limit.
Guess the word-an interactive problem!
The limit on questions asked is 10 and the number of words N is 6. All the words have length 8. The given input is:

words = { “abcdefgh”, “riwqsgrd”, “nhtrgjue”, “kueshtew”, “ticqehac”, “nmvdswqz” }

We print “abcdefgh” as a question, and the interactor returns 2, so we know this word cannot be the guess word. We ask another question by printing “riwqsgrd”; the interactor returns 1, so we remove this word from our answer list as well. We ask a third question by printing “ticqehac”; the interactor returns 8. Now we have found the guess word, because all the characters of this word match the guess word.

We found the answer with only the 3rd question, so this is a valid solution.
Step 1: Sort the words in dictionary order.
Step 2: While wordlist.size != 0 and questions_asked <= q:
  A) Pick one word at random and ask a question using it.
  B) If the interactor's answer equals the length of the word, return the current word.
  C) Create a new list and add all words whose number of matching characters with the randomly picked word equals the answer given by the interactor.
  D) wordlist = new_wordlist (created in step 2.C).
Step 3: If no word is found, return/print -1.
/* C++ implementation of the Guess The Word problem (interactive). */
#include <iostream>
#include <string>
#include <vector>
#include <cstdlib>
using namespace std;

void findSecretWord(vector<string> &wordlist, int guessCnt, int wordlength)
{
    /* sort(wordlist.begin(), wordlist.end()); (optional, see step 1) */
    while (wordlist.size() > 0 && guessCnt--)
    {
        /* Get a random pick from the remaining candidates. */
        string candidate = wordlist[rand() % wordlist.size()];
        /* Ask the question by printing the word (endl flushes the buffer). */
        cout << candidate << endl;
        int found;
        /* Read the interactor's answer to the question asked. */
        cin >> found;
        /* Check whether the current word is the guess word. */
        if (found == wordlength)
        {
            /* Print the guess word. */
            cout << "Guess word: " << candidate << endl;
            return;
        }
        /* Logical reduction of the wordlist: keep only words whose number
           of characters matching the candidate equals the answer. */
        vector<string> wordlist2;
        for (int i = 0; i < (int)wordlist.size(); i++)
        {
            int match = 0;
            for (int j = 0; j < wordlength; j++)
                if (wordlist[i][j] == candidate[j])
                    match++;
            if (match == found)
                wordlist2.push_back(wordlist[i]);
        }
        wordlist = wordlist2;
    }
    /* Edge case: no word found within the question limit. */
    cout << -1 << endl;
}

int main()
{
    int n; /* number of words */
    int w; /* length of each word */
    int q; /* maximum number of questions allowed */
    cin >> n >> w >> q;
    vector<string> wordlist;
    for (int i = 0; i < n; i++)
    {
        string x;
        cin >> x;
        wordlist.push_back(x);
    }
    findSecretWord(wordlist, q, w);
    return 0;
}
Sample input words: abcdefgh riwqsgrd nhtrgjue kueshtew ticqehac nmvdswqz
Time Complexity
O(N*N), where N is the number of words given to us. Because the algorithm picks candidates with a random function, the exact running time cannot be predicted, and testing the code on various examples shows that it varies from run to run. However, the word list shrinks on every iteration, so the worst-case time complexity is O(N*N).
Space Complexity
O(N) where N is the number of words given to us. We use a vector of string to store those values. So, the maximum space used in the above code is O(N).
How Can You Find the Number of Conjugation Permutations in a Group?
• Thread starter physicsjock
• Start date
In summary, the number of conjugation permutations (s) in a group of permutations with 5 objects that also conjugate a and b, where a = (1 4 2)(3 5) and b = (1 2 4)(3 5), is 6. This can be calculated
by finding the centralizer of a in S_5, which is equal to the number of elements in S_5 divided by the size of the conjugacy class of a, which in this case is 20.
I just have a small question regarding conjugation in permutation groups.
Two permutations are conjugates iff they have the same cycle structure.
However, the conjugating permutation, which I'll call s, can have any cycle structure: s⁻¹as = b, where a and b are conjugate via s.
My question is, how can you find out how many conjugating permutations s within a group conjugate a to b?
So for example (1 4 2)(3 5) conjugates to (1 2 4)(3 5) under s = (2 4); how could you find the number of alternative s's in the group of permutations of 5 objects?
Would it be like
(1 4 2) (3 5) is the same as (2 1 4) (35) which gives a different conjugation permutation,
another is
(4 1 2)(3 5), then these two with (5 3) instead of ( 3 5),
so that gives 6 different arrangements, and similarly (1 2 4) (35) has 6 different arrangements,
and each arrangement would produce a different conjugation permutation (s)
so altogether there would be 6x6 = 36 permutations that have the property that
s⁻¹as = b?
Would each of the arrangements produce a unique conjugation permutation (s) ?
I went through about 6 and I got no overlapping conjugation permutations but I find it a little hard to a imagine there would be unique conjugation permutations for each of the 36 arrangements.
Thanks in advance
Science Advisor
Homework Helper
I'm really confused by your question. Every single s will produce a conjugate of a, namely ##sas^{-1}##. Of course, different s and t might give the same conjugate ##sas^{-1}=tat^{-1}##.
But surely that's not what you're asking about... Did you intend to say that you have a fixed a and b in S_5, and you want to count the number of elements s such that ##sas^{-1}=b##?
Yea that's right I want to count the number of s for fixed a and b,
Sorry for not explaining it well,
Is the way I wrote correct for a = (1 4 2)(3 5) and b = (1 2 4)(3 5)?
The first s would be (2 4),
Then rewriting a as (2 1 4)(3 5), the next would be (1 2)
Then rewriting it as (4 2 1)(3 5) to get another, (1 4)
Then (1 4 2)(5 3) gives (3 5)
and so on
I checked and each of these, (2 4), (1 2), (1 4) and (3 5) correctly conjugate (1 4 2)(35) to b
so would that suggest there are 6 different possible s for a and b?
Since there are 3 arrangements of (1 4 2) and 2 arrangements of (3 5) which give the same permutation.
Thanks for answering =]
Science Advisor
Homework Helper
Yes, that's correct.
You can get a formula for general a and b (of the same cycle type) in S_n as follows. Begin by noting that $$ |\{s\in S_n \mid sas^{-1}=b\}| = |\{s\in S_n \mid sas^{-1}=a\}|. $$ But the RHS is simply
the order ##|C_{S_n}(a)|## of the centralizer of a in S_n, and this is the number you want. Now recall that the order of the centralizer of a is equal to the order of S_n divided by the size of the
conjugacy class of a (this follows, for example, from the orbit-stabilizer formula), and there is a general formula for the latter - see e.g.
Let's work this out for a=(142)(35) in S_5. The size of the conjugacy class of a is (5*4*3)/3=20, so the order of the centralizer of a is 5!/20=6, confirming your answer.
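The count of 6 can also be verified by brute force; here is a short Python check (writing each permutation 0-based as a tuple, so the points 1–5 become 0–4):

```python
from itertools import permutations

def compose(p, q):
    """Function composition: (p o q)(x) = p(q(x))."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

# a = (1 4 2)(3 5) and b = (1 2 4)(3 5), written 0-based:
a = (3, 0, 4, 1, 2)   # 1->4, 4->2, 2->1, 3->5, 5->3
b = (1, 3, 4, 0, 2)   # 1->2, 2->4, 4->1, 3->5, 5->3

# Count the permutations s in S_5 with s^-1 a s = b.
count = sum(1 for s in permutations(range(5))
            if compose(compose(inverse(s), a), s) == b)
print(count)   # → 6
```

The set of solutions s is a coset of the centralizer of a, so its size equals the centralizer's order regardless of the composition convention used.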
Thanks for your help!
Hi there,
Thank you for your question about conjugation of permutation groups. Finding the number of conjugation permutations that can also conjugate a and b can be a bit tricky, but I can provide some
guidance to help you figure it out.
Firstly, let's define what a conjugation permutation is. A conjugation permutation is a permutation that, when applied to a permutation group, results in a conjugate permutation. In simpler terms, it
is a way of rearranging the elements of a permutation group to create a new permutation that has the same cycle structure as the original one.
Now, let's look at the example you provided. You correctly identified that (1 4 2)(3 5) and (1 2 4)(3 5) are conjugate permutations under the conjugation permutation (2 4). However, you also found
other arrangements that also result in the same conjugate permutation, such as (2 1 4)(3 5) and (4 1 2)(3 5).
To find the number of conjugating permutations, fix one way of writing b, say (1 2 4)(3 5), and count the ways of writing a in the same cycle pattern. The 3-cycle (1 4 2) can be written starting from any of its 3 elements, and the 2-cycle (3 5) can be written in 2 ways, giving 3 x 2 = 6 arrangements of a. Matching each arrangement of a position-by-position against the fixed arrangement of b determines one conjugating permutation s, and these 6 permutations are all distinct. (If we also varied the written form of b, the 6 x 6 = 36 pairs would list each s six times, so 36 overcounts.) This agrees with the group-theoretic count: the number of elements s with s⁻¹as = b equals the order of the centralizer of a, which for a = (1 4 2)(3 5) in S_5 is 5!/20 = 6.
FAQ: How Can You Find the Number of Conjugation Permutations in a Group?
1. What is a permutation group conjugate?
A permutation group conjugate is a group element that can be obtained by rearranging the elements of another group element according to a fixed permutation. In other words, two group elements are
conjugates if they are equivalent up to a relabeling of their elements.
2. How are permutation group conjugates related to group isomorphisms?
Permutation group conjugates are closely related to group isomorphisms. A group isomorphism is a bijective map between two groups that preserves the group structure. In other words, two groups are
isomorphic if their elements can be relabeled in such a way that they become conjugates of each other.
3. What is the significance of permutation group conjugates in algebraic structures?
Permutation group conjugates play a crucial role in understanding the structure of algebraic objects such as groups, rings, and fields. They help to identify isomorphic structures, and can also be
used to classify groups based on their conjugacy classes.
4. Can permutation group conjugates be used to simplify calculations in group theory?
Yes, permutation group conjugates can be used to simplify calculations in group theory. By relabeling elements, conjugates can often be transformed into a more manageable form. This can make it
easier to prove group properties, solve equations, and perform other calculations.
5. How are permutation group conjugates related to the concept of symmetry?
In mathematics, symmetry refers to the invariance of a system under certain transformations. Permutation group conjugates are closely related to this concept, as they represent different ways of
rearranging the elements of a group while preserving its structure. This allows for a deeper understanding of symmetry in various mathematical contexts.
Tabular Solution Of The Stress-Cubic Equation
Tabular solution of the stress-cubic equation
Edwin P. Russo, P.E.
Professor Emeritus
Duane J. Jardine
Adjunct Professor
Mechanical Engineering Dept.
University of New Orleans
New Orleans, La.
Principal stresses are defined as the maximum and minimum normal stresses in a plane. Principal stresses are perpendicular to each other and oriented such that shear stresses are zero. For 3D stress
states, principal stresses equal the roots of the general stress-cubic equation:
where I1, I2, and I3 are known as the stress invariants and are given by:
Classical techniques involving the computation of inverse cosines, cosines of multiple angles, and so forth, can be used to find the roots of Eq 1. But the equation can be transformed into a much
simpler form that has only one coefficient:
The transformation provides a convenient way to tabularize the roots of Eq 1. However, for the roots to all be real:
Otherwise, one real and two complex roots will result, which is meaningless for the stress-cubic equation. The Table lists the roots of Eq 5 for various values of the coefficient, Q. Use these roots
and Eq 7 to obtain the roots of the stress-cubic equation. Here's how:
As an example, find the principal stresses given:
Substituting these numbers into the stress-invariant equations gives
The stress-cubic equation becomes:
Eq 8 gives:
From the Table: [1]= 1.0880; [2]= 0.20915; [3]= 0.87889
Eq 7 then gives:
Similarly, σ[2] = 25.6 MPa and σ[3] = 66.0 MPa
Interested readers may want to expand the Table to include finer increments of Q. Besides principle stresses, the method may also help solve other important engineering problems including pump
curves, eigenvalues, hydraulic jumps, control systems, spillway flow, and moving wave/bores.
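For readers who want a numerical cross-check, the principal stresses are the eigenvalues of the stress tensor, so the roots of the stress cubic can be verified in a few lines of Python/NumPy. The stress values below are assumed for illustration only; they are not the article's example numbers:

```python
import numpy as np

# Assumed example stress state (MPa) -- illustrative values only.
sx, sy, sz = 50.0, 40.0, 30.0
txy, tyz, tzx = 10.0, 5.0, 8.0

# Symmetric Cauchy stress tensor.
T = np.array([[sx,  txy, tzx],
              [txy, sy,  tyz],
              [tzx, tyz, sz]])

# Principal stresses = eigenvalues of T.
principal = np.sort(np.linalg.eigvalsh(T))[::-1]

# The same values are the roots of the stress cubic
# sigma^3 - I1*sigma^2 + I2*sigma - I3 = 0.
I1 = sx + sy + sz
I2 = sx*sy + sy*sz + sz*sx - txy**2 - tyz**2 - tzx**2
I3 = (sx*sy*sz + 2*txy*tyz*tzx
      - sx*tyz**2 - sy*tzx**2 - sz*txy**2)
roots = np.sort(np.roots([1.0, -I1, I2, -I3]).real)[::-1]

print(np.round(principal, 3))
print(np.round(roots, 3))
```

Both computations must agree, and the sum of the principal stresses must equal the first invariant I1, which makes this a useful sanity check on hand calculations from the table.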
Chandra :: Field Guide to X-ray Astronomy :: Dark Energy: Models
The observational evidence for cosmic acceleration is strong, but understanding the origin of this acceleration is one of the greatest unsolved problems in contemporary science. Either Einstein's
theory of gravity has to be modified, or space is permeated by a mysterious form of energy, called dark energy, that causes the acceleration.
The two basic models for dark energy are that it is either energy associated with empty space (vacuum energy) and is constant throughout space and time — equivalent to the so-called "cosmological
constant," or it is an energy field that varies over space and time — called a "scalar field," or "quintessence." Coming up with a theory to explain how this works has proved elusive in both cases.
Vacuum Energy
Vacuum energy is the most straightforward explanation for dark energy. Mathematically it is equivalent to the addition of a constant term, called the cosmological constant, in the equation that
describes the expansion of the Universe. So far, the various probes of dark energy are consistent with a constant value for the vacuum energy.
However, the physical basis for the vacuum energy is a mystery. The cosmological constant was introduced by Einstein in order to produce a static Universe, then dropped when Edwin Hubble and others
showed that the Universe is expanding.
Later, with the development of quantum mechanics, it was realized that the Heisenberg Uncertainty Principle allowed for particles to blink into and out of existence on extremely short times scales
(about a trillionth of a nanosecond for electrons), so that empty space, or the vacuum, is not truly empty. The effects of these so-called virtual particles has been measured in the shift of energy
levels of hydrogen atoms and in particle masses.
Attempts to estimate the energy density associated with the quantum vacuum lead to the absurd result that the vacuum energy density should be 60 to 120 orders of magnitude larger than is observed: that is a factor of ~ 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 or more! No satisfactory explanation for resolving this enormous discrepancy has been put forward.
Advances in understanding the nature of elementary particles, perhaps stimulated by discoveries with the Large Hadron Collider at CERN may shed light on the vacuum energy in the near future.
Another idea is that the Universe is but one of many Universes, each with different values of he vacuum energy. Only those Universes with a small value of the vacuum energy would allow for the
formation of stars and galaxies. In essence, we could not exist in a Universe that has a large value of the vacuum energy, so the fact that we exist and are worrying about it means that it must be
small, so we shouldn't worry about it. It just is.
Scalar Fields or Quintessence
Vacuum energy, or the cosmological constant is, as the name implies, constant in space and time. A more general approach assumes that the vacuum energy can vary over space and time due to the
existence of a new force field which is called a scalar field, or quintessence. One class of such models assumes that the scalar field energy density tracks the energy density of radiation and matter
at very early times and then comes to dominate the energy density of the Universe at later times. This tracking property could provide an explanation for the cosmic coincidence that we live in an
epoch when the energy densities of dark matter and dark energy are so nearly equal.
Many versions of scalar fields have been proposed, but as yet none has emerged as a favorite. In particular, as with vacuum energy, no satisfactory physical explanations exist for why the strength of
the scalar field is so small, or for the relation, if any, between dark matter and dark energy.
An important goal of future research is to distinguish between vacuum energy and scalar fields as dark energy candidates. The most promising way is to use the methods described above to determine the
exact relation between the density and pressure of the dark energy. This relationship is expressed as pressure = w x density, where w is called the "equation of state parameter". For vacuum energy w
= -1, whereas for scalar fields, w can be less than or greater than -1, and it can vary with time. To date, all observations are consistent with w = -1, but other values, as well as variation with
time, are also possible. | {"url":"https://www.chandra.si.edu/xray_astro/dark_energy/index3.html","timestamp":"2024-11-08T07:46:49Z","content_type":"application/xhtml+xml","content_length":"23676","record_id":"<urn:uuid:6cb68641-6aec-49fb-8460-07969b2871fb>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00818.warc.gz"} |
On inferring evolutionary stability in finite populations using infinite population models
Journal of Mathematical Biology, 83:21, 2021.
Models of evolution by natural selection often make the simplifying assumption that populations are infinitely large. In this infinite population limit, rare mutations that are selected against
always go extinct, whereas in finite populations they can persist and even reach fixation. Nevertheless, for mutations of arbitrarily small phenotypic effect, it is widely believed that in
sufficiently large populations, if selection opposes the invasion of rare mutants, then it also opposes their fixation. Here, we identify circumstances under which infinite-population models do or do
not accurately predict evolutionary outcomes in large, finite populations. We show that there is no population size above which considering only invasion generally suffices: for any finite population
size, there are situations in which selection opposes the invasion of mutations of arbitrarily small effect, but favours their fixation. This is not an unlikely limiting case; it can occur when
fitness is a smooth function of the evolving trait, and when the selection process is biologically sensible. Nevertheless, there are circumstances under which opposition of invasion does imply
opposition of fixation: in fact, for the $\mathsl{n}$-player snowdrift game (a common model of cooperation) we identify sufficient conditions under which selection against rare mutants of small
effect precludes their fixation—in sufficiently large populations—for any selection process. We also find conditions under which—no matter how large the population—the trait that fixes depends on the
selection process, which is important because any particular selection process is only an approximation of reality.
author = {Molina, Chai and Earn, David J. D.},
title = {On inferring evolutionary stability in finite populations using infinite population models},
year = {2021},
URL = {https://link.springer.com/article/10.1007/s00285-021-01636-9},
abstract = {Models of evolution by natural selection often make the simplifying assumption that populations are infinitely large. In this infinite population limit, rare mutations that are selected against always go extinct, whereas in finite populations they can persist and even reach fixation. Nevertheless, for mutations of arbitrarily small phenotypic effect, it is widely believed that in sufficiently large populations, if selection opposes the invasion of rare mutants, then it also opposes their fixation. Here, we identify circumstances under which infinite-population models do or do not accurately predict evolutionary outcomes in large, finite populations. We show that there is no population size above which considering only invasion generally suffices: for any finite population size, there are situations in which selection opposes the invasion of mutations of arbitrarily small effect, but favours their fixation. This is not an unlikely limiting case; it can occur when fitness is a smooth function of the evolving trait, and when the selection process is biologically sensible. Nevertheless, there are circumstances under which opposition of invasion does imply opposition of fixation: in fact, for the $\mathsl{n}$-player snowdrift game (a common model of cooperation) we identify sufficient conditions under which selection against rare mutants of small effect precludes their fixation{\textemdash}in sufficiently large populations{\textemdash}for any selection process. We also find conditions under which{\textemdash}no matter how large the population{\textemdash}the trait that fixes depends on the selection process, which is important because any particular selection process is only an approximation of reality.},
journal = {Journal of Mathematical Biology},
volume = {83},
pages = {21} | {"url":"https://bibbase.org/network/publication/molina-earn-oninferringevolutionarystabilityinfinitepopulationsusinginfinitepopulationmodels-2021","timestamp":"2024-11-07T03:33:45Z","content_type":"text/html","content_length":"17704","record_id":"<urn:uuid:2369c411-ef63-4ce8-b36b-72b5e71dafc5>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00325.warc.gz"} |
Dimension Reduction | Data Science Discovery
What is Dimension Reduction?
More importantly why do I need it?
Before we get into the different types of dimension reduction techniques, let’s build some understanding regarding the need.
I was working on a Kaggle data set which had over 4K dimensions. A large number of dimensions adds to the complexity and computational load of predictive tasks and of basic transformations. Visual intuition is out of reach at this point.
Similarly, think of datasets with variables having several factors which in classification problems are treated as separate variables. Deep learning tasks might involve an even larger set of
variables such as the use case of image classification.
Of course, the most basic steps, such as removing independent variables with very high correlation and variables with near-zero variance (example: a variable with a small spread in values, mostly concentrated at a single value), do help, but only to a small extent.
What can be done? Enter Dimension Reduction Techniques
Certain statistical techniques allow us to transform variables to a lower-dimensional space without much loss of information. We wish to find the latent features in the data, i.e. features that provide useful information. To start with, let us understand some basic techniques that help us thin the herd, and then develop the intuition behind more advanced techniques:
Basic Techniques
Missing Values:
If a variable has too many missing values, making it an unlikely candidate for missing-value imputation, we can drop it.
Low Variance:
A variable may have very low variance, that is, a very high concentration of observations at a single value. For example, a numeric variable where 99% of observations have the value 100 and the remaining 1% lie in the range 110-120 does not help your model learn much.
High Correlation:
If there are two variables with a correlation of, say, 0.99, we can drop one of them. Some of the techniques mentioned later will take care of such variables as part of their own process, making this step necessary only in certain conditions; for example, if one of those techniques faces a computational constraint, applying this filtering first and then following with that technique will help a little.
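To make these basic filters concrete, here is a small NumPy sketch on synthetic data (the column names and thresholds are illustrative assumptions, not from any particular library):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)                    # informative feature
x2 = x1 + rng.normal(scale=0.01, size=n)   # almost perfectly correlated with x1
x3 = np.full(n, 100.0)                     # zero-variance feature
X = np.column_stack([x1, x2, x3])

# Step 1: drop near-zero-variance columns
keep_var = X.var(axis=0) > 1e-6
X_v = X[:, keep_var]

# Step 2: for each highly correlated pair (|r| > 0.99), drop the second member
corr = np.corrcoef(X_v, rowvar=False)
upper = np.triu(np.abs(corr), k=1)         # look only above the diagonal
drop = set(np.where(upper > 0.99)[1])
keep = [j for j in range(X_v.shape[1]) if j not in drop]
X_f = X_v[:, keep]
print(X.shape[1], "->", X_f.shape[1])
```

Here the constant column is removed by the variance filter and one member of the correlated pair by the correlation filter, leaving a single informative column.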
Advanced Techniques
Variable Importance:
This technique might still be computationally heavy if the purpose is solely to reduce the dimensions. It involves fitting a model such as a random forest and then evaluating which variables are most important to the model. This process is analogous to forward/backward feature selection in regression.
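As a lightweight illustration of the variable-importance idea (using permutation importance with a simple least-squares model rather than the random forest mentioned above, which keeps the sketch dependency-free; the data and coefficients are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=(n, 3))
# Feature 0 drives the target strongly, feature 2 weakly, feature 1 not at all
y = 3.0 * X[:, 0] + 0.1 * X[:, 2] + rng.normal(scale=0.1, size=n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
base_mse = np.mean((X @ beta - y) ** 2)

importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])   # break the feature/target link
    importance.append(np.mean((Xp @ beta - y) ** 2) - base_mse)

print(importance)
```

The increase in error after shuffling a column measures how much the model relied on it; here feature 0 dominates, so low-importance features are candidates for removal.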
Factor Analysis:
This is a technique of grouping correlated variables to reduce the dimensional space.
If there are say two sets of variables: X1,X2,X3 are a correlated set and Y1,Y2 are the other correlated set.
Here Y1 has no correlation with X1; that is, any variable of the first set will have low correlation with any variable of the second set, but high correlation with variables of its own set. In such a case the problem can be reduced to a set of two latent variables.
The idea behind Factor analysis is to find the representative variable from a set of similar variables.
X1 measures a Car's fuel efficiency
& X2 measures the engine power.
Here X1,X2 are trying to measure how well the Car runs.
Y1,Y2 represent the interiors of the car,
the leg space and so on.
Comfort is the underlying representative variable
being measured via Y1,Y2.
Matrix Based Techniques: Involves Linear Transformation
Principal Component Analysis:
PCA is a novel way to explain the data. Using transformations of the existing set of variables, it reduces the dimensions. These transformations are carried out with the prime objective that the variation present in the data is explained, and that if we were to reconstruct the old variables from the new ones, the error is minimized. We will explore this technique further here.
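A minimal PCA sketch via the SVD on synthetic 3-D data that really lives near a 2-D plane (a NumPy illustration, not a particular library's implementation):

```python
import numpy as np

rng = np.random.default_rng(2)
latent = rng.normal(size=(300, 2))
mix = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # embed 2-D signal in 3-D
X = latent @ mix.T + rng.normal(scale=0.01, size=(300, 3))

Xc = X - X.mean(axis=0)                    # centre the data first
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
var_explained = S**2 / np.sum(S**2)

k = 2
Z = Xc @ Vt[:k].T                          # 2-D scores (the reduced data)
X_rec = Z @ Vt[:k] + X.mean(axis=0)        # reconstruction from 2 components
err = np.mean((X - X_rec) ** 2)
print(var_explained[:2].sum(), err)
```

Two components explain essentially all the variance and reconstruct the original variables with tiny error, exactly the objective described above.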
Linear Discriminant Analysis:
LDA here refers to Linear Discriminant Analysis (the abbreviation is shared with Latent Dirichlet Allocation, a method popular for topic modeling). Statistically, it assumes the data points come from two separate multivariate normal distributions with different means
but the same covariance matrix. A hyper-plane that separates these two sets of data points is then computed.
Unlike PCA, the idea here is not to minimize the reconstruction error, but to maximize the chance of separation of two classes. Hence, LDA is a supervised dimension reduction technique that finds a
subspace that separates the classes as much as possible.
On a 2 D Graph, if a line is shift parallel to itself
such that the entire space can be filled up,
then it is a hyper-plane.
A hyper-plane is a subspace whose dimension is
one less than that of its ambient space.
In different settings, the objects which are
hyper-planes may have different properties.
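A two-class Fisher discriminant can be sketched directly from the description above: the separating direction is the inverse within-class covariance applied to the difference of the class means. The synthetic Gaussian classes here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
# Two Gaussian classes with different means and shared (identity) covariance
X0 = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(n, 2))
X1 = rng.normal(loc=[3.0, 3.0], scale=1.0, size=(n, 2))

mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # within-class scatter
w = np.linalg.solve(Sw, mu1 - mu0)                        # Fisher direction

# Project both classes onto the 1-D discriminant axis and split at the midpoint
z0, z1 = X0 @ w, X1 @ w
threshold = (z0.mean() + z1.mean()) / 2
accuracy = (np.mean(z0 < threshold) + np.mean(z1 > threshold)) / 2
print(accuracy)
```

Projecting 2-D data onto the single direction w is the dimension reduction; because the direction maximizes class separation, a simple threshold on the 1-D projection classifies well.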
Other Techniques
Non-negative matrix factorization (NMF) is a method for discovering low dimensional
representations of non-negative data. It tries to find two non-negative matrices (W, H) whose product approximates the non-negative matrix X by matrix factorization. Generalized Low Rank Models can
also be used for dimension reduction.
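The factorization X ≈ W H can be sketched with the classic multiplicative update rules (a simplified illustration on random non-negative data; real libraries add initialization strategies and convergence checks):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.random((30, 20))            # non-negative data matrix
k = 5                               # number of latent components
W = rng.random((30, k)) + 0.1
H = rng.random((k, 20)) + 0.1

eps = 1e-9                          # avoid division by zero
for _ in range(500):                # multiplicative updates keep W, H non-negative
    H *= (W.T @ X) / (W.T @ W @ H + eps)
    W *= (X @ H.T) / (W @ H @ H.T + eps)

err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print(err)
```

The rows of H act as the reduced, non-negative "parts" representation; W gives each sample's loadings on them.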
Neighbor (Graphs) Based Techniques: Involves Non-Linear Transformation
Laplacian Eigenmaps:
Let’s say we take a data point and draw a connecting edge to its closest points. We provide weights to these edges based on the similarity of the points. This graph of connected points gives rise to the objective function that we minimize to obtain a low-dimensional embedding.
The locality-preserving character of the Laplacian eigenmap algorithm makes it relatively insensitive to outliers and noise. Laplacian Eigenmaps is thus simply trying to connect similar points and
represent them on a lower dimensional space.
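A bare-bones sketch of the construction: build a k-nearest-neighbour graph, form its Laplacian, and take the smallest non-trivial eigenvectors as the new coordinates. (This simplified version uses 0/1 edge weights and the unnormalized Laplacian; the original algorithm typically uses heat-kernel weights and a generalized eigenproblem.)

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(60, 4))

# k-nearest-neighbour graph with simple 0/1 weights
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
k = 6
W = np.zeros_like(D)
idx = np.argsort(D, axis=1)[:, 1:k + 1]   # skip self at position 0
for i, nbrs in enumerate(idx):
    W[i, nbrs] = 1.0
W = np.maximum(W, W.T)                    # symmetrise the graph

L = np.diag(W.sum(axis=1)) - W            # graph Laplacian
vals, vecs = np.linalg.eigh(L)            # eigenvalues in ascending order
Y = vecs[:, 1:3]                          # skip the trivial constant eigenvector
print(Y.shape)
```

The smallest eigenvalue is always zero (the constant vector); the next eigenvectors vary slowly over the graph, so connected (similar) points land close together in Y.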
Multidimensional scaling (MDS) is a form of non-linear dimensionality reduction. It takes a matrix of pair-wise distances between all points and computes a low-dimensional position for each point.
MDS tries to preserve the euclidean distances while reducing the dimensions. In Isomap, we calculate the geodesic distance instead of the euclidean distance used in MDS. This method is computationally intensive; however, at the time of its release it was hailed as a major development in the field.
What is geodesic distance?
The distance between two vertices in a graph
is the number of edges in a shortest path.
If we wish to measure the distance between poles of the Earth,
Consider a series of points connected to each other,
to form the shortest path between the poles
as the geodesic distance.
Why was this used?
Geodesic distance is more effective
in capturing the structure of the manifold
as compared to euclidean distance.
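The geodesic-versus-euclidean distinction can be demonstrated on a semicircular arc: the straight-line distance between the endpoints is 2, while the graph-geodesic distance follows the curve and comes out near π. This sketch approximates geodesics, as Isomap does, with shortest paths on a k-nearest-neighbour graph (here via a vectorized Floyd-Warshall, chosen for brevity on this tiny example):

```python
import numpy as np

# Points on a semicircular arc: the manifold is 1-D but curved in 2-D
t = np.linspace(0.0, np.pi, 50)
X = np.column_stack([np.cos(t), np.sin(t)])

D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)   # euclidean distances

# Build the kNN graph, then run Floyd-Warshall for shortest-path distances
k = 3
G = np.full_like(D, np.inf)
np.fill_diagonal(G, 0.0)
nbrs = np.argsort(D, axis=1)[:, 1:k + 1]
for i in range(len(X)):
    G[i, nbrs[i]] = D[i, nbrs[i]]
    G[nbrs[i], i] = D[nbrs[i], i]
for m in range(len(X)):                                # relax through node m
    G = np.minimum(G, G[:, m:m + 1] + G[m:m + 1, :])

# Endpoints of the arc: chord length 2, along-the-arc length close to pi
print(D[0, -1], G[0, -1])
```

The geodesic estimate captures the structure of the manifold (the arc) where the straight-line distance "tunnels" through the ambient space.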
Locally Linear Embedding:
A non-linear unsupervised technique that computes a new co-ordinate for each data point on a low-dimensional manifold. We first look for the neighbors of a data point, and then ask: if our selected point were not available, could we recover it from the identified neighbors? Thus we assign weights that allow us to reconstruct the selected point. Finally, we map this point to a lower-dimensional space while preserving these weights.
T-distributed stochastic neighbor embedding (t-SNE) is a neighborhood preserving embedding. While preserving the local structure of the manifold and mapping it to a low dimensional space, t-SNE also
tries to preserve geometry at all scales.
To put it plainly, when we shift from a high dimensional space to a low dimensional space, it ensures points that were close still are and points far-apart remain so. It calculates the probability of
the similarity of the points in high dimensional space as well as in low dimensional space, and minimizes the difference between them.
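The two probability distributions that t-SNE compares can be sketched directly; this simplified version uses a single fixed bandwidth in the high-dimensional space (real t-SNE calibrates a per-point bandwidth from a perplexity parameter and symmetrizes conditional probabilities), and the data here is random:

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(20, 5))                     # high-dimensional points
Y = rng.normal(size=(20, 2))                     # a candidate low-dimensional map

def high_dim_affinities(X, sigma=1.0):
    D2 = np.sum((X[:, None] - X[None, :]) ** 2, axis=-1)
    P = np.exp(-D2 / (2 * sigma**2))             # Gaussian similarities
    np.fill_diagonal(P, 0.0)
    P /= P.sum()                                 # joint probabilities, sum to 1
    return P

def low_dim_affinities(Y):
    D2 = np.sum((Y[:, None] - Y[None, :]) ** 2, axis=-1)
    Q = 1.0 / (1.0 + D2)                         # heavy-tailed Student-t kernel
    np.fill_diagonal(Q, 0.0)
    return Q / Q.sum()

P, Q = high_dim_affinities(X), low_dim_affinities(Y)
kl = np.sum(P * np.log(np.maximum(P, 1e-12) / np.maximum(Q, 1e-12)))
print(P.sum(), Q.sum(), kl)
```

t-SNE then moves the low-dimensional points Y (by gradient descent) to minimize this KL divergence, which is what "minimizing the difference between the two similarity distributions" means operationally.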
UMAP (Uniform Manifold Approximation and Projection):
A newer technique that involves manifold learning and is computationally faster than t-SNE. It tries to preserve the local and global structure and map it onto a low-dimensional space using k-nearest neighbors and some concepts of topology. We will further discuss this technique here.
Autoencoders are a family of neural-network methods whose primary purpose is to learn the underlying manifold, or feature space, of the data set. In other words, an autoencoder tries to encode (transform) the input data in a hidden neural-net layer and then decode it to get back as close to the input values as possible. The assumption here is that the transformations resulting in the hidden layer represent the properties of the data that are of value. We will further discuss this technique in a future post.
Consider the above image as an example of implementing some of these techniques on the IRIS data set, with the axes consisting of the components obtained from applying the respective techniques (color is with respect to the target variable). Which algorithm is right for you depends on the data set.
There are several dimension reduction techniques available; however, here we have only explored some techniques that are currently prevalent in the industry or are truly novel methods.
About Us
Data science discovery is a step on the path of your data science journey. Please follow us on LinkedIn to stay updated.
About the writers:
• Ankit Gadi: Driven by a knack and passion for data science coupled with a strong foundation in Operations Research and Statistics has helped me embark on my data science journey. | {"url":"https://datasciencediscovery.com/index.php/2018/08/08/dimension-reduction/","timestamp":"2024-11-11T11:06:11Z","content_type":"text/html","content_length":"143386","record_id":"<urn:uuid:34d64840-c57e-4b94-983d-8bcc21d076aa>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00568.warc.gz"} |
Synthesis of Differential Control Algorithm of the Digital Tracking Electric Drives of Mobile Robots by (E)-Operator Method
UDC 681.5.58
V.V. Safronov, Federal State Unitary Enterprise Central Research Institute of Mechanical Engineering (Roskosmos), Korolev, Russian Federation, vik.saf@yandex.ru
The mathematical E-operator method is intended for transforming the system of differential equations into the system of equations of the tracking electric drive mathematical model. In contrast to known algorithms for numerical integration, for example in Matlab-Simulink, the E-method allows synthesis of the transfer function of tracking electric drives for both sequential and parallel differential algorithms working in real time, on the basis of both software and hardware shift registers. It is important to note that these algorithms cannot be implemented using Matlab-Simulink. The parallel algorithm provides greater performance than the serial algorithm and can be implemented in a "conveyor" (pipelined) fashion using multiple coprocessors simultaneously. The method is an alternative to the means available in the Matlab-Simulink programme for description and analysis of the tracking electric drive model. In the first part of the article, the synthesis algorithm is described using traditional means of differential calculus and Laplace transforms. In the second part, a solution of the same problem with the use of the E-operator method is presented. The results obtained can be used in mechatronics, robotics, automatic control theory, and in the practice of building tracking electric drives.
control, tracking electric drive, parallel and serial algorithm, differential equations, transfer functions, integrators, shift registers, real time
1. Popov E.P. Teoriya lineynykh sistem avtomaticheskogo regulirovaniya i upravleniya [The Theory of the Linear Systems of the Automatic Regulation and Control]. Moscow, Science Publ., 1989. 733 р.
2. Usynin Yu.S. Teoriya avtomaticheskogo upravleniya: uchebnoe posobie dlya vuzov [Theory of Automatic Control: Tutorial for Students]. Chelyabinsk, SUSU Publ., 2010, 176 p.
3. D'yakonov V., Kruglov V. Matematicheskie pakety rasshireniya MATLAB. Spetsial'nyy spravochnik [Mathematical MATLAB Expansion Packs. A Special Handbook]. St. Petersburg, Piter Publ., 2001. 488 р.
4. Domrachev V.G., Matveevskiy V.R., Smirnov Yu.S. Skhemotekhnika tsifrovykh preobrazovateley peremeshcheniy [Circuit Design of Digital Converters Displacements]. Moscow, Еnergoatomizdat Publ., 1987.
392 р.
5. Smirnov Yu.S. Elektromekhatronnye preobrazovateli [Ectromechatronics Converters]. Chelyabinsk, South Ural St. Univ. Publ., 2013. 361 p.
6. Safronov V.V. [Theory and Practice Using of the Encoders Based on Sine-Cosine Rotary Transformer]. Components and Technologies, 2014, no.4, pp. 58–62. (in Russ.)
Bulletin of the South Ural State University. Ser. Computer Technologies, Automatic Control, Radio Electronics, 2015, vol. 15, no. 2, pp. 42-54. (in Russ.) (Information and Communication Technologies
and Systems) | {"url":"https://eneff.susu.ru/EN/publish/Synthesis_of_Differential_Control_Algorithm_of_the_Digital_Tracking_Electric_Drives_of_Mobile_Robots_by__E_-Operator_Method/","timestamp":"2024-11-11T11:08:49Z","content_type":"text/html","content_length":"13831","record_id":"<urn:uuid:81cea6f7-0ec7-4342-b9a8-d0fe08e01d1e>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00016.warc.gz"} |
FTCE Math Study Guide 2020-2021
The Best FTCE General Knowledge Math Prep Book to Help You ACE the FTCE Test!
FTCE Math Study Guide, which reflects the 2020 - 2021 test guidelines, is designed by top FTCE Math instructors and test prep experts to help test takers ace the FTCE General Knowledge Math Test. The
updated version of this comprehensive FTCE Math preparation book includes Math lessons, extensive exercises, sample FTCE Math questions, and quizzes with answers and detailed solutions to help you
hone your math skills, overcome your exam anxiety, boost your confidence—and do your best to ace the FTCE exam on test day. Upon completion of this perfect FTCE General Knowledge Math prep book, you
will have a solid foundation and sufficient practice to ace the FTCE Math test.
A perfect book to help you prepare for the FTCE General Knowledge Math Test!
Not only does this all-inclusive prep book offer everything you will ever need to prepare for the FTCE Math test, but it also contains two complete and realistic FTCE Math tests that reflect the
format and question types on the FTCE to help you check your exam-readiness and identify where you need more practice.
FTCE Math Study Guide contains many exciting and unique features to help you prepare for the FTCE Math test, including:
• Content 100% aligned with the 2020 FTCE® test
• Written by FTCE Math instructors and test experts
• Complete coverage of all FTCE Math concepts and topics which you will be tested
• Step-by-step guide for all FTCE Math topics
• Abundant Math skill-building exercises to help test-takers approach different question types that might be unfamiliar to them
• Exercises on different FTCE Math topics such as integers, percent, equations, polynomials, exponents and radicals
• 2 full-length practice tests (featuring new question types) with detailed answers
This FTCE Math prep book and other Effortless Math Education books are used by thousands of students each year to help them review core content areas, brush-up in math, discover their strengths and
weaknesses, and achieve their best scores on the FTCE test.
Recommended by Test Prep Experts | {"url":"https://testinar.com/product.aspx?P_ID=jwNHHZksd%2FRrvk6vnyny8A%3D%3D","timestamp":"2024-11-03T20:07:32Z","content_type":"text/html","content_length":"56785","record_id":"<urn:uuid:bc630aee-d578-4e5d-bdb6-2a9a281fbc15>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00175.warc.gz"} |
Tagged: workflow
This is an attempt to define a workflow that contains reversible stateful computations. In the event of an exception being raised, all insofar successful operations will fold back to their original
state. The implementation uses the notion of reversible computation primitives, which are composed using a free monad that is interpreted with a trampoline.
Posted: 11 years ago by Eirik Tsarpalis | {"url":"https://fssnip.net/tags/workflow","timestamp":"2024-11-12T09:52:45Z","content_type":"text/html","content_length":"8671","record_id":"<urn:uuid:0545cc57-beb7-49e7-95a1-6a9cff6c2972>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00668.warc.gz"} |
What is the running time of this algorithm?
• Thread starter zeion
In summary: I compare the first two lists of n and n elements, then the next list of n and 2n elements, then the next list of n and 3n elements, and I continue to add n for each succeeding list. So I compare n, 2n, 3n, 4n, ..., kn elements, which is n * (1 + 2 + 3 + ... + k) = n * k(k+1)/2 = O(nk^2). In summary, the worst-case running time for merging k sorted arrays, each with n elements, into a single sorted array with kn elements, is O(nk^2).
Homework Statement
Suppose that you have k >= 1 sorted arrays, each one containing n >= 1 elements, and you wish to combine them into a single sorted array with kn elements.
Homework Equations
The Attempt at a Solution
Can I assume that each array has a size of n elements for a worst case scenario?
Then, if I were to merge them sequentially, ie merge first two, then the third with the first two, the running time would be O(kn), since, the first two will take n comparisons, making a merged list
of 2n elements, then compare to the next list of n elements takes n comparisons, and latch on the rest of the 2n list.
Does that make sense?
zeion said:
Then, if I were to merge them sequentially, ie merge first two, then the third with the first two, the running time would be O(kn), since, the first two will take n comparisons, making a merged
list of 2n elements,
then compare to the next list of n elements takes n comparisons, and latch on the rest of the 2n list.
What if there is no "rest"? Maybe the n list has elements to be slotted throughout the 2n list?
NascentOxygen said:
What if there is no "rest"? Maybe the n list has elements to be slotted throughout the 2n list?
So you're saying, if the third list had more than 2n elements?
Then I would compare 2n times and latch on the left over from the third list?
I'm not sure how to set up the problem for a worst case since each array can have n >= 1 elements?
zeion said:
If I were to merge them sequentially, ie merge first two, then the third with the first two ... the first two will take n comparisons, making a merged list of 2n elements, then compare to the
next list of n elements takes n comparisons
Are you sure it only takes n compares to merge two arrays of size n? Take a very simple case, how many compares does it take to merge 2 lists with 2 elements each, for example {1, 3} {2, 4}? What
about 2 lists, 3 elements each, for example {1, 3, 5} {2, 4, 6}?
rcgldr said:
Are you sure it only takes n compares to merge two arrays of size n? Take a very simple case, how many compares does it take to merge 2 lists with 2 elements each, for example {1, 3} {2, 4}? What
about 2 lists, 3 elements each, for example {1, 3, 5} {2, 4, 6}?
So for the second example, I would compare 1 and 2, then insert the smaller one, then insert the larger one: (1, 2)
Then compare the next two, 3 and 4 and insert in the same way: (1, 2, 3, 4)
Then compare 5 and 6 and get (1,2,3,4,5,6)
So I compared 3 sets of 2 numbers.
rcgldr said:
for example {1, 3} {2, 4}? ... What about 2 lists, 3 elements each, for example {1, 3, 5} {2, 4, 6}?
zeion said:
So for the second example, I would compare 1 and 2, then insert the smaller one, then insert the larger one: (1, 2)
Then compare the next two, 3 and 4 and insert in the same way: (1, 2, 3, 4)
Then compare 5 and 6 and get (1,2,3,4,5,6)
So I compared 3 sets of 2 numbers.
Using this same method how would you merge {1, 2, 5} and {3, 4, 6}? By "insert" do you mean to "insert" into the output array? If so that's not a merge, and if insertion was used, the insertion would
take more compares.
rcgldr said:
Using this same method how would you merge {1, 2, 5} and {3, 4, 6}? By "insert" do you mean to "insert" into the output array? If so that's not a merge, and if insertion was used, the insertion
would take more compares.
Okay so, to "merge" two lists I will simply append either one after the other.
This will take k - 1 appends? Or do I have say that I append each element at a time, so that would take n(k - 1) ?
Then, I suppose I will use a merge sort algorithm to sort the whole list.. which takes worst-case O(nlogn) ?
zeion said:
Then, I suppose I will use a merge sort algorithm to sort the whole list.. which takes worst-case O(nlogn)?
That's the time it takes to sort a single unsorted array of n elements, and O(...) is a relative number, the (n log n) shows how the number of elements affects O(...), without separating moves and
compares (there are fewer compares than moves).
In this case, you have k sorted arrays of n elements each. I'm not sure what your supposed to calculate at "running time", the number of compares and moves as separate counts or the relative overhead
versus k and n.
Maybe one more example, merge {1, 2, 9, 10} , {4, 5, 8, 11}, {3, 6, 7, 12}. How many compares do you need? How many moves do you need? What if you merged all 3 arrays in one pass?
What is a "move"?
So for the example, I would "compare" the 1 with 4.
Then, I will "move" the 4 between the 1 and the 2?
Do I also need to "move" all of 2, 9, 10 forward to make space?
rcgldr said:
I'm not sure what your supposed to calculate at "running time", the number of compares and moves as separate counts or the relative overhead versus k and n.
The questions says, "What is the worst-case running time of this algorithm, as a function of k and n?" The algorithm being to merge the first list with the second, then that one with the third.. etc.
zeion said:
OK, apparently merging wasn't explained to you properly in class. A merge of k arrays, each of size n, ends up copying the data into a new single array of size k x n. So there's a total of k+1 arrays involved: the k input arrays of size n each, and the output array of size k x n.
Okay so,
I think, for comparing two arrays, I would always have to compare the size of the bigger array.
So for the first method of combining the first two, then with the third etc..
It would take n compares for the first two, then 2n compares for the third with the first two, then 3n, then 4n ...
Each time I finish comparing I would move the element to the kn array, so the moves would be 2n moves for the first two, n moves for all the other ones, so total of k moves.
Does that make sense?
zeion said:
I think, for comparing two arrays, I would always have to compare the size of the bigger array.
How many compares would you have to do to merge {1, 2, 9} and {3, 4, 5, 6, 7, 8}?
zeion said:
Each time I finish comparing I would move the element to the kn array, so the moves would be 2n moves for the first two.
If you merged 2 arrays into an output array, you would need another array to merge that output array with the 3rd input array, alternating between the two output arrays. That would require (2 + 3 +
... + k) x n moves.
If you merged all k arrays in on pass, it would only take k x n moves. If you did merge k arrays in one pass, and assuming you haven't reached the end of any of the k arrays, how many compares would
it take for each element moved to the single output array of size k x n?
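To make the two strategies in this thread concrete, here is a short Python sketch (not from the thread itself) that counts element moves for the sequential pairwise merge versus a one-pass k-way merge using a heap of the current list heads:

```python
import heapq

def merge_sequential(arrays):
    """Merge the first two lists, then the result with the third, etc.
    Counts element moves (every element of each intermediate list is copied)."""
    moves = 0
    out = list(arrays[0])
    for a in arrays[1:]:
        merged, i, j = [], 0, 0
        while i < len(out) and j < len(a):
            if out[i] <= a[j]:
                merged.append(out[i]); i += 1
            else:
                merged.append(a[j]); j += 1
        merged.extend(out[i:]); merged.extend(a[j:])
        moves += len(merged)
        out = merged
    return out, moves

def merge_one_pass(arrays):
    """k-way merge in a single pass: each element is moved exactly once."""
    out = list(heapq.merge(*arrays))
    return out, len(out)

k, n = 4, 3
arrays = [[i + k * j for j in range(n)] for i in range(k)]  # k sorted lists of n
seq, seq_moves = merge_sequential(arrays)
one, one_moves = merge_one_pass(arrays)
print(seq_moves, one_moves)
```

The sequential approach performs (2 + 3 + ... + k) * n moves, which is O(nk^2); the one-pass heap merge performs k * n moves with O(log k) comparisons per move, i.e. O(nk log k) overall.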
FAQ: What is the running time of this algorithm?
What is the running time of this algorithm?
The running time of an algorithm refers to the amount of time it takes for the algorithm to execute and complete its task.
How is the running time of an algorithm measured?
The running time of an algorithm is typically measured in terms of the number of operations or steps it takes to complete its task. This can be expressed as a function of the input size, such as O(n)
or O(n^2), where n represents the size of the input.
What factors affect the running time of an algorithm?
The running time of an algorithm can be affected by a variety of factors, including the size and complexity of the input, the efficiency of the algorithm design, and the hardware and software
environment on which the algorithm is executed.
Why is it important to consider the running time of an algorithm?
The running time of an algorithm is important because it can impact the overall performance and efficiency of a program. A more efficient algorithm can save time and resources, making it a crucial
consideration in many applications.
Can the running time of an algorithm be improved?
Yes, the running time of an algorithm can often be improved through algorithmic optimizations and improvements in hardware and software environments. However, the running time can also be limited by
the inherent complexity of the problem being solved. | {"url":"https://www.physicsforums.com/threads/what-is-the-running-time-of-this-algorithm.583361/","timestamp":"2024-11-09T09:46:21Z","content_type":"text/html","content_length":"137717","record_id":"<urn:uuid:1c648ff3-4953-4380-b2b9-bc41f48aa6b0>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00317.warc.gz"} |
Implementation of fluid-structure interactions for rigid body motion in FEniCS using immersed finite element method
session 3 (Zoom) (17:00–18:30 GMT)
In this work, an implementation of the rigid body immersed finite element method in FEniCS is presented. The immersed finite element method was proposed for resolving complex fluid-structure interaction problems often encountered in many engineering applications. In the immersed finite element method, the structure is represented by a Lagrangian mesh moving on top of an Eulerian fluid mesh. This allows the fluid mesh to be generated independently from the solid structure and thus greatly simplifies the meshing process. The no-slip condition and the FSI force at the fluid-solid interface are enforced using a mesh-to-mesh interpolation of velocity and FSI coupling force. Classically, the interpolation method employed in the immersed finite element method uses a discrete delta function; in this work, however, a method based on transforming basis functions between the two domains is employed. This allows the support of the FSI force interpolation to be the set of elements in the fluid domain touching the structure domain; the support on which the fluid-structure interaction force is applied is therefore optimal in an element-wise sense. Results from a canonical problem of a rigid sphere dropping in a channel are simulated to demonstrate the implementation. Implementation details and the performance of the implementation are discussed. | {"url":"http://fenics2021.com/talks/teeraratkul.html","timestamp":"2024-11-11T23:33:35Z","content_type":"text/html","content_length":"7352","record_id":"<urn:uuid:95ff38a3-9952-43a4-92eb-d91ae4915daf>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00873.warc.gz"}
rdfsol (c45b2)
Radial Correlation Functions
The RDFSOL command computes radially resolved correlation
functions, such as radial distribution functions or orientational
correlation functions. The function of interest is computed either between
pairs of atoms from two atom selections, or between a pair consisting of
one atom selection and a reference point, which can be either a fixed point
in space or the center of mass of a set of atoms. As the name rdfSOL
suggests, the routine allows for the special treatment of solvent molecules
(TIP3 water is supported by special routines; others need the use of the
PROTotype facility, see PROTO).
Syntax for the RDFSOL command
[SYNTAX RadialDistributionFunctions_with_SOLvent]
RDFSOL [ RDF int ] [ DDIP int ] [ QDIP int ] [ HD int ] -
[ setA-spec ] [ setB-spec ] [ around-spec ] -
[ SAME ] [ RMAX real ] [ NBIN int ] -
[ VOLUme real ] [ PRECise ] [ BRUTe ] [ MINI ] -
[ traj-spec ] [ SPFAc int]
setA-spec:: [ SITE ] [ atom-selection ]
[ XREF real ] [ YREF real ] [ ZREF real ]
[ WATEr ]
[ PROTo int ]
setB-spec:: [ SITE ] [ atom-selection ]
[ WATEr ]
[ PROTo int ]
around-spec:: AROUnd [ RAROund real ] [ LOCAl ] -
[ atom-selection ]
[ XREF real ] [ YREF real ] [ ZREF real ]
traj-spec:: [ FIRStu int ] [ NUNIt int ] [ BEGIn int ] -
[ STOP int ] [ SKIP int ]
atom-selection::= see SELECT
General overview and options
RDFSOL calculates radially resolved pair-distribution or angular
correlation functions between two sets of atoms (setA and setB).
The type of function computed is selected by keywords:
RDF - Radial Distribution Function
If one of the two sets is WATEr then distribution functions
for the oxygen and hydrogens are computed.
If both sets are WATEr then the hydrogen-hydrogen distribution
function will also be evaluated.
If the first set is not water and the keyword SITE is present,
the center of mass of the set is taken as single center. Else
the average over all points in setA is taken.
DDIP - Dipole-Dipole correlation function. If one or both sets are
not WATEr the center of mass and dipole moment of this set
is used (no matter whether the keyword SITE is present or
not). In this case setA must not be a fixed point in space
since the dipole moment is not defined in this case.
QDIP - Charge-Dipole correlation function. As with RDF setA can
either be a SITE or the average of all points.
The integer after each function to be calculated gives the unit number
the respective function is to be written to.
SetA/B specifications
SetA and SetB are two sets for which the chosen function is evaluated
for all pairs A-B. Both can be WATEr, in which case all TIP3 residues present
will be included. In this case the oxygen positions will be used as the
centers of the molecules. In both cases the center of mass (and set
dipole moment if needed) can be used if the keyword SITE is present.
SetA can be a fixed point in space: (XREF/YREF/ZREF) (if SITE is present
but no atom selection (0/0/0) will be used as default).
Finally, for each of the two sets a previously defined prototype set
(see PROTO) can be used. In this case
the center of geometry (or mass with keyword MASS) and dipole of each
individual set member will be used in the requested functions.
For both sets WATEr is the default.
Limiting Sets
If only a subset which is localized around a certain point should be
used in each frame this can be achieved by the AROUnd keyword. If it is
present setA will be re-selected in each frame. If the keyword LOCAl is
present setB will also be re-selected. RAROund <real> is the radius
around the selected center within which an atom must lie to be available
for evaluation in this frame. The center itself can either be a fixed
point in space (XREF/YREF/ZREF) or an atom selection of which the center
of mass will be used.
Other options
SAME - if this keyword is present, only setA is used for both
sets, thus calculating auto-functions (this algorithm
should be faster than the general one if setA and setB
use the same selection)
RMAX <real> - the maximum distance up to which a pair A-B is evaluated
(default: 7.5A)
NBIN <int> - the number of bins used to sample (each bin is RMAX/NBIN wide)
(default: 150)
VOLUme <real> - the volume of the total system, necessary for the
normalization. If not specified by the user and crystal
is in use, the resulting cell volume will be used.
Finally, if crystal is not used and no volume
is specified, and if both sets are localized (see AROUnd),
the volume of the limiting sphere will be used.
PRECise - if RDFs are calculated and one or both sets contain
WATEr, some pairs including water hydrogens will be
missed since only oxygen distances are evaluated.
If PRECise is present, these pairs are also included
which results in a slightly diminished efficiency of the
cubing algorithm
BRUTe - use a simple double-loop algorithm rather than the cubing algorithm
MINI - use 'real' minimum image conventions. Currently only one
function can be calculated if MINI is used. Its major use
is the computation of the distance dependent Kirkwood
G-factor (with DDIP, second column). Here, one needs to go
'into the corners' (i.e. sqrt(3)/2 * L for a cubic box)
without counting pairs twice.
(caution: needs lots of memory)
SPFAc - if images are present, the number of total
atoms/pairs/cubes may change from frame to frame.
So an estimate of the needed space must be made
before reading the trajectory; SPFAc times the actual
values is allocated
(default: 3)
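To illustrate the binning and normalization described above (RMAX, NBIN, and the volume-based normalization against an ideal-gas reference), here is a minimal stand-alone sketch of a radial distribution function for points in a periodic cubic box. This is a brute-force double loop for illustration only, not the CHARMM cubing algorithm, and the function and variable names are invented:

```python
import math
import random

def rdf(points, box, rmax, nbins):
    """Minimal g(r) for 3D points in a periodic cubic box of side `box`,
    using minimum-image distances and shell-volume normalization."""
    n = len(points)
    dr = rmax / nbins
    hist = [0] * nbins
    for i in range(n):
        for j in range(i + 1, n):
            d2 = 0.0
            for a, b in zip(points[i], points[j]):
                dx = a - b
                dx -= box * round(dx / box)  # minimum-image convention
                d2 += dx * dx
            d = math.sqrt(d2)
            if d < rmax:
                hist[int(d / dr)] += 2       # each pair counted from both ends
    rho = n / box ** 3                       # number density of the system
    g = []
    for k in range(nbins):
        r_lo, r_hi = k * dr, (k + 1) * dr
        shell = 4.0 / 3.0 * math.pi * (r_hi ** 3 - r_lo ** 3)
        g.append(hist[k] / (n * rho * shell))  # observed / ideal-gas counts
    return g

random.seed(0)
pts = [tuple(random.uniform(0.0, 10.0) for _ in range(3)) for _ in range(200)]
g_of_r = rdf(pts, box=10.0, rmax=3.0, nbins=30)
# For uncorrelated random points, g(r) should scatter around 1.
```

For a real liquid the histogram would be accumulated over trajectory frames, and for large systems a cubing (cell-list) algorithm replaces the double loop, which is exactly the efficiency trade-off the BRUTe and PRECise options above control.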
Trajectory specifications
These are the usual specs. The trajectory is read NUNIt units starting
with FIRSTu reading from frame BEGIn to STOP where SKIP frames are
skipped between reading.
Caveats and Comments
- When computing dipole-dipole correlations for a set which is not WATEr,
only its center of mass and dipole moment for primary atoms will be
evaluated. So if a part of a large molecule which is re-centered
by segment (e.g. a protein) and "sticks out" of the primary box, some
pairs may not be sampled.
- Normalization of RDFs differs slightly from that used in COOR ANAL
- no excluded volume correction
- point-point (e.g. two SITEs or DDIP for two non-WATEr sets...) is not yet implemented
(See also test/c30test/rdfsol.inp test/c30test/rdfsol2.inp testcases)
RDFSOL RDF 10 SETA WATER SAME RMAX 7.5 NBIN 150 PRECISE -
FIRSTUNIT 11 NUNIT 1
This will calculate g_OO, g_OH and g_HH for all waters in the simulated
system up to 7.5 A into 150 bins. One trajectory file will be read from
unit 11 and the result output to unit 10.
RDFSOL RDF 10 SETA WATER SAME RMAX 7.5 NBIN 150 PRECISE -
AROUND RAROUND 7.5 LOCAL SELECT ATOM PROT 1 NH END -
FIRSTUNIT 11 NUNIT 1
The same as above but only waters around the NH of residue 1 of segment
PROT will be considered.
RDFSOL RDF 10 QDIP 11 DDIP 12 SETA WATER SAME RMAX 7.5 NBIN 150 PRECISE -
FIRSTUNIT 13 NUNIT 1
Same sets as in the first example but here all three functions are
calculated at once (i.e. the trajectory is only read once).
On the exponential instability of N-body systems
We reconsider the old problem of the growth of numerical errors in N-body integrations. We analyze the effects of successive encounters and show that these tend to magnify errors on a time scale
which is comparable with the crossing time. This conclusion is based on an approximate treatment of encounters which can be analyzed in three ways: by construction of a master equation, by
approximate analytic methods, and by simulation. However, a deeper discussion of the manner in which errors propagate from one star to another implies that the true rate of growth varies as ln ln N / t_cr, where t_cr is the crossing time. Next we study the growth of errors in N-body simulations, in which all gravitational interactions are correctly included. We confirm our earlier results and recent work of Kandrup & Smith that
the rate of growth of errors per crossing time increases as N increases up to about 30, but that for larger systems the rate of growth is approximately independent of N. In this limit, errors grow
with an e-folding time which is nearly one-tenth of a crossing time. This demonstrates that the N-dependence observed in the pioneering investigation by Miller, who considered systems with N ≤ 32,
cannot be extended to larger N. We also investigate the rate of growth of errors in N-body systems with softened potentials. For example, when the softening radius is held at a fixed fraction of the
size of the system, the rate of growth of errors varies approximately as N^-1/3 when N is large enough. In the final section of the paper we summarize arguments to show that two-body interactions,
and not collective effects, are the underlying physical mechanism causing the growth of errors in spherical stellar systems in dynamic equilibrium. We explain the distinction between this instability
and two-body relaxation, and discuss its implications for N-body simulations. For example, it can be shown that the accurate simulation of a system up to the time of core collapse would require
computations with O(N) decimal places. After core collapse the rate of growth of errors is still faster, because of three-body interactions involving binaries.
problems with product of vector of symbols with square matrix
I am trying to do some experiments with symbols (variable vector) and multiplications with a coefficient matrix.
The code is the following:
A = matrix(QQ,[
k = A.transpose().kernel()
basis = k.basis()[0]
t = 'real'
x = vector([x1,x2,x3,x4])
print "x",x
xT = x.transpose()
print "xT",xT
print "A*x",A*x
print "xT*A",xT*A
with the following output:
x (x1, x2, x3, x4)
xT [x1]
A*x (2*x1 + x2 + 2*x3 - 6*x4, -x1 + 2*x2 + x3 + 7*x4, 3*x1 - x2 - 3*x3 - x4, x1 + 5*x2 + 6*x3)
Traceback (most recent call last):
File "", line 1, in <module>
File "/tmp/tmpuVBZ96/___code___.py", line 27, in <module>
exec compile(u'print "xT*A",xT*A
File "", line 1, in <module>
File "element.pyx", line 2751, in sage.structure.element.Matrix.__mul__ (sage/structure/element.c:19587)
File "coerce.pyx", line 856, in sage.structure.coerce.CoercionModel_cache_maps.bin_op (sage/structure /coerce.c:8169)
TypeError: unsupported operand parent(s) for '*': 'Full MatrixSpace of 4 by 1 dense matrices over Symbolic Ring' and 'Full MatrixSpace of 4 by 4 dense matrices over Rational Field'
As you can see, A*x was successful, but xT*A is giving an exception. Do you have any idea on why? How would you solve this?
1 Answer
First, you should notice that when you typed xT = x.transpose(), you got the following deprecation warning:
DeprecationWarning: The transpose() method for vectors has been deprecated, use column() instead
(or check to see if you have a vector when you really want a matrix)
See http://trac.sagemath.org/10541 for details.
exec(code_obj, self.user_global_ns, self.user_ns)
In particular, x.transpose() leads to a column matrix:
sage: x.transpose()
So it is OK if you multiply on the right, not on the left (which explains why xT*A did not work):
sage: A * (x.transpose())
[2*x1 + x2 + 2*x3 - 6*x4]
[ -x1 + 2*x2 + x3 + 7*x4]
[ 3*x1 - x2 - 3*x3 - x4]
[ x1 + 5*x2 + 6*x3]
If you want to multiply on the left, you should use x.row():
sage: x.row()
[x1 x2 x3 x4]
sage: (x.row()) * A
[ 2*x1 - x2 + 3*x3 + x4 x1 + 2*x2 - x3 + 5*x4 2*x1 + x2 - 3*x3 + 6*x4 -6*x1 + 7*x2 - x3]
That said, vectors are not matrices, they are vertical/horizontal agnostic and adapt themselves to the situation:
sage: A*x
(2*x1 + x2 + 2*x3 - 6*x4, -x1 + 2*x2 + x3 + 7*x4, 3*x1 - x2 - 3*x3 - x4, x1 + 5*x2 + 6*x3)
sage: x*A
(2*x1 - x2 + 3*x3 + x4, x1 + 2*x2 - x3 + 5*x4, 2*x1 + x2 - 3*x3 + 6*x4, -6*x1 + 7*x2 - x3)
which explains why A*x worked.
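The shape rule behind that TypeError can be reproduced outside Sage. Below is a plain-Python sketch (using the matrix A inferred from the symbolic products printed above; the helper name `matmul` is invented) showing that a 1x4 row matrix multiplies A on the left, a 4x1 column matrix multiplies it on the right, and a 4x1 on the left fails for the same reason Sage complains:

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows; raise on shape clash,
    mirroring the coercion error Sage reports."""
    if len(A[0]) != len(B):
        raise ValueError(
            "shape mismatch: %dx%d * %dx%d"
            % (len(A), len(A[0]), len(B), len(B[0])))
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# Matrix inferred from the products A*x and x*A printed in this thread.
A = [[ 2,  1,  2, -6],
     [-1,  2,  1,  7],
     [ 3, -1, -3, -1],
     [ 1,  5,  6,  0]]

row = [[1, 2, 3, 4]]        # 1x4, like x.row(): valid on the left of A
col = [[1], [2], [3], [4]]  # 4x1, like x.column(): valid on the right of A

left = matmul(row, A)       # works: 1x4 * 4x4
right = matmul(A, col)      # works: 4x4 * 4x1
try:
    matmul(col, A)          # 4x1 * 4x4 -- the failing xT*A case
except ValueError as err:
    mismatch = str(err)
```

Sage's orientation-agnostic vectors simply skip this check by adapting themselves to whichever side of the matrix they appear on.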
Barrows Calculator - CalculatorsPot
The Barrows Calculator is a sophisticated tool used predominantly in the field of irrigation design and other agricultural applications that involve water systems. This calculator helps to determine
the physical and hydraulic properties of water systems, which are critical for ensuring efficient water flow and system design. This article aims to demystify the workings of the Barrows Calculator,
providing a step-by-step explanation of its use, complete with an example to illustrate its practical application.
Introduction to the Barrows Calculator
In essence, the Barrows Calculator is used to calculate the head loss in irrigation pipes, which is a measure of the energy lost by water as it flows through a pipe system. Head loss is a crucial
calculation in hydraulic engineering, affecting everything from the design of the pipe system to the power requirements of the pump that moves the water.
Purpose and Functionality
The main purpose of the Barrows Calculator is to optimize the design of pipe systems for irrigation by calculating the head loss over a particular distance. By understanding the head loss, engineers
can make informed decisions about the diameter of the pipes, the materials used, and the necessary power of the pumps.
The functionality of the Barrows Calculator revolves around key hydraulic formulas and principles, including the Reynolds Number, the Friction Factor, and the Darcy-Weisbach Equation. Here’s how
these concepts come into play:
1. Reynolds Number (Re): This dimensionless number helps predict the type of flow (either turbulent or laminar) within the pipe, which influences the friction calculations.
2. Friction Factor (f): This factor is used to estimate the head loss due to friction within the pipe. It varies depending on the flow type and the roughness of the pipe’s interior.
3. Head Loss (h_f): The Darcy-Weisbach equation is used to calculate the head loss due to friction over the length of the pipe, which is essential for ensuring the pump delivers the right amount of
Key Inputs and Formulas
To use the Barrows Calculator effectively, you will need the following inputs:
• Diameter of the Pipe (D)
• Pipe Length (L)
• Flow Rate (Q)
• Pipe Roughness (e)
• Viscosity of the Fluid (ν)
• Gravity (g)
Step-by-Step Example
Let’s calculate the head loss for a 12-inch diameter pipe, 500 feet long, with a flow rate of 3 cubic feet per second (cfs), a pipe roughness of 0.00015 feet, and water viscosity of 1.1 × 10⁻⁵ ft²/s.
Relevant Information Table
Input Parameter Symbol Value
Diameter (inches) D 12
Length (feet) L 500
Flow Rate (cfs) Q 3
Roughness (feet) e 0.00015
Viscosity (ft²/s) ν 1.1 × 10⁻⁵
Gravity (ft/s²) g 32.2
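Working the tabulated inputs through, one hedged version of the computation is sketched below. It uses the Swamee-Jain explicit approximation for the turbulent friction factor; the article does not state which correlation the Barrows Calculator itself uses, so treat that choice as an assumption:

```python
import math

# Inputs from the table above (US customary units)
D = 12 / 12.0   # diameter: 12 in = 1 ft
L = 500.0       # pipe length, ft
Q = 3.0         # flow rate, ft^3/s
e = 0.00015     # absolute roughness, ft
nu = 1.1e-5     # kinematic viscosity, ft^2/s
g = 32.2        # gravitational acceleration, ft/s^2

A = math.pi * D ** 2 / 4.0   # cross-sectional area, ft^2
V = Q / A                    # mean velocity, ft/s
Re = V * D / nu              # Reynolds number (turbulent if >~ 4000)

# Swamee-Jain explicit approximation to the Darcy friction factor
f = 0.25 / math.log10(e / (3.7 * D) + 5.74 / Re ** 0.9) ** 2

# Darcy-Weisbach head loss over the pipe length
h_f = f * (L / D) * V ** 2 / (2.0 * g)

print("V  = %.2f ft/s" % V)
print("Re = %.0f" % Re)
print("f  = %.4f" % f)
print("hf = %.2f ft" % h_f)
```

Under these assumptions the flow is turbulent (Re on the order of 3.5 × 10⁵), the friction factor comes out near 0.016, and the head loss is roughly 1.8 ft over the 500-foot run; a different friction-factor correlation would shift the answer slightly.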
The Barrows Calculator is an indispensable tool in hydraulic engineering, particularly in designing efficient irrigation systems. By accurately calculating the head loss in pipes, it ensures that the
systems are not only efficient but also economically viable. Understanding how to use this calculator can significantly impact the effectiveness of water system designs, making it a critical skill
for engineers and designers in the agricultural sector.
The major requires at least 41 credit hours (55 with the Addenda requirements), including the requirements listed below.
1. Introductory Course. One (1) course:
□ SOAD-A 100 Pathways: Introduction to Art, Design and Merchandising
SOAD-A 100 Pathways: Introduction to Art, Design and Merchandising
Explores the fields of art, design, and merchandising within the contemporary landscape of creating and making. Identifies where these disciplines have mutually reinforcing values and
opportunities for interdisciplinary study. Provides a common experience for art, design and merchandising students.
☆ CASE AH course (Spring 2025, Fall 2024)
2. Forums of Exchange. Complete one (1) of the following options:
□ Both of the following.
i. Intermediate Forum. One (1) course:
○ SOAD-A 201 Forum of Exchange I
SOAD-A 201 Forum of Exchange I
Interdisciplinary experiences serve as the foundation for discussion and written/verbal responses. Students select from a range of activities including lectures, cross-critiques, site
visits, interviews, class visits, museum exhibitions, and performances to observe, question and compare art, design and merchandising disciplines.
May be repeated for a maximum of 4 credit hours.
S/F grading.
ii. Advanced Forum. One (1) course:
○ SOAD-A 401 Forum of Exchange II
SOAD-A 401 Forum of Exchange II
Interdisciplinary experiences in art, design and merchandising serve as the foundation for discussion and written/verbal responses. Students select from a range of activities
including lectures, cross-critiques, site visits, interviews, class visits and selected exhibitions.
May be repeated for a maximum of 4 credit hours.
S/F grading.
□ Project Management for Comprehensive Design. One (1) course:
☆ SOAD-C 281 Project Management for Comprehensive Design
SOAD-C 281 Project Management for Comprehensive Design
SOAD-C 280
Covers key components of the management and execution of a creative project. Focuses on scope management, project time management, and communications.
3. Comprehensive Design Sequence.
a. Creative Core: 3D Design. One (1) course:
☆ SOAD-A 103 Creative Core: 3D Design
SOAD-A 103 Creative Core: 3D Design
Volume, space, material, and physical force studies provide the basis for exploration of three-dimensional form; includes carving, construction, modeling, and casting using wood, plaster,
Styrofoam, clay, etc.
Credit given for only one of FINA-F 101 or SOAD-A 103.
○ CASE AH course (Spring 2025, Fall 2024)
b. Introduction to Comprehensive Design. One (1) course:
☆ SOAD-C 280 Introduction to Comprehensive Design
SOAD-C 280 Introduction to Comprehensive Design
SOAD-A 103, SOAD-A 104, or consent of instructor
Focuses on skill building across the spectrum of modalities: drawing, physical model building, and 3D modeling. Uses human-centered design strategies to explore how human perceptions and
activities affect design decisions.
Credit given for only one of AMID-C 280 or SOAD-C 280.
○ CASE AH course (Spring 2025, Fall 2024)
c. Issues in Comprehensive Design. One (1) course:
☆ SOAD-C 380 Topical Issues in Comprehensive Design
SOAD-C 380 Topical Issues in Comprehensive Design
SOAD-C 280; or consent of instructor
Focuses on the combination of empathy, creativity, and rationality needed to understand the context of a problem and develop solutions that fit the particular context.
May be repeated with different topics for a maximum of 8 credit hours in AMID-C 380 and SOAD-C 380.
d. Issues in Collaborative Design. One (1) course:
☆ SOAD-C 381 Topical Issues in Collaborative Design
SOAD-C 381 Topical Issues in Collaborative Design
1–4 credit hours
SOAD-C 380; or consent of instructor
Employs design thinking strategies to understand challenges facing a community partner. The class will then focus on a co-design participatory approach, with project partners to offer
solutions to the project. Classes may partner with regional organizations and companies.
May be repeated with different topics for a maximum of 8 credit hours in AMID-C 381 and SOAD-C 381.
e. Special Problems in Comprehensive Design. One (1) course:
☆ SOAD-C 480 Special Problems in Comprehensive Design
SOAD-C 480 Special Problems in Comprehensive Design
SOAD-C 381; or consent of instructor
Leverages a unique combination of design thinking and strategic planning to design, prototype and test solutions to real problems faced by communities, businesses, individuals, and
organizations. Provides opportunity to act as a consultant for a real client to design and propose a strategic plan, implement accepted plans, and evaluate the impact of the solution.
May be repeated with different topics for a maximum of 6 credit hours in AMID-C 480 and SOAD-C 480.
f. Intensive Seminar in Comprehensive Design. One (1) course:
☆ SOAD-C 481 Intensive Seminar in Comprehensive Design
SOAD-C 481 Intensive Seminar in Comprehensive Design
SOAD-C 380; or consent of instructor
Provides opportunities to engage in the research, testing, and prototyping required for larger scale projects. Completed projects may range from public art to installations to furniture
design. Includes discussions of materiality, fabrication, form, use, and structure as part of design and fabrication.
g. Design Practice. One (1) course:
☆ SOAD-W 200 Design Practice
SOAD-W 200 Design Practice
SOAD-A 103; or consent of instructor
Gain a working understanding of methods for thinking about problems and solutions through a human-centered approach. Explore the entire design process, including problem scoping,
defining, researching, modeling, prototyping, producing, and testing, resulting in viable human-centered, transparent, and imaginative solutions.
h. Design Research, Methods, and Process. One (1) course:
☆ SOAD-W 250 Design Research, Methods, and Process
SOAD-W 250 Design Research, Methods, and Process
SOAD-W 200; or consent of instructor
Introduction to fundamental approaches to design research, methods for exploring design situations, and the generation and application of design insights processes. Create a series of
visual/textual artifacts that will help analyze and generate design for key moments in human-information interaction.
i. Design in Context and Culture. One (1) course:
☆ SOAD-W 300 Design in Context and Culture
SOAD-W 300 Design in Context and Culture
SOAD-W 250; or consent of instructor
Explore the relationships between design, context, and culture. Based on user research, create a series of cultural and context-based artifacts that will help analyze and design
strategies and specific experiences within a design.
j. Design of Systems. One (1) course:
☆ SOAD-W 350 Design of Systems
SOAD-W 350 Design of Systems
SOAD-W 300; or consent of instructor
Explores "systems thinking" for designers. People are the center of the systems we create when we develop new designs. The digital and physical spaces that we design and inhabit are
rapidly accelerating in complexity. This course introduces systems-level approaches to design.
4. Capstone.
a. Design Capstone: Research & Development. One (1) course:
☆ SOAD-W 400 Design Capstone
SOAD-W 400 Design Capstone
SOAD-W 350, SOAD-C 480, SOAD-C 481, or consent of instructor
Capstone in the design studio sequence. Students will demonstrate a complete understanding of project production, development of an integrated design solution, and creation of an
expressive and innovative solution that mediates existing conditions.
b. Design Capstone: Studio Project. One (1) course:
☆ SOAD-W 450 Design Capstone II: Studio Project
SOAD-W 450 Design Capstone II: Studio Project
1–6 credit hours
SOAD-W 400; or consent of instructor
Students, working individually or in teams, will complete complex projects that resolve issues of design on multiple scale levels and across systems. By the end of the course, students
will demonstrate an understanding of project production, the development of an integrated design solution, and the creation of an expressive and innovative solution that mediates existing
5. Addenda Requirements*.
a. Mathematics. One (1) course:
☆ MATH-D 117 Introduction to Finite Mathematics II
☆ MATH-J 113 Introduction to Calculus with Applications
☆ MATH-M 106
☆ MATH-M 118 Finite Mathematics
☆ MATH-M 119 Brief Survey of Calculus I
☆ MATH-M 211 Calculus I
☆ MATH-M 213
☆ MATH-S 118 Honors Finite Mathematics
☆ MATH-S 211
☆ MATH-V 118 Finite Mathematics with Applications
☆ MATH-V 119 Applied Brief Calculus I
MATH-D 117 Introduction to Finite Mathematics II
MATH-D 116 or consent of the department
MATH-D 116 and MATH-D 117 is a two-course sequence
Topics for the course are taken from MATH-M 118. Credit for the College of Arts and Sciences Foundations requirement in Mathematical Modeling or the College's N&M Breadth of Inquiry
requirement will be given only upon completion of both MATH-D 116 and MATH-D 117 with a passing grade.
Credit given for only one of MATH-A 118, MATH-M 118, MATH-S 118, MATH-V 118; or MATH-D 116 and MATH-D 117.
○ CASE MM course (Spring 2025, Fall 2024)
○ CASE NM course (Spring 2025, Fall 2024)
MATH-J 113 Introduction to Calculus with Applications
MATH-J 112 with a grade of C- or higher; or consent of department
For Groups students only. MATH-J 113 can count toward the College of Arts and Sciences Foundations requirement in mathematical modeling and the College of Arts and Sciences natural and
mathematical sciences Breadth of Inquiry requirement for Groups students
A survey of calculus.
Credit given for only one of MATH-J 113, MATH-M 119, MATH-V 119, MATH-M 211, or MATH-S 211.
○ CASE MM course (Spring 2025, Fall 2024)
○ CASE NM course (Spring 2025, Fall 2024)
MATH-M 118 Finite Mathematics
R: To be successful, students will demonstrate mastery of two years of high school algebra as indicated by an appropriate ALEKS score or completion of MATH-M 014, MATH-M 018, or MATH-J
Sets, counting, basic probability, including random variables and expected values. Linear systems, matrices, linear programming, and applications.
Credit given for only one of MATH-A 118, MATH-M 118, MATH-S 118, MATH-V 118; or MATH-D 116 and MATH-D 117.
○ CASE MM course (Spring 2025, Fall 2024)
○ CASE NM course (Spring 2025, Fall 2024)
MATH-M 119 Brief Survey of Calculus I
R: To be successful, students will demonstrate mastery of two years of high school algebra, one year of high school geometry, and pre-calculus as indicated by an appropriate ALEKS score
or completion of MATH-M 025 or MATH-M 027
Introduction to calculus. Primarily for students from business and the social sciences.
Credit given for only one of MATH-J 113, MATH-M 119, MATH-V 119, MATH-M 211, or MATH-S 211.
○ CASE MM course (Spring 2025, Fall 2024)
○ CASE NM course (Spring 2025, Fall 2024)
MATH-M 211 Calculus I
R: To be successful, students will demonstrate mastery of two years of high school algebra, one year of high school geometry, and pre-calculus, and trigonometry as indicated by an
appropriate ALEKS score or completion of MATH-M 027
Limits, continuity, derivatives, definite and indefinite integrals, applications.
A student may receive credit for only one of the following: MATH-J 113, MATH-M 119, MATH-V 119, MATH-M 211, or MATH-S 211.
○ CASE MM course (Spring 2025, Fall 2024)
○ CASE NM course (Spring 2025, Fall 2024)
MATH-S 118 Honors Finite Mathematics
Hutton Honors College membership
R: To be successful students will demonstrate mastery of two years of high school algebra as indicated by an appropriate ALEKS score or completion of MATH-M 014, MATH-M 018, or MATH-J 111
Designed for students of outstanding ability in mathematics. Covers all material of MATH-M 118 and additional topics from statistics and game theory. Computers may be used in this course,
but no previous experience is assumed.
○ CASE MM course (Spring 2025, Fall 2024)
○ CASE NM course (Spring 2025, Fall 2024)
MATH-V 118 Finite Mathematics with Applications
R: To be successful, students will demonstrate mastery of two years of high school algebra as indicated by an appropriate ALEKS score or completion of MATH-M 014, MATH-M 018, or MATH-J
Sets, counting, basic probability, linear modelling, and other discrete topics. Applications to various areas depending on topic. Possibilities include social and biological sciences and
consumer mathematics.
Credit given for only one of MATH-A 118, MATH-M 118, MATH-S 118, MATH-V 118; or MATH-D 116 and MATH-D 117.
○ CASE NM course (Spring 2025, Fall 2024)
MATH-V 119 Applied Brief Calculus I
R: To be successful, students will demonstrate mastery of two years of high school algebra, one year of high school geometry, and pre-calculus as indicated by an appropriate ALEKS score
or completion of MATH-M 025 or MATH-M 027
Introduction to calculus. Variable topic course with emphasis on non-business topics and applications. The topic(s) will be listed in the Schedule of Classes each semester.
A student may receive credit for only one of the following: MATH-J 113, MATH-M 119, MATH-M 211, MATH-S 211, or MATH-V 119.
○ CASE NM course (Spring 2025, Fall 2024)
b. Studio Art Courses. Six (6) credit hours:
☆ SOAD-A 101 Creative Core: Color
☆ SOAD-A 211 Cross-Disciplinary Workshops in Art, Design, and Merchandising
SOAD-A 101 Creative Core: Color
Explores color's fundamental principles and formal elements; its contextual meanings and sociological connotations; and its significance within the fields of art, design and
merchandising. Cultivates visual sensitivity, develops aesthetic knowledge and the production of creative work through studio practices.
Credit given for only one of SOAD-A 101 or FINA-F 102.
○ CASE AH course (Spring 2025, Fall 2024)
SOAD-A 211 Cross-Disciplinary Workshops in Art, Design, and Merchandising
A variable topic studio course that focuses on skill development and technical manipulation of materials within the specific traditions of a particular discipline. Emphasizes fundamental
principles of art and design within a discipline. Designed around a variable topic such as image, time, narrative, space, materials, and markets.
May be repeated with a different topic for a maximum total of 12 credit hours in SOAD-A 111 and SOAD-A 211.
○ CASE AH course (Spring 2025, Fall 2024)
c. Art History Courses. Six (6) credit hours:
6. Major GPA, Hours, and Minimum Grade Requirements.
a. Major GPA. A GPA of at least 2.000 for all courses taken in the major—including those where a grade lower than C- is earned—is required.
b. Major Minimum Grade. Except for the GPA requirement, a grade of C- or higher is required for a course to count toward a requirement in the major.
c. Major Upper Division Credit Hours. At least 18 credit hours in the major must be completed at the 300–499 level.
d. Major Residency. At least 18 credit hours in the major must be completed in courses taken through the Indiana University Bloomington campus or an IU-administered or IU co-sponsored Overseas
Study program.
• * Courses used to fulfill addenda requirements require a grade of C- or higher and do not count toward the Major GPA or Major Hours.
Major Area Courses
• Unless otherwise noted below, the following courses are considered in the academic program and will count toward academic program requirements as appropriate:
□ Any course at the 100–499 level with the SOAD-C or SOAD-W subject area prefix—as well as any other subject areas that are deemed functionally equivalent
□ Any course contained on the course lists for the academic program requirements at the time the course is taken—as well as any other courses that are deemed functionally equivalent—except for
those listed only under Addenda Requirements
□ Any course directed to a non-Addenda requirement through an approved exception
Exceptions to and substitutions for major requirements may be made with the approval of the unit's Director of Undergraduate Studies, subject to final approval by the College of Arts and Sciences.
The Bachelor of Science degree requires at least 120 credit hours, to include the following:
1. College of Arts and Sciences Credit Hours. At least 100 credit hours must come from College of Arts and Sciences disciplines.
2. Upper Division Courses. At least 36 credit hours (of the 120) must be at the 300–499 level.
3. College Residency. Following completion of the 60th credit hour toward degree, at least 36 credit hours of College of Arts and Sciences coursework must be completed through the Indiana University
Bloomington campus or an IU-administered or IU co-sponsored Overseas Study program.
4. College GPA. A College grade point average (GPA) of at least 2.000 is required.
5. CASE Requirements. The following College of Arts and Sciences Education (CASE) requirements must be completed:
a. CASE Foundations
i. English Composition: 1 course
ii. Mathematical Modeling: 1 course
b. CASE Breadth of Inquiry
i. Arts and Humanities: 4 courses
ii. Natural and Mathematical Sciences: 3 courses
iii. Social and Historical Studies: 4 courses
c. CASE Culture Studies
i. Diversity in the United States: 1 course
ii. Global Civilizations and Cultures: 1 course
d. CASE Critical Approaches: 1 course
e. CASE Foreign Language: Proficiency in a single foreign language through the first semester of the second year of college-level coursework
f. CASE Intensive Writing: 1 course
g. CASE Public Oral Communication: 1 course
6. Major. Completion of the major as outlined in the Major Requirements section above.
Most students must also successfully complete the Indiana University Bloomington General Education program.
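The credit-hour and GPA minimums listed above read as a simple audit checklist. Below is a rough, hypothetical sketch of such a check; only the numeric thresholds come from the bulletin text, while the record fields and the checker itself are illustrative and not part of any IU system:

```python
# Hypothetical degree-audit sketch. The thresholds mirror the B.S.
# requirements listed above; everything else is illustrative only.
# Note: hours and GPA are mixed in one record purely for brevity.

REQUIREMENTS = {
    "total_hours": 120,           # total credit hours
    "college_hours": 100,         # College of Arts and Sciences hours
    "upper_division_hours": 36,   # 300-499 level hours
    "college_gpa": 2.0,           # minimum College GPA
}

def unmet(record):
    """Return the names of any numeric requirements the record falls short of."""
    return [name for name, minimum in REQUIREMENTS.items()
            if record.get(name, 0) < minimum]

student = {"total_hours": 118, "college_hours": 104,
           "upper_division_hours": 36, "college_gpa": 2.45}
print(unmet(student))  # ['total_hours']
```

A real audit would, of course, also track the CASE category counts and residency rules, which are course-list based rather than purely numeric.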
Carlo Ruiz in Visalia, CA // Tutors.com
Hello, I'm Carlo! I am an online math tutor based in Visalia with over two years of professional experience tutoring students and friends. I tutor middle school math, high school math, algebra,
trigonometry, and Calculus. I also have earned my CRLA tutor certification from College of the Sequoias.
My tutoring style is calm, friendly, and to the point, with plenty of examples. While tutoring, I make sure to gauge the student's understanding, ensuring the student doesn't have gaps in their knowledge.
If tutoring in person, I offer to let the student keep the math paper with our work from the session, and if online, I offer to send a PDF of my work.
A bit about me: I enjoy reading a lot, cooking, working on my robotics learning, and exercising. I aspire to become a roboticist one day!
Math doesn’t have to be so hard and I believe that anyone can do it. If I could learn it then so can you! I look forward to helping!
Grade level
Elementary school, Middle school, High school, College / graduate school, Adult learner
Type of math
General arithmetic, Pre-algebra, Algebra, Trigonometry, Pre-calculus, Calculus
No reviews (yet)
Services offered
Getting Started Guide
We Set the Standards!
Getting Started
This guide will provide you with an individualized approach to
using Exemplars Math. The Guide to Getting Started is an effective staff
development tool that can be used alone or in a workshop setting.
This step-by-step tutorial for using the Exemplars Math program
helps your students use performance assessment tasks that reflect state,
national and Common Core standards in mathematics. You will gain an
understanding of how to use the Exemplars activities with your students.
We did not bind the guide, so you could easily make copies to share with
your colleagues. You can also download additional copies from the Exemplars
Web site; please visit http://www.exemplars.com/assets/files/the guide.pdf.
For more information: Please call 800-450-4050; send an e-mail to
info@exemplars.com; and be sure to visit our Web site at www.exemplars.com,
where we continue to add new material. We also encourage you
to participate in our blog at www.exemplars.com/blog. Engage in the
conversation and let us know what issues are important to you and what
your needs are. We look forward to hearing from you.
Exemplars tasks can serve as instructional tools as well as assessment tools.
Getting Started cont.
In order to help administrators and teachers meet their growing
achievement needs, Exemplars has aligned all of its material (Math Pre K–12,
Science K–8, Reading, Writing and Research 5–8 and Developing Writers
K–4) with state, national and Common Core standards, as well as grade-level
expectations. We continue to align our material with a variety of curriculum
and resources, to make the implementation of Exemplars easier in your
classroom. To access our current alignments please visit: http://www.exemplars.com/resources/alignments
Math alignments include:
Common Core Standards
State Standards
NCTM Standards
National STEM Standards
Everyday Mathematics
Impact Math
The successful experience of Exemplars users and workshop participants
is reflected in this guide. For more professional development ideas please
visit www.exemplars.com. Exemplars customizes professional development
workshops to meet the needs of individual schools and districts. Please contact
us to arrange professional development for your teachers or for product
information regarding Pre K–K Math; K–8 Math; Secondary Math; K–8
Algebra; K–8 Science; 5–8 Reading, Writing and Research; K–4 Developing
Writers; and K–5 Spanish Exemplars, as well as our tools for the differentiated classroom.
Getting Started Math Guide
Table of Contents
Introduction ......................................................................5
Quick Start ........................................................................6
School/Teacher Tips .........................................................7
Using Exemplars in the Classroom ...............................9
Problem Solving .............................................................13
Assessing Student Work ...............................................14
Examining Student Work .............................................16
Formative Assessment...................................................17
Using Technology ..........................................................19
Getting Started: A Teacher’s View.................................20
Using Journals in Mathematics ...................................22
Introducing Rubrics to Students...................................26
Exemplars Rubrics .........................................................31
Sample Task, A Puzzle ..................................................39
Exemplars Rubric Scoring Notes ................................49
Task Graphic Organizer ...............................................50
Preliminary Planning Sheet .........................................51
KWL Chart .......................................................................52
For more information please contact us via e-mail
at info@exemplars.com, or call us at 1-800-450-4050.
© 2016 Exemplars®
We Set the Standards!
Exemplars can be used...
• To help instruct your students in problem solving and applying concepts in meaningful, real-life situations
• To explore how your class is performing in problem solving and give you an overall feel of your students’ skills
• To diagnose the ability of particular students to apply and use math concepts and solve problems
• To help your students learn to self-assess their mathematical skills using the rubrics and anchor papers
• To help your students communicate mathematically by sharing their thinking process verbally and in written form

The key is to listen and learn about the thinking students use when approaching problems.

Exploring how your classroom is performing; diagnosing students’ abilities to apply math concepts; helping your students self-assess; and using that information to alter instruction and learning, all adds up to effective formative assessment.
Quick Start – Read This Page!
Many teachers find it useful to select a few Exemplars tasks to teach their
students strategies for problem solving. They can also use tasks for
evaluating students’ problem-solving skills.
An Exemplars task includes four parts:
1. The first page highlights the problem to be given to the students. It is large enough to make a transparency, or simply photocopy it to distribute to your students.
2. There are rich teacher notes designed to help you prepare for the problem. Differentiated material is included (for grades Pre K–8), offering a more challenging and a more accessible version and solution for each task.
3. Each lesson has a task-specific rubric at four levels describing what a student would do at each level of performance: Novice, Apprentice, Practitioner and Expert.
4. Anchor papers (examples of student work at each level of performance) are included with each task. These are annotated to focus on particular aspects of student performance. These may also be used for student self- and peer-assessment.
School Tips
• To get students accustomed to the rubric and the expected performance
criteria, give them an Exemplars task to practice, then share the anchor
papers on overhead slides. Discuss and critique them, and score them
as a class.
• If a task is not going well or it seems too hard, leave it. Do not use it
as an assessment piece, but use it as an instructional piece, and solve
it together. It is important for students to see a model of what the
teacher is looking for in a response, and you will already have gained
important assessment information.
• If you are concerned students are going to be uneasy about getting
started, conduct a group brainstorm ... and/or a 15-minute conference.
Students can share ideas, strategies, and ask questions of each other
before getting started. The brainstorming might include: identifying
different approaches students might use, information that needs to be
known, and types of mathematical representations they would use to
communicate results.
• One task may not work for all students. With our differentiated
material we have included a more challenging and a more accessible
version of each task, and you can use the CD-ROM or the subscription’s
digital library CD and modify/personalize the tasks further by using
the “cut” and “paste” editing tools. Please see page 19 for more “Using
Technology” tips.
For a closer look at an Exemplars classroom please refer to the piece written
by Stacey Dement, a first grade teacher in Converse, Texas, describing how
she got started with Exemplars. (p. 20)
School Tips cont.
• Some schools assign one person to be “in charge” of distributing
Exemplars tasks. This person might provide a monthly preview of
available tasks at faculty meetings or copy tasks s/he thinks might
complement the work currently being done in classrooms.
• Other schools separate the hard copy Exemplars tasks into three binders;
Grades K–2, Grades 3–5 and Grades 6–8 to make sharing resources easier.
• Some schools copy and file Exemplars tasks by the content area
• Make Exemplars as user-friendly as possible for the maximum number
of teachers and staff in your site-licensed school by sharing or copying
the CD on your computer server.
• Use Exemplars in parent conferences. Let students lead parents through
the problems. Have parents work with the problems to see what kind
of mathematics their children are doing. Ask parents to show what
arithmetic skills are required.
• Introduce the Exemplars student rubrics to your students. Students can
keep copies of the rubric, and teachers can put up a large rubric chart
on the wall so that students can refer to it as they wish.
• Teachers can start an Exemplars journal to help them document their
personal experiences as they learn to use Exemplars.
• Students can record their “math thinking” in journals while solving
Exemplars tasks. On page 22 we have included the article, Using
Journals In Mathematics, written by Lori Jane Dowell Hantelmann, an
Elementary Mathematics Specialist for the Regina Public Schools in
Saskatchewan, Canada.
Using Exemplars in the Classroom
When planning units we recommend using the backwards design process (see
example on page 11) as a means to assist the teacher with ensuring that units
of study are aligned with local, state, national or Common Core mathematical
standards. The process is as follows:
1.Select Standards: These are the standards that you will assess during the
course of the unit. It is important to choose a balance of content and skill
standards for the unit. It is also important to limit the number of standards
you select to three to five total for a typical four-week unit of study. Select
standards that embrace important ideas and skills for the students at your
grade level and for the topic you are teaching.
In our subscriptions, each of the Exemplars tasks has been plotted on a matrix
that shows its relationship to the National Council of Teachers of Mathematics
Standards. Each task has also been keyed to arithmetic skills. If you have one
of our Best of Math CD-ROMs, you can search for tasks containing the specific
standard you wish to teach. You can select tasks that meet the standards and
cover specific skills. This will help you keep problem solving and skills in balance.
How to Access Exemplars Alignments: As noted above, Exemplars has aligned all of its material with state, national and Common Core standards, as well as grade-level expectations. To access our current state and product alignments, please visit: http://www.exemplars.com/resources/alignments

2. Build Essential Questions: Essential questions address the Big Ideas, Concepts, Skills and Themes of the unit. These questions shape the unit; focus and intrigue students on the issues or ideas at hand; and are open-ended with no obvious right answer. These questions should be important
and relevant to the students and allow for several standards if not all of
the standards selected to be addressed. These questions should engage a
wide range of knowledge, skills and resources and pose opportunities for
culminating tasks or projects where students can demonstrate how they
have grappled with the problem.
Exemplars tasks are powerful, culminating assessment activities.
3. Select/Design Culminating Tasks: This final task or project should encompass
and help assess each of the standards selected and should enable students
to answer or demonstrate understanding of the answer to the essential
question. The task should be multi-faceted, allow for multiple points of entry
and be performance-based. It should allow students to apply their skills and
knowledge learned in meaningful and in-depth ways. Exemplars tasks that
match the standards selected can be powerful culminating tasks. Consider
what criteria you will use to assess student learning before, during, and after the unit.
4. Develop Learning and Teaching Activities: These activities and tasks should
address the standards selected and guide student learning towards what
they need to know and be able to do in order to achieve the standards. Select
relevant Exemplars tasks that assist with teaching appropriate content, skills
and/or problem-solving strategies. There are three major types of learning
and teaching activities:
• Introductory Activities are used to pre-assess students’ prior knowledge
and to generate student interest in the unit of study. These activities tend
to be interactive, exploratory and stimulating.
Exemplars tasks are great for instruction AND assessment.
• Instructional Activities are used to provide opportunities for students
to learn and demonstrate specific skills, knowledge and habits of mind.
These are usually sequenced and scaffolded, tied to specific standards
and evidences, interesting, engaging, in-depth, active and interactive.
These activities can also be used for formative assessment during the
course of the unit to measure student progress.
• Assessment Activities and the Culminating Activity are used to assess
both students’ progress towards attainment of the standards and for
summative purposes at the end of the unit. These activities usually
involve some type of product or performance by the student.
* All activities selected, both Exemplars tasks and other activities, should be based
upon their utility in helping students learn and demonstrate the knowledge
and skills identified in the standards selected. Activities should accommodate
a range of learning styles and multiple intelligences and be developmentally
appropriate. Activities should also have a purposeful and logical progression for
both knowledge and skill attainment.
5. Assess Student Products and Performances. Use the Exemplars rubric to
assess relevant knowledge, skills and problem-solving strategies as students
work on and complete Exemplars problem-solving tasks. Collect and use
examples of student work that demonstrate the criteria selected and the
different levels of performance. Allow opportunities for students to self-assess using the Exemplars rubric or one of our student rubrics.
An Example of the Backwards Design Process
NCTM Content Standards and Evidences (Grades 3–5):
Understand meanings of operations and how they relate to each other.
• Understand various meanings of addition, subtraction, multiplication and division.
• Develop fluency in adding, subtracting, multiplying and dividing whole numbers.
Understand patterns, relations and functions.
• Describe, extend and make generalizations about geometric and
numeric patterns.
Essential Question: How does understanding patterns help us to solve
mathematical problems?
Culminating Task (Grades 3–5): A Puzzle (p. 39)
For the culminating activity the teacher may choose to use the regular task, the
more accessible version or the more challenging version. To use this task as a
culminating activity, students will also be asked to do three things: draw and/or
represent their solution in some form, discuss how they used their knowledge of
fractions to solve the problem, and communicate their learning to the class via a
class presentation or demonstration.
Learning and Teaching Activities
Introductory Activities: These might include a KWL (Know-Want to Know-Learned) chart (p. 52), exploration of pattern blocks to build a variety of
patterns, and recognizing numeric patterns as well as other patterns in the world
around us.
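As a concrete illustration of the kind of numeric-pattern work described above (not part of the Exemplars materials themselves), a generalization exercise might ask students to describe, extend, and then formalize a growing pattern. A quick sketch, using the triangular numbers 1, 3, 6, 10, ... as an assumed stand-in for a task like the Tower Problem:

```python
# Illustrative only: a growing "tower" pattern (1, 3, 6, 10, ...),
# chosen here as a stand-in for the kinds of pattern tasks named above.

def extend_pattern(n_terms):
    """Build the pattern term by term: each stage adds one more block
    than the stage before (1, 1+2, 1+2+3, ...)."""
    terms, total = [], 0
    for stage in range(1, n_terms + 1):
        total += stage          # add `stage` new blocks at this stage
        terms.append(total)
    return terms

def generalize(stage):
    """The closed form a student might conjecture: n(n + 1) / 2."""
    return stage * (stage + 1) // 2

print(extend_pattern(5))   # [1, 3, 6, 10, 15]
```

The point of the exercise mirrors the NCTM standard quoted earlier: first describe and extend the pattern term by term, then make (and check) a generalization that predicts any term directly.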
Instructional Activities: These might include more focused activities around
building different patterns using pattern blocks, numbers and other materials.
Activities could also include having students represent the patterns they create
in mathematical form. Some Exemplars tasks to use for instruction include:
Bug Watching (K–2)
Frogs and Flies (K–2)
Pattern Block Walls (3–5)
Checkerboard Investigation (3–5)
Tower Problem (3–5)
Batches of Beavers (3–5)
Day Lily Dilemma (6–8)
Bridges (6–8)
Building Block Dilemma (6–8)
Assessment Activities: During the course of the unit select two or three
problem-solving activities to use for assessment purposes. These will help
inform instruction by providing information about how students are progressing
towards the standards and about their problem-solving abilities. Students can
self-assess, and then revise and edit their solutions to the problem.
Products and Performances: Student products and performances will include
all work done when problem solving; as well as other work done around pattern
identification and extension.
Tools for Planning and Implementation
Task Organizers and Preliminary Planning Sheets
Task organizers and/or preliminary planning sheets can be invaluable resources
for teachers. They are used to anticipate what a student might do on a particular
task, and enable the educator to foresee the teaching that should be done before
the task is used for assessment.
These planning tools can be easily integrated into the backward design process.
They ask the educator to think about the underlying mathematical concepts, to anticipate what strategies and reasoning the student might use to solve the task, and to consider possible solutions. They also encourage the educator to consider
the mathematical language students might use in their solution as well as the
mathematical connections they might make. The latter two are particularly
important because these areas are often overlooked in problem solving – yet,
they are a central part of the NCTM standards.
Please see samples of each on pages 50 and 51.
Problem Solving
Listed below is a set of problem-solving steps that we recommend teachers
use with their students. These steps can be posted and taught to students
when using Exemplars tasks for instructional purposes. When beginning
problem solving with students it is important to model this process and
go through it a number of times as a whole group until each student feels
comfortable using them independently. Teachers have also printed these steps
for students to keep in their problem-solving folders and refer to each time
they start a new problem. It is always a good idea to teach students how to
use different problem-solving strategies for solving problems (step #4 below)
such as drawing a picture, working backwards or solving a simpler problem.
It is also important to keep in mind that students have many of their own
creative strategies for problem solving. Encourage them to use these strategies.
Problem-Solving Steps
1. Read the problem.
2. Highlight the important information.
3. What do you know and need to find out?
4. Plan how to solve the problem: what skills are needed, what strategies
can you use, and what ideas will help you?
5. Solve the problem.
6. Draw and write about your solution and how you solved the problem.
7. Check your answer.
8. Share a connection or observation about this problem.
9. Be sure to provide your students with a copy of the rubric or use one of Exemplars student rubrics. We publish student rubrics that are written in language more easily understood by younger students. Take a look at the examples on pages 27–38 of this guide. Visit our Web site www.exemplars.com for more examples.
10. What instructional materials/technology manipulatives will you need?
Assessing Student Work
The Rubrics
An exciting component of the Exemplars program is our rubrics (pgs. 31–33).
These scoring rubrics allow you to examine a student’s work against a set of
desired learner responses (analytic assessment criteria), and determine where
the student is on a developmental continuum. Students may use the rubric to self-assess.
Before You Begin – Familiarize Your Students
In order to be successful, many students need to first understand the rubric that
is being used to assess their performance before it is implemented. To facilitate
this, we recommend that teachers introduce the idea of a rubric to their students
by collectively developing one (or several) that does not address a specific
content area, but rather other areas of performance, quality or self-evaluation.
For suggestions on getting started with rubrics and/or sample “non-content
area” rubrics, refer to the piece on page 26 called, Introducing Rubrics to Students,
written by Deb Armitage, Exemplars consultant.
Exemplars Classic 3-Level Rubric Criteria
The Exemplars Classic Rubric (p. 31) was used for the first eight volumes of the
subscription as well as for the Best of Math Exemplars CD-ROMs. It uses three
analytic assessment criteria, which are located along the top of the rubric.
• The first, Understanding, asks the question, “How well did the student
understand the problem?”
• The second, Strategies, Reasoning and Procedures, draws your
attention to the strategies the student uses in solving the problem. It
asks whether or not there is evidence of mathematical reasoning, and
points to the appropriate application of mathematics procedures the
child uses in solving the problem.
• Finally, Communication focuses on both the student’s own explanation
of his/her solution, and the use of mathematical representation,
terminology and notation.
• Levels of Performance: Along the left column of the rubric, you will
see the four levels of performance that are used to describe what type
of learner strategies and understandings each student exhibits as s/he
completes each task. Read through the descriptions of the Novice,
Apprentice, Practitioner and Expert categories to understand the
distinctions between each.
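The grid structure of an analytic rubric like this one (criteria along one axis, performance levels along the other) can be sketched as a small lookup table. The criterion and level names below come from the guide; the scoring helper itself, including the "lowest level" roll-up rule, is a hypothetical illustration and not part of the Exemplars program:

```python
# Hypothetical sketch of a Classic-rubric score record. Criterion and
# level names are the guide's; the roll-up rule (report the lowest level
# reached across criteria) is an assumption for illustration only.

CRITERIA = ["Understanding",
            "Strategies, Reasoning and Procedures",
            "Communication"]
LEVELS = ["Novice", "Apprentice", "Practitioner", "Expert"]  # low -> high

def score_piece(per_criterion):
    """Validate a per-criterion score and report the lowest level reached."""
    for criterion, level in per_criterion.items():
        if criterion not in CRITERIA or level not in LEVELS:
            raise ValueError(f"unknown criterion/level: {criterion}/{level}")
    # min by position in LEVELS, i.e. the weakest dimension of the work
    return min(per_criterion.values(), key=LEVELS.index)

sample = {
    "Understanding": "Practitioner",
    "Strategies, Reasoning and Procedures": "Apprentice",
    "Communication": "Practitioner",
}
print(score_piece(sample))  # Apprentice
```

In practice the guide emphasizes the analytic profile (the per-criterion levels) over any single roll-up; the point of the sketch is just the criteria-by-levels grid.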
Exemplars Classic 5-Level Rubric Criteria
Exemplars current Standards-Based Rubric (pgs. 32–35) has been modified
to reflect revisions in the NCTM Process Standards. These process standards
highlight ways students should acquire and use the five content standards
(Number and Operations, Algebra, Geometry, Measurement, and Data
Analysis and Probability), and are more specifically defined than in the past.
The levels of performance — Novice, Apprentice, Practitioner and Expert
— have not changed. The criteria for assessing student performance parallel
the NCTM process standards (Problem Solving, Reasoning and Proof,
Communication, Connections, and Representation). Even though there are
more criteria, five compared to three in the original rubric, the language of the
rubric is more precise and will make assessment easier. There is also a glossary
of terms used in the rubric to help teachers as they assess.
The revisions to this rubric make it easier for teachers to pinpoint students’
specific strengths and weaknesses in the problem-solving processes, providing
more diagnostic information to teachers in assisting students to meet high
standards. Of course if you prefer, you may still continue to use the Exemplars
Classic Rubric.
Anchor Papers
In assessing student work, it helps to select anchor papers. These are examples
of previously scored student work at each level of performance. This will
make your assessment more efficient. The anchor papers follow each
Exemplars task.
Remember that anchor papers are guides to be used with judgment. At times,
you may need to step back to see where a particular student’s solution fits.
Please note: Do not panic if each of the four levels of performance (Novice, Apprentice, Practitioner, Expert) is not represented in a particular problem. It is possible to not have any Novices, or in some cases, there may not be any Experts. Your students may have a great deal of experience concentrated in one performance level.
Examining Student Work in the Classroom
After the students have worked with the task, you can use the Exemplars
scoring rubric to look at student work. But first you should look at the work
and make your own observations about what you see.
1. Sort the work into groups by level of performance: Novice, Apprentice,
Practitioner or Expert.
2. Describe what the work in each group has in common:
3. Return to the Exemplars scoring rubric and see if your descriptions fit with
the Exemplars descriptions.
4. Examine any discrepancies in scoring and try to understand what causes
them. Write some of your thoughts: (Please refer to our rubric scoring notes
located in the back of this guide).
Formative Assessment
Assessment That Improves Performance
Exploring how your classroom is performing, diagnosing students’ abilities
to apply math concepts, helping your students self-assess, and using that
information to alter instruction and learning, all adds up to effective formative
assessment. Formative assessment’s sole purpose is to improve student
performance – in comparison to summative assessment, which focuses on
accountability and is used to label, sort or classify students’ performance.
The conditions for successful formative assessment include:
1. The student and teacher share a common understanding of what constitutes quality work. That is, they have the same standards for quality.
2. Both student and teacher can compare the student’s performance to the standards that have been mutually set.
• The student self-assesses as s/he is working on the task and
upon completion of the task.
• The teacher may assess the student’s work while it is in
progress or when completed.
3. Teaching and learning activities are adjusted, so the gap between the
standard and the performance is closed.
• After the teacher assesses the student’s performance, they will
provide feedback (guidance), enabling the student to improve
her/his performance.
• The teacher also assesses the instruction that preceded the
performance. The teacher will adjust future instruction based
on this assessment.
• The student will use what they have learned from the
assessment to improve future performances.
Helping students become effective self-assessors has an enormous impact
on student performance. Studies by Paul Black and Dylan Wiliam [1, 2] show
effective classroom assessment has a greater impact on student achievement
than any other approach.
Using Exemplars to Implement Formative Assessment
in Your Classroom
Exemplars is designed to help teachers integrate formative assessment into
their classrooms. Each math task provides information that students can use
for self-assessment.
Annotated Anchor Papers and Rubrics
All of our material includes anchor papers (student work samples), general
assessment rubrics and task-specific rubrics. These tools enable teachers to
define for their students what type of performance is expected on lessons
throughout their units of study. Students’ peer- and self-appraisals encourage
them to compare their work using student rubrics and annotated anchor
papers. This process demonstrates success as well as offers opportunities for
refinement. Based on student performance, teaching and learning strategies
can be adjusted.
Assessment and Teaching Strategies
Tied to Successful Formative Assessment
According to Wiliam and Black1, 2 the assessment and teaching strategies
closely tied to successful formative assessment are:
• Effective Questions - Ask meaningful questions, increase wait time for
student answers, and have rich follow-up activities that extend student
thinking. “Put simply, the only point of asking questions is to raise
issues about which a teacher needs information or about which the
students need to think.” 2 (13)
• Appropriate Feedback - Black and Wiliam found that giving grades
does not improve performance. Instead of grades, teachers should focus
on tasks and oral questioning that encourage students to show
understanding, on comments that identify what was done well and what
needs improvement, and on guidance about how to make those
improvements.
• Peer- and Self-Assessment - Peer- and self-assessments “secure aims
that cannot be achieved in any other way.” 2 (15) Achieving success
requires that students have a clear understanding of the standards, and
that they be taught the skills of peer- and self-assessment.
1. Black, Paul and Dylan Wiliam, Assessment and Classroom Learning. Assessment in Education, Vol. 5, No. 1,
1998. Carfax Publishing Ltd.
2. Black, Paul and Dylan Wiliam, Working Inside the Black Box: Assessment for Learning in the Classroom. Phi
Delta Kappan, September 2004.
Using Technology
If you have an Exemplars subscription, you have received a print version as well as a digital library on a CD-ROM. If you have a Best of
Math Exemplars CD-ROM, you have received only the CD.
The CD in the subscription is a digital archive. It contains exactly the same
material that you will find in the print copy of the subscription. You may copy
it to your hard drive, put it on your network to share with any teacher in your
site-licensed school, or print any task from the PDF files.
Best of Math Exemplars CD-ROM I, II or III
The Best of Math Exemplars is a browser-based, searchable CD-ROM that contains
performance tasks. You can search for particular problems by standard and
grade level by clicking on a particular NCTM standard and the appropriate
developmental level — K–2, 3–5 or 6–8. The tasks that correlate with that standard
and developmental level will appear. You can copy the Best of Math Exemplars to
your hard drive or put it on your site-licensed school’s internal network.
Altering Tasks
While each Exemplars CD offers a differentiated version of each task, you may
still wish to revise or alter a task. You might want to make a problem more
challenging or more accessible for your students by changing the numbers or
wording. Or, you might want to personalize a task with names and places of
things that students are familiar with. With either CD, you can highlight the
words in the Adobe® Acrobat PDF file using the “text” tool and copy them
into a word processing program.
Congratulations! You have just been introduced to one of the most effective
ways teachers across the country are using Exemplars in their classrooms!
Our activities will give you important information about your students as
you become more aware of how they think.
This is the beginning of developing a reliable scoring system. Reliability
means that colleagues will apply consistent standards to their students, and
therefore all students will receive the same message about what constitutes
good work.
Success Stories
Getting Started: A Teacher’s View
Stacey Dement, 1st Grade Teacher, Elolf Elementary, Converse, Texas
“Oh, No! Not another pilot program!” I heard mumbled from the back of
the library. I guessed mine were not the only eyes rolling toward the ceiling.
The Exemplars program is designed to assess students’ problem-solving and
mathematical communication skills. It also supports higher-level thinking and
extension of mathematical reasoning.
At a loss about where to begin, I began planning in my mind. How could I
begin teaching mathematical communication when many of my students had
problems communicating their ideas in writing? I decided I had to start with
the class as a whole group, modeling the behaviors and expectations I wanted
students to utilize. Then I planned to continue breaking the whole class into
smaller groups for successive problem-solving experiences. This “plan of
attack” allows them to gradually take more responsibility for problem solving
and communication. Only after modeling and allowing large and small group
practice would students be assessed independently.
I started by discussing the assessment procedures, rubric and holistic scoring
associated with Exemplars. I explained the levels of Novice, Apprentice,
Practitioner and Expert to my students and told them the expectations for
each. We created a poster for the classroom explaining the scoring rubric
in “kid words” that all the students could understand and remember. We
posted it on our Exemplars bulletin board for reference during the year. I also
provided an “exemplary word wall” for the students’ use when writing.
During the year, as we encountered new mathematical terms, we added a
word card for each term to our word wall. This was an invaluable tool for
the students. I made it a point to send information on Exemplars to parents
in our newsletter, and I always provided families with an Exemplars problem
based on the concept we were working on in class. Many students reported
enjoyment at working with family members.
I introduced the first three or four Exemplars tasks to the whole class on
the floor. As a class, we would read the task and analyze exactly what we
were being asked. We would then discuss and decide how best to solve the
problem, recording any differences in opinion.
Following those same basic steps, I broke the class into small groups to solve
their Exemplars problems. Then… I became invisible. I was only allowed to
observe, facilitating group cooperation if necessary. This forced both small and
partner groups to work more independently.
After two whole-class, four small-group and two partner Exemplars tasks, I felt
my students had enough experience to begin independently solving their
problems. I watched as each student improved in one way or another. My
students became eager to work on Exemplars.
After a year of teaching the program I look back and don’t have to roll
my eyes. I discovered that Exemplars was not a program after all, but an
assessment philosophy with valuable tools for students and teachers.
Implementing Exemplars was not difficult, but implementing it incrementally
made all the difference. Even young students can become independent
problem solvers and learn to communicate their reasoning and mathematical
thinking processes through open-ended Exemplars tasks. The key is in working
incrementally with students, allowing them to feel successful with group
experiences before being assessed independently.
Using Journals in Mathematics
Lori Jane Dowell Hantelmann, Elementary Mathematics Specialist,
Regina Public Schools, SK Canada
My teaching philosophy has changed significantly since I began teaching
in 1990. My students sit at tables and mathematical concepts are taught
in a problem-solving context instead of being presented in isolation through
discrete workbook pages. One change in my teaching is due to the classroom
research into mathematics journals I completed recently for my Masters of
Education. My students became active participants in their own learning
as they confidently solved realistic problems and explained their ideas in
mathematics journals, and I was excited to be part of this rewarding learning experience.
In past years, when attempting to have children write in a mathematics
journal, I would read: “This was easy. I like math.” My students were not able
to successfully reflect or share what they understood about problem solving
or mathematics through their writing. Frustrated, I began to read about using
math journals in the classroom.
What is a math journal?
My students write in a notebook to answer open-ended questions using
numbers, symbols, pictures and words, and their writing can best be described
as written conversations. A math journal is a place where every student has
the opportunity to verbalize their math knowledge to their teacher, internally
to themselves, and to their classmates. Students’ writing becomes a source
for social interaction as they read journal entries to partners and the whole
class, talk about their learning and listen to others share different levels of
mathematical reasoning.
What did I learn about using math journals?
My classroom research on mathematics journals led me to recognize four
important steps needed to help students write reflectively about mathematics.
1. Teachers need to model the writing process and the language of
mathematics. First I modeled my own problem-solving process by
thinking out loud as I solved problems and as I recorded my reflections
on chart paper. Students soon began to contribute their own ideas
about problem solving, but I continued to model the writing process
by recording their comments on chart paper. Students copied these
sentences into their math journals. Modeling the writing process took
longer than I expected as students needed to become familiar with
reflective writing and the language of mathematics, the words and
symbols unique to mathematics. Once students became familiar with
the vocabulary necessary to communicate in mathematics they began to
independently express their own thoughts on paper.
2. Teachers need to ask open-ended questions to guide students in their
writing. I learned how to ask open-ended questions to help students
think about their own understanding of problem solving and to guide
their writing. I began my research by using a list of questions I found
in Writing to Learn Mathematics, by J. Countryman (1992). Students
answered my questions verbally at first and became comfortable
sharing their thoughts and ideas with others. It was through their
participation in our verbal discussions that students learned how to
reflect upon their own knowledge of mathematics and to record their
ideas on paper. I soon adapted the questions I found to better meet the
needs of my students and to match the problems we were solving. Here
are examples of questions I used in my research:
1. Why was this problem easy?
2. Would this problem be easier today than yesterday? Why?
3. What did you do to solve this problem?
4. Are numbers important in solving this problem? Why?
5. Did graphs help you to solve the problem? Why?
3. Students need to revisit similar tasks to increase their confidence
as problem solvers and their knowledge of problem solving. As a
teacher of young children, I quickly realized that involving students in
rewriting problem-solving tasks similar to the ones they had just solved
was important in developing their confidence as problem solvers and
in understanding the process. Children could not always solve the task
independently the first time and were enthusiastic to help rewrite the
task and solve it again. I noticed they were more successful in solving
the second task.
For example, we solved the task “Space Creatures” (Exemplars, 1996,
March): On a new planet the astronauts discovered unusual creatures.
The features they counted were 15 eyes, 3 horns, 12 legs and 7 arms.
More of the creatures had scales than fur on their bodies. Draw your
creatures and make a graph for each creature’s features.
The next week, we revisited a similar problem by writing our own
version of “Space Creatures” called Super Robots: Students were
visiting a robot factory. They saw and counted 13 eyes, 10 legs, 8 knobs,
9 arms and 4 antennas. More bodies were triangles than rectangles.
Draw a graph first for each of your robots and then draw a picture of
your robots to match your graphs.
4. Teachers need to support students in recognizing their individual
problem-solving styles. Students discovered they were different in
the strategies they used to solve problems. I guided students to solve
“Space Creatures” by drawing pictures of the creatures before making
their graphs. For Super Robots, I asked students to make their graphs
before drawing their pictures.
Some of the students became frustrated when I asked them to begin
their problem-solving task with a graph. I was curious to know why, so
my first journal question asked students: “Is it easier to draw a picture
first or draw a graph first?” Twelve children chose to draw their picture
first and 11 children chose to create their graph first. Surprised by the
split in their choices, I asked students to explain: “Why do you think it
was easier to draw the graph or picture first?” The children wrote:
The picture was easier because you could count the objects better
than the graph. (mathematics journal, Oct. 23, June)
It’s easier to draw the graph first. Yes, because I knew what my
Robot would look like. The graph helped me to count the features.
(mathematics journal, Oct. 23, Cassy)
Their responses led me to realize the importance of drawing pictures in
the problem-solving process for some children, but not all children. I needed
to listen carefully to students so I would know how to best support them in
recognizing their individual needs as problem solvers.
For four months I had supported students in becoming problem solvers
and reflective writers in mathematics. In the end I questioned the students to
see how they had changed since September. Students had become confident
and independent writers, and they understood what it meant to be a problem
solver. Two students wrote:
I can think of more answers. (mathematics journal, Dec. 3, Mitchel)
I’ve learned how to do harder problems. (mathematics journal, Dec.
3, Sam)
I now believe writing belongs in mathematics and is as important in
developing students’ mathematical knowledge as numbers and computation.
It was my student-to-teacher interactions and my open-ended questions that
guided students to write reflectively. I will always have a classroom of diverse
learners and I now feel confident I can meet individual needs of students and
lead them in their learning. Mathematics journals will guide my teaching.
Brewer, R. (1994-98). Exemplars. Underhill, Vermont: Exemplars.
Countryman, J. (1992). Writing to Learn Mathematics. Portsmouth, NH: Heinemann
Education Books, Inc.
Introducing Rubrics to Students
By Deb Armitage, Exemplars Consultant
A rubric is an assessment guide that reflects content standards and
performance standards. Rubrics describe the features expected for student
work to receive each of the levels/scores on the chosen scale. An assessment
rubric tells us what is important, defines what work meets a standard, and
allows us to distinguish between different levels of performance.
Students need to understand the assessment guide that is being used to assess
their performance. Teachers often begin building this understanding by developing
rubrics with students that do not address a specific content area. Together,
they develop rubrics around classroom management, playground behavior,
homework, lunchroom behavior, following criteria with a substitute teacher,
etc. Developing rubrics with students around the best chocolate chip
cookie, sneaker, crayon, etc. is also an informative activity to help students
understand performance levels. After building a number of rubrics with
students, a teacher can introduce the Exemplars rubric. Since the students will
have an understanding of what an assessment guide is, they will be ready to
focus on the criteria and performance levels of the rubric.
The following pages present a series of rubrics developed by teachers,
offered to stir your imagination as you decide which assessment guide would
be best to begin with in your classroom.
It is very important for your students to develop their “own” first rubric.
We Set the Standards!
Sample rubrics developed with students:
• Lunchroom Behavior Rubric
• Student Set-Up Rubric
• Chocolate Chip Cookie Rubric
• Bathroom Rubric
Exemplars® Classic Rubric

Novice
Understanding: There is no solution, or the solution has no relationship to the task. Inappropriate concepts are applied and/or procedures are used. The solution addresses none of the mathematical components presented in the task.
Strategies, Reasoning, Procedures: No evidence of a strategy or procedure, or uses a strategy that does not help solve the problem. No evidence of mathematical reasoning. There were so many errors in mathematical procedures that the problem could not be solved.
Communication: There is no explanation of the solution, the explanation cannot be understood or it is unrelated to the problem. There is no use, or mostly inappropriate use, of mathematical terminology and notation. There is no use or inappropriate use of mathematical representations (e.g. figures, diagrams, graphs, tables, etc.).

Apprentice
Understanding: The solution is not complete, indicating that parts of the problem are not understood. The solution addresses some, but not all, of the mathematical components presented in the task.
Strategies, Reasoning, Procedures: Uses a strategy that is partially useful, leading some way toward a solution, but not to a full solution of the problem. Some evidence of mathematical reasoning. Could not completely carry out mathematical procedures. Some parts may be correct, but a correct answer is not achieved.
Communication: There is an incomplete explanation; it may not be clearly presented. There is some use of mathematical terminology and notation appropriate to the problem. There is some use of appropriate mathematical representation.

Practitioner
Understanding: The solution shows that the student has a broad understanding of the problem and the major concepts necessary for its solution. The solution addresses all of the mathematical components presented in the task.
Strategies, Reasoning, Procedures: Uses a strategy that leads to a solution of the problem. Uses effective mathematical reasoning. Mathematical procedures are used. All parts are correct and a correct answer is achieved.
Communication: There is a clear explanation. There is appropriate use of accurate mathematical representation. There is effective use of mathematical terminology and notation.

Expert
Understanding: The solution shows a deep understanding of the problem, including the ability to identify the appropriate mathematical concepts and the information necessary for its solution. The solution completely addresses all mathematical components presented in the task. The solution puts to use the underlying mathematical concepts upon which the task is designed.
Strategies, Reasoning, Procedures: Uses a very efficient and sophisticated strategy leading directly to a solution. Employs refined and complex reasoning. Applies procedures accurately to correctly solve the problem and verify the results. Verifies solution and/or evaluates the reasonableness of the solution. Makes mathematically relevant observations and/or connections.
Communication: There is a clear, effective explanation detailing how the problem is solved. All of the steps are included so that the reader does not need to infer how and why decisions were made. Mathematical representation is actively used as a means of communicating ideas related to the solution of the problem. There is precise and appropriate use of mathematical terminology and notation.

© 2012 Exemplars®
Exemplars® Classic 5-Level Rubric
*Based on revised NCTM standards.

Problem Solving
• Novice: No strategy is chosen, or a strategy is chosen that will not lead to a solution. Little or no evidence of engagement in the task is present.
• Apprentice: A partially correct strategy is chosen, or a correct strategy for only solving part of the task is chosen. Evidence of drawing on some previous knowledge is present, showing some relevant engagement in the task.
• Practitioner: A correct strategy is chosen based on the mathematical situation in the task. Planning or monitoring of strategy is evident. Evidence of solidifying prior knowledge and applying it to the problem-solving situation is present. Note: The practitioner must achieve a correct answer.
• Expert: An efficient strategy is chosen and progress towards a solution is evaluated. Adjustments in strategy, if necessary, are made along the way, and/or alternative strategies are considered. Evidence of analyzing the situation in mathematical terms, and extending prior knowledge, is present. Note: The expert must achieve a correct answer.

Reasoning and Proof
• Novice: Arguments are made with no mathematical basis. No correct reasoning nor justification for reasoning is present.
• Apprentice: Arguments are made with some mathematical basis. Some correct reasoning or justification for reasoning is present, with trial and error or unsystematic trying of several cases.
• Practitioner: Arguments are constructed with adequate mathematical basis. A systematic approach and/or justification of correct reasoning is present. This may lead to clarification of the task, exploration of mathematical phenomenon, or noting patterns, structures and regularities.
• Expert: Deductive arguments are used to justify decisions and may result in formal proofs. Evidence is used to justify and support decisions made and conclusions reached. This may lead to testing and accepting or rejecting of a hypothesis or conjecture, explanation of phenomenon, or generalizing and extending the solution to other cases.

Communication
• Novice: No awareness of audience or purpose is communicated. Little or no communication of an approach is evident. Everyday, familiar language is used to communicate ideas.
• Apprentice: Some awareness of audience or purpose is communicated, and may take place in the form of paraphrasing of the task. Some communication of an approach is evident through verbal/written accounts and explanations, use of diagrams or objects, writing, and using mathematical symbols. Some formal math language is used, and examples are provided to communicate ideas.
• Practitioner: A sense of audience or purpose is communicated. Communication of an approach is evident through a methodical, organized, coherent, sequenced and labeled response. Formal math language is used throughout the solution to share and clarify ideas.
• Expert: A sense of audience and purpose is communicated. Communication at the Practitioner level is achieved, and communication of argument is supported by mathematical properties. Precise math language and symbolic notation are used to consolidate math thinking and to communicate ideas.

Connections
• Novice: No connections are made.
• Apprentice: Some attempt to relate the task to other subjects or to own interests and experiences is made.
• Practitioner: Mathematical connections or observations are recognized.
• Expert: Mathematical connections or observations are used to extend the solution.

Representation
• Novice: No attempt is made to construct mathematical representations.
• Apprentice: An attempt is made to construct mathematical representations to record and communicate problem solving.
• Practitioner: Appropriate and accurate mathematical representations are constructed and refined to solve problems or portray solutions.
• Expert: Abstract or symbolic mathematical representations are constructed to analyze relationships, extend thinking, and clarify or interpret phenomenon.

© 2012 Exemplars®
Exemplars NCTM Classic 5-Level Rubric
The Exemplars Rubric is based on the following NCTM Standards:
Problem Solving
Instructional programs from pre-kindergarten through grade 12 should enable
all students to:
•Build new mathematical knowledge through problem solving
•Solve problems that arise in mathematics and in other contexts
•Apply and adapt a variety of appropriate strategies to solve problems
•Monitor and reflect on the process of mathematical problem solving
Reasoning and Proof
Instructional programs from pre-kindergarten through grade 12 should enable
all students to:
•Recognize reasoning and proof as fundamental aspects of mathematics
•Make and investigate mathematical conjectures
•Develop and evaluate mathematical arguments and proofs
•Select and use various types of reasoning and methods of proof
Communication
Instructional programs from pre-kindergarten through grade 12 should enable
all students to:
•Organize and consolidate their mathematical thinking through communication
•Communicate mathematical thinking coherently and clearly to peers,
teachers and others
•Analyze and evaluate the mathematical thinking and strategies of others
•Use the language of mathematics to express mathematical ideas precisely
Connections
Instructional programs from pre-kindergarten through grade 12 should enable
all students to:
•Recognize and use connections among mathematical ideas
•Understand how mathematical ideas interconnect and build on one
another to produce a coherent whole
•Recognize and apply mathematics in contexts outside of mathematics
Representation
Instructional programs from pre-kindergarten through grade 12 should enable
all students to:
•Create and use representations to organize, record and communicate
mathematical ideas
•Select, apply and translate among mathematical representations to solve
problems
•Use representations to model and interpret physical, social and
mathematical phenomena
NCTM Standard Rubric Glossary of Terms:
Analyze: Separating an abstract entity into its constituent elements, determining essential features, examining carefully and in detail
Conjecture: Formulating a theory without proof
Deductive reasoning: The process in which a conclusion follows necessarily from the premises or facts presented; moves from general to specific
Efficient: Produced with least waste of time and effort
Engagement: Occupying attention or efforts
Evaluated: Quality judged
Hypothesis: An assumed proposition
Justification: A reason, fact or circumstance that is shown or proven right or reasonable
Phenomenon: A fact, occurrence or circumstance
Systematic: Methodical, planned or ordered
Exemplars® Classic Primary Math Rubric (Student Self-Evaluation)
Levels: Expert (“Wow, awesome!”), Practitioner (“Meets the standard”), Apprentice (“Okay, good try”), Novice (“Makes an effort”)

Problem Solving
• Expert: I understand the problem. My answer is correct. I used a rule, and/or verified that my strategy is correct.
• Practitioner: I understand the problem and my strategy works. My answer is correct.
• Apprentice: I understand only part of the problem. My strategy works for part of the problem.
• Novice: I did not understand the problem.

Reasoning and Proof
• Expert: I showed that I knew more about a math idea that I used in my plan. Or, I explained my rule.
• Practitioner: All of my math thinking is correct.
• Apprentice: Some of my math thinking is correct.
• Novice: My math thinking is not correct.

Communication
• Expert: I used a lot of specific math language and/or notation accurately throughout my work.
• Practitioner: I used math language and/or math notation accurately throughout my work.
• Apprentice: I used some math language and/or math notation.
• Novice: I used no math language and/or math notation.

Connections
• Expert: I noticed something in my work, and used that to extend my answer, and/or I showed how this problem is like another problem.
• Practitioner: I noticed something about my math work.
• Apprentice: I tried to notice something, but it is not about the math in the problem.
• Novice: I did not notice anything about the problem or the numbers in my work.

Representation
• Expert: I used another math representation to help solve the problem and explain my work in another way.
• Practitioner: I made a math representation to help solve the problem and explain my work, and it is labeled and correct.
• Apprentice: I tried to use a math representation to help solve the problem and explain my work, but it has mistakes in it.
• Novice: I did not use a math representation to help solve the problem and explain my work.

Copyright ©2005, revised 2016 by Exemplars, Inc. All rights reserved.
Sample Task
A Puzzle
Marc, Amanda and Jacob were putting a 100-piece jigsaw puzzle together. Marc placed 1/4
of the pieces in the puzzle. Amanda placed
1/5 of the pieces into the puzzle. Jacob placed
the remaining pieces in the puzzle and told
Marc and Amanda that he knew exactly how
many pieces he had put in the puzzle. How
many pieces did Jacob put in the jigsaw puzzle?
Show all your math thinking.
Sample Task cont.
A Puzzle
Suggested Grade Span
Grades 3–5
Grade(s) in Which Task was Piloted
Grade 5
Marc, Amanda and Jacob were putting a 100-piece jigsaw puzzle together. Marc placed 1/4 of the
pieces in the puzzle. Amanda placed 1/5 of the pieces into the puzzle. Jacob placed the remaining
pieces in the puzzle and told Marc and Amanda that he knew exactly how many pieces he had put in
the puzzle. How many pieces did Jacob put in the jigsaw puzzle? Show all your math thinking.
Alternative Versions of Task
More Accessible Version:
Marc, Amanda and Jacob were putting a 100-piece jigsaw puzzle together. Marc placed 1/2 of the
pieces in the puzzle. Amanda placed 1/4 of the pieces into the puzzle. Jacob placed the remaining
pieces in the puzzle and told Marc and Amanda that he knew exactly how many pieces he had put in
the puzzle. How many pieces did Jacob put in the jigsaw puzzle? Show all your math thinking.
More Challenging Version:
Marc, Amanda and Jacob were putting a jigsaw puzzle together. Marc placed 1/4 of the pieces in the
puzzle. Amanda placed 1/5 of the pieces into the puzzle. Jacob placed 55 pieces in the puzzle. How
many pieces are in the jigsaw puzzle? Show all your math thinking.
NCTM Content Standards and Evidence
Number and Operations Standard for Grades 3–5
Instructional programs from pre-kindergarten through grade 12 should enable students to —
• Understand numbers, ways of representing numbers, relationships among numbers, and
number systems
• NCTM Evidence: Develop understanding of fractions as parts of unit wholes,
as parts of a collection, as locations on number lines, and as divisions of whole
• Exemplars Task-Specific Evidence: This task requires students to understand that
1/4 of a collection of 100 things is 25 and that 1/5 of a collection of
100 things is 20.
Time/Context/Qualifiers/Tip(s) From Piloting Teacher
This is a short- to medium-length task. Students should be aware of fractions as part of a collection.
This task can be linked with a free time activity of working on a crossword puzzle. This task also links
with Fraction Action by L. Leedy, Each Orange Had 8 Slices by P. Giganti, Jr. or Eating Fractions by B.
Common Strategies Used to Solve This Task
This was a challenging task for most fifth graders. Most started by finding the number of puzzle
pieces placed by Marc and Amanda. They then subtracted that number from 100 to find the number
of pieces that Jacob placed. Some kids tried adding the fractions and coming up with Jacob putting
11/20 of the puzzle together. They then had to find 11/20 of the 100 pieces.
Possible Solutions
Original Version:
Marc placed 1/4 of 100 = 25 pieces.
Amanda placed 1/5 of 100 = 20 pieces.
That left Jacob (who placed 11/20 of the pieces) placing (100 – 45) 55 pieces.
More Accessible Version:
Marc placed 1/2 of 100 = 50 pieces.
Amanda placed 1/4 of 100 = 25 pieces.
That left Jacob (who placed 1/4 of the pieces) placing (100 – 75) 25 pieces.
More Challenging Version:
Marc and Amanda placed (1/4 + 1/5 ) 9/20 of the pieces of the puzzle which means Jacob placed
(20/20 – 9/20) 11/20 of the pieces of the puzzle which is equal to 55 pieces. Dividing by 11 gives you
1/20 of the puzzle equal to 5 pieces. Then 20/20 of the puzzle or the whole puzzle is equal to (5 x 20)
100 pieces.
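The fraction arithmetic in all three versions can be checked with a short script using Python's `fractions` module (an illustrative sketch, not part of the Exemplars materials):

```python
from fractions import Fraction

# Original version: Marc places 1/4 and Amanda 1/5 of a 100-piece puzzle.
jacob_share = 1 - (Fraction(1, 4) + Fraction(1, 5))
print(jacob_share)        # 11/20 of the puzzle remains for Jacob
print(jacob_share * 100)  # 55 pieces

# More accessible version: Marc places 1/2 and Amanda 1/4.
print((1 - (Fraction(1, 2) + Fraction(1, 4))) * 100)  # 25 pieces

# More challenging version: Jacob's 55 pieces are 11/20 of the puzzle,
# so the whole puzzle has 55 / (11/20) = 100 pieces.
print(55 / Fraction(11, 20))  # 100
```

Working in exact fractions rather than decimals mirrors the reasoning expected of students: the remainder 11/20 is found first, then applied to the whole.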
Task-Specific Assessment Notes
General Notes
Students should be allowed to use manipulatives if they feel they may help.
The Novice will not have a strategy that will lead to a solution. They will not be able to relate 1/4 of
the 100 pieces to 25 pieces. They will not have a working understanding of representing fractions as
parts of a collection.
The Apprentice will have some working knowledge of fractions and will be able to engage in part
of the problem. They may be able to find that Jacob placed 11/20 of the puzzle but not be able to
convert that fraction into a number of pieces.
The Practitioner will be able to find a correct solution. They will show a good understanding of
fractions as part of a collection. A sense of audience or purpose is communicated and a systematic
approach and/or justification of correct reasoning is present.
The Expert will have all the Practitioner has and more. They may make a strong connection between
decimals, fractions and percents. They may make a connection between this problem and other
problems they have solved or a problem in the real world.
Adding 1/4 + 1/5 = 2/9 is not a strategy that will solve the problem.
The student could not find the number of pieces in a fraction of the puzzle.
The student does not have a strategy to find the number of remaining pieces.
Knowing that 1/4 of the puzzle is 25 pieces and 1/5 of the puzzle is 20 pieces solves part of the problem.
The circle graph is accurate but did not help the student solve the problem.
A correct solution is achieved.
The observation is correct.
The student’s work communicates the student’s strategy and mathematical reasoning.
The equations help communicate the student’s mathematical reasoning and strategy.
Expert cont.
A correct solution is achieved.
The student makes a connection to a similar problem.
The student solved with fractions and decimals to verify the solution.
Exemplars® Standard Rubric Scoring Notes
Please copy and distribute these for use amongst your teachers.
Communication Connections Representation
and Proof
Grade Level
Student’s Name:
© 2006 Exemplars®
Task Organizer*
*Developed by Carol McNair.
Graphic Organizer
Preliminary Planning Sheet
Possible Solution(s)
Problem Solving
Program Link
GE(s) Addressed
Underlying Mathematical
Standard(s) Addressed
Title of Task
*Developed by Deb Armitage and Shelia Rivers
Related Tasks
Mathematical Language
*Preliminary Planning Sheet for a Mathematics Portfolio Piece/Task
KWL Chart
What I Know
What I Want to Know
What I Learned
Applied Mathematics: Optimization
e-books in Applied Mathematics: Optimization category
Convex Optimization: Algorithms and Complexity
by Sebastien Bubeck - arXiv.org , 2015
This text presents the main complexity theorems in convex optimization and their algorithms. Starting from the fundamental theory of black-box optimization, the material progresses towards recent
advances in structural and stochastic optimization.
(6767 views)
Data Assimilation: A Mathematical Introduction
by K.J.H. Law, A.M. Stuart, K.C. Zygalakis - arXiv.org , 2015
This book provides a systematic treatment of the mathematical underpinnings of work in data assimilation. Authors develop a framework in which a Bayesian formulation of the problem provides the
bedrock for the derivation and analysis of algorithms.
(6224 views)
An Introduction to Nonlinear Optimization Theory
by Marius Durea, Radu Strugariu - De Gruyter Open , 2014
Starting with the case of differentiable data and the classical results on constrained optimization problems, continuing with the topic of nonsmooth objects involved in optimization, the book
concentrates on both theoretical and practical aspects.
(7962 views)
Universal Optimization and Its Application
by Alexander Bolonkin - viXra.org , 2017
This book describes new method of optimization (''Method of Deformation of Functional'') that has the advantages at greater generality and flexibility as well as the ability to solve complex problems
which other methods cannot solve.
(6167 views)
Optimization Algorithms: Methods and Applications
by Ozgur Baskan (ed.) - InTech , 2016
This book covers state-of-the-art optimization methods and their applications in wide range especially for researchers and practitioners who wish to improve their knowledge in this field. It covers
applications in engineering and various other areas.
(7594 views)
Decision Making and Productivity Measurement
by Dariush Khezrimotlagh - arXiv , 2016
I wrote this book as a self-teaching tool to assist every teacher, student, mathematician or non-mathematician, and to support their understanding of the elementary concepts on assessing the
performance of a set of homogenous firms ...
(7166 views)
A Practical Guide to Robust Optimization
by Bram L. Gorissen, Ihsan Yanıkoğlu, Dick den Hertog - arXiv , 2015
The aim of this paper is to help practitioners to understand robust optimization and to successfully apply it in practice. We provide a brief introduction to robust optimization, and also describe
important do's and don'ts for using it in practice.
(7695 views)
Optimization Models For Decision Making
by Katta G. Murty - Springer , 2010
This is a Junior level book on some versatile optimization models for decision making in common use. The aim of this book is to develop skills in mathematical modeling, and in algorithms and
computational methods to solve and analyze these models.
(11812 views)
Linear Programming
by Jim Burke - University of Washington , 2012
These are notes for an introductory course in linear programming. The four basic components of the course are modeling, solution methodology, duality theory, and sensitivity analysis. We focus on the
simplex algorithm due to George Dantzig.
(8386 views)
Discrete Optimization
by Guido Schaefer - Utrecht University , 2012
From the table of contents: Preliminaries (Optimization Problems); Minimum Spanning Trees; Matroids; Shortest Paths; Maximum Flows; Minimum Cost Flows; Matchings; Integrality of Polyhedra; Complexity
Theory; Approximation Algorithms.
(9452 views)
Robust Optimization
by A. Ben-Tal, L. El Ghaoui, A. Nemirovski - Princeton University Press , 2009
Written by the principal developers of robust optimization, and describing the main achievements of a decade of research, this is the first book to provide a comprehensive and up-to-date account of
this relatively new approach to optimization.
(11351 views)
Lectures on Optimization: Theory and Algorithms
by John Cea - Tata Institute of Fundamental Research , 1978
Contents: Differential Calculus in Normed Linear Spaces; Minimization of Functionals; Minimization Without Constraints; Minimization with Constraints; Duality and Its Applications; Elements of the
Theory of Control and Elements of Optimal Design.
(11242 views)
Iterative Methods for Optimization
by C.T. Kelley - Society for Industrial Mathematics , 1987
This book presents a carefully selected group of methods for unconstrained and bound constrained optimization problems and analyzes them in depth both theoretically and algorithmically. It focuses on
clarity in algorithmic description and analysis.
(11316 views)
Applied Mathematical Programming Using Algebraic Systems
by Bruce A. McCarl, Thomas H. Spreen - Texas A&M University , 2011
This book is intended to both serve as a reference guide and a text for a course on Applied Mathematical Programming. The text concentrates upon conceptual issues, problem formulation, computerized
problem solution, and results interpretation.
(12858 views)
Optimal Stopping and Applications
by Thomas S. Ferguson - UCLA , 2008
From the table of contents: Stopping Rule Problems; Finite Horizon Problems; The Existence of Optimal Rules; Applications. Markov Models; Monotone Stopping Rule Problems; Maximizing the Rate of
Return; Bandit Problems; Solutions to the Exercises.
(13545 views)
The Design of Approximation Algorithms
by D. P. Williamson, D. B. Shmoys - Cambridge University Press , 2010
This book shows how to design approximation algorithms: efficient algorithms that find provably near-optimal solutions. It is organized around techniques for designing approximation algorithms,
including greedy and local search algorithms.
(16502 views)
Applied Mathematical Programming
by S. Bradley, A. Hax, T. Magnanti - Addison-Wesley , 1977
This book shows you how to model a wide array of problems. Covered are topics such as linear programming, duality theory, sensitivity analysis, network/dynamic programming, integer programming,
non-linear programming, and my favorite, etc.
(20173 views)
Linear Complementarity, Linear and Nonlinear Programming
by Katta G. Murty , 1997
This book provides an in-depth and clear treatment of all the important practical, technical, computational, geometric, and mathematical aspects of the Linear Complementarity Problem, Quadratic
Programming, and their various applications.
(12072 views)
Optimization and Dynamical Systems
by U. Helmke, J. B. Moore - Springer , 1996
Aimed at mathematics and engineering graduate students and researchers in the areas of optimization, dynamical systems, control systems, signal processing, and linear algebra. The problems solved are
those of linear algebra and linear systems theory.
(14653 views)
Notes on Optimization
by Pravin Varaiya - Van Nostrand , 1972
The author presents the main concepts mathematical programming and optimal control to students having diverse technical backgrounds. A reasonable knowledge of advanced calculus, linear algebra, and
linear differential equations is required.
(12265 views)
Optimization Algorithms on Matrix Manifolds
by P.-A. Absil, R. Mahony, R. Sepulchre - Princeton University Press , 2007
Many science and engineering problems can be rephrased as optimization problems on matrix search spaces endowed with a manifold structure. This book shows how to exploit the structure of such
problems to develop efficient numerical algorithms.
(18353 views)
Convex Optimization
by Stephen Boyd, Lieven Vandenberghe - Cambridge University Press , 2004
A comprehensive introduction to the subject for students and practitioners in engineering, computer science, mathematics, statistics, finance, etc. The book shows in detail how optimization problems
can be solved numerically with great efficiency.
(19562 views) | {"url":"https://www.e-booksdirectory.com/listing.php?category=408","timestamp":"2024-11-02T08:16:13Z","content_type":"text/html","content_length":"27948","record_id":"<urn:uuid:31614aed-7b62-499b-9cf6-91df6f2d09c6>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00164.warc.gz"} |
What are the odds?
Back in the early 1980s, a friend of mine had a TRS-80 Model III in their house. His mother was a writer, and had gotten it to use for writing books.
I believe it was this TRS-80 where I saw a version of Monopoly that taught me a strategy I had never learned as a kid:
The computer bought EVERYTHING it landed on, even if it had to mortgage properties to do so.
This simple strategy was a gamble, since you could end up with too many mortgaged properties and no source of income from rent. But, by doing this, the computer would end up owning so many random pieces it made it difficult to own a monopoly yourself.
And since then, that is how I played Monopoly. When it worked, it created some awfully long games (if no one had a monopoly to quickly drive other players bankrupt when they landed on their hotels).
In more modern times, I have watched YouTube videos concerning Monopoly strategies. They will break down the statistical odds of landing on any particular piece of property. For example, you know a pair of dice can produce totals from 2 to 12. Each ordered pair of faces is equally likely, but the totals are not: a “3 and 5” can come up two ways (3-5 or 5-3), while a “1 and 1” can come up only one way. Rather than focus on dice rolls, the strategies take into consideration things that alter locations: Go To Jail, Advance to Go, Take a Ride on Reading, etc.
These cards existing means there are more chances for a player to end up on Go, Reading Railroad, Jail, etc. This means the property squares after those spots have more chances of being landed on.
Board with board games. Move along…
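The two-dice math above is easy to check by brute force. Here is a minimal plain-Python sketch that enumerates all 36 ordered rolls and prints how likely each total is (it ignores the Chance/Community Chest effects discussed above, which shift real Monopoly landing odds further):

```python
from collections import Counter

# All 36 equally likely ordered rolls of two six-sided dice.
rolls = [(a, b) for a in range(1, 7) for b in range(1, 7)]
sum_counts = Counter(a + b for a, b in rolls)

# Print the probability of each total from 2 through 12.
for total in range(2, 13):
    print(f"Sum {total:2d}: {sum_counts[total]}/36 = {sum_counts[total] / 36:.1%}")
```

Running it shows a 7 turning up 6 times out of 36, while 2 and 12 each appear only once.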
But I digress… As Internet Rabbit Holes do, this led me to watch other videos about statistics in things like card games. In a randomly shuffled deck, there should be as much chance for the first card to be the Ace of Spades as there is for it to be the Three of Clubs. It is random, after all.
For that first card drawn, that is a 1 out of 52 chance to be any card in the deck. (52 cards in the deck.)
But as a game plays on, there are fewer cards, so the odds of getting any of the remaining cards increases. For the second card drawn, you now know there is a 0% chance of getting whatever the first
card is, and a 1 in 51 chance of getting any of the rest.
And so it continues…
For games like Blackjack or 21, you do not really care if it is a Ten of Diamonds or a King of Hearts or a Queen of Clubs or a Jack of Spades. They all have the value of 10 in the game. Thus, the
likelihood of drawing a ten card is much higher than any other card in the deck.
You have four suits (clubs, spades, hearts, diamonds) so there are four of each card – Aces, Two through Ten, Jacks, Queens, and Kings. This means there are 16 cards in the deck that could be a value of 10 in the game. When you draw the first card, you should have a 16 in 52 chance of it being a ten card. That is about a 31% chance!
If you pay attention to what cards have been seen (either by you having it, or seeing it face up with another player), you can eliminate those cards from the possibilities — changing the odds of what
you will get.
This is basically what I understand card counting to be. If you play a game, and you know you’ve seen three Kings so far (either in your hand, or played by others), you now know instead of four
chances to draw a King, you only have one.
Math is hard. Make the computer do it.
I know this has been done before, and quite possibly even on a Radio Shack Color Computer, but I thought it might be fun to create a program that displays the percentage likelihood of drawing a particular card from a deck. I suppose it could have the Blackjack/21 rule, where it treats 10, Jack, Queen and King the same, versus a “whole deck” rule where each card is unique (where you really need an 8 of Clubs to complete some run in poker or whatever game that does that; I barely know blackjack, and have never played poker).
I plan to play with this when I get time, but I decided to post this now in case others might want to work on it as well.
I envision a program that displays all the cards on the screen with a percentage below it. As cards are randomly drawn, that card’s percentage goes to 0% and all the others are adjusted.
It might be fun to visualize.
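Here is one way such a program could start. This is a hedged sketch in plain Python rather than Color Computer BASIC, and the function names (`fresh_deck`, `pct_rank`, `pct_ten_value`) are my own:

```python
import random

RANKS = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
SUITS = ["Clubs", "Diamonds", "Hearts", "Spades"]
TEN_VALUE = {"10", "J", "Q", "K"}  # Blackjack/21 rule: all count as 10.

def fresh_deck():
    """Build a standard 52-card deck as (rank, suit) pairs."""
    return [(rank, suit) for rank in RANKS for suit in SUITS]

def pct_rank(remaining, rank):
    """Percent chance the next card drawn has this exact rank."""
    hits = sum(1 for r, _ in remaining if r == rank)
    return 100.0 * hits / len(remaining)

def pct_ten_value(remaining):
    """Percent chance the next card counts as 10 under the blackjack rule."""
    hits = sum(1 for r, _ in remaining if r in TEN_VALUE)
    return 100.0 * hits / len(remaining)

deck = fresh_deck()
random.shuffle(deck)

# Full deck: 16 of 52 cards count as ten, about 31%.
print(f"Ten-value chance, full deck: {pct_ten_value(deck):.1f}%")

# Deal a few cards and watch the remaining odds shift.
for _ in range(5):
    rank, suit = deck.pop()
    print(f"Drew {rank} of {suit}; ten-value chance now {pct_ten_value(deck):.1f}%")
```

The visualization part (a grid of all cards, each with a live percentage underneath) would just be a display loop over `pct_rank` for every rank and suit still in the deck.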
More to come, when time allows…
Foreword: Introduction to Quantum Mechanics: A Time-Dependent Perspective - University Science Books
Introduction to Quantum Mechanics: A Time-Dependent Perspective
David Tannor Weizmann Institute of Science
The importance of the time dependent formulation of quantum mechanics is evident to theorists and experimentalists alike, but this wasn't always so. Even though Schroedinger's time dependent equation was the birth of modern quantum theory, and even though Schroedinger knew about the harmonic oscillator coherent states and their classical-like motion, attention quickly turned to the time independent Schroedinger equation. There were good reasons for this: physics was focused on line spectra and their predictions, and for this purpose the time dependent formulation is indirect.
Tradition quickly grew around this chronology, so that textbooks often begin with the time dependent Schroedinger equation or introduce it in Chapter 1, only to drop it like a hot potato for the remainder of the book, with the exception perhaps of a cameo appearance in the derivation of Fermi's Golden Rule.
Now the advent of explicitly time dependent experiments, and especially the thrust toward many body systems (where computational and experimental requirements mean that no eigenstates can be found or measured, or ever need to be measured, beyond at most the first few) makes it imperative that students see time dependence put to good use as soon as possible. Indeed, even in the heyday of line spectra the abandonment of the time dependent Schroedinger equation was far too wide a swing of the pendulum. It is enough of a shock for students of quantum mechanics that matter is a wave, and that Schroedinger's wave is a probability amplitude, etc. Adding to that shock is that quantum physics was done with still, motionless, stationary states. Absurd questions like "how does the particle get past the node" come up if one is divorced from time dependence. That teachers and textbook writers took this question seriously underscores the poverty of an education only about Hψ = Eψ. Using a time dependent approach, some familiar intuition is still intact from the classical world, and the classical intuition we all have can be put to good use to understand quantum mechanics at a high level rather quickly.
The history of overemphasis on the time independent Schroedinger equation is there for anyone to see in the old textbooks. What is shocking is that it is there for everyone to see in most of the new ones too! Such "academic momentum" for textbook writers is a well known phenomenon. Thankfully, David Tannor's book breaks free of this syndrome and takes a far more balanced approach, employing time dependent and independent approaches in the appropriate circumstances. The student of this text will come away well prepared to tackle today's explicitly time dependent experiments, and other stationary experiments with a strong link to a simple time dependent picture via Fourier transform. This is the fun way to learn quantum mechanics applied to molecular dynamics.
All of the pillars of time dependent methodology are here: wavepackets, correlation functions, semiclassical methods, and numerical methods. The treatment of strong fields, femtosecond multi-pulse control of reactions, photodissociation, and reactive scattering, all of which are treated in the latter half of the book, are made accessible banking on classical intuition and the preparation in time dependent quantum and semiclassical methods in the first half.
My hope and expectation is that this book is not only a new and much needed vehicle for training the current generation of students, but also the impulse required to change the momentum of textbook writers of the future, toward a balanced approach to quantum molecular dynamics.
The author says in his introduction essentially that this is the book I promised to write, but never did. He is half right: I promised to write a book, but the one I planned would not have been as
good as this one, nor as comprehensive.
Eric Heller
Harvard University | {"url":"https://uscibooks.aip.org/introduction-to-quantum-mechanics-a-time-dependent-perspective-foreword/","timestamp":"2024-11-12T14:10:09Z","content_type":"text/html","content_length":"107313","record_id":"<urn:uuid:f1b99227-e76b-4bb8-8d23-07e699c31044>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00410.warc.gz"} |
Calculating How Many Tokens Your LPs are Worth | PrivacySwap 2.0
Calculating How Many Tokens Your LPs are Worth
Many people ask about how to find out the value of their LP tokens. Here we will try to provide that info for you. A little technical, but pretty easy once you understand how it works.
LP tokens are representations of your share of the total liquidity pool that you get after providing liquidity in two tokens. For example, for the PRV2-BNB liquidity pool, you need to add liquidity using PRV and BNB. In other words, in order to find out how much your tokens are worth, you just need a few variables.
1. The number of LP tokens you have for the specific pool you are calculating for.
2. The total number of LP tokens in circulation.
3. The current amount and price of tokens in the pool.
With these three variables, you can calculate the value of your LP tokens. Let us show you how.
Step 1: Find out the amount of LP tokens you have. If you're staked in farms, head to the farm you're staked in and take a look.
Step 2: Find out the total amount of LP tokens in circulation. You do this by selecting "Details" and then "View on BscScan". Look for "Total Supply" in the page that opens on BscScan.
Although LP tokens ALL bear the same name "Cake-LP", different LPs have different LP contract addresses. This means that the PRV2-BUSD LP token will be called Cake-LP, and so will the PRV2-BNB LP
token. But they are NOT the same LP, and the "Total Supply" for PRV2-BUSD LP and PRV2-BNB LP will differ. So make sure you are looking at the "Total Supply" of the right LP. (How to check comes in
the later steps!)
Step 3: Take the amount of LP tokens you have staked found in Step 1, and divide it by the "Total Supply" in Step 2, and you get your share of the LP. In this case,
Your LP Ratio = 771.788 / 52,582.617314 = 0.0146776261704743

Which also means that you own 1.4677% of the LP's assets.
Step 4: Select "Contract" to be taken to the smart contract's page.
Step 5: On the contract's page, you will see a dropdown that displays all the current assets within this contract address.
This dropdown clearly shows that the assets in this pool are BUSD and PRV. Which means that this is the PRV-BUSD LP. You can repeat the above steps for other LPs and found out exactly how much assets
are in each of these LPs.
This pool has 247,612.09520648 BUSD and 11,177.07254118 PRV.
Step 6: Calculating what your LPs are worth is simply using your LP Ratio obtained in Step 3, and multiplying it by each of the assets in Step 5.
Value of Current Holdings = Your LP Ratio × Amount of Assets

In the example above:

Value of BUSD Holdings = 0.0146776261704743 × 247,612.09520648 = 3,634.357768728605 BUSD

Value of PRV Holdings = 0.0146776261704743 × 11,177.07254118 = 164.0528924397133 PRV
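The whole walkthrough can be scripted. Here is a minimal sketch using the example figures above; in practice the four inputs would be read live from BscScan (your LP balance, the LP token's total supply, and the pool's current reserves) rather than hard-coded:

```python
# Example figures from the walkthrough above (PRV-BUSD pool).
my_lp_tokens = 771.788
total_lp_supply = 52_582.617314
pool_busd = 247_612.09520648
pool_prv = 11_177.07254118

# Step 3: your share of the liquidity pool.
lp_ratio = my_lp_tokens / total_lp_supply

# Step 6: your share of each asset held by the pool.
my_busd = lp_ratio * pool_busd
my_prv = lp_ratio * pool_prv

print(f"LP share:      {lp_ratio:.4%}")
print(f"BUSD holdings: {my_busd:,.2f} BUSD")
print(f"PRV holdings:  {my_prv:,.4f} PRV")
```

This reproduces the ~1.4677% share and the BUSD/PRV holdings computed in Steps 3 and 6.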
Biostatistics Homework Help
Biostatistics is a branch of applied statistics concerned with applications in the field of health sciences and biology. The word biostatistics combines two words – biology and statistics. It is also termed biometry or biometrics. It is basically the application of statistics to a wide variety of biological or life-related issues, including clinical trials, public health, medicine, genetics, and ecological and environmental issues.
We have homework help online experts offering quality assistance in various areas of the subject. In biostatistics there are three kinds of data, namely nominal, ordinal and interval data. Statistical strategies for analysis depend on these different kinds of data, and summary measures describe their variability and central tendency. Our experts hold high qualifications from universities in the USA in different branches of biostatistics. We have native homework solvers for descriptive biostatistics, tabulation and graphical presentation, and inferential biostatistics. So, you can get university homework help from native writers in your subject and branch with a guarantee of accurate homework answers delivered within the promised deadline. We understand the importance of biostatistics, as it plays an essential role in the identification and development of disease treatments and estimating their impact, identification of the risk factors associated with various diseases, designing, monitoring, analysing and interpreting the results of clinical studies for reporting, development of statistical methodology to solve issues arising from public health data, and locating, defining and measuring the extent of any disease. The focus of the whole subject and related research studies is to improve the health of the individual and the community as a whole. Here we ensure top-notch homework help, submitting unique research-based answers capable of scoring a high grade every time.
We have a team of experts in mathematics and statistics with years of experience in handling academic assignments, ensuring that every student in the USA gets the best online biostatistics assignment help. Correlation, Regression, Logistic Regression, Odds Ratio, the Chi-square statistic, Discriminant Analysis, Survival Analysis, and so on are some of the important topics that combine biology and statistics to solve numerous problems. Our statistics homework help tutors are well-versed in all these calculations and the applications of biostatistics in public health, medicine, biology and systems biology. With the help of our professionals, students can cover different aspects of probability in health and medicine, confidence intervals, statistical vs. medical significance, some statistical tests over quantitative data, ROC curves and much more. So, if you are looking for highly qualified biostatistics assignment help tutors with years of clinical teaching and research experience, then you are at the right place. We offer the best online biostatistics help with homework service, where students can connect directly with our tutors online through live chat and online sessions. Students can get help with their quizzes and exams through regular support from our tutors. We provide quick and easy resources for students seeking support from an online statistics homework help tutor at an affordable price.
Biostatistics homework solvers are qualified to work with large volumes of data and seek to simplify the data collected by primary experimentation, yielding inferences that can be used to solve primary public health and other science problems. In the age of interdisciplinary education, many fields now depend on the complexities of data analytics, which can be handled easily with professionals who assist in completing tasks with ease and perfection. Our experts have enriched experience with continuous data growth and are knowledgeable about different methods used for data processing, from easy to more complex datasets: where follow-up is more sporadic, recurrent dropout is a concern that cannot easily be discarded, and the outcome measurements are consequently not normally distributed. One way of approaching such data may be through various multi-level models with complex regression lines that require intense expertise and in-depth knowledge of the topic, which our homework experts provide.
All these services are completely customized: our biostatistics tutors understand your requirements as well as your expectations in detail and work on your assignments from scratch. They ensure that the instructions from your professor are met and university standards are taken care of while working on biostatistics assignment problems. We offer help with biostatistics assignments as a premium experience to every student, where you get an affordable price even in the case of urgent submissions. Our team of project managers, writers, proofreaders and editors makes sure that every assignment is delivered well on time while meeting all key requirements. Thus, you get exactly what is expected of our online biostatistics tutors, making it possible to achieve a desired grade and improve your academic performance.
Special benefits of using our biostatistics help online
Several statistics homework help services are available online, claiming expert help and quality work. Among a long list of assignment helpers, we differentiate our services by hiring only Ph.D.-certified experts holding enriched experience in the field of biostatistics. Actual professionals having impeccable knowledge and a perfect understanding of university standards and guidelines ensure that every biostatistics assignment solution is flawless and accurate in every aspect.

Moreover, you get several value-added services along with our biostatistics help online, including a money-back guarantee on every assignment. You can come back for multiple revisions and proofreading, and be assured of 100% original work. Some other benefits you get when you take our biostatistics help online are:
1. Free consultation with experts on Biostatistics topics
2. 100% plagiarism free work
3. Maintaining stringent deadline
4. Brainstorming sessions to help you grasp the subject
5. 24x7 customer support team
6. Proper referencing style as per university standards
7. Exciting offers and discounts for members
8. Lowest price Guarantee
Along with the above benefits, we provide high-quality papers written flawlessly, helping students from different universities and colleges achieve excellence. We take extra care to draft your biostatistics paper and ensure that every paper is proofread and edited before it reaches your mailbox. Thus, every piece of data and information is well checked before finalizing the document.

So, don't waste your valuable time and money searching for the best biostatistics help online; connect with one of our experts to get the awesome experience of personalized biostatistics assignment help.
Implication - (Mathematical Logic) - Vocab, Definition, Explanations | Fiveable
from class:
Mathematical Logic
Implication is a logical connective that represents a relationship between two propositions, often expressed as 'if P, then Q', where P is the antecedent and Q is the consequent. This relationship
indicates that whenever P is true, Q must also be true, establishing a foundation for reasoning and argumentation in various logical frameworks. Understanding implication is crucial for constructing
valid arguments, analyzing proofs, and working with formal languages and semantics.
5 Must Know Facts For Your Next Test
1. An implication can be symbolically represented as P → Q, where P is the antecedent and Q is the consequent.
2. In truth tables, an implication is only false when the antecedent is true and the consequent is false; in all other cases, it is considered true.
3. Implication forms the basis for many proof strategies, allowing for deductions based on established premises.
4. In natural deduction systems, implications are often used to introduce new assumptions and derive conclusions.
5. Understanding implications helps in deciphering the semantics of first-order languages and evaluating well-formed formulas.
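The truth-table behavior in fact 2 can be checked mechanically. Here is a small illustrative sketch in Python; the helper name `implies` is my own, standing in for material implication:

```python
def implies(p: bool, q: bool) -> bool:
    """Material implication: P -> Q is false only when P is true and Q is false."""
    return (not p) or q

# Print the full truth table for P -> Q.
for p in (True, False):
    for q in (True, False):
        print(f"P={str(p):5}  Q={str(q):5}  P -> Q = {implies(p, q)}")
```

Only the row with P true and Q false comes out false; the other three rows are true, matching the truth table for P → Q.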
Review Questions
• How does implication serve as a foundational concept in constructing valid arguments?
□ Implication acts as a critical connective that links premises to conclusions in logical reasoning. By establishing a clear relationship between an antecedent and its consequent, implication
allows us to derive new statements based on accepted truths. For example, if we know that 'if it rains (P), then the ground will be wet (Q)', we can logically conclude that if it indeed
rains, we can expect the ground to be wet. This ability to infer conclusions from given premises underlies many proof techniques.
• Discuss how implication operates within natural deduction systems and its role in proof strategies.
□ In natural deduction systems, implication is utilized to introduce new assumptions and derive further conclusions through a structured sequence of logical steps. By assuming an antecedent
(P), one can demonstrate that its corresponding consequent (Q) must follow. This technique allows for a systematic approach to proving complex statements by breaking them down into simpler
components. The introduction and elimination rules for implication provide tools for manipulating assumptions and conclusions effectively during proofs.
• Evaluate the significance of truth tables in understanding the properties of implications within propositional logic.
□ Truth tables play a vital role in illustrating the behavior of implications in propositional logic by providing a visual representation of their truth values under different scenarios. By
constructing a truth table for an implication like P → Q, we see that it only evaluates to false when P is true and Q is false. This insight into how implications work helps students grasp
not just individual logical statements but also how they interact with each other in proofs and more complex expressions. Analyzing truth tables fosters a deeper understanding of logical
connections that are essential for both natural deduction and semantic interpretations.
© 2024 Fiveable Inc. All rights reserved.
AP® and SAT® are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website. | {"url":"https://library.fiveable.me/key-terms/mathematical-logic/implication","timestamp":"2024-11-12T17:20:41Z","content_type":"text/html","content_length":"167105","record_id":"<urn:uuid:3d6512ae-b34d-4c5b-be3a-841fe1a684e0>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00663.warc.gz"}
Misleading graphs
There are various clues that can lead to questioning the accuracy of a graphical representation.
The scale on the horizontal axis may have the units unevenly spaced or the vertical axis may not start at 0 when recording frequency.
In some graphs, the area of bars may be meant to represent frequency but may not be proportional to the frequency represented in different bars.
In pie charts, where the circle is meant to represent 100% of the data, the segments of the pie may be incorrectly labelled and not add up to 100%.
Misleading scales
One of the most common ways for graphs to be misleading is with the inconsistent use of scale.
Teaching scales in graphs
When presenting graphs from the media it is important to emphasise the context as well as particular misleading features. Often confusion is created from misuse of scale.
Misuse of area
One way for graphs to be misleading is to use an area representation that is not proportional to the value or frequency being represented.
Teaching area for graphs
In presenting graphs from the media to students it is important to emphasise the context as well as particular misleading features. Often area is not used correctly to represent frequency.
Area in graphs
The focus of considering the graph presented here is the importance of the area of each portion of a bar representing the value or frequency of the variable plotted.
Scale activity
The activity for an example of a graph with a misleading scale is based on discussion questions and then a redrawn graph. | {"url":"https://topdrawer.aamt.edu.au/Statistics/Misunderstandings/Misleading-graphs","timestamp":"2024-11-01T19:27:47Z","content_type":"text/html","content_length":"82201","record_id":"<urn:uuid:29e76aa3-a686-4bdc-8d33-0d9f5800d8be>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00890.warc.gz"} |
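The distortion caused by a truncated vertical axis, discussed above, can be quantified with simple arithmetic. The Python sketch below uses made-up values (52 and 54 plotted against a baseline of 50, not figures from any graph in these materials) to show how the apparent bar-height ratio exaggerates the true ratio:

```python
def apparent_ratio(a, b, baseline):
    # Ratio of bar heights as drawn above a non-zero baseline.
    return (b - baseline) / (a - baseline)

true_ratio = 54 / 52                              # about 1.04: values barely differ
distorted = apparent_ratio(52, 54, baseline=50)   # 2.0: one bar looks twice as tall
```

With the axis starting at 50 instead of 0, a 4% difference is drawn as a 100% difference, which is exactly the kind of misleading feature worth discussing with students.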
In the introduction to this chapter, we saw that learning can be done in a frequentist way by counting data. In most cases, it will be sufficient, but it is also a narrow view of the notion of
learning. More generally speaking, learning is the problem of integrating data into the domain knowledge in order to create a new model or improve an existing model. Therefore, learning can be seen
as an inference problem, where one updates an existing model toward a better model.
Let's consider a simple problem: modeling the results of tossing a coin. We want to test if the coin is fair or not. Let's call θ the probability that the coin lands on its head. A fair throw would
have a probability of 0.5. By tossing the coin several times we want to estimate this probability. Let's say the i-th toss outcome is v[i] = 1 if the coin shows a head and 0 otherwise. We also
assume there is no dependence between each toss, which means observations are i.i.d. And finally, we consider each toss... | {"url":"https://subscription.packtpub.com/book/data/9781784392055/3/ch03lvl1sec19/summary","timestamp":"2024-11-07T06:37:14Z","content_type":"text/html","content_length":"86699","record_id":"<urn:uuid:d108077b-14f9-420e-a83e-0e819cb72e82>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00470.warc.gz"} |
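The frequentist estimate described in this excerpt is simply the fraction of heads among the i.i.d. tosses. A minimal Python illustration (the seed, sample size, and true θ below are invented for the example):

```python
import random

def estimate_theta(tosses):
    # Frequentist estimate of P(heads): count of 1s over the number of tosses.
    return sum(tosses) / len(tosses)

random.seed(0)
true_theta = 0.5
tosses = [1 if random.random() < true_theta else 0 for _ in range(10_000)]
theta_hat = estimate_theta(tosses)
```

With 10,000 simulated tosses the estimate lands close to the true value of 0.5; a Bayesian treatment, as the book goes on to develop, would instead update a prior over θ.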
How do you remember the metric system? | Socratic
How do you remember the metric system?
1 Answer
Metric system is in decimal (base $10$) notation and hence very easy to calculate with. Further, it uses standard names for different powers of $10$ and hence is easy to remember.
Metric system is easiest to remember. At the central level we have units like
gram for weight
meter for length
liter for volume
And then ten times are decagram, decameter, decaliter
hundred times is hectogram, hectometer, hectoliter
thousand times is kilogram, kilometer, kiloliter
one-tenth is decigram, decimeter, deciliter
one-hundredth is centigram, centimeter, centiliter
one-thousandth is milligram, millimeter, mililiter
Apart from this, mega is one million times ${10}^{6}$
Giga is one billion times ${10}^{9}$
Micro is one-millionth ${10}^{-6}$
Nano is one-billionth ${10}^{-9}$
Pico is ${10}^{-12}$
Femto is ${10}^{-15}$
Impact of this question
4170 views around the world | {"url":"https://socratic.org/questions/how-do-you-remember-the-metric-system#256990","timestamp":"2024-11-08T06:14:35Z","content_type":"text/html","content_length":"34061","record_id":"<urn:uuid:2de53e8d-1983-4976-a29f-1ce8e6973278>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00728.warc.gz"} |
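Because every prefix in the answer is just a power of $10$, converting to base units is a single multiplication. The small Python sketch below is illustrative only (the dictionary simply encodes the prefixes listed in the answer):

```python
# Power of ten for each metric prefix named in the answer above.
PREFIX_EXPONENT = {
    "deca": 1, "hecto": 2, "kilo": 3, "mega": 6, "giga": 9,
    "deci": -1, "centi": -2, "milli": -3, "micro": -6,
    "nano": -9, "pico": -12, "femto": -15, "": 0,
}

def to_base_units(value, prefix):
    # e.g. 3 kilometers -> 3000 meters; 5 centigrams -> 0.05 grams
    return value * 10 ** PREFIX_EXPONENT[prefix]
```

For example, `to_base_units(3, "kilo")` gives 3000, which is the whole point of the decimal system: no awkward factors of 12 or 16 to memorize.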
Airy Bi Function
#include <boost/math/special_functions/airy.hpp>
namespace boost { namespace math {
template <class T>
calculated-result-type airy_bi(T x);
template <class T, class Policy>
calculated-result-type airy_bi(T x, const Policy&);
}} // namespaces
The function airy_bi calculates the Airy function Bi which is the second solution to the differential equation:
The final Policy argument is optional and can be used to control the behaviour of the function: how it handles errors, what level of precision to use etc. Refer to the policy documentation for more details.
The following graph illustrates how this function changes as x changes: for negative x the function is cyclic, while for positive x the value tends to infinity:
This function is implemented entirely in terms of the Bessel functions cyl_bessel_i and cyl_bessel_j - refer to those functions for detailed accuracy information.
In general though, the relative error is low (less than 100 ε) for x > 0 while only the absolute error is low for x < 0 as the following error plot illustrate:
Since this function is implemented in terms of other special functions, there are only a few basic sanity checks, using test values from functions.wolfram.com.
This function is implemented in terms of the Bessel functions using the relations: | {"url":"https://www.boost.org/doc/libs/1_86_0/libs/math/doc/html/math_toolkit/airy/bi.html","timestamp":"2024-11-06T05:06:18Z","content_type":"text/html","content_length":"10433","record_id":"<urn:uuid:8c101f59-e3c1-4dbe-a123-7c8a299c1868>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00700.warc.gz"} |
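The Bessel-function relations referenced above are not reproduced in this extract. As an independent numerical cross-check (not the Boost algorithm), the defining ODE w'' = x·w can be solved directly by a power series from the known special values Bi(0) = 1/(3^(1/6)·Γ(2/3)) and Bi'(0) = 3^(1/6)/Γ(1/3). The Python sketch below is only accurate for moderate |x|, where the truncated series converges:

```python
from math import gamma

def airy_bi(x, terms=60):
    # Power-series solution of w'' = x*w with the Bi initial conditions.
    # The ODE gives the recurrence c[n+3] = c[n] / ((n + 3) * (n + 2)).
    c0 = 1.0 / (3 ** (1 / 6) * gamma(2 / 3))   # Bi(0) ~ 0.6149
    c1 = 3 ** (1 / 6) / gamma(1 / 3)           # Bi'(0) ~ 0.4483
    coeffs = [c0, c1, 0.0]
    for n in range(terms):
        coeffs.append(coeffs[n] / ((n + 3) * (n + 2)))
    return sum(c * x ** k for k, c in enumerate(coeffs))
```

For instance, `airy_bi(1.0)` agrees with the tabulated value Bi(1) ≈ 1.20742 to many digits; a production implementation, like Boost's, instead goes through the Bessel functions for uniform accuracy over the whole range.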
From 12 November to 16 November 2018, the meeting was held in Lauria. All the schools of the project participated. During these days, guests were engaged in training activities concerning math and science.
Our attention has always been directed to the construction of a vision of mathematics not reduced to a set of rules to be memorized and applied. Our need is to create a positive motivation towards
mathematics that leads the students to discover the pleasure of studying math. This is possible through a teaching-learning process that stimulate curiosity and wish to get always involved; briefly
it is didactics based on research-action and on discovery.
The most important goal for a student is to find the solution to a problem. Once they reach that goal, they feel able to reach every other one. The student must know that he/she can be wrong while trying to solve a problem, because errors are an integral part of the resolution process and represent a moment of cognitive growth. Another essential element is the sharing of ideas,
strategies, representations because when you make your thought explicit, two effects are produced: first, the students clarify their own ideas, then sharing ideas offers other people the possibility
to acquire the original thought and transform it, elaborate it and complete the process of building new knowledge.
All that can be achieved only by structuring the activities carried out with a "teaching action" at various levels, it must be a laboratory didactics that makes the pupil feeling as the protagonist
and the builder of his own knowledge, and guarantees the educational success of everyone. The laboratory practice faces situations that force a student to make a decision, to organize and
continuously reorganize the information available (problem solving). The essentially operational approach allows everyone to be involved through the concreteness of doing.
So, the problem is not just to have students learn mathematics, but to let them perceive it as alive. Making the students understand maths first of all means getting them to love it and not perceive it as arid and separate from reality.
Convert W/(m·°C) to cal/(s·cm·°C) (Thermal conductivity)
W/(m·°C) into cal/(s·cm·°C)
Direct link to this calculator:
Convert W/(m·°C) to cal/(s·cm·°C) (Thermal conductivity)
1. Choose the right category from the selection list, in this case 'Thermal conductivity'.
2. Next enter the value you want to convert. The basic operations of arithmetic: addition (+), subtraction (-), multiplication (*, x), division (/, :, ÷), exponent (^), square root (√), brackets and
π (pi) are all permitted at this point.
3. From the selection list, choose the unit that corresponds to the value you want to convert, in this case 'Watt per Meter-Celsius [W/(m·°C)]'.
4. Finally choose the unit you want the value to be converted to, in this case 'Calories per Seconds-Centimeters-Celsius [cal/(s·cm·°C)]'.
5. Then, when the result appears, there is still the possibility of rounding it to a specific number of decimal places, whenever it makes sense to do so.
Utilize the full range of performance for this units calculator
With this calculator, it is possible to enter the value to be converted together with the original measurement unit; for example, '796 Watt per Meter-Celsius'. In so doing, either the full name of
the unit or its abbreviation can be used; as an example, either 'Watt per Meter-Celsius' or 'W/(m·°C)'. Then, the calculator determines the category of the measurement unit that is to be
converted, in this case 'Thermal conductivity'. After that, it converts the entered value into all of the appropriate units known to it. In the resulting list, you will be sure also to find the
conversion you originally sought. Alternatively, the value to be converted can be entered as follows: '85 W/(m·°C) to cal/(s·cm·°C)' or '97 W/(m·°C) into cal/(s·cm·°C)' or '28 Watt per Meter-Celsius
-> Calories per Seconds-Centimeters-Celsius' or '70 W/(m·°C) = cal/(s·cm·°C)' or '13 Watt per Meter-Celsius to cal/(s·cm·°C)' or '55 W/(m·°C) to Calories per Seconds-Centimeters-Celsius' or '40 Watt
per Meter-Celsius into Calories per Seconds-Centimeters-Celsius'. For this alternative, the calculator also figures out immediately into which unit the original value is specifically to be converted.
Regardless which of these possibilities one uses, it saves one the cumbersome search for the appropriate listing in long selection lists with myriad categories and countless supported units. All of
that is taken over for us by the calculator and it gets the job done in a fraction of a second.
Furthermore, the calculator makes it possible to use mathematical expressions. As a result, not only can numbers be reckoned with one another, such as, for example, '(52 * 94) W/(m·°C)'. But
different units of measurement can also be coupled with one another directly in the conversion. That could, for example, look like this: '67 Watt per Meter-Celsius + 10 Calories per
Seconds-Centimeters-Celsius' or '37mm x 79cm x 22dm = ? cm^3'. The units of measure combined in this way naturally have to fit together and make sense in the combination in question.
The mathematical functions sin, cos, tan and sqrt can also be used. Example: sin(π/2), cos(pi/2), tan(90°), sin(90) or sqrt(4).
If a check mark has been placed next to 'Numbers in scientific notation', the answer will appear as an exponential. For example, 3.642 153 053 276 2×10^21. For this form of presentation, the number
will be segmented into an exponent, here 21, and the actual number, here 3.642 153 053 276 2. For devices on which the possibilities for displaying numbers are limited, such as for example, pocket
calculators, one also finds the way of writing numbers as 3.642 153 053 276 2E+21. In particular, this makes very large and very small numbers easier to read. If a check mark has not been placed at
this spot, then the result is given in the customary way of writing numbers. For the above example, it would then look like this: 3 642 153 053 276 200 000 000. Independent of the presentation of the
results, the maximum precision of this calculator is 14 places. That should be precise enough for most applications. | {"url":"https://www.convert-measurement-units.com/convert+W+m+C+to+cal+s+cm+C.php","timestamp":"2024-11-04T14:13:26Z","content_type":"text/html","content_length":"55557","record_id":"<urn:uuid:93b2e228-f317-4a2d-a164-0a63fd53a200>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00160.warc.gz"} |
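For the specific conversion in the title, the arithmetic reduces to one fixed factor: 1 W = 1 J/s, 1 m = 100 cm, and 1 cal = 4.184 J, so 1 W/(m·°C) = 1/(4.184 × 100) cal/(s·cm·°C) ≈ 0.00239. The Python sketch below assumes the thermochemical calorie; a site using the IT calorie (4.1868 J) would report a slightly different factor:

```python
JOULES_PER_CALORIE = 4.184   # thermochemical calorie (assumed)
CM_PER_M = 100

def w_per_m_c_to_cal_per_s_cm_c(value):
    # 1 W/(m*degC) = 1 J/(s*m*degC): divide by J-per-cal and by cm-per-m.
    return value / (JOULES_PER_CALORIE * CM_PER_M)
```

So a thermal conductivity of 1 W/(m·°C) is about 0.00239 cal/(s·cm·°C), and 418.4 W/(m·°C) is exactly 1 cal/(s·cm·°C).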
Physically-Based Fluid Modeling using Smoothed Particle Hydrodynamics
CHAPTER 5
Conclusions and Future Work
Initially it was thought that the SPH computations would be the primary bottleneck in this algorithm. The intent was to implement the full set of SPH equations and then eliminate complex calculations
which may not be necessary for a graphics animation. It turned out that reducing the number of particles gave sufficient speed while maintaining "fluid" motion. Unfortunately this benefit was
counteracted by the lack of sufficient speed in the surface evaluation algorithm.
Overall the integration of Smoothed Particle Hydrodynamics with traditional particle systems has been shown to be a successful method for modeling fluid motion for computer graphics. It is a step
beyond existing fluid modeling methods in that it can accurately model large scale movement of fluid due to the use of hydrodynamic equations of motion. It can be used as either a testing ground for
CFD simulations using SPH, giving a scientist an interactive method of testing out simulation parameters before running a full blown numerical simulation. On the other hand it is also useful as a
modeling tool, giving an animator a physically based method of creating animations of fluid movement.
Further research is needed into the surface evaluation problem. The use of the Cell Volume structure might be improved, as well as the load distribution between the SPH and surface calculations
(since the SPH computations are running quickly, it makes sense to give some of that processing power to the surface computation). It might also be useful to take advantage of frame coherence: since
the surface may not change a lot during one time step only the areas which do move need recalculation. The "continuation" method of polygonization (Bloomenthal, 1988; Wyvill, 1986b) may also be
useful as a combined evaluation/generation method.
Other methods of rendering the fluid could also be researched. Volumetric rendering of non-grid based data (e.g. particles) is one possible approach. Various graphics techniques such as transparency
could also prove useful.
Integrating thermal energy into this equation of state may be appropriate for modeling melting and cooling of the fluid. Thermal energy is successfully used in the molecular dynamics fluid models
(Tonnesen, 1991; Terzopoulos, 1989). It would seem to be the next logical step for this algorithm. The next logical step from the multi-processing aspect of this system would be distribution of the
computations. The SPH calculations and/or the surface evaluation could easily be run on an SGI Challenge (or Challenge Array), assuming a fast network and some code reorganization.
Other future work involves the definition and use of more complicated obstacles, involving rounded, or other more complex surfaces. This would open the door for creating more "real world" situations
in which to model fluid movement. | {"url":"http://plunk.org/~trina/thesis/html/thesis_ch5.html","timestamp":"2024-11-06T14:16:54Z","content_type":"text/html","content_length":"4082","record_id":"<urn:uuid:f1e83fad-d9a0-4ed0-808d-23e4eec07be7>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00600.warc.gz"} |
Method estimating integral mean value of enhancement factor and path latitude for whistlers below magnetic latitude 10 deg
The propagation of whistler-mode waves in the ionosphere is assumed to be ducted propagation. In other words, the waves will be presumed as traveling along magnetic field lines. An analytic solution
of the whistler dispersion equation is obtained in terms of the electron density models of the ionosphere, the transverse gradient of the electron density required for guiding whistler mode waves
along a magnetic field line, and an empirical formula for the magnetic field lines of the 1980's IGRF (n=8). Here, the electron density enhancement factor is defined as N sub c/N sub g-1. N sub c is
the electron density at the center of the duct; N sub g is the background electron density at the latitude where the duct center lies. Thus, the integral mean value of the enhancement factor required
and the path latitude can be analytically determined by a set of observed values: N sub m F2 and h sub m F2 of the ionosphere, and D (whistler dispersion).
JPRS Report: Science and Technology. China
Pub Date:
July 1987
□ Augmentation;
□ Earth Ionosphere;
□ Integrals;
□ Latitude;
□ Magnetic Fields;
□ Wave Propagation;
□ Whistlers;
□ Electron Density (Concentration);
□ Estimates;
□ Geophysics | {"url":"https://ui.adsabs.harvard.edu/abs/1987rstc.rept...21W/abstract","timestamp":"2024-11-12T20:08:40Z","content_type":"text/html","content_length":"35966","record_id":"<urn:uuid:1b9625ed-7020-4c37-87d1-e22ae04a6582>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00401.warc.gz"} |
Black hole firewalls, smoke and mirrors
The radiation emitted by a black hole (BH) during its evaporation has to have some degree of quantum coherence to accommodate a unitary time evolution. We parametrize the degree of coherence by the
number of coherently emitted particles Ncoh and show that it is severely constrained by the equivalence principle. We discuss, in this context, the fate of a shell of matter that falls into a
Schwarzschild BH. Two points of view are considered: that of a stationary external observer and that of the shell itself. From the perspective of the shell, the near-horizon region has an energy
density proportional to Ncoh2 in Schwarzschild units. So, if Ncoh is parametrically larger than the square root of the BH entropy SBH1/2, a firewall or more generally a "wall of smoke" forms and the
equivalence principle is violated while the BH is still semiclassical. To have a degree of coherence that is parametrically smaller than SBH1/2, one has to introduce a new sub-Planckian gravitational
length scale, which likely also violates the equivalence principle. And so our previously proposed model which has Ncoh=SBH1/2 is singled out. From the external-observer perspective, we find that the
time it takes for the information about the state of the shell to get re-emitted from the BH is inversely proportional to Ncoh. When the rate of information release becomes of order unity, the
semiclassical approximation starts to break down and the BH becomes a perfect reflecting information mirror.
ASJC Scopus subject areas
• Nuclear and High Energy Physics
• Physics and Astronomy (miscellaneous)
Dive into the research topics of 'Black hole firewalls, smoke and mirrors'. Together they form a unique fingerprint. | {"url":"https://cris.bgu.ac.il/en/publications/black-hole-firewalls-smoke-and-mirrors","timestamp":"2024-11-02T01:26:33Z","content_type":"text/html","content_length":"56990","record_id":"<urn:uuid:aced17df-08e7-406a-8967-9331f22dd8e8>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00476.warc.gz"} |
What's your mean?
Can you work out the means of these distributions using numerical methods?
The probability density functions for two related, but unknown, distributions are given in the following accurately plotted chart.
It is known that the means of the distributions are whole numbers, and that the two pdfs only have a single turning point.
By numerically estimating the required integrals, what can you deduce with certainty about the two means?
Getting Started
Although numerical integration is not exact, you might like to try to do a numerical integration which you KNOW is smaller than the mean and then do another integration which you can be very sure
gives a value which is larger than the mean.
Don't forget that you can break an area down into rectangles or trapezia.
Student Solutions
Probability Density Functions
The probability density function, or PDF, is a function which describes the probability of a random variable taking on certain values. For a continuous random variable, the probability that the
variable lies between two values is given by the integral of the density function between these values.
We know that the sum of the probabilities of all possible outcomes is 1. So the integral of the PDF over all possible values of the variable is equal to 1.
We also need to know how to calculate the mean of the variable from the PDF. We first recall the definition of mean: $\bar{x}=\sum x \,\Pr(X=x)$
Because the integral of the PDF gives us the probabilities of the variable occuring, the equation for the mean becomes $$ \bar{x}=\int xf(x) \,dx $$ where $f(x)$ is the density function.
Integrating using area approximation
We are now ready to find the means of our two PDFs. However, because we do not know their exact form we will have to approximate for the integrals.
For example, consider a variable with the distribution function as below.
We wish to calculate $ Pr(1/2 \leq X \leq 3/4) $ which we can find by calculating the area of the shaded rectangle:
$$ Pr(1/2 \leq X \leq 3/4) =\int^{3/4}_{1/2} 1 \,dx= base \times height = ({3\over 4} - {1\over 2}) \times 1 = {1\over 4} $$
Red Line Mean
Applying the same idea to the red line in the problem, we can estimate the area under the curves using rectangles and trapeziums. Two such trapeziums are marked below in green.
To find the area of the trapezium, we use the result $ Area(trapezium) = {h \times (a+b) \over 2} $
This gives us the probability that our variable lies within the small trapezium of height 1. To find the mean, we then need to multiply this probability by the value of the variable in this interval.
We approximate here, by using the midpoint of the trapezium height.
Take for example the above trapezium on the right, where the variable ranges from 10 to 11. We approximate by taking the value of the variable as 10.5, and mutiply this by the probability of the
region to get the mean. The table below gives our estimates of these values.
│h= │a= │b= │Area │Midpoint │Mean │
│0.5│0 │0.01 │0.005 │0.75 │0.00375 │
│1 │0.015│0.1 │0.0575│1.5 │0.08625 │
│1 │0.1 │0.15 │0.125 │2.5 │0.3125 │
│1 │0.15 │0.155│0.1525│3.5 │0.53375 │
│1 │0.155│0.135│0.145 │4.5 │0.6525 │
│1 │0.135│0.12 │0.1275│5.5 │0.70125 │
│1 │0.12 │0.085│0.1025│6.5 │0.66625 │
│1 │0.085│0.06 │0.0725│7.5 │0.54375 │
│1 │0.06 │0.045│0.0525│8.5 │0.44625 │
│1 │0.045│0.035│0.04 │9.5 │0.38 │
│1 │0.035│0.025│0.03 │10.5 │0.315 │
│1 │0.025│0.02 │0.0225│11.5 │0.25875 │
│1 │0.02 │0.015│0.0175│12.5 │0.21875 │
│1 │0.015│0.01 │0.0125│13.5 │0.16875 │
│1 │0.01 │0.01 │0.01 │14.5 │0.145 │
The sum of the means in the right hand column is 5.4325. Because the question tells us the mean is an integer, we should also approximate the mean in the region 15 to 20.
As the probabilities in this range are so low, it is easier to approximate the area as a very flat rectangle. Remembering that the area under the PDF is the same as the probability of the variable
being in that region, we find $$ Pr(15 \leq X \leq 20)=5 \times 0.005=0.025 $$ Again we use the midpoint approximation, and find $$ \bar{x}=17.5 \times 0.025 = 0.4375 $$
Summing over all the means, this gives us $ \bar{x}=5.4325 + 0.4375 = 5.87 \approx 6 $
We leave the grey line for you to compute. You might want to find an even closer estimation of the mean, and then find the relationship between the two PDFs.
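The tabulated computation for the red line can be reproduced in a few lines of Python. The sketch below hard-codes the Area and Midpoint columns as read off the graph (so it inherits the same reading-off approximations), adds the flat-rectangle tail for the 15-to-20 region, and also confirms that the strip probabilities total roughly 1:

```python
# Area (probability) and midpoint of each strip, read off the red curve.
areas = [0.005, 0.0575, 0.125, 0.1525, 0.145, 0.1275, 0.1025,
         0.0725, 0.0525, 0.04, 0.03, 0.0225, 0.0175, 0.0125, 0.01]
midpoints = [0.75, 1.5, 2.5, 3.5, 4.5, 5.5, 6.5, 7.5,
             8.5, 9.5, 10.5, 11.5, 12.5, 13.5, 14.5]

tail_probability = 5 * 0.005                         # flat rectangle, 15 <= X <= 20
total_probability = sum(areas) + tail_probability    # should be close to 1
mean = sum(a * m for a, m in zip(areas, midpoints)) + 17.5 * tail_probability
```

The mean comes out near 5.87, consistent with the rounded whole-number answer of 6.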
Teachers' Resources
Why do this problem?
This problem gives an opportunity to practise numerical integration in the context of probability distributions. It will really allow students to get into the meaning of probability density functions in terms of
areas and probabilities. Instead of simply requiring an explicit calculation, students will need to engage with decisions concerning limits and integration.
Possible approach
The first stage of the problem is to realize that a numerical integration is needed to calculate the mean. Once the class has realised that this is the case, they will need to start to perform the
integrations. This will require various choices as there are many ways in which this can be done. To facilitate this, you might like to print off copies of the graph for students to draw on.
Key questions
How do we relate a probability density function to a probability?
How do the two graphs relate to each other?
What is the graphical interpretation of an integral?
How important will the effect of the second graph be?
What happens for values larger than $20$? Are these values relevant?
Possible extension
How might you try to estimate the variance for these distributions numerically?
Possible support
First try to show that numerically the area under the red curve is 1. You can then use the decomposition into rectangles and trapezia to try to work out the mean. | {"url":"https://nrich.maths.org/problems/whats-your-mean","timestamp":"2024-11-13T01:55:55Z","content_type":"text/html","content_length":"48568","record_id":"<urn:uuid:19e90af8-5ec2-41dc-9597-5f0f7181c593>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00568.warc.gz"} |
Dynamics 365:
In Microsoft Dynamics 365, sometimes we need to send automated notifications to end users. These notifications can come from the server (i.e. back-end automation using workflows, plugins, etc) or
from the code loaded in the user’s web browser (i.e. front-end or client side automation using JavaScript). The latter notifications will make up the scope of this blog post. We will cover form level
notifications (with and without persistence), field level notifications and popup boxes notifications using JavaScript.
Summary of the key syntax
Here is a high-level summary of the main Microsoft Dynamics 365 JavaScript notifications demonstrated in this blog post.
• Form Level Notifications – Set Notification
formContext.ui.setFormNotification(message, messageType, uniqueId);
• Form Level Notifications – Clear Notification
• Field Specific Notifications – Set Notification
formContext.getControl(fieldLogicalName).setNotification(message, uniqueId);
• Field Specific Notifications – Clear Notification
• Alert Box
• Confirmation Box
• Prompt Box
prompt(message, sampleText);
Form Level Notifications
Here is an example of form level notifications applied to the Contact entity’s form. The notification options available at form-level are: Error, Warning and Information. These notification options
have different icons as shown in the image below.
Below is the JavaScript code that was used to generate the notifications shown in the image above. Therefore, every time a user opens up a contact form, these notifications will always appear at the
top of the form.
function FormLevelNotification(executionContext) {
    var formContext = executionContext.getFormContext();
    formContext.ui.setFormNotification("Example of an ERROR notification. ", "ERROR");
    formContext.ui.setFormNotification("Example of a WARNING notification. ", "WARNING");
    formContext.ui.setFormNotification("Example of an INFORMATION notification.", "INFO");
}
Instead of having persistent notifications (ever present whenever a user opens the contact form), it is common for organizations to implement a modified version of these notifications, i.e. where the notifications above expire after a specified amount of time. Therefore, every time a user opens a contact form (or any other form where this functionality is implemented), they get notifications similar to the image above, but after a specified amount of time, the notifications disappear and users end up with a form like the one below.
This modified option where the form level notifications disappear after a specified amount of time can be accomplished using the code below. In the code below, I have chosen an expiration time of 30 seconds. Therefore, the form level notifications would only persist for the first 30 seconds after a user opens up a contact form.
function FormLevelNotificationWithExpiration(executionContext) {
var formContext = executionContext.getFormContext();
var notificationTime = 30000; // 30 seconds in milliseconds
var errorId = "error";
var warningId = "warning";
var infoId = "info";
//Set notifications
formContext.ui.setFormNotification("Example of an ERROR notification. ", "ERROR", errorId);
formContext.ui.setFormNotification("Example of a WARNING notification. ", "WARNING", warningId);
formContext.ui.setFormNotification("Example of an INFORMATION notification.", "INFO", infoId);
//Clear the notifications after the specified amount of time, e.g. 30 seconds
setTimeout(function () {
    formContext.ui.clearFormNotification(errorId);
    formContext.ui.clearFormNotification(warningId);
    formContext.ui.clearFormNotification(infoId);
}, notificationTime);
}
Field Specific Notifications
To guide users in completing a form correctly, you can provide field-specific notifications like the ones below. The example below shows the subscription section on the Contact entity’s form.
Logically, the Subscription Start Date must precede the Subscription End Date. Whenever a user enters a Subscription End Date that precedes the Subscription Start Date, field-level notifications
appear advising the user that “The Subscription End Date cannot be before Subscription Start Date”, as shown below.
The functionality shown in the image above was accomplished using the following JavaScript functions.
//Validation of the TV Subscription Dates
function TvSubscriptionDateValidation(executionContext) {
    var formContext = executionContext.getFormContext();
    var tvSubStartDateLogicalName = "hos_tvsubstartdate";
    var tvSubEndDateLogicalName = "hos_tvsubenddate";
    var startDateField = formContext.getAttribute(tvSubStartDateLogicalName);
    var endDateField = formContext.getAttribute(tvSubEndDateLogicalName);
    var endDateFieldControl = formContext.getControl(tvSubEndDateLogicalName);
    var startDate, endDate;
    if (startDateField != null && endDateField != null) {
        startDate = startDateField.getValue();
        endDate = endDateField.getValue();
        if (IsDate(startDate) && IsDate(endDate) && startDate > endDate) {
            //Display an error message if the dates entered are logically invalid.
            endDateFieldControl.setNotification("The Subscription End Date cannot be before Subscription Start Date.", "tvEndDate");
        } else {
            //Clear the notification once the dates are valid.
            endDateFieldControl.clearNotification("tvEndDate");
        }
    }
}

//Verify that the field contains a date instead of a null value
function IsDate(input) {
    if (Object.prototype.toString.call(input) === "[object Date]") {
        return true;
    }
    return false;
}
Popup Notifications
Sometimes you really need to get the user’s attention, that is, prevent them from interacting with the form until they have acknowledged your message. To accomplish that using JavaScript, you can
use popup boxes. There are 3 types of popup boxes (i.e. alert box, confirmation box and prompt box). In this section, we will show what these popups look like in Dynamics 365 and provide an
example of how to implement them.
Alert Box
Here is an example of an Alert Box in Dynamics 365. The user can acknowledge the message by pressing OK and proceed to interact with the form.
The functionality shown in the image above was accomplished using the following JavaScript function.
function AlertBox() {
    alert("This is an example of a JavaScript Alert window.");
    //Alternative way of writing an Alert Box with the window prefix:
    //window.alert("This is an example of a JavaScript Alert window.");
}
Confirmation Box
Here is an example of a Confirmation Box in Dynamics 365. The user is given two options. Depending on the user’s choice, you can proceed to add more logic. In this example, we use an alert box to
notify the user of the choice that was selected.
The functionality shown in the image above was accomplished using the following JavaScript function.
function ConfirmBox() {
    var confirmationText;
    if (confirm("Would you like to proceed?")) {
        confirmationText = "You pressed OK!";
    } else {
        confirmationText = "You pressed Cancel!";
    }
    //Using the alert notification to show the option selected
    alert(confirmationText);
    //Alternative way of writing a Confirm Box with the window prefix:
    //window.confirm("Press a button: 'OK' or 'Cancel'");
}
Prompt Box
Here is an example of a Prompt Box in Dynamics 365. The user is given a text box where they can enter an answer, as well as two options (i.e. OK and Cancel). Depending on the user’s response, you
can proceed to add more logic. In this example, we use an alert box to notify the user of the text that was entered, if they click OK.
The functionality shown in the image above was accomplished using the following JavaScript function.
function PromptBox() {
    var userText = prompt("Please enter your name below", "This is sample text");
    //Using the alert notification to show the text entered in the prompt box
    alert("You said your name is: \n" + userText);
    //Alternative way of writing a Prompt Box with the window prefix:
    //window.prompt("Please enter some text below", "This is sample text");
}
Best Practices and Recommendations
For demonstration purposes in this blog post, I included the notification messages in the code. However, for easier maintenance and to give more flexibility to the end users, who may not be Dynamics
365 software developers, it is recommended to put the notification messages in the Dynamics 365 Configuration Data and then create JavaScript helper function(s) that retrieve the data at runtime
using a key. Using this approach, end users with the appropriate security roles can update the JavaScript notification messages anytime without calling upon the services of a Dynamics 365 software
developer with JavaScript experience. | {"url":"https://itsfascinating.com/d365/tag/information/","timestamp":"2024-11-02T09:14:56Z","content_type":"text/html","content_length":"48418","record_id":"<urn:uuid:4a429389-288e-4ebb-a635-4af8456c7379>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00676.warc.gz"} |
Predicting the True Reserve of a Steeply Dipping Deposit in a Multi-Deviation Angle Exploration Operation.
Borehole deviation is the most precise method of delineating ore bodies in hard rock exploration. In order to achieve a precise delineation and consequently a reliable reserve estimate, borehole
trajectories should ideally intercept the ore bodies at 90°. However, because of rock mechanical properties and imperfectness in orientation of borehole trajectories and other factors, borehole
trajectories hardly intercept the ore bodies at 90°. Consequently, the ore reserve estimated from borehole data is liable to differ from true values. This in turn affects the value of the mineral
projects and the entire investment profile. In this paper, we have studied the impact of borehole deviation on ore reserve for a range of deviation angles from 10°, 15°, 20°, 25°, 30°; using
geometrical modelling. Most importantly, we have been able to develop a mathematical model relating the true reserve to the false reserve through coefficient of variation of the false reserves from
their true values. This coefficient has been estimated for various ranges of deviation angles. We have also shown that this coefficient of variation depends on the angle of deviation only and not on
ore body thickness or reserve. Consequently, they can be applied to estimate the true reserve from the false reserve for any deposit once the angles of deviation are known.
When a vertical hole is being drilled and it drifts away from vertical trajectory and becomes inclined, the hole may be said to have deviated from the vertical.
In many circumstances this gives a negative result. However the deviation of hole from the vertical due to varying mechanical properties of rocks and other factors is made use of in mineral
exploration (Marjoribanks 1997). Indeed the drill hole is given some inclination so that it deviates to such an extent as to intercept the ore body at 90° and thus depict the true thickness of the
body. In most circumstances, the hole does not intercept the ore body at 90°. This is also a deviation since the exploratory borehole drifts away from the designed trajectory. This deviation
negatively affects the exploration process as it gives a false impression of the ore body thickness and consequently of the reserve estimate. Dominy et al. (2004) emphasize that deviation of exploratory
holes from the designed trajectory is one of the factors causing uncertainties in reserve estimation, which in turn affects the accuracy of mine feasibility. Arsentiev A.I. (1972) opines that
uncertainties about the reserve of a deposit can lead to underestimation or overestimation of the value of a mineral project, depending on whether the mine planner is a risk averter or a risk taker.
This research studies how true reserve can be predicted from false reserve caused by hole deviation.
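To make the geometric intuition concrete, here is a small, purely illustrative Python sketch (my own simplification, not the authors' model): for a planar ore body intercepted at a deviation angle θ from the perpendicular, the drilled (apparent) thickness is the true thickness divided by cos θ, so a reserve computed from apparent thickness can be corrected by a factor of cos θ. Consistent with the paper's finding, this factor depends only on the angle, not on the thickness or the reserve itself.

```python
import math

def correction_coefficient(deviation_deg):
    """Illustrative true/false reserve ratio for a planar ore body:
    apparent thickness = true thickness / cos(theta), so
    true reserve = false reserve * cos(theta)."""
    return math.cos(math.radians(deviation_deg))

# The deviation angles studied in the paper:
for theta in (10, 15, 20, 25, 30):
    print(f"{theta:2d} deg: true reserve ~ {correction_coefficient(theta):.3f} x false reserve")
```

Note that the authors derive their coefficient of variation via geometrical modelling; the cosine correction above is only the simplest textbook case of that geometry.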
You can read the full version of this article by downloading the web version of our magazine.
Download PDF | {"url":"http://br.kz/en-newspaper-br/tpost/xv1kbhyzt1-better-safe-than-sorry","timestamp":"2024-11-04T13:35:20Z","content_type":"text/html","content_length":"41267","record_id":"<urn:uuid:d5ba15f5-045c-41f8-9557-f07614b41881>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00568.warc.gz"} |
SetTest: Group Testing Procedures for Signal Detection and Goodness-of-Fit version 0.3.0 from CRAN
It provides cumulative distribution function (CDF), quantile, p-value, statistical power calculator and random number generator for a collection of group-testing procedures, including the Higher
Criticism tests, the one-sided Kolmogorov-Smirnov tests, the one-sided Berk-Jones tests, the one-sided phi-divergence tests, etc. The input are a group of p-values. The null hypothesis is that they
are i.i.d. Uniform(0,1). In the context of signal detection, the null hypothesis means no signals. In the context of the goodness-of-fit testing, which contrasts a group of i.i.d. random variables to
a given continuous distribution, the input p-values can be obtained by the CDF transformation. The null hypothesis means that these random variables follow the given distribution. For reference, see
[1]Hong Zhang, Jiashun Jin and Zheyang Wu. "Distributions and power of optimal signal-detection statistics in finite case", IEEE Transactions on Signal Processing (2020) 68, 1021-1033; [2] Hong Zhang
and Zheyang Wu. "The general goodness-of-fit tests for correlated data", Computational Statistics & Data Analysis (2022) 167, 107379.
Author Hong Zhang and Zheyang Wu
Maintainer Hong Zhang <hzhang@wpi.edu>
License GPL-2
Version 0.3.0
Package repository View on CRAN
Install the latest version of this package by entering the following in R:

install.packages("SetTest")
Any scripts or data that you put into this service are public.
SetTest documentation
built on Sept. 12, 2024, 7:41 a.m. | {"url":"https://rdrr.io/cran/SetTest/","timestamp":"2024-11-12T07:20:45Z","content_type":"text/html","content_length":"25813","record_id":"<urn:uuid:eb7f5b4e-fa9b-4c17-b7fb-7879847ba94b>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00769.warc.gz"} |
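As a rough illustration of what these statistics measure, here is a plain-Python sketch of one textbook form of the Higher Criticism statistic (the Donoho–Jin form over sorted p-values). This is not the SetTest implementation — the package covers many variants, finite-sample null distributions, and power calculations that this sketch omits.

```python
import math

def higher_criticism(pvalues):
    """Textbook HC statistic over sorted p-values p_(1) <= ... <= p_(n):
    HC = max_i sqrt(n) * (i/n - p_(i)) / sqrt(p_(i) * (1 - p_(i)))."""
    p = sorted(pvalues)
    n = len(p)
    hc = float("-inf")
    for i, pi in enumerate(p, start=1):
        if 0.0 < pi < 1.0:  # skip degenerate p-values at 0 or 1
            hc = max(hc, math.sqrt(n) * (i / n - pi) / math.sqrt(pi * (1.0 - pi)))
    return hc

# A few very small p-values among nulls drive HC up:
print(higher_criticism([0.001, 0.5]))  # large: evidence of sparse signals
print(higher_criticism([0.25, 0.75]))  # modest: consistent with Uniform(0,1)
```

Under the null hypothesis that the p-values are i.i.d. Uniform(0,1), large HC values indicate departure from uniformity, which is the signal-detection setting described above.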
Surrogate Models
Berkeley Lab researchers have been heavily involved in ‘traditional’ physics-based numerical simulations for a number of years. These are often computationally intensive, especially when run at
increasingly higher resolutions to determine finer structure.
More recently, there has been considerable progress and promise in the use of AI/ML as surrogate models. These include approaches that approximate the full simulation with entirely data-driven
approaches and other approaches that run surrogate models alongside low-resolution traditional simulations to model finer-scale phenomena. These models usually require high-fidelity simulations for
training but once trained, can produce new simulation data at orders of magnitude faster than traditional approaches, offering a transformative potential to run more complex scientific simulations.
Our researchers and engineers are actively tackling several challenges in producing effective AI/ML surrogate models for science, including incorporating physics-informed constraints into models,
quantifying uncertainty, interoperation with existing scientific simulations, and scalable training of large models on high performance computing (HPC) resources.
This project uses complex process simulation models for advanced biofuel and bioproduct production to develop and train machine learning (ML)-based surrogate models. Researchers need flexibility to
explore different scenarios and understand how their work may impact upstream and downstream processes, as well as cost and greenhouse gas emissions. To address this need, the team uses the
Tree-Based Pipeline Optimization Tool (TPOT) to automatically identify the best ML pipelines for predicting cost and mass/energy flow outputs. This approach has been used with two promising bio-based
jet fuel blendstocks: limonane and bisabolane. The results show that ML algorithms trained on simulation trials may serve as powerful surrogates for accurately approximating model outputs at a
fraction of the computational expense. Contact: Corrine Scown (Scown on the Web)
The gpCAM project consists of an API and software designed to make autonomous data acquisition and analysis for experiments and simulations faster, simpler, and more widely available by leveraging
active learning. The tool is based on a flexible and powerful Gaussian process regression at the core, which proves the ability to compute surrogate models and associated uncertainties. The
flexibility and agnosticism stem from the modular design of gpCAM, which allows the user to implement and import their own Python functions to customize and control almost every aspect of the
software. That makes it possible to easily tune the algorithm to account for various kinds of physics and other domain knowledge and constraints and to identify and find interesting features and
function characteristics. A specialized function optimizer in gpCAM can take advantage of high performance computing (HPC) architectures for fast analysis time and reactive autonomous data
acquisition. Contact: Marcus Noack
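Although gpCAM provides a full API, the core surrogate idea — a Gaussian process whose posterior mean serves as the prediction and whose posterior variance quantifies uncertainty — can be sketched in a few lines of NumPy. This is a generic, minimal illustration; the kernel choice, length scale, and function names here are my own assumptions, not the gpCAM interface.

```python
import numpy as np

def rbf(a, b, length=0.15):
    # Squared-exponential (RBF) kernel between two 1-D point sets.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6, length=0.15):
    """Posterior mean and (clipped) variance of a zero-mean GP surrogate."""
    K = rbf(x_train, x_train, length) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test, length)
    Kss = rbf(x_test, x_test, length)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks.T @ alpha
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.maximum(np.diag(cov), 0.0)  # clip tiny negative variances

# Treat a cheap analytic function as the "expensive simulation":
x = np.linspace(0.0, 1.0, 15)
y = np.sin(2.0 * np.pi * x)
mean, var = gp_posterior(x, y, np.array([0.25]))
```

In an autonomous-experiment loop of the kind gpCAM supports, the next measurement point would then be chosen where the posterior variance (or an acquisition function built from it) is largest.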
PySMO is an open-source tool for generating accurate algebraic surrogates that are directly integrated with an equation-oriented (EO) optimization platform, specifically IDAES and its underlying
optimization library, Pyomo. PySMO includes implementations of several sampling and surrogate methods (polynomial regression, Kriging, and RBFs), providing a breadth of capabilities suitable for a
variety of engineering applications. PySMO surrogates have been demonstrated to be very useful for enabling the algebraic representation of external simulation codes, black-box models, and complex
phenomena in IDAES and other related projects. Contact: Oluwamayowa Amusat (Amusat on the Web)
Cosmological probes pose an inverse problem where the measurement result is obtained through observations, and the objective is to infer values of model parameters that characterize the underlying
physical system—our universe, from these observations and theoretical forward-modeling. The only way to accurately forward-model physical behavior on small scales is via expensive numerical
simulations, which are further "emulated" due to their high cost. Emulators are commonly built with a set of simulations covering the parameter space; the aim is to establish an approximately
constant prediction error across the hypercube. We provide a description of a novel statistical framework for obtaining accurate parameter constraints. The proposed framework uses multi-output
Gaussian process emulators that are adaptively constructed using Bayesian optimization methods with the goal of maintaining a low emulation error in the region of the hypercube preferred by the
observational data. We compare several approaches for constructing multi-output emulators that enable us to take possible inter-output correlations into account while maintaining the efficiency
needed for inference. Contacts: Dmitriy Morozov, Zarija Lukic
We developed a neural network-based surrogate model for simulating the process whereby partons are converted to hadrons for high energy physics. The development is the first step towards a fully
data-driven neural network-based hadronization simulator. Contact: Xiangyang Ju (Ju on the Web)
Multi-physics cosmological simulations are powerful tools for studying the formation and evolution of structure in the universe but require extreme computational resources. In particular, modeling
the hydrodynamic interactions of baryonic matter adds significant expense but is required to accurately capture small-scale phenomena and create realistic mock-skies for key observables. This project
uses deep neural networks to reconstruct important hydrodynamical quantities from coarse or N-body-only simulations, vastly reducing the amount of compute resources required to generate high-fidelity
realizations while still providing accurate estimates with realistic statistical properties. Contact: Peter Harrington | {"url":"https://cs.lbl.gov/what-we-do/machine-learning/surrogate-models/","timestamp":"2024-11-02T18:55:30Z","content_type":"text/html","content_length":"31144","record_id":"<urn:uuid:51a19356-f863-4b26-81d6-11083a8f655f>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00828.warc.gz"} |
How to Assign Columns Names In Pandas?
To assign column names in pandas, you can simply access the columns attribute of the DataFrame and assign a list of column names to it. For example, if you have a DataFrame called df, you can assign
column names like this:
df.columns = ['Column1', 'Column2', 'Column3']
This will rename the columns of the DataFrame to 'Column1', 'Column2', and 'Column3'. You can also assign column names when reading in a CSV file by using the names parameter in the read_csv
function. For example:
df = pd.read_csv('data.csv', names=['Column1', 'Column2', 'Column3'])
This will read in the CSV file with the specified column names. You can then access and manipulate the columns using these assigned names.
How to rename columns in pandas DataFrame?
You can rename columns in a pandas DataFrame by using the rename() method. Here is an example:
import pandas as pd

# Create a sample DataFrame
data = {'A': [1, 2, 3, 4, 5],
        'B': [6, 7, 8, 9, 10]}

df = pd.DataFrame(data)

# Rename columns
df = df.rename(columns={'A': 'Column1', 'B': 'Column2'})

print(df)
This will output:

   Column1  Column2
0        1        6
1        2        7
2        3        8
3        4        9
4        5       10
In the rename() method, you can provide a dictionary with keys as the old column names and values as the new column names that you want to rename the columns to.
How to assign column names from a separate file in pandas?
To assign column names from a separate file in pandas, you can follow these steps:
1. Read the file containing the column names into a pandas DataFrame using the read_csv() function.

column_names = pd.read_csv('column_names.csv')

2. Extract the column names from the DataFrame and store them in a list.

column_names_list = column_names['Column Name'].tolist()

3. Read the data file for which you want to assign column names into a pandas DataFrame, and specify the names parameter to use the extracted column names.

data = pd.read_csv('data_file.csv', names=column_names_list)
By following these steps, you can assign column names from a separate file to a pandas DataFrame.
How to assign column names with special characters in pandas?
To assign column names with special characters in pandas, you can use the rename method or directly set the column names within the dataframe creation statement. Here's an example using the rename
import pandas as pd

# Create a dataframe with default column names
data = {'A': [1, 2, 3], 'B': [4, 5, 6]}
df = pd.DataFrame(data)

# Rename columns with special characters
df = df.rename(columns={'A': 'Column$1', 'B': 'Column#2'})

print(df)
Alternatively, you can directly set the column names with special characters during dataframe creation like this:
import pandas as pd

# Create a dataframe with column names containing special characters
data = {'Column$1': [1, 2, 3], 'Column#2': [4, 5, 6]}
df = pd.DataFrame(data)

print(df)
Both of these methods will allow you to assign column names with special characters in pandas. | {"url":"https://bloggdog.dsn-hkpr.ca/blog/how-to-assign-columns-names-in-pandas","timestamp":"2024-11-11T01:57:52Z","content_type":"text/html","content_length":"158726","record_id":"<urn:uuid:116301e7-2f4b-49fe-bccb-ebeca2562a1d>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00127.warc.gz"} |
Algebra 1 Tutoring in Stamford, CT | Hire the Best Tutors Now!
I have a bachelors degree in Economics from the University of Texas, a masters degree from SUNY Buffalo, and 10 years of experience in education. I've taught in 4 different countries around the world
on 4 different continents. I've taught all of the following courses: Pre-Algebra, Algebra 1, Geometry, Algebra 2, Pre-Calculus, Intro to Calculus, AP AB Calculus, and Economics. I enjoy seeing
students grow and watching the light bulb go off when they have that ... See more | {"url":"https://heytutor.com/tutors/algebra-1/ct/stamford/","timestamp":"2024-11-02T17:22:41Z","content_type":"text/html","content_length":"202070","record_id":"<urn:uuid:b230c582-a383-453e-81ed-48925b678712>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00716.warc.gz"} |
The number one (1) is the first Positive Integer. It is an Odd Number. Although the number 1 used to be considered a Prime Number, it requires special treatment in so many definitions and
applications involving primes greater than or equal to 2 that it is usually placed into a class of its own. The number 1 is sometimes also called ``unity,'' so the nth roots of 1 are often called the
nth Roots of Unity. Fractions having 1 as a Numerator are called Unit Fractions. If only one root, solution, etc., exists to a given problem, the solution is called Unique.
The Generating Function having all Coefficients 1 is given by

1/(1-x) = 1 + x + x^2 + x^3 + ... (for |x| < 1).
See also 2, 3, Exactly One, Root of Unity, Unique, Unit Fraction, Zero
© 1996-9 Eric W. Weisstein | {"url":"http://drhuang.com/science/mathematics/math%20word/math/0/0002.htm","timestamp":"2024-11-14T20:07:52Z","content_type":"text/html","content_length":"4324","record_id":"<urn:uuid:c34e40be-4d38-4fef-8c58-17a14f0943ff>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00387.warc.gz"} |
Who can explain correlated equilibrium for my game theory assignment? | Hire Someone To Do My Assignment
Who can explain correlated equilibrium for my game theory assignment? Okay, that sounds a bit more plausible than what is being put on paper. I had a friend who figured out that the way you do your
game assumes that the grid changes from each row you choose, rather than the same row that one is in just as much as you have given in your game – hence the name “game”. He was happy to check this
out when I played his game. So the problem is this. For me the paper fails because it simply follows the first quote; in the sentence there, to show the overlap between rows, you have: In the second
sentence, both rows have occupied the same point. All three of them in the first two are occupied by: Right next to the in the order they appear in that row At the end of the sentence this is clear.
We’re using the true, not the false line: The third and fourth rows give up in this rather complex arrangement (I’m pretty sure it had one more row in the first line – and that row is already
occupied by one thing), and somehow the system works out that we’re not tied to our previous row (i.e. there are several rows at any time in any direction that would help the game as well as from the
beginning). The fifth and sixth make no sense – just so we can get some basic business logic out of this analysis. This gets even more confusing for me at play. When you make a series of comparisons
in R … — there you can work out the exact order of operations and get the most common node of the bunch. All right, maybe I need to go get into a different sort of style or discussion for whatever
reason and fill in the blanks? Does it matter? I’ll give you two specific questions about the possible theories that might arise. First, the claim that row 1 and rows 2-3 behave perfectly as though
row 1 has occupied the same point in each of the two columns? On paper, one could argue that the least common thread in the chain of operations is row 3 which (by the way) happens to be occupied the
least often. But it seems to me that the other arguments could be made here: Row 9 has occupied the least frequent row Row 20 has occupied the least common row Row 41 has occupied the least common
row (and the other two have occupied the least common row) Row 51 has occupied almost all of the common most common row (and the third and fourth rows have occupied the least common most common most
common most common odd row). On the actual game theory, that works – however it might. If the common most common row is occupied the least common common row, row 51 will occupy almost all of the row,
but it will not occupy any more than row 10, even if you take row 51 as of rowWho can explain correlated equilibrium for my game theory assignment? On the face of it, I don’t know of anything that I
can explain correlated equilibrium the way that WolframAlpha did — more like a simulation — but I know of nothing that means anything to anyone. And I can show you how going by some random
combination of predictability and correlation can produce an unexpected outcome. Like magic, mixing and other tricks for creating the universal equilibria in the world. This random process is perhaps
the most original and powerful method scientists came up with.
An intuition I find to be hard to do really is to interpret as little as possible. One day, my teacher called me, answered my questions and said, “This is important, Professor; I’m going to figure
out some simple algorithms. Please let me try it out.” I accepted, and had to have my input. Today, I’m familiar with many small things on topic and see almost no evidence for the randomness of it.
But what I did was set up like 10,000 times this whole process. Each time, I did the math. I don’t know why it worked, but it did. I could be in heaven; there were many possibilities, but I didn’t
just give it a thought. Of course it will happen many times, I thought. One thing that I didn’t know: When you take the large factor from many inputs, you will have different results. Like a
numerical simulation I was told to log everything out of my head. I think that is what said, “Heck, if I’m not mistaken, he’s right!…” That it took 3 seconds from the beginning, I couldn’t care less.
It actually started working, because (this is what I have in mind) the logarithm of the input is actually in the output of the algorithm. This is a problem for many algorithms, as in the worst case,
computing the equilibria that were before the algorithm. Every other method would take you long (2 sec) to go through the process and give you more output. Imagine a software program that determines
some conditions for the input data to be consistent, say? Say, for example, that the input value is a “random” value of 0 (1) and then applies the rule “a < B" to get A = A.
B is "A", and B is a (1) + B = 0. In this case, everything is randomized. It turns out, no matter how big A is, it is still possible to have any sort of stable "random" amount of
data (even in the worst case.) I think in practice that’s the problem, since we’re on the lower end of an “equilibrium”, and changing our values can change the underlying system. Now I’m ready to use
here my method of finding the coefficients of this theorem for common solution of linear equations, which is really quite a strange application. The equations were originally approximatives of the
relation between quantity visite site and quantity B of the random combinations of sum 1/A, 1/B (where A is a quantity independent of B) and 1/A (B is a quantity independent of A). Now I could
actually find out about this equation, but I’m not able to. The problem is that I really don’t know what to do with this problem. I do.Who can explain correlated equilibrium for my game theory
assignment? If left alone, I can’t reproduce the case of the Pearson’s correlation and correlation coefficients in a linear scale. Regarding the correlation coefficient itself, I can’t reproduce it
in a linear scale because some people are “pink” or my colleagues and others are yellow, but mostly that connection. “An inverse connection, oh lord.” Apparently this is a pretty standard concept.
But let’s split the process onto a linear scale and go by a local linear scale. Can you explain it in a linear one? If so, can you then be kind to the “focussing and non-interference” and
“escientists” (says the physicist of time) so that when the observed time differences correlate in very good (or with zero-derivative) form, (zero-derivative, yes) then they can apply me to a more
physical theory? I was just wondering if anyone of you of interest thought that linear correlation and correlation coefficients over time can be translated onto a continuum? Just curious. Regarding
the correlation coefficient itself, I can’t reproduce it in a linear scale because some people are “pink” or my colleagues and others are yellow, but mostly that connection.
“An inverse connection, oh lord.” Apparently this is a pretty standard concept. But let’s split the process onto a linear scale and go by a local linear scale. Can you then be kind to the “focussing
and non-interference” and “escientists” (says the physicist of time) so that when the observed time differences correlate in very good (or with zero-derivative) form, (zero-derivative, yes) then they
can apply me to a more physical theory? A) In general there is an inverse correlation in years. The usual link is that it’s possible to measure correlays relatively rapidly. But in real systems I
haven’t even had a chance to measure correlations. I can compare the theoretical predictions against observed ones. That makes measuring the time difference perhaps easier to do. B) If you are
considering linear algebra instead of my use of a linear scale you still question classical equilibrium and correlation as an initial question. There are several known linear models of games, such as
many of the popular games of the 20th-century. Perhaps we could apply my analysis to 2d games or 3d games? But that’ll require making several assumptions about the game, so it is not as fast as some
of my preferred models. I’ll just disagree with the correlation coefficient I have and the data from other games and algorithms in an academic paper. And I like the theory as a whole, but don’t
expect more or less continuity of the data from the other games versus their methods for constructing linear models. Some games have been fine until recently enough to be used when studying the power
of computer algorithms. But if the correlation is a mere | {"url":"https://assignmentinc.com/who-can-explain-correlated-equilibrium-for-my-game-theory-assignment","timestamp":"2024-11-04T14:15:34Z","content_type":"text/html","content_length":"111493","record_id":"<urn:uuid:97f3ba3b-8be3-4f5a-bbab-282d208e0f51>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00595.warc.gz"} |
Team:Oxford/how much can we degrade
Before we began using synthetic biology to develop a system for bioremediation of chlorinated waste, we thought it was important to work towards an answer to the above question. To do this, we used
information from the literature (Gisi et al, 1998) about the metabolism of the native bacterium Methylobacterium extorquens DM4.
We then worked on a model to calculate both the pH change of the system and the volume of DCM degraded over time. This was achieved by using a combination of Michaelis-Menten kinetics, ordinary
differential equations and stoichiometric relations.
How much DCM could the native bacterium degrade?
Calculating total DCM degraded
1) Obtaining a theoretical growth curve
To start this calculation, we needed to know how many bacteria we could expect to have in our system. To do this, we used the realistic bead dimensions and numbers shown in the Matlab screenshot on the left. This allowed us to calculate the volume of bacteria we predict to be infused into the agarose beads. We then used the assumption that the bacteria would grow to an optimum density of 10^7 bacteria per ml of agarose[1] and combined these to give us the scaling of the growth curve:

A = N × V × ρ
A = Gompertz vertical scaling constant
N = number of beads
V = volume of each bead in ml
ρ = number of bacteria per ml
Our theoretical growth curves were based on Gompertz functions for the reasons explained when you follow this link:
(what are Gompertz functions?)
. An example output growth curve of the model is shown here.
The scaling of the growth rate of the Gompertz function comes directly from growth curves of the DM4 bacteria that we obtained in the lab.
2) Calculating the volume of DCM that the bacteria can degrade
Our next task was to model the average rate of DCM degradation by M. extorquens DM4. Using Michaelis-Menten kinetics[2], this was predicted to be:

d[NDCM]/dt = kcat [DcmA] [DCM] / (Km + [DCM])

d[NDCM]/dt = rate of DCM molecule degradation (s-1)
kcat = dcmA turnover rate (= 0.6 s-1 for DM4)
[DCM] = DCM concentration (= 0.02M for our system)
[DcmA] = Number of DcmA molecules per cell (= 87576)
Km = Michaelis constant ( = 9 x 10^-6 M for this reaction)
Through the use of diffusion-limiting beads, [DCM] is kept constant at 0.02M. This is significantly larger than our Michaelis constant, so the equation can be simplified using the approximation [DCM] >> Km, which reduces the rate to its zero-order maximum: d[NDCM]/dt ≈ kcat [DcmA].
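To illustrate the numbers involved, here is a small sketch of this rate calculation in Python (using the parameter values listed above), checking that the zero-order simplification is accurate:

```python
# Michaelis-Menten rate of DCM degradation per cell, using the
# parameter values quoted above for M. extorquens DM4.
kcat = 0.6        # DcmA turnover rate (s^-1)
dcmA = 87576      # DcmA molecules per cell
dcm = 0.02        # DCM concentration (M), held constant by the beads
km = 9e-6         # Michaelis constant (M)

rate_full = kcat * dcmA * dcm / (km + dcm)   # molecules s^-1 per cell
rate_simplified = kcat * dcmA                # zero-order limit, [DCM] >> Km

# The saturation term [DCM]/(Km + [DCM]) is ~0.9996, so the
# simplification introduces an error well below 0.1 %.
relative_error = 1 - rate_full / rate_simplified
print(rate_full, rate_simplified, relative_error)
```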
1. (Dr George Wadhams, personal communication, August 4, 2014)
2. Michaelis L. and Menten M.L. Kinetik der Invertinwirkung Biochem. Z. 1913; 49:333–369 English translation Accessed 6 April 2007
How much would the pH change by?
Calculating the pH change
The degradation of DCM by DcmA produces hydrochloric acid (HCl) according to the reaction below:

CH2Cl2 + H2O → HCHO + 2HCl
NHCL = molecules of HCl
NDCM = molecules of DCM
V1(t) = aqueous layer volume (ml)
V2(t) = DCM layer volume (ml)
y(t) = bacteria population
MRDCM = DCM molar mass
NA = Avogadro’s constant (mol-1)
ρDCM = DCM density (kg/dm3)
[HCl] = HCl concentration (M)
These equations were then simulated using a series of linked functions on MatLab and the results are displayed below:
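The MatLab implementation itself is not reproduced here, but a minimal forward-Euler sketch of the same balance (assuming a fixed population, a stoichiometry of two HCl per DCM, full dissociation, and illustrative volumes and bead numbers rather than our calibrated inputs) looks like:

```python
import math

# Forward-Euler sketch of HCl accumulation and pH drop during DCM
# degradation. Population y, volumes and enzyme copy number are
# illustrative placeholders, not the calibrated model inputs.
NA = 6.022e23          # Avogadro's constant (mol^-1)
kcat, dcmA = 0.6, 87576
y = 1e7 * 50.0         # bacteria: 10^7 per ml in 50 ml of agarose (assumed)
V1 = 1.0               # aqueous layer volume (L), assumed constant here
dt = 60.0              # time step (s)

rate_per_cell = kcat * dcmA              # DCM molecules s^-1 per cell
hcl_molar = 0.0
ph_track = []
for _ in range(24 * 60):                 # simulate 24 h
    dcm_mol = y * rate_per_cell * dt / NA    # mol DCM degraded this step
    hcl_molar += 2.0 * dcm_mol / V1          # 2 HCl per DCM, fully dissociated
    # pH of the strong acid alone; water autoionisation is neglected,
    # so early values above 7 are an artefact of the sketch.
    ph_track.append(-math.log10(max(hcl_molar, 1e-14)))

print(ph_track[0], ph_track[-1])
```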
As you can see from the above graph, the native bacterium M. extorquens DM4 will not be able to degrade a large volume of DCM. It will therefore not be suitable for disposing of chlorinated waste efficiently. There are several reasons for this, including:
The degradation of DCM is a stress response for M. extorquens DM4. Therefore, when metabolising DCM, it is also up-regulating stress response molecules such as repair enzymes, which is an additional
strain on cellular metabolism.
M. extorquens DM4 has a doubling time of 8-9 hours, so it takes 2 weeks to grow up a colony. Additionally, they proved very difficult to grow in the lab, both on standard growth agars and specialised nutrient agars.
M. extorquens DM4 are not yet well-understood bacteria, particularly with respect to their metabolism.
However, using synthetic biology, we can dramatically increase the amount of chlorinated solvents that certain bacteria can degrade. This is because:
We will use E. coli and P. putida in order to break down DCM. The advantage is that these are extremely well-characterised bacteria that are easy to grow in the lab.
We are expressing microcompartments in both E. coli and P. putida, which prevent toxic intermediates of DCM metabolism from damaging the cells. This is necessary because, unlike M. extorquens DM4, E. coli and P. putida have not evolved for the degradation of DCM and the toxic intermediates released during its metabolism.
We will upregulate and express formaldehyde dehydrogenase in P. putida and E. coli, respectively. This will help the cells deal with formaldehyde, which is a genotoxic intermediate produced in the
degradation of DCM.
This model proves the power of computer modelling and shows the importance of using synthetic biology to solve global problems. The exact amount of DCM that could be degraded depends largely on input conditions, such as the number of beads. While more beads in the system allow more rapid DCM removal, a very large system can prove challenging to construct and monitor.
(What do we mean by beads?)
What is a Gompertz function?
Gompertz Functions
We used a variation of a sigmoid function called a Gompertz function to model the theoretical growth of our bead-encapsulated bacteria. These functions are well-established[1] as a method of
predicting population growth in a confined space, which will be the case if we encapsulate them in agarose beads. Growth rates follow a sigmoidal curve, where they first increase and then slow
because of limited resources and population density. We assumed that the population of bacteria over time will follow one of these functions (when scaled correctly).
Gompertz functions are of the form:
y(t) = population size as a function of time
A = maximum sustainable population
B = shift on time axis
C = growth rate
Using this theoretical form, we could then calibrate the values of our variables through comparison with actual growth curve data from wet lab experiments. This was an important step because it then
allowed us to calculate the total theoretical degradation rate of DCM that our kit can support.
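The Gompertz form itself is short to code; the sketch below uses the standard three-parameter form y(t) = A·exp(−B·exp(−C·t)) with illustrative constants rather than our calibrated DM4 values:

```python
import math

def gompertz(t, A, B, C):
    """Gompertz growth curve: A scales the maximum population,
    B shifts the curve along the time axis, C sets the growth rate."""
    return A * math.exp(-B * math.exp(-C * t))

# Illustrative constants (not our calibrated values)
A, B, C = 1e9, 5.0, 0.2
early = gompertz(0, A, B, C)
late = gompertz(100, A, B, C)
print(early, late)   # the curve rises sigmoidally towards A
```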
Varying each of the three constants allows us to fit our Gompertz function to the actual growth data. The effect of varying each constant is shown below:
Zwietering, M. H.; Jongenburger, I.; Rombout, F. M.; van 't Riet, K. (1990), "Modeling of the Bacterial Growth Curve", Applied and Environmental Microbiology 56 (6): 1875–1881
How can we reduce the drop in pH?
Using buffers to reduce the pH change of our system
We have investigated the effect of using buffers in the aqueous part of our system.
As a first approximation, we model our system of bacteria turning over DCM, producing HCl, as a chemical system in which HCl immediately enters the 'bulk' (extracellular) solution; in this system we
have a single buffer (HEPES) to reduce the drop in pH, maximising the amount of DCM the entire system can degrade before the pH drops below a toxic level.
Derivation of the Van Slyke equation:
To simplify calculations, we assume that HCl dissociates completely and that the system volume is 1 L (allowing concentration and number of moles to be treated interchangeably).
Electro-neutrality condition for a system of two substances, HA and BOH:
Total concentration of buffer:
(1.9) Van Slyke equation:

β = dn/d(pH) = ln(10) × ( [H^+] + K_W/[H^+] + Σ_i C_i K_(A_i) [H^+] / (K_(A_i) + [H^+])^2 )

β = buffer capacity
n = number of equivalents of strong acid added (per L solution) – we have this as a function of t: approximately addition at a constant rate.
K_(A_i) = K_A of component buffer i
K_W = ionic product of water, 10^(-14)
C_i = concentration of component buffer i
Taking the reciprocal, and substituting the definition:
Derivation shown is based on Adam Hulanicki book Reakcje kwasów i zasad w chemii analitycznej, 2nd ed., PWN, Warszawa 1980 (English edition: Reactions of acids and bases in analytical chemistry;
Chichester, West Sussex, England: E. Horwood; New York: Halsted Press, 1987).
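As a numerical sketch, the Van Slyke buffer capacity can be evaluated directly (assuming the standard single-buffer form and a textbook HEPES pKa of about 7.5; the exact pKa depends on temperature and ionic strength):

```python
import math

def buffer_capacity(ph, c_buffer, pka, kw=1e-14):
    """Van Slyke buffer capacity (mol/L per pH unit) for a single
    weak buffer of total concentration c_buffer, plus the water terms."""
    h = 10.0 ** (-ph)
    ka = 10.0 ** (-pka)
    return math.log(10) * (kw / h + h + c_buffer * ka * h / (ka + h) ** 2)

# Capacity of 0.1 M HEPES (pKa ~ 7.5, an assumed textbook value);
# the buffer is most effective when the pH equals its pKa.
beta_at_pka = buffer_capacity(7.5, 0.1, 7.5)
beta_off_peak = buffer_capacity(5.5, 0.1, 7.5)
print(beta_at_pka, beta_off_peak)
```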
Another possibility of reducing the overall pH change is adding a lot more water to the system. This is the easier method and could be used for single-use DCM disposal kits. However, it is
impractical in large scale applications because of the very large amount of water that would have to be added.
How does the amount of water added affect the output?
Calculating the pH change
We then used our model to predict the effect on the system if you simply increase the amount of water in the aqueous layer. This shows how much water is necessary to prevent the pH from dropping too
much. It demonstrates why addition of a buffer is the more reasonable choice to control the pH of the system.
How does the kcat of the system affect the output?
The apparent uni-molecular rate constant kcat, also called the turnover number, denotes the maximum number of enzymatic reactions catalysed per second.
We used our model to predict the response of the system to a change in the kcat value of the DCM degradation enzyme, dcmA. Increasing the value of kcat by a significant amount is unrealistic in the
length of our project. However, in future work, the kcat could potentially be substantially improved.
In the graph shown here, the total volume degraded doesn't change. This is because the amount of HCl that the system requires to reach a toxic pH level is constant, as we are not varying the volume
of the aqueous layer. To increase the total amount of DCM degraded, we simply need to add more water or a pH buffer to the system. However, increasing the kcat value dramatically increases the rate
of the degradation. This hints towards a valid future area of research.
Potential benefits?
Increasing the kcat of the enzyme would greatly improve our system, as you can see in the models shown above.
By simple adjustment of input parameters, our model could be adapted to simulate the degradation of other types of toxic compounds in other bacteria with different enzymes. This modelling technique
is therefore particularly powerful, because if you know certain parameters about the system, you can simulate how much of a particular product can be produced by a bacterial system.
More broadly, the potential benefit of months of synthetic biology research could be analysed within a few hours using this model, as long as the relevant parameters are roughly known.
To demonstrate what we mean by this, here are some other processes with different kcat values[1]:
1. Mathews, C.K.; van Holde, K.E.; Ahern, K.G. (10 Dec 1999). Biochemistry (3 ed.). Prentice Hall. ISBN 978-0-8053-3066-3.
How can we use the pH drop?
How could we measure the pH?
As we’ve built the model predicting the pH change very accurately, we have been thinking about how to use this system change to our advantage. There are two viable options that we’ve considered.
By using a pH indicator that changes colour at a pH of around 6, we could use the same electronics that we’ve developed for detecting the fluorescence of the sfGFP in the biosensor to detect the colour change, and therefore the point at which the pH becomes dangerously low. This has the advantage of making the biosensor very user friendly while keeping the system cheap. The other option is to use a commercially available digital pH meter to signal a warning when the pH gets too low. This could require occasional maintenance of the pH sensor, but would have the advantage of being more accurate.
How is the pH useful?
The pH in our system is an indirect measure of the amount of DCM that we’ve degraded. It is therefore possible to calculate the required amount of water that has to be added to a certain amount of
DCM to ensure the pH remains neutral. If no buffer solution is added, initial calculations (see the graph) indicate that there is a very big difference between the relative volumes of the amount of
DCM added and the volume of the aqueous layer. This highlights the importance of using a pH buffer solution in the aqueous layer.
Therefore, the system that detects the amount of DCM that we’ve degraded could link the digital pH read-out to the initial amount of water added.
Solving simultaneous equations
Paul Yates discusses how to solve them and their applications
A linear equation with a single unknown is relatively straightforward to solve. For example
x + 7 = 11
We can subtract 7 from both sides to give the answer x = 4.
However, if there are two unknowns, such as
x + y = 4
there are various values of x and y that will satisfy the equation. In this case we could have x = 0 and y = 4; x = 1 and y = 3; x = y = 2; x = 3 and y = 1; or x = 4 and y = 0. There are also infinitely many non-integer answers. To find the values of x and y we need more information.
Let’s suppose we are also told that
x – y = 2
These two equations are known as simultaneous equations because they are true simultaneously. In general, we need as many equations as we have variables to be able to determine their values.
There are various ways of solving simultaneous equations. Laura Woollard has discussed three methods: elimination, substitution and graphical analysis.^1 She concludes that the elimination method is
the most appropriate for year 10 students, particularly those who are less able. Janet Jagger, however, argues substitution is a more versatile method for solving them and therefore advocates this
approach.^2 Doug French argues both approaches are important.^3 More intuitive methods based on inspection have been discussed by Paul Chambers.^4
I will give some examples, both generic and chemical, that illustrate the techniques used in solving simultaneous equations without getting too concerned about labelling the methods.
In the example above, it should be evident that if we add the equations together, the terms in y will cancel out:
(x + x) + (y − y) = 4 + 2
which becomes
2x = 6
and dividing both sides by 2 gives x = 3. We can substitute this value into the initial equation, giving
3 + y = 4
Subtracting 3 from both sides gives y = 1.
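The elimination method above can be expressed generically for any pair of linear equations a1x + b1y = c1 and a2x + b2y = c2 (a minimal sketch assuming the system has a unique solution):

```python
def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by elimination."""
    det = a1 * b2 - a2 * b1
    if det == 0:
        raise ValueError("no unique solution")
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

# The worked example: x + y = 4 and x - y = 2
print(solve_2x2(1, 1, 4, 1, -1, 2))  # → (3.0, 1.0)
```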
As another example, consider the pair of equations

xy = 20

x/y = 5

This time we need to recognise that multiplying the equations together will eliminate y, so that

xy × (x/y) = 20 × 5

which simplifies to

x^2 = 100
Taking the square root of each side gives the solution x = 10. Substituting into the first equation then gives
10y = 20
and dividing both sides by 10 gives y = 2. There is also a solution with the corresponding negative values since −10 is also a square root of 100.
There are relatively few places in chemistry where we need to solve simultaneous equations, but three examples are given below.
Vapour pressure
The vapour pressure p above a mixture of two liquids is given by the equation
p = x[1] p[1]^* + x[2] p[2]^*
where x[1] and x[2] are the mole fractions of components 1 and 2 of the mixture, and p[1]^* and p[2]^* are the corresponding vapour pressures of the pure solvents.
The vapour pressure above a mixture containing 0.6 mole fraction of toluene in benzene is 6.006 kPa and falls to 5.398 kPa when the mole fraction is increased to 0.7. With this information we can set
up the equations
6.006 = 0.6 p[1]^* + 0.4 p[2]^*
5.398 = 0.7 p[1]^* + 0.3 p[2]^*
If we multiply the first equation by 0.7 and the second by 0.6, we obtain a pair of equations in which the coefficients of p[1]^* are equal (similar to how one might use the lowest common denominator
method to add and subtract fractions).
4.204 = 0.42 p[1]^* + 0.28 p[2]^*
3.239 = 0.42 p[1]^* + 0.18 p[2]^*
If we now subtract the second modified equation from the first we have
(4.204 − 3.239) = (0.42 − 0.42) p[1]^* + (0.28 − 0.18) p[2]^*
which simplifies to
0.965 = 0.1 p[2]^*
Dividing both sides by 0.1 gives the vapour pressure of pure benzene, p[2]^* = 9.65 kPa.
Substituting this value into the initial equation gives
6.006 = 0.6 p[1]^* + (0.4 x 9.65)
∴ 6.006 = 0.6 p[1]^* + 3.86
Subtracting 3.86 from both sides gives
2.146 = 0.6 p[1]^*
and dividing both sides by 0.6 gives

p[1]^* = 3.58 kPa

So the vapour pressure of pure toluene is 3.58 kPa.
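The same elimination can be scripted step by step (a sketch; note that back-substituting the unrounded p2* gives 3.57 kPa rather than the 3.58 kPa obtained above after rounding p2* to 9.65 kPa):

```python
# Reproduce the elimination steps for the vapour-pressure system:
#   0.6*p1 + 0.4*p2 = 6.006
#   0.7*p1 + 0.3*p2 = 5.398
# Multiply the first by 0.7 and the second by 0.6 so the p1
# coefficients match, then subtract to eliminate p1.
lhs = (6.006 * 0.7) - (5.398 * 0.6)      # 4.2042 - 3.2388 = 0.9654
p2 = lhs / (0.4 * 0.7 - 0.3 * 0.6)       # divide by 0.28 - 0.18 = 0.10
p1 = (6.006 - 0.4 * p2) / 0.6            # back-substitute
print(round(p1, 2), round(p2, 2))        # kPa
```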
The Arrhenius equation
This tells us that the rate constant, k, for a reaction is related to the absolute temperature, T, by the equation

k = A e^(−E[a]/RT)

where E[a] is the activation energy for the reaction, R is the gas constant and A is a constant called the pre-exponential factor.
For the decomposition of 3-oxopentanedioic acid,^5 the rate constant is 4.75 × 10^−4 s^−1 at 20°C and 5.48 × 10^−2 s^−1 at 60°C. We can convert these temperatures to 293 K and 333 K respectively (by
adding 273) and set up the Arrhenius equation in logarithmic form, ln k = ln A − E[a]/RT, for each set of data to give equations in A and E[a]

ln(4.75 × 10^−4) = ln A − E[a]/(R × 293 K)

ln(5.48 × 10^−2) = ln A − E[a]/(R × 333 K)

We can evaluate the natural logarithms and reciprocals to give

−7.652 = ln A − (3.413 × 10^−3 K^−1) E[a]/R

−2.904 = ln A − (3.003 × 10^−3 K^−1) E[a]/R

The simplest way to eliminate one of the variables is to subtract one equation from the other, so the ln A terms cancel. We then have

−7.652 − (−2.904) = −(3.413 − 3.003) × 10^−3 K^−1 × E[a]/R

which simplifies to

−4.748 = −(0.41 × 10^−3 K^−1) E[a]/R

Multiplying both sides of this equation by −R then gives

4.748 R = 0.41 × 10^−3 K^−1 × E[a]

and dividing both sides by 0.41 × 10^−3 K^−1 gives

E[a] = 11580 K × R

Finally, substituting the value of the gas constant R = 8.314 J K^−1 mol^−1 gives

E[a] = 96300 J mol^−1
or 96.3 kJ mol^−1, since the initial data is given to three significant figures.
Substituting into either of the original equations allows them to be solved for A

−7.652 = ln A − E[a]/(R × 293 K)

Combining the numerical terms on the right gives
−7.652 = lnA − 39.524
Adding 39.524 to both sides gives
lnA = 31.872
Finally, taking the exponential delivers a value for A
A = e^31.872 = 6.95 × 10^13
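The whole calculation can be checked numerically (a sketch; small differences from the hand-rounded values above are expected):

```python
import math

# Solve the two Arrhenius equations, ln k = ln A - Ea/(R*T),
# for Ea and A by elimination.
R = 8.314                      # gas constant, J K^-1 mol^-1
k1, T1 = 4.75e-4, 293.0        # rate constant at 20 C
k2, T2 = 5.48e-2, 333.0        # rate constant at 60 C

# Subtracting the two equations eliminates ln A:
Ea = R * math.log(k2 / k1) / (1.0 / T1 - 1.0 / T2)   # J mol^-1
A = k1 * math.exp(Ea / (R * T1))                     # s^-1

print(Ea / 1000, A)   # ~96.3 kJ/mol and ~7e13 s^-1
```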
Steady state approximation
Finally, simultaneous equations arise when we analyse the mechanisms of multistep reactions by studying their kinetics. For example, N[2]O[5] decomposes spontaneously according to the following
stoichiometric equation
2N[2]O[5] ⇌ 4NO[2] + O[2]
which can be broken down into a set of elementary steps, each with its own simple rate law:
N[2]O[5] + M ⇌ NO[2] + NO[3] + M
(Rate constants k[1] and k[2] for forward and reverse reactions respectively)
NO[2] + NO[3] → NO + O[2] + NO[2] (Rate constant k[3])
NO + NO[3] → 2NO[2] (Rate constant k[4])
In these reactions, M is a chemically inert gas.
It is useful in the kinetic analysis to be able to express the concentration terms of the reactive intermediates NO and NO[3] in terms of the species that appear in the final stoichiometric reaction
equation (ie N[2]O[5] and NO[2]). This can be done by assuming they are in the ‘steady state’, such that their concentrations are constant, and therefore the rates of change of those concentrations
are set to zero. This kinetic analysis gives a pair of simultaneous equations in the two variables [NO] and [NO[3]]
k[1][N[2]O[5]][M] − k[2][M][NO[2]][NO[3]] − k[3][NO[2]][NO[3]] − k[4][NO][NO[3]] = 0
k[3][NO[2]][NO[3]] − k[4][NO][NO[3]] = 0
Dividing both sides of the second equation by k[4][NO[3]] gives an expression for [NO] that is independent of [NO[3]]

[NO] = k[3][NO[2]]/k[4]
From the second original equation, we can see that k[3][NO[2]][NO[3]] = k[4][NO][NO[3]]. Substituting this into the first equation eliminates the [NO] term
k[1][N[2]O[5]][M] − k[2][M][NO[2]][NO[3]] − 2k[3][NO[2]][NO[3]] = 0
Collecting the terms in [NO[3]] gives
k[1][N[2]O[5]][M] – [NO[3]]{k[2][M][NO[2]] + 2k[3][NO[2]]} = 0
which rearranges to
[NO[3]]{k[2][M][NO[2]] + 2k[3][NO[2]]} = k[1][N[2]O[5]][M]
Finally dividing both sides by {k[2][M][NO[2]] + 2k[3][NO[2]]} gives a solution for [NO[3]]

[NO[3]] = k[1][N[2]O[5]][M] / {k[2][M][NO[2]] + 2k[3][NO[2]]}
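These steady-state expressions can be sanity-checked numerically: with arbitrary illustrative rate constants and concentrations, both rate-of-change equations should evaluate to zero:

```python
# Arbitrary illustrative values for the rate constants and the
# concentrations of the stable species (not measured data).
k1, k2, k3, k4 = 2.0, 0.5, 1.3, 4.0
N2O5, NO2, M = 0.010, 0.004, 1.0

# Steady-state solutions derived in the text
NO = k3 * NO2 / k4
NO3 = k1 * N2O5 * M / (k2 * M * NO2 + 2 * k3 * NO2)

# Both rate-of-change expressions must vanish at steady state
eq_NO3 = k1 * N2O5 * M - k2 * M * NO2 * NO3 - k3 * NO2 * NO3 - k4 * NO * NO3
eq_NO = k3 * NO2 * NO3 - k4 * NO * NO3
print(eq_NO3, eq_NO)
```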
Paul Yates is acting director of learning and teaching at the Learning and Teaching Institute, University of Chester, UK
1. L Woollard, The STeP Journal, 2015, 2, 88
2. J M Jagger, Mathematics in School, 2005, 34, 32
3. D French, Mathematics in School, 2005, 34, 30
4. P Chambers, Mathematics in School, 2007, 36, 30
5. R J Silbey and R A Alberty, Physical Chemistry (3rd ed). John Wiley, 2001
Suppose that the actual demand for regular apartments at the $500,000 profit margin, DR, is such that the Stargrove realized a profit of $500,000 from selling regular apartments to the real estate investment company at the salvage profit margin of $100,000 per apartment. How much profit, in $ millions, did the Stargrove earn from the sales of the remaining regular apartments at the $500,000 profit margin for the same realization of demand DR? Choose the closest from the answers below.
• 46.2
• 46
• 46.5
• 45.5
• 45
• 45.2
Explanation: The total profit is $45.5 million, and the closest answer from the provided options is 45.5 million.
Streamflow simulation using the event-based SCS-MS/LR model in the Hitiaa catchment, French Polynesia
Articles | Volume 385
© Author(s) 2024. This work is distributed under the Creative Commons Attribution 4.0 License.
Streamflow simulation using the event-based SCS-MS/LR model in the Hitiaa catchment, French Polynesia
The volcanic tropical island of Tahiti is prone to heavy rainfall and flash floods which regularly cause severe damage. Hydrological response on the island is highly variable from one catchment to the other, in terms of runoff production as well as peak flow amplitude. Infrastructure designers there require an operational method able to calculate design flood metrics on any Tahitian catchment, taking into account the variability of their response. This study applies a parsimonious distributed event-based conceptual model to the observed rainfall-runoff events of 9 small
catchments (<6km^2). A modified version of the Mishra and Singh Soil Conservation Service (SCS-MS) runoff model, associated with the Lag-and-Route (LR) model was applied to series of 9 to 176
rainfall-runoff events for each catchment that occurred between 2004 and 2021. Two dominant parameters are to be considered for the SCS-MS model: the size of the soil reservoir (Si) and the initial
water content (M) in this soil reservoir. SCS-MS was found to perform better than SCS-CN and to fit better the asymptotic behaviour of the rainfall-runoff relationship. Si could be fixed to 2500mm
for all catchments except Nymphea where Si had to be lowered to 80mm, because of the high density of urbanization over this catchment. M parameter was set variable from one event to the other as the
initial condition of the model. The Nash criterion was calculated for each event and their median value ranged from 0.39 to 0.79 across the catchments, proving the flexibility of the model. A clear
relationship could be fitted between the median M and Antecedent Precipitation Index (API), both calculated for each catchment. Further testing of this model on similar tropical volcanic context
should be undertaken to verify its flexibility and to consolidate the calibration of its parameters.
UPH 19; PUB; Modelling; Flash floods; Urbanization; Conceptual model; Calibration
Flash floods have been recognized as the main natural risk in the Island of Tahiti, where high urbanization rate increases the vulnerability of people, homes and infrastructures (Lamaury et al.,
2021). There is a strong need for a flood predetermination method to assist engineers in the design of hydraulic infrastructure.
Flash floods result from the combined effects of extreme rainfall events and the steep morphology of the catchments (Wotling, 1999). However, their relative influences on the hydrological response
and the variability of these influences across the island have not yet been captured by a rainfall-runoff model.
Wotling (1999) and Pheulpin (2016) verified the ability of event-based distributed conceptual models to reproduce the hydrological response of three western and one southern catchment. The low number
of studied catchments didn't allow them to explain the spatial variability of hydrological response, nor to relate the model parameters to measurable characteristics of the catchment such as
geomorphology, soil moisture, altitude or exposure to trade winds.
The main objective of this study is to select and apply a rainfall-runoff model that is both robust to the hydrological variability in Tahiti and ready to be regionalized. The chosen model should be
parsimonious to facilitate its transfer to ungauged catchments, and its parameters are to be related to physical features of the catchments.
2.1The island of Tahiti
Tahiti is a volcanic island belonging to the Society Archipelago in the middle of the Pacific Ocean. Tahiti is the biggest (area of 1042 km^2), the highest (Mount Orohena culminates at 2241 m) and the most populous (192760 inhabitants in 2017) island of French Polynesia. The island is subject to a southern tropical climate, controlled by northeasterly trade winds and the South Pacific
Convergence Zone. Rainfall pattern is very heterogeneous within the island, and it is also the case for extreme events. Wotling (1999) and Pheulpin (2016) modelled extreme rainfall events from
topography and exposure to trade winds.
2.2Studied catchments
Over the 33 catchments that are or have been gauged in Tahiti, nine catchments were chosen to test and calibrate the rainfall-runoff model. Record periods are indicated in Fig. 1.
They were selected for the quality of the data and because they are small-sized catchments, with area lower than 6km^2. Thus their area is considered small enough to neglect the variability of
physio-climatic variables within them, so that each of them can be representative of a specific environment, at least in terms of altitude and rainfall input.
Locations of the catchments are selected in order to capture the spatial variability of rainfall exposure within the island. Both leeward and windward sides of the main island – Tahiti Nui – are
considered, as well as the peninsula – Tahiti Iti.
Each catchment is instrumented with one or two rain gauges and the rainfall is interpolated by the Thiessen method. For each catchment, series of events are extracted at a 5min timestep from the
rainfall and flow records. An event is extracted if it occurs after a duration of at least 3h with rainfall intensity below 0.3mm in 5min. Cumulative rainfall and runoff must exceed thresholds
that are adjusted from one catchment to the other so that the number of collected events is enough. The direct runoff volume for each event is then calculated by subtracting the base flow from the
total instantaneous flow. Resulting runoff coefficients highlight the variability of the hydrological response among the selected catchments (Fig. 2).
The Mishra et al. (2003) Soil Conservation Service (SCS-MS) runoff model was modified and associated with the Lag-and-Route (LR) model to simulate the complete flood hydrographs. Note the model
operates for each cell of a regular grid mesh. Each cell produces an elementary hydrograph, and the addition of all the elementary hydrographs leads to the complete flood hydrograph.
3.1SCS-CN and SCS-MS model
Original SCS is based on the following equations:
(1) P = Ia + F + Q

(2) Q/(P − Ia) = F/S

(3) Ia = λS
with P standing for total precipitation, “Ia” for initial abstraction, F for cumulative infiltration, Q for direct runoff, S for potential maximum retention.
Combined together, Eqs. (1), (2) and (3) lead to:
(4) Q = (P − Ia)^2 / (P − Ia + S)
Mishra and Singh (2003) introduced the concept of equality between the runoff coefficient C and the degree of saturation Sr, with Sr expressed as:
(5) Sr = F/(S + M)
with M the initial soil moisture, and C as:
(6) C = Q/P
Equation (2) of the SCS becomes:
(7) Q/(P − Ia) = (F + M)/(S + M)
The total soil reservoir capacity Si was introduced as:
(8) Si = S + M
Equation (4) of the SCS is rewritten for the SCS-MS as follows:
(9) Q = (P − Ia)(P − Ia + M)/(P − Ia + S + M)
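The event water balance of Eq. (9) is straightforward to code; the sketch below (with illustrative parameter values, including Si = S + M = 2500 mm as calibrated for most catchments) shows the asymptotic behaviour of the runoff coefficient mentioned above:

```python
def scs_ms_runoff(P, Ia, S, M):
    """Event runoff depth Q (mm) from the SCS-MS balance, Eq. (9).
    Returns 0 when rainfall does not exceed the initial abstraction."""
    if P <= Ia:
        return 0.0
    return (P - Ia) * (P - Ia + M) / (P - Ia + S + M)

# Illustrative values: Si = S + M = 2500 mm with a wet initial
# state M = 500 mm and Ia = 10 mm (not calibrated figures).
S, M, Ia = 2000.0, 500.0, 10.0
small = scs_ms_runoff(50.0, Ia, S, M) / 50.0       # runoff coefficient
huge = scs_ms_runoff(50000.0, Ia, S, M) / 50000.0
print(small, huge)   # the coefficient grows towards 1 for large events
```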
Equation (4) was then differentiated as a function of time to calculate the direct runoff at each timestep. To adapt them to the dynamic flow calculation, the SCS-MS equations have been modified in
the same way as in the work of Coustau et al. (2012):
• To take into account the potential decrease of the runoff coefficient between floods, the cumulative rainfall P (t) is considered as the level of a reservoir of infinite capacity to which an
emptying is applied:
(10) dP(t)/dt = P[b](t) − ds·P(t)
where ds [T^−1] is the linear decay coefficient and P[b] (t) is the input rainfall intensity.
• Soil drainage is represented by the application of a drain modulated by the same coefficient ds:
(11) dH(t)/dt = P[b](t) − R[dir](t) − ds·H(t)
where R[dir] (t) [LT^−1] is the direct runoff intensity.
• Delayed flow R[del] is introduced by recovering a fraction ω of the soil discharge at the outlet:
(12) R[del](t) = ω·ds·H(t)
where ω is a dimensionless exfiltration coefficient.
Five parameters are associated with SCS-MS: Si, Ia, M, ds and ω.
3.2Lag and Route model
The Lag-and-Route model (Maidment, 1993) routes the runoff production of each cell m to the outlet of the catchment.
According to the production function, each cell m produces an elementary hydrograph at each time t[o]; the resulting elementary discharge at the outlet can be written as:

(13) q[m](t) = (R(t[o])/K[m]) exp(−(t − t[o] − T[m])/K[m]) for t ≥ t[o] + T[m], and q[m](t) = 0 otherwise

where R(t[o]) is the runoff produced by the m cell at time t[o], T[m] is the propagation time and K[m] is the diffusion time. They are linked by the K[0] [–] constant as follows:
(14) K[m] = K[0]·T[m]
T[m] is the travel time from the m cell to the catchment outlet. It can be expressed by:
(15) T[m] = Σ (k = 1 to n) l/V[0]
where n is the number of cells of length l between the cell m and the outlet and V[0] is linked to the velocity of water transfer. V[0] and K[0] are the input parameters for the transfer function.
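A sketch of the transfer function for a single cell, assuming the pure-delay-plus-linear-reservoir form of the Lag-and-Route model (delay T[m], reservoir constant K[m]); the routed volume should equal the produced runoff up to discretisation error:

```python
import math

def elementary_hydrograph(R0, Tm, Km, dt, n_steps):
    """Route an instantaneous runoff depth R0 produced at t = 0:
    pure delay Tm, then exponential release with time constant Km."""
    q = []
    for i in range(n_steps):
        t = i * dt
        if t < Tm:
            q.append(0.0)
        else:
            q.append(R0 / Km * math.exp(-(t - Tm) / Km))
    return q

# Illustrative numbers: 1 mm of runoff, 600 s of travel, K0 = 0.7
R0, Tm = 1.0, 600.0
Km = 0.7 * Tm          # Km = K0 * Tm, with K0 fixed to 0.7
dt = 5.0
q = elementary_hydrograph(R0, Tm, Km, dt, 20000)
volume = sum(q) * dt   # should approach R0 as the tail is integrated
print(max(q), volume)
```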
SCS-MS/LR was calibrated on each catchment from the series of observed rainfall-runoff events. Five parameters must be calibrated for the production function (Si, Ia, M, ω and ds) and two for the transfer function (V_0 and K_0). K_0 was fixed to 0.7 for all events and all catchments because it is highly correlated with V_0 (Coustau et al., 2012; Nguyen and Bouvier, 2019). Ia was fixed for all events and all catchments by graphical fit of the theoretical balance equation (Eq. 9) on observations of event cumulative rainfall and runoff (Fig. 2). Si, ds and ω were first calibrated by trial and error and sensitivity analysis over each series of events, using the median value of the NSE criterion (Nash-Sutcliffe Efficiency; Nash and Sutcliffe, 1970). The explored ranges were 0.01 to 0.6 for ω, 0.5 to 2 d^−1 for ds and 10 to 3000 mm for Si, based on previous applications of the SCS model (Coustau et al., 2012; Nguyen and Bouvier, 2019). These three parameters were then unified across all catchments where possible, in order to reduce the risk of equifinality.
Parameter M stands for the initial condition of the model, and should vary from one event to the other. According to its physical meaning, V_0 should also vary from one event to the other. These two state parameters should be related to initial conditions such as the level of soil moisture at the beginning of each event. For each catchment, such initialization relationships were built between the optimized values of M and V_0 on one hand, and the measured baseflow Q_0 at the beginning of each event on the other hand. Q_0 is used here as a proxy for Antecedent Moisture Conditions (AMC).
Parameters M and V[0] were simultaneously optimized for each event, using the SIMPLEX method (Nelder and Mead, 1965) and NSE as efficiency criterion. SIMPLEX explores the parameter space and
converges to a pair of final parameters (M, V[0]) either after 300 iterations or when the difference in NSE between two successive iterations is less than 10^−9.
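The per-event optimization can be sketched with SciPy's Nelder-Mead (simplex) implementation, minimizing 1 − NSE with the stopping rules quoted above; `simulate` is a hypothetical stand-in for a run of the SCS-MS/LR model, not the actual ATHYS code:

```python
import numpy as np
from scipy.optimize import minimize

def nse(obs, sim):
    """Nash-Sutcliffe efficiency (Nash and Sutcliffe, 1970)."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def calibrate_event(obs, simulate, m0=100.0, v0_init=1.0):
    """Optimize (M, V0) for one event by minimizing 1 - NSE with Nelder-Mead.

    `simulate(m, v0)` is a placeholder that must return the simulated
    hydrograph for the event.
    """
    res = minimize(
        lambda p: 1.0 - nse(obs, simulate(p[0], p[1])),
        x0=[m0, v0_init],
        method="Nelder-Mead",
        # Stopping rules quoted in the text: 300 iterations, or an NSE
        # change below 1e-9 between successive iterations.
        options={"maxiter": 300, "fatol": 1e-9},
    )
    return res.x, 1.0 - res.fun   # best (M, V0) and the corresponding NSE
```

With a toy linear `simulate`, the routine recovers the generating parameters and an NSE close to 1.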
5.1Improvement of the SCS-MS model compared to the SCS
Before calibrating the SCS-MS/LR model in dynamics, its balance equation (Eq. 9) is adjusted for each catchment on the observed event rainfall and runoff totals. The objective is to calibrate the Ia parameter and to verify that SCS-MS can adapt to the diversity of hydrological behaviours observed in Tahiti. Compared to the SCS equation (Eq. 4), SCS-MS better fits the whole range of observed rainfall-runoff events, especially for catchments with the lowest observed runoff coefficients, such as Vaipahi. For these catchments, SCS tends to under-estimate the low-flow events, whereas SCS-MS crosses the scatterplot at its centre (Fig. 5).
With an Ia parameter greater than 0, the balance equations of both models proved inadequate to fit the observations, especially for catchments with observed runoff coefficients below 0.4. Ia was therefore set to 0 for both models.
5.2Parameter calibration and model efficiency
To measure the efficiency of SCS-MS/LR on each catchment, NSE was calculated for each event, and then the median of the NSE was calculated over each series of events.
The SCS-MS/LR model showed little sensitivity to the parameters ds, ω and Si. It was therefore possible to find uniform values of these parameters across the range of catchments. ds and ω were set to 1 d^−1 and 0.2 respectively, and Si could be set to 2500 mm for all basins except Nymphea. For Nymphea, the optimal value of Si is 80 mm, and setting Si to 2500 mm degrades the median NSE from 0.77 to 0.08. For the other catchments, setting Si to 2500 mm leads to a negligible degradation of the median NSE, which remains above 0.73 (Fig. 6).
Initialization relationships between Q_0 and the optimized values of the V_0 and M parameters were found not robust enough to be retained: the R^2 coefficients of the adjusted relationships were lower than 0.1 for 6 catchments out of 9 for M, and lower than 0.1 for all catchments for V_0. The model is therefore unable to reproduce the temporal variability of the hydrological response, but it remains relevant for characterizing the median hydrological behaviour of each catchment. For each catchment, M and V_0 were therefore set to the medians of the optimized sets. The median M varies between 8 and 1710 mm depending on the catchment, and V_0 varies between 0.9 and 1.6 m s^−1. These two parameters are not correlated, which is reassuring with regard to the risk of equifinality.
Median NSE values ranged from 0.73 to 0.87 across the catchments when M and V_0 were optimized, and from 0.39 to 0.79 with M and V_0 fixed to the medians of the optimized values (Fig. 6).
Setting V_0 and M to a single value for each basin lowers the NSE criterion because the model no longer accounts for the temporal variability of the hydrological response. The poor performance of the relationships between these two parameters and the AMC proxy can be explained by spatial uncertainties in rainfall that interfere with the optimisation of M and V_0. Rainfall patterns in Tahiti are highly variable in space (Benoit et al., 2022), and a high level of uncertainty is associated with the spatial rainfall averages. These uncertainties propagate into the hydrological model through the optimization of the initial condition, to the point of blurring the relationship between the initial condition and a proxy for AMC (Tanguy et al., 2023).
5.3Towards the regionalization of the model parameters
After calibrating M and V_0 separately for each catchment, attempts were made to explain their spatial variability. The median values of M were linked to a proxy of AMC in order to explain their variability across catchments.
The relationship between the median M and the median Q_0 was not relevant, but a clear relationship was identified between the median M and the mean Antecedent Precipitation Index (API), calculated separately for each catchment (Fig. 7). API was formulated by Fedora and Beschta (1989) as follows:

$$\mathrm{API}_t = \mathrm{API}_{t-\Delta t} \cdot K + P_{\Delta t} \qquad (16)$$
K is a recession constant that we fixed to 0.85. We calculated API at a daily time step for the rainfall time series of each catchment, before extracting the daily API corresponding to the day before
the beginning of each rainfall event.
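Eq. (16) is a one-line recursion. A sketch of the daily API computation with K = 0.85, as used in the text (the function name is illustrative):

```python
def antecedent_precipitation_index(daily_rain, k=0.85):
    """Daily API series via Eq. (16): API_t = API_{t-dt} * K + P_dt."""
    api, series = 0.0, []
    for p in daily_rain:
        api = api * k + p   # yesterday's API decays by K, today's rain is added
        series.append(api)
    return series
```

The API assigned to an event is then the value computed for the day before the event starts.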
A hypothesis to explain the low Si value in the Nymphea catchment lies in the difference in land use. The land-use type of each catchment was estimated from satellite images of the island (Google Maps). All catchments are mainly natural, with peri-urban areas smaller than 20% of the total catchment surface, except Nymphea, which is the only catchment to be entirely urbanized. Reducing the size Si of the soil reservoir for urban basins is coherent, since artificial surfaces are mostly impermeable. Surface runoff is favoured compared to natural basins, where an increased production capacity allows more infiltration, storage and subsurface runoff.
The modified version of SCS-MS/LR was calibrated and tested on series of observed rainfall-runoff events from 9 Tahitian catchments. V_0 and M could not be initialized from an AMC proxy, and had to be fixed for each catchment. The model therefore no longer reproduces the temporal variability of the rainfall-runoff response, but is still relevant to provide the average behaviour of each catchment. Evaluated by the median values of NSE calculated for each catchment, the model efficiency is affected by the approximation made on V_0 and M but remains above 0.39. This model provides the flexibility required to reproduce the spatial variability of hydrological behaviour in Tahiti, and its small number of parameters makes it a good candidate for regionalization.
Regarding the regionalization of the model parameters, Si can be set according to the land-use type. The median M is significantly related to the average API, which is a proxy for the average moisture state of each catchment. This study presents soil moisture as a key variable for regionalizing rainfall-runoff models. However, the regionalization is not complete, because the spatial variations of V_0 remain unexplained and because the model has not been validated.
This model and the relationship found between M and mean API are promising for event rainfall-runoff modelling in a tropical volcanic context. It could be applied to other catchments in similar contexts in order to consolidate the calibration of its parameters and to clarify the regionalization relationships. The impact of land-use type on the Si parameter could especially be further investigated.
Data were processed using the VISHYR module of the ATHYS environment (http://www.athys-soft.org/, last access: 15 November 2022; HSM, 2022). The hydrological model was implemented with the MERCEDES module of the ATHYS environment (http://www.athys-soft.org/, last access: 15 November 2022; HSM, 2022). Figures were made using Matplotlib version 3.3.3 (Caswell et al., 2020), available under the Matplotlib license at https://matplotlib.org/ (last access: 15 November 2022; Matplotlib, 2022), and ggplot2 (Wickham, 2016), available at https://ggplot2.tidyverse.org (last access: 15 November 2022).
Hydrometeorological data is available upon request from Groupement d’Etudes et de Gestion du Domaine Public de Polynésie Française (secretariat@equipement.gov.pf) and Météo France
LS was the principal investigator of the project. CB, GT and LS conceived together the framework of the study. GT analyzed the data, performed the numerical simulations and prepared the graphical
outputs. CB contributed to the critical interpretations of the results, sharing his deep knowledge on hydrological modelling. GT wrote the manuscript in consultation with CB and LS.
The authors declare that they have no conflict of interest.
Publisher’s note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This article is part of the special issue “IAHS2022 – Hydrological sciences in the Anthropocene: Variability and change across space, time, extremes, and interfaces”. It is a result of the XIth
Scientific Assembly of the International Association of Hydrological Sciences (IAHS 2022), Montpellier, France, 29 May–3 June 2022.
This project is also sponsored by a local private company named Société Polynésienne de l'Eau, de l'Electricité et de Déchets (SPEED). The authors are grateful to the French Weather Agency (Direction
Interrégionale en Polynésie française – Météo France) and to the Polynesian public service named Direction de l'Equipement (Groupement d’Etudes et de Gestion du Domaine Public de Polynésie Française
– GEGDP) in particular for providing the rainfall and water level gauge dataset of the Island of Tahiti. The authors are very grateful to the Hydropower company, Engie-EDT-Marama Nui, for logistic
support during fieldwork on the Hitiaa catchment.
This work is supported by the Government of French Polynesia – Ministère de la Recherche through the project E-CRQEST (grant no. 05832 MED 2019-08-26). This study is also linked with the initiated
project ERHYTM, Contrat de Projets Etat/Polynésie française (grant no. 7938/MSR/REC 2015-12-05), focusing on flood hazards over Tahiti and Moorea.
This paper was edited by Christophe Cudennec and reviewed by two anonymous referees.
Benoit, L., Sichoix, L., Nugent, A. D., Lucas, M. P., and Giambelluca, T. W.: Stochastic daily rainfall generation on tropical islands with complex topography, Hydrol. Earth Syst. Sci., 26,
2113–2129, https://doi.org/10.5194/hess-26-2113-2022, 2022.
Caswell, T. A., Droettboom, M., Lee, A., Hunter, J., Firing, E., Stansby, D., Klymak, J., Hoffmann, T., Sales de Andrade, E., Varoquaux, N., Hedegaard Nielsen, J., Root, B., Elson, P., May, R., Dale,
D., Lee, J.-J., Seppänen, J. K., McDougall, D., Straw, A., and Katins, J.: matplotlib/matplotlib v3.1.3 (v3.1.3), Zenodo [code], https://doi.org/10.5281/zenodo.3633844, 2020.
Coustau, M., Bouvier, C., Borrell-Estupina, V., and Jourde, H.: Flood modelling with a distributed event-based parsimonious rainfall-runoff model: case of the karstic Lez river catchment, Nat.
Hazards Earth Syst. Sci., 12, 1119–1133, https://doi.org/10.5194/nhess-12-1119-2012, 2012.
Fedora, M. A. and Beschta, R. L.: Storm runoff simulation using an antecedent precipitation index (API) model, J. Hydrol., 112, 121–133, 1989.
HSM: ATelier HYdrologique Spatialisé, HSM [data set], http://www.athys-soft.org/, last access: 15 November 2022.
Lamaury, Y., Jessin, J., Heinzlef, C., and Serre, D.: Operationalizing urban resilience to floods in Island Territories – Application in Punaauia, French Polynesia, Water, 13, 337, https://doi.org/
10.3390/w13030337, 2021.
Maidment, D. R.: Handbook of Hydrology, edited by: David, R., Maidment, ISBN 0-07-039732-5, McGraw-Hill, 1424 pp., 1993.
Matplotlib: Matplotlib: Visualization with Python, Matplotlib [data set], https://matplotlib.org/, last access: 15 November 2022.
McBride, M.: New books–Handbook of Hydrology, edited by: Maidment, D. R., Ground Water, 32, 331, 1994.
Mishra, S. K., Singh, V. P., Sansalone, J. J., and Aravamuthan, V.: A modified SCS-CN method: characterization and testing, Water Resources Management, 17, 37–68, 2003.
Nash, J. E. and Sutcliffe, J. V.: River flow forecasting through conceptual models part I – A discussion of principles, J. Hydrol., 10, 282–290, 1970.
Nelder, J. A. and Mead, R.: A simplex method for function minimization, The Comput. J., 7, 308–313, 1965.
Nguyen, S. and Bouvier, C.: Flood modelling using the distributed event-based SCS-LR model in the Mediterranean Réal Colobrier catchment, Hydrol. Sci. J., 64, 1351–1369, 2019.
Tanguy, G., Sichoix, L., Bouvier, C., and Leblois, E.: Sensitivity of an event-based rainfall-runoff model to spatial uncertainty in precipitation : a case study of the Hitiaa catchment, Island of
Tahiti, submitted, 2023.
Pheulpin, L., Recking, A., Sichoix, L., and Barriot, J. P.: Extreme floods regionalisation in the tropical island of Tahiti, French Polynesia, in: E3S Web of Conferences, Vol. 7, p. 01014, EDP Sciences, https://doi.org/10.1051/e3sconf/20160701014, 2016.
Wickham, H.: ggplot2: Elegant Graphics for Data Analysis, Springer-Verlag New York, ISBN 978-3-319-24277-4, https://ggplot2.tidyverse.org, ggplot2 [code], 2016.
Wotling, G.: Caractérisation et modélisation de l'aléa hydrologique à Tahiti, Montpellier (FRA), Montpellier, USTL, IRD, 355 p., multigr. (Mémoires Géosciences-Montpellier, 18), Th. Géosciences,
Mécanique, Génie Mécanique, Génie Civil, USTL, Montpellier, 20 December 1999, 2000. | {"url":"https://piahs.copernicus.org/articles/385/31/2024/piahs-385-31-2024.html","timestamp":"2024-11-07T21:56:38Z","content_type":"text/html","content_length":"204201","record_id":"<urn:uuid:ee0fa705-c893-4fbd-8754-4720fb57e179>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00287.warc.gz"} |
Interval Methods Help a Robot Succeed
Interval methods helped a robot designed by the University of Texas at El Paso (UTEP) team win a prestigious third place world-wide in the robot competition held during the American Association of
Artificial Intelligence conference in Portland, Oregon, August 6-7, 1996.
Robots have to deal with two types of uncertainty:
• first, their sensors are not absolutely accurate; as a result, they measure, e.g., distances to obstacles only approximately;
• second, their actuators are not absolutely precise; as a result, e.g., a command to turn 90 degrees can actually lead to an 85 or 95 degree turn.
Traditionally, statistical methods have been used to deal with these two types of uncertainty. There are, however, two major problems related to these methods:
• first, they are very computationally intensive: for every pixel, at any moment of time, we need to compute and store the probability that the corresponding point contains an obstacle; in a mobile
robot, it is desirable to have computational methods that are as simple as possible;
• second, even more importantly, these methods require that we know the probabilities of errors for different sensors and actuators, and we usually do not know the exact values of these
probabilities. Instead, we only know the intervals of possible error values. We can try to guesstimate the probabilities, but:
□ if we wrongly guess the probabilities of sensor errors, we may erroneously hit an obstacle;
□ if we wrongly guess the probabilities of actuator errors, and use these wrong probabilities in some filtering-type correction, we may worsen the position error instead of compensating for it.
The team leaders of the UTEP team, graduate students David Morales and Tran Son and their supervisor Chitta Baral, decided to abandon statistical methods and use interval-based methods instead.
To take sensor errors d into consideration, their robot assumes that any pixel that could be (within this error) inside an obstacle has to be avoided. As a result, e.g., when going in a corridor, the
robot actually follows the "virtual corridor" whose width is 2d smaller than the actual width.
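The "virtual corridor" idea (treat any cell that could lie inside an obstacle, given sensor error d, as blocked) can be sketched as a conservative grid inflation; the grid representation and function name here are illustrative assumptions, not the UTEP team's code:

```python
import numpy as np

def inflate_obstacles(occupancy, sensor_error_m, cell_size_m):
    """Mark every cell within sensor error d of a sensed obstacle as blocked.

    occupancy : 2-D boolean array, True where an obstacle was sensed
    Returns the conservatively inflated map the planner should use.
    """
    r = int(np.ceil(sensor_error_m / cell_size_m))   # error radius in cells
    inflated = occupancy.copy()
    h, w = occupancy.shape
    for y, x in zip(*np.nonzero(occupancy)):
        # Block the square neighborhood of radius r around each obstacle cell
        inflated[max(0, y - r):min(h, y + r + 1),
                 max(0, x - r):min(w, x + r + 1)] = True
    return inflated
```

Planning on the inflated map guarantees the robot clears every real obstacle by at least d, at the cost of a corridor that is 2d narrower, exactly as described above.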
To compensate for the actuator errors, with unknown probabilities, the robot does not attempt any statistical filter-type corrections; instead, it uses sensor feedback to periodically adjust its position and orientation.
Several other novel ideas were used as well. The resulting algorithms turned out to be computationally simpler and more reliable than the previously known ones. In the robot competition, the robot Diablo, which implemented these algorithms, won third place in the complicated office-navigation event, in which robots had to navigate a realistic office environment. Diablo proved to be 100% reliable, always staying on track and never hitting any obstacle. The only points it lost were due to speed.
Due to novel algorithms, UTEP's commercially built robot outperformed more than 20 much more technologically sophisticated robots from all over the world, including teams from prestigious
institutions long involved in world-class robotic research such as Carnegie-Mellon University and the Universities of Stuttgart and Bonn.
In addition to D. Morales and T. Son, the main team included Luis Floriano and Monica Nogueira. Support team members who assisted with the robot's programming were Alfredo Gabaldon, Richard Watson,
Glen Hutton, and Dara Morgenstein. | {"url":"https://reliable-computing.org/robot.html","timestamp":"2024-11-08T10:33:58Z","content_type":"text/html","content_length":"4505","record_id":"<urn:uuid:2c1394ae-7ddd-4b05-8faa-7eb6227e025a>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00807.warc.gz"} |
New Year's 2008-2009
This year again we use the longstanding predictions of a meanshift (squared Stouffer Z) during 10 minutes surrounding midnight, and a variance decrease with a minimum at midnight. These two measures
have been assessed for each New Year since 1998-1999, a total of 11 years. The probability for the squared Stouffer Z for this year is nearly significant at p=0.151 with Z=1.033.
For the variance decrease, a comparison with the distribution of 10000 permutations of the data indicates a modest probability of 0.307 and Z=0.505, consistent with the prediction.
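For reference, Stouffer's method combines standardized scores by summing them and dividing by the square root of their number; the squared result is a chi-square variable with one degree of freedom under the null hypothesis, so its expected value is 1. A minimal sketch (illustrative, not the project's analysis code):

```python
import numpy as np

def squared_stouffer_z(z_scores):
    """Squared Stouffer Z: (sum of z's / sqrt(n)) squared.

    Under the null, each z is standard normal, the combined Z is too,
    and the square is chi-square with one degree of freedom.
    """
    z = np.asarray(z_scores, dtype=float)
    stouffer = z.sum() / np.sqrt(z.size)
    return stouffer ** 2
```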
In addition to the two figures above representing the results for the current year, a composite over the 11 years is also shown. The meanshift is nonsignificantly negative, contrary to the original
prediction. The variance decrease prediction is supported with a p-value of 0.028 and Z=1.909.
The following figure is an enlarged version of the 10-year average of the variance shown above, right. | {"url":"http://w.global-mind.org/ny2009.2figs.html","timestamp":"2024-11-04T11:49:16Z","content_type":"text/html","content_length":"2116","record_id":"<urn:uuid:68119ee8-8721-4f64-bdce-857d765732e5>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00731.warc.gz"} |
The number of movies Scarlett Johansson appeared in correlates with Cumulative goals scored by Vincent Kompany in domestic matches (r=0.723)
Data details
The number of movies Scarlett Johansson appeared in
Source: The Movie DB
Additional Info: Scoop (2006); Girl with a Pearl Earring (2003); Vicky Cristina Barcelona (2008); The Nanny Diaries (2007); An
American Rhapsody (2001); My Brother the Pig (1999); Under the Skin (2014); Manny & Lo (1996); Lucy (2014); Deep Down (2014); Rough Night (2017); Ghost in the Shell (2017); Black Widow (2021); Jeff
Koons (2017); North Star (2023); Drive-Thru Records: Vol. 1 (2002); Penglai (2022); Noi siamo cinema (2021); Lost in Translation (2003); Ghost World (2001); The Spirit (2008); The Other Boleyn Girl
(2008); The Black Dahlia (2006); A Love Song for Bobby Long (2004); A Good Woman (2004); We Bought a Zoo (2011); Captain America: The Winter Soldier (2014); Her (2013); Don Jon (2013); Match Point
(2005); The Island (2005); Marriage Story (2019); Marriage Story: From the Pages to the Performances (2019); Moneymaker: Behind the Black Widow (2021); Asteroid City (2023); Come Home (2021); Escape
from the World's Most Dangerous Place (2012); In Good Company (2004); The Perfect Score (2004); Woody Allen: A Documentary (2011); Hitchcock (2012); Marvel Studios: Assembling a Universe (2014);
Captain America: Civil War (2016); Building the Dream: Assembling the Avengers (2012); Jojo Rabbit (2019); Art as Dialogue (2017); Sing 2 (2021); VOMO: Vote or Miss Out (2020); The Horse Whisperer
(1998); The Prestige (2006); He's Just Not That Into You (2009); Eight Legged Freaks (2002); Yes We Can (2008); Translating History to Screen (2008); Marvel: 75 Years, from Pulp to Pop! (2014); Iron
Man 2 (2010); Sing (2016); The Avengers: A Visual Journey (2012); Chadwick Boseman: A Tribute for a King (2020); Ultimate Iron Man: The Making of Iron Man 2 (2010); The Avengers (2012); Avengers: Age
of Ultron (2015); Lost on Location: Behind the Scenes of 'Lost in Translation' (2004); Chef (2014); The Jungle Book (2016); Hail, Caesar! (2016); Avengers: Infinity War (2018); Avengers: Endgame
(2019); Just Cause (1995); If Lucy Fell (1996); The Man Who Wasn't There (2001); Fall (1997); Catching Fire: The Story of Anita Pallenberg (2023); The Director's Notebook: The Cinematic Sleight of
Hand of Christopher Nolan (2007); Bert Stern: Original Madman (2011); The SpongeBob SquarePants Movie (2004); Home Alone 3 (1997); Floyd Norman: An Animated Life (2016); Saturday Night Live: The Best
of Amy Poehler (2009); Isle of Dogs (2018); Her: Love in the Modern Age (2014); Marvel Studios Assembled: The Making of Hawkeye (2022); Celebrating Marvel's Stan Lee (2019); Final Cut: Ladies and
Gentlemen (2012); Captain Marvel (2019); Thor: Ragnarok (2017); North (1994)
Cumulative goals scored by Vincent Kompany in domestic matches
Detailed data title: Cumulative goals scored by Vincent Kompany in domestic matches for Anderlecht, Hamburg SV, and Manchester City
Source: Wikipedia
Correlation r = 0.7234424 (Pearson correlation coefficient)
Correlation is a measure of how much the variables move together. If it is 0.99, when one goes up the other goes up. If it is 0.02, the connection is very weak or non-existent. If it is -0.99, then
when one goes up the other goes down. If it is 1.00, you probably messed up your correlation function.
r^2 = 0.5233689 (coefficient of determination)

This means 52.3% of the change in the one variable (i.e., Cumulative goals scored by Vincent Kompany in domestic matches) is predictable based on the change in the other (i.e., The number of movies Scarlett Johansson appeared in) over the 17 years from 2004 through 2020.
p < 0.01, which is statistically significant (null hypothesis significance test; p-value is 0.00103).

The p-value is a measure of how probable it is that we would randomly find a result this extreme. More specifically, it is a measure of how probable it is that we would randomly find a result this extreme if we had only tested one pair of variables one time.

But I am a p-villain. I absolutely did not test only one pair of variables one time. I correlated hundreds of millions of pairs of variables. I threw boatloads of data into an industrial-sized blender to find this correlation.

Who is going to stop me? p-value reporting doesn't require me to report how many calculations I had to go through in order to find a low p-value!

On average, you will find a correlation as strong as 0.72 in 0.103% of random cases. Said differently, if you correlated 971 random variables (which I absolutely did) with the same 16 degrees of freedom, you would randomly expect to find a correlation as strong as this one.

(Degrees of freedom is a measure of how many free components we are testing. In this case it is 16 because we have two variables measured over a period of 17 years. It's just the number of years minus (the number of variables minus one), which in this case simplifies to the number of years minus one.)
[ 0.37, 0.89 ]: 95% correlation confidence interval (computed using the Fisher z-transformation).

The confidence interval is an estimate of the range of the value of the correlation coefficient, using the correlation itself as an input. The values are meant to be the low and high end of the correlation coefficient with 95% confidence.

This one is a bit more complicated than the other calculations, but I include it because many people have been pushing for confidence intervals instead of p-value calculations (for example: NEJM). However, if you are dredging data, you can reliably find yourself in the 5%. That's my goal!
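For the curious, here is a sketch of how such an interval can be computed with the Fisher z-transformation (this mirrors the standard formula, not necessarily the site's exact code):

```python
import math

def fisher_ci(r, n, z_crit=1.96):
    """95% confidence interval for a correlation coefficient via Fisher's z.

    r : sample correlation coefficient
    n : number of paired observations
    """
    z = math.atanh(r)                    # transform r to the z scale
    se = 1.0 / math.sqrt(n - 3)          # standard error of z
    lo, hi = z - z_crit * se, z + z_crit * se
    return math.tanh(lo), math.tanh(hi)  # back-transform to the r scale
```

Plugging in r = 0.7234424 and n = 17 reproduces the [0.37, 0.89] interval quoted above.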
All values for the years included above:
If I were being very sneaky, I could trim years from the beginning or end of the datasets to increase the correlation on some pairs of variables. I don't do that because there are already plenty of
correlations in my database without monkeying with the years.
Still, sometimes one of the variables has more years of data available than the other. This page only shows the overlapping years. To see all the years, click on "See what else correlates with..."
link above.
│ │ 2004 │ 2005 │ 2006 │ 2007 │ 2008 │ 2009 │ 2010 │ 2011 │ 2012 │ 2013 │ 2014 │ 2015 │ 2016 │ 2017 │ 2018 │ 2019 │ 2020 │
│ The number of movies Scarlett Johansson appeared in (Movie appearances) │ 6 │ 2 │ 3 │ 2 │ 5 │ 2 │ 2 │ 3 │ 6 │ 2 │ 8 │ 1 │ 5 │ 5 │ 2 │ 6 │ 2 │
│ Cumulative goals scored by Vincent Kompany in domestic matches (Goals │ 2 │ 2 │ 2 │ 0 │ 3 │ 1 │ 2 │ 0 │ 3 │ 1 │ 4 │ 0 │ 2 │ 3 │ 1 │ 1 │ 1 │
│ scored) │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │
Why this works
1. Data dredging: I have 25,237 variables in my database. I compare all these variables against each other to find ones that randomly match up. That's 636,906,169 correlation calculations! This is
called “data dredging.” Instead of starting with a hypothesis and testing it, I instead abused the data to see what correlations shake out. It’s a dangerous way to go about analysis, because any
sufficiently large dataset will yield strong correlations completely at random.
2. Lack of causal connection: There is probably no direct connection between these variables, despite what the AI says above. (Because these pages are automatically generated, it's possible that the two variables you are viewing are in fact causally related. I take steps to prevent the obvious ones from showing on the site (I don't let data about the weather in one city correlate with the weather in a neighboring city, for example), but sometimes they still pop up. If they are related, cool! You found a loophole.) This is exacerbated by the fact that I used "Years" as the base variable. Lots of things happen in a year that are not related to each other! Most studies would use something like "one person" instead of "one year" to be the "thing" studied.
3. Observations not independent: For many variables, sequential years are not independent of each other. If a population of people is continuously doing something every day, there is no reason to think they would suddenly change how they are doing that thing on January 1. A simple p-value calculation does not take this into account, so mathematically it appears less probable than it really is. (Personally I don't find any p-value calculation to be 'simple,' but you know what I mean.)
4. Outlandish outliers: There are "outliers" in this data. In concept, "outlier" just means "way different than the rest of your dataset." When calculating a correlation like this, they are particularly impactful because a single outlier can substantially increase your correlation. (For the purposes of this project, I counted a point as an outlier if its residual was two standard deviations from the mean. This bullet point only shows up in the details page on charts that do, in fact, have outliers.) They stand out on the scatterplot above: notice the dots that are far away from any other dots. I intentionally mishandled outliers, which makes the correlation look extra strong.
Try it yourself
You can calculate the values on this page on your own! Try running the Python code to see the calculation results.
Step 1: Download and install Python on your computer.
Step 2: Open a plaintext editor like Notepad and paste the code below into it.
Step 3: Save the file as "calculate_correlation.py" in a place you will remember, like your desktop. Copy the file location to your clipboard. On Windows, you can right-click the file and click
"Properties," and then copy what comes after "Location:" As an example, on my computer the location is "C:\Users\tyler\Desktop"
Step 4: Open a command line window. For example, by pressing start and typing "cmd" and them pressing enter.
Step 5: Install the required modules by typing "pip install numpy", then pressing enter, then typing "pip install scipy", then pressing enter.
Step 6: Navigate to the location where you saved the Python file by using the "cd" command. For example, I would type "cd C:\Users\tyler\Desktop" and push enter.
Step 7: Run the Python script by typing "python calculate_correlation.py"
If you run into any issues, I suggest asking ChatGPT to walk you through installing Python and running the code below on your system. Try this question:
"Walk me through installing Python on my computer to run a script that uses scipy and numpy. Go step-by-step and ask me to confirm before moving on. Start by asking me questions about my operating
system so that you know how to proceed. Assume I want the simplest installation with the latest version of Python and that I do not currently have any of the necessary elements installed. Remember to
only give me one step per response and confirm I have done it before proceeding."
# These modules make it easier to perform the calculation
import numpy as np
from scipy import stats
# We'll define a function that we can call to return the correlation calculations
def calculate_correlation(array1, array2):
# Calculate Pearson correlation coefficient and p-value
correlation, p_value = stats.pearsonr(array1, array2)
# Calculate R-squared as the square of the correlation coefficient
r_squared = correlation**2
return correlation, r_squared, p_value
# These are the arrays for the variables shown on this page, but you can modify them to be any two sets of numbers
array_1 = np.array([6,2,3,2,5,2,2,3,6,2,8,1,5,5,2,6,2,])
array_2 = np.array([2,2,2,0,3,1,2,0,3,1,4,0,2,3,1,1,1,])
array_1_name = "The number of movies Scarlett Johansson appeared in"
array_2_name = "Cumulative goals scored by Vincent Kompany in domestic matches"
# Perform the calculation
print(f"Calculating the correlation between {array_1_name} and {array_2_name}...")
correlation, r_squared, p_value = calculate_correlation(array_1, array_2)
# Print the results
print("Correlation Coefficient:", correlation)
print("R-squared:", r_squared)
print("P-value:", p_value)
Reusable content
You may re-use the images on this page for any purpose, even commercial purposes, without asking for permission. The only requirement is that you attribute Tyler Vigen.
Attribution can take many different forms. If you leave the "tylervigen.com" link in the image, that satisfies it just fine. If you remove it and move it to a footnote, that's fine too. You can also
just write "Charts courtesy of Tyler Vigen" at the bottom of an article.
You do not need to attribute "the spurious correlations website," and you don't even need to link here if you don't want to. I don't gain anything from pageviews. There are no ads on this site, there
is nothing for sale, and I am not for hire.
For the record, I am just one person. Tyler Vigen, he/him/his. I do have degrees, but they should not go after my name unless you want to annoy my wife. If that is your goal, then go ahead and cite
me as "Tyler Vigen, A.A. A.A.S. B.A. J.D." Otherwise it is just "Tyler Vigen."
When spoken, my last name is pronounced "vegan," like I don't eat meat.
For more on re-use permissions, or to get a signed release form, see the full license details.
Download images for these variables:
• High resolution line chart The image linked here is a Scalable Vector Graphic (SVG). It is the highest resolution that is possible to achieve. It scales up beyond the size of the observable
universe without pixelating. You do not need to email me asking if I have a higher resolution image. I do not. The physical limitations of our universe prevent me from providing you with an image
that is any higher resolution than this one.
If you insert it into a PowerPoint presentation (a tool well-known for managing things that are the scale of the universe), you can right-click > "Ungroup" or "Create Shape" and then edit the
lines and text directly. You can also change the colors this way.
Alternatively you can use a tool like Inkscape.
Correlation ID: 12098 · Black Variable ID: 26635 · Red Variable ID: 306 | {"url":"https://www.tylervigen.com/spurious/correlation/12098_the-number-of-movies-scarlett-johansson-appeared-in_correlates-with_cumulative-goals-scored-by-vincent-kompany-in-domestic-matches","timestamp":"2024-11-01T19:45:24Z","content_type":"text/html","content_length":"37955","record_id":"<urn:uuid:815eb312-3069-4aad-8f8b-206c731e0df5>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00221.warc.gz"} |
Difference Between Selecting “Subtract points for incorrect answers” and “Allow partial credit”
When to Use?
The eLearning Technologies team tested these point variances to determine the end result on exam scores.
Scenario #1 – Subtract Points for Incorrect Answers
A test question contains 4 answers, 3 are right, but the student chose all 4. Would they only get .66 points for the question because you would subtract "one" for the answer they chose wrong? Or is "Subtract points for incorrect answers" subtracting an entire point from their test total?
If the question is worth 10 points and the student selects all four possible answers, they will receive a score of 7.5 out of 10. Blackboard Ultra takes the overall point total for the question and
divides it by the number of possible answers. This amount is then deducted from the overall total for each incorrect answer selected.
Example: All four answers selected.
However, if the student selects only 2 correct answers then they will receive a score of 6.66 out of 10. Blackboard Ultra takes the overall point total for the question and divides it by the number
of possible correct answers. This amount is then deducted from the overall total for each correct answer that was not selected.
Example: Two correct answers out of three selected.
In this scenario, a student can actually earn more points for selecting all of the answers, including the wrong answer, rather than erring on the side of caution and only selecting two of the three
correct answers.
Scenario #2 – Allow Partial Credit
We have been using Allow partial credit, and with this, if there are 3 right answers and they choose all four, they still get the full point, so there is no problem with selecting all four answers… you would always get 100%, right?!
Yes, this is correct. If the question is worth 10 points and the student selects all four possible answers, then they will receive a score of 10 out of 10. No points are deducted from the overall total for any incorrect answer selected.
Example: Three correct and one incorrect answer out of four selected.
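The two deduction rules described above can be condensed into a few lines of Python. This is a hypothetical re-implementation of the scoring behavior as described, not Blackboard's actual code:

```python
def subtract_points_score(points, n_options, n_correct,
                          n_correct_chosen, n_incorrect_chosen):
    # Each incorrect answer selected deducts points / (number of options)
    penalty_incorrect = n_incorrect_chosen * points / n_options
    # Each correct answer left unselected deducts points / (number of correct answers)
    penalty_missed = (n_correct - n_correct_chosen) * points / n_correct
    return points - penalty_incorrect - penalty_missed

# 10-point question with 4 options, 3 of them correct:
print(subtract_points_score(10, 4, 3, 3, 1))  # all four selected -> 7.5
print(subtract_points_score(10, 4, 3, 2, 0))  # only two correct selected -> about 6.67
```

As the examples show, a student who selects everything scores 7.5, while one who cautiously picks only two of the three correct answers scores about 6.67.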
| {"url":"https://services.gvsu.edu/TDClient/60/Portal/KB/ArticleDet?ID=11533","timestamp":"2024-11-12T19:22:43Z","content_type":"application/xhtml+xml","content_length":"48753","record_id":"<urn:uuid:4f941337-dfaf-44ac-8f02-6da09e471a7e>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00069.warc.gz"} |
What Trend? 1991 or 1985?
During the Great Moderation, Nominal GDP remained on a remarkably stable growth path. Generally, I have been calculating a log linear trend from the first quarter of 1985 to the last quarter of 2007.
Why not start at the beginning--1984? I used to, but the 10 percent growth in the second quarter of 2004 caused me to place that year as part of the Reagan-Volcker nominal reflation.
Others have instead started their trends in the next decade. As far as nominal GDP is concerned, it makes little difference. The trend starting in 1985 has a 5.38 percent growth rate and the trend
starting in 1991 has a 5.28 percent growth rate. While the fit is a bit better for 1985 to 2007, the 1991 to 2007 trend does almost as well.
Not much difference to be seen at this scale. Still, for the second quarter of 2012, the gap between current nominal GDP and trend is 15.2 percent for the 1985 trend and only 14.1 percent for the
1991 trend. A difference of one tenth of a percent in growth rates adds up over the years.
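The trend and gap figures quoted throughout this post come from ordinary log-linear least-squares fits. Below is a minimal, self-contained sketch of that arithmetic (illustrative only: quarterly data is assumed for the annualization, and this is not the actual code behind the charts):

```python
import math

def loglinear_trend(values, fit_start, fit_end):
    # Fit log(value) = a + b*t over quarters fit_start..fit_end inclusive
    ts = list(range(fit_start, fit_end + 1))
    ys = [math.log(values[t]) for t in ts]
    tbar = sum(ts) / len(ts)
    ybar = sum(ys) / len(ys)
    b = (sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
         / sum((t - tbar) ** 2 for t in ts))
    a = ybar - b * tbar
    trend = [math.exp(a + b * t) for t in range(len(values))]
    gaps = [100 * (v / tr - 1) for v, tr in zip(values, trend)]  # % from trend
    annual_growth = 100 * (math.exp(4 * b) - 1)  # annualized rate, quarterly data
    return trend, gaps, annual_growth
```

Extending the `trend` list beyond the fit window is exactly how a 1985-2007 (or 1991-2007) trend gets projected forward to 2012 to measure the gap.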
No doubt it was due to my Market Monetarist priors that I have always fit all the other macroeconomic variables to the same trend--1985 to 2007. The trend inflation rate using the GDP chain-type
deflator is 2.3 percent and in the second quarter of 2012, the price level was 2.42 percent below trend.
Using 1991, on the other hand, provides a trend inflation rate of 2.05 percent. And the price level was not below trend in the second quarter of 2012, it was .3 percent above trend! This is as close
to being on trend as can be expected. As Bullard and also Davies (et al.) have pointed out, the Fed is doing a great job in targeting a 2 percent growth path for the price level. (At least last
The 1991 trend fits the first part of the nineties (up until 1997) very well. It also fits the Great Recession. The 1985 trend fits just about nothing other than the peak of the "Housing boom," maybe
2006 and 2007.
Of course, from a Market Monetarist perspective, the price level should vary with "supply shocks." All that is important is for nominal GDP to remain on trend.
Here is the price level again with the 1991 trend.
After the 2 percent targeting regime was established in the nineties, everything goes well until 1997 when inflation falls below 2 percent and the price level begins to fall below trend. The gap
between the price level and its trend grows and reaches 1.5 percent in 2002. It only really starts shrinking in 2004 and has returned to trend by 2005. However, it overshoots, and reaches 2.8 percent
above trend in the second quarter of 2008. Then it begins to shrink (should I add, "because the Fed took action?") By the middle of 2009, it was back to one percent above trend, and has continued to
gently return to trend, nearly reaching it by the middle of 2012.
Of course, because the trend for nominal GDP using the 1991-2007 period is approximately the same as for the longer 1985 to 2007 period, the deviations from trend are about the same. During the late
nineties, when the price level was below the 1991 trend, nominal GDP was above trend, reaching 2.59 percent in the third quarter of 2000 using both the 1985 and the 1991 trend. At that point of
"high" spending on output, the 1985 trend for the price level is about one percent "too high," while the 1991 trend for the price level was about 1 percent "too low."
And what about the "Housing Boom?" Using the 1991 trend, nominal GDP is above trend from the third quarter of 2005 to the third quarter of 2007 and it reaches .8 percent above trend during the first
and second quarter of 2006. Looking at the longer, 1985 trend, only the first and second quarters of 2006 are above trend at all, and only about .2 percent.
So nominal GDP trends and deviations from trend are about the same, but the price level trends and deviations from trend are very different. Obviously, the difference must be in real GDP.
The 1985 trend tracks real GDP very close from 1984 to 1987, then it rises above trend a bit before falling below trend for a recession in 1989. There is a very slow return to trend and then a
pronounced Dot.Com boom. The 2000 recession is just a return to trend and then real GDP remains on the trend until the Great Recession.
The 1991 trend starts with real GDP above trend in 1984 and the gap grows reaching nearly 4.2 percent in the first quarter of 1989. The "recession" was simply a return to trend, which was reached in
1991. This begins the long period of real GDP remaining very close to trend until the Dot.Com boom begins. Just like the trend from 1985, the recession of 2000 was just a return to trend. And then
both remain at trend until the Great Recession begins. Interestingly, real GDP is below the 1991 trend exactly like nominal GDP by the second quarter of 2012. This, of course, follows from the price
level being approximately at trend.
Here is real GDP and the 1991 to 2007 trend for real GDP:
The "Reagan Boom" is clear on this diagram, along with the Dot.Com boom. There is really no "housing boom."
Consider the CBO estimate of potential output. How does that compare?
There is no "Reagan Boom" in the late eighties according to the CBO, and the recession in the early nineties involved real output less than potential. While both the 1985 and 1991 trend show real GDP
above trend, the 1991 shows a very large gap before the recession, and at least the 1985 trend shows real GDP below trend when real GDP is below the CBO estimate.
Both the 1991 and the 1985 trends show real GDP above trend during the Dot.Com boom, and the CBO has real GDP above potential. Interestingly, both the 1991 and 1985 trends have real GDP almost
exactly on trend during the "Housing Boom" and the CBO estimate has real GDP equal to potential.
Looking at real GDP, the 1991 trend takes the slow recovery of the nineties to define the trend. It shows real GDP far above trend during the late eighties. The 1991 trend shows a huge "boom" in the
price level during the "Housing boom" when real GDP remains at trend and real GDP remains equal to potential output.
Is the 1991 trend "best?" I have my doubts.
Step by Step Guide on How to learn Calculus Easier - Custom My Paper
Calculus is a branch of mathematics that deals with derivatives, limits, functions and integrals. It is a large part of mathematics as used in physics and engineering. Many students have difficulty understanding calculus, mostly because they have not found the right approach to learning it.
Calculus, like any other branch of mathematics, is simple once you understand the basics. According to Mypaperdone's experts, many students have difficulty with this branch of math because they have mixed up the basics.
If you have a love-hate relationship with calculus, it means you have to dig deeper to appreciate its beauty as a discipline. Every student knows the torment of sitting an exam they have not studied well for. This is what every math lesson would feel like if you didn't go back to the basics.
When you spend time understanding calculus, you find that the way its concepts relate to one another is elegant. Once you understand the basics, you start looking at problems as opportunities to play with numbers.
Calculating is an exciting discipline, and here’s a step-by-step guide to help you understand it.
1. Start with the branches of math that underlie calculus
Because calculus is a branch of mathematics, you must first understand the basics of mathematics before tackling it. Other mathematical areas related to calculus that you should review include:
Arithmetic: this branch of mathematics deals with the basic arithmetic operations.
Algebra: with algebra, you learn to work with variables and equations.
Trigonometry: this branch covers the properties of triangles and circles.
Geometry: here you will learn the properties of shapes.
2. Understand the parts of calculus
Now that you understand all the branches of mathematics related to calculus, you can move on to the basics of this branch. Here you are introduced to its two basic subfields: integral and differential calculus.
Calculus is, in general, the study of accumulation, change, and the rate of change, which sounds complicated but is actually very simple.
3. Study the calculus formulas
Integral and differential calculus have basic formulas that help you navigate the complex parts of the discipline. Note that for each formula you should also study the corresponding proof. If you do, it becomes easy to deal with application problems, because you understand how the formula works.
4. Learn about limits
In calculus, a complicated function can often be handled by finding its limit. Limits make it easier to decode a complex function, because they resolve its subtleties.
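A quick numerical illustration of the idea, using a classic example: sin(x)/x has no value at x = 0, yet its limit there is 1.

```python
import math

# sin(x)/x is undefined at x = 0, but the ratio settles toward a limit
for x in (0.1, 0.01, 0.001):
    print(x, math.sin(x) / x)
# Each value is closer to 1, the limit of sin(x)/x as x approaches 0
```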
5. Study the fundamental theorem of calculus
This is very important, because it is difficult to understand complex functions if you do not know the fundamental theorem of calculus. The fundamental theorem teaches that differentiation and integration are inverse operations.
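The theorem can be checked numerically with a rough sketch. The tolerances are loose because both the central difference and the midpoint Riemann sum are approximations; the function f is just an example:

```python
def derivative(f, x, h=1e-6):
    # Central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

def integral(f, a, b, n=10_000):
    # Midpoint Riemann sum approximation of the integral of f over [a, b]
    w = (b - a) / n
    return sum(f(a + (i + 0.5) * w) for i in range(n)) * w

f = lambda x: x**3 + 2 * x
# Fundamental theorem: integrating f' from 0 to 2 recovers f(2) - f(0) = 12
print(integral(lambda x: derivative(f, x), 0, 2))
```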
6. Practice calculus problems
Once you have all the basics behind you, it's time to test your knowledge by solving calculus problems. Make sure you select a wide range of problems so you can practice every type of question.
If you are stuck on a problem, consult your colleagues. It may seem impossible at the moment, but with these small efforts you can reach an above-average grade at the end of the term.
Make sure that not a day goes by without practicing math, because practice makes perfect.
An explanation of the examples
Most examples in calculus are based on physics concepts, which is good news for anyone who also studies physics. However, this can cause problems for anyone struggling with physics.
It means that you may have to refresh your physics knowledge to excel in calculus. For example, do you know the equation for the speed of an object? If you cannot answer this question right away, you must go back to the basics.
In fact, it is best to start with examples from physics before delving into the mathematics. Make sure you use visual examples, because they make it easier to understand the concepts.
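For instance, the "speed of an object" question comes straight from the derivative: instantaneous speed is the rate of change of position. A small sketch with a made-up position function:

```python
def position(t):
    # Hypothetical position of an object, in meters, after t seconds
    return 5 * t**2 + 3 * t

def speed(t, h=1e-6):
    # Instantaneous speed is the derivative of position (central difference)
    return (position(t + h) - position(t - h)) / (2 * h)

print(round(speed(2.0), 3))  # exact derivative is 10*t + 3, so 23.0 m/s at t = 2
```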
7. Review your concepts
This is very important, because no one is immune to forgetting. If you are not 100% sure, you should revise your concepts. It's the difference between thinking it's easy to write a paper and actually getting very good grades when the results come back.
After you have studied a concept, be sure to revisit any costly mistakes made in coursework or exams. Make sure you set aside time to review your notes, and make it a habit, because calculus is not something you learn once a week.
If you want to succeed, you have to plan your learning. Never hesitate to ask your teachers for help; after all, that is why they are at the school.
Important tips to remember
• Calculus is not one of those subjects that can be understood without a teacher, so you have to attend all the lectures and pay attention to what the professor says.
• Practice is the key to excellence when it comes to calculus. Work through as many examples as possible, and ask for help if you get stuck.
• Always start with the basics of derivatives when trying to understand a calculus function.
Final reflection
At first glance, calculus may seem like a complicated subject, but once you commit to learning it, you realize that it all makes sense. Make sure you practice at least one calculus problem every day to hone your problem-solving skills. Remember that the school's teachers will help you if you get stuck, so don't hesitate to ask questions. After all, that's how you learn.
Observations placeholder
Pythagoras - Marcilio Ficino describes the musical ratios of Pythagoras
Type of Spiritual Experience
A description of the experience
Ficino, Marsilio - Commentary on Plato’s Timaeus
On harmony, ratios and proportions
It is said that when Pythagoras had observed in the smithies that harmony issued from the hammer-blows by a law of weights, he gathered the numbers which held the self-harmonising difference of the
weights. Then he is said to have tautened strings by tying on the various weights which he had discovered in the hammers. And, as a result, he is said to have clearly thought that from a string which
was tautened more than another according to the sesquioctaval ratio another tone was discerned by contrast, that is, a full, complete sound, as with the ratio of nine to eight.
Nor was he said to be able to divide the tone into two equal half-tones, since nine does not divide into two equal parts; but he discovered that one half-tone is slightly more than half while the
other is slightly less than half; this he called diesis, while Plato called it limma.
How much the halftone was less than the full tone and away from the true half-tone, Plato shows in the difference between the numbers two hundred and forty-three and two hundred and fifty-six. For
since an eighth of the smaller number is thirty and nearly a half and since the larger number is thirteen more than the smaller number, it produces neither a tone nor a full half-tone.
Pythagoras next observed that what I might call the full and complete breadth of the tone consists of two sounds and an interval, and thus he arrived at the first elements of harmony.
The first of these harmonies, the diapason, is based on the first or double ratio, where the first string had twice as much tension as the second because of the double weight on it; when both were
struck, the first resumed its rectilinearity twice as energetically and twice as quickly and sounded a far higher note than the lower one, yet one that was so friendly to the other that it appeared
as a single sound, more restricted in one respect but fuller in another. By regulating and measuring the first sound when it happened to be higher, he ascertained that it is positioned at the eighth
step above the note that is accounted low and that, as a rule, it consists of eight notes, seven intervals, and six tones.
Moreover, when the ratio of tension and slackness between the two strings was that of one to one and a third, he found the diatesseron harmony, comprising two tones, a minor half-tone, four notes,
and three intervals.
In the ratio of one to one and a half he found the diapente, and in this he noticed three tones and a smaller half-tone, five notes, and four intervals.
Then he observed that the diapason consisted of the diatesseron and the diapente, for the double ratio which produces the diapason, where the mean occurs, is composed of the sesquitertial and the sesquialteral, of which these consist. For the first double having a mean is of two in relation to four, composed of the sesquialteral between three and two and of the sesquitertial between four and three.
But from its triple appearance he considered the diapason diapente, having twelve notes and eleven intervals. For the diapason is based on the double, while the diapente is based on the sesquialteral
. But the unbroken sesquialteral of the double produces the triple. For six is the double of three. Consider nine, from which is born the sesquialteral of six, and you immediately have the triple of
However, it is not a triple if you add a sesquitertial to the double: for example, if you compare eight to six; for the sesquioctaval is missing, that is, the relation of nine to eight.
And since the sesquitertial, when added to the double, is superpartient, as in the ratio of eight to three (for here there is a double superbipartient relation through the bisesquitertial),
Pythagoras forbade any continuation beyond the double, that is, the diapason through the sesquitertial (diatesseron), although Ptolemy sometimes admits this addition.
But the sesquitertial was agreeably added to the triple, for hence comes a quadruple proportion, and through the quadruple the disdiapason harmony, having fifteen notes and fourteen intervals; for
example, if a triple is placed alongside nine and three, you will reach twelve, which is the sesquitertial of nine, and you will at once obtain the quadruple born of twelve in relation to three.
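The ratio arithmetic running through this passage can be checked directly with exact fractions. The sketch below uses modern labels for the ratios named in the text:

```python
from fractions import Fraction

tone        = Fraction(9, 8)   # the sesquioctaval ratio
diatessaron = Fraction(4, 3)   # the sesquitertial ratio (the fourth)
diapente    = Fraction(3, 2)   # the sesquialteral ratio (the fifth)
diapason    = Fraction(2, 1)   # the octave

# The diapason is composed of the diatesseron and the diapente
assert diatessaron * diapente == diapason
# The sesquialteral exceeds the sesquitertial by exactly the sesquioctaval part
assert diapente / diatessaron == tone
# Plato's limma: a fourth less two whole tones leaves 256/243, not a true half-tone
assert diatessaron / tone**2 == Fraction(256, 243)
# The diapason diapente is the triple; adding a sesquitertial yields the quadruple
assert diapason * diapente == 3 and Fraction(3) * diatessaron == 4
```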
But for the sake of melody he forbids any progression beyond the quadruple, not only because violence invades the senses from the more vehement movement and from the broken sound, but also because as
soon as the quintuple is born between five and four beyond the first quadruple, there immediately arises a superbipartient between five and three which produces dissonance.
But so that we may not go beyond the quadruple, he also prohibits a frequent descent below the sesquitertial, for the sake of avoiding heaviness. He forbids the frequent continuation of two
sesquitertials, for two reasons: they are generally displeasing, and they do not complete a double or a diapason.
If you take nine, twelve, and sixteen, you have two sesquitertials, but you certainly do not have a double, for there is no sesquioctaval – that is, the ratio of eighteen to sixteen - so that the
double is born from the ratio of eighteen to nine.
Nor does he like to continue a pair of sesquialterals, for when they are continued they exceed the double by the sesquioctaval ratio. This you will be able to observe if you write down the numbers
four, six, and nine.
For here there are two sesquialterals, from four to nine. Nine exceeds eight, which is the double of four, by one-eighth. From this it is clear that the power of the sesquialteral is greater than
half the double.
It is also clear that the power of the sesquitertial is less than half by just the same amount, since the double is created from the two of them together. Indeed, the sesquialteral exceeds the
sesquitertial by just one-eighth. For eight to six gives you a sesquitertial, while nine to six gives you a sesquialteral. But nine to eight gives a sesquioctaval, and you can see that the
sesquialteral exceeds the sesquitertial by this one-eighth part.
However, it should be remembered that the diatesseron - the arrangement of four notes rising to the fourth step heard through itself - is not approved, but is happily taken from the one which, with
the addition of a tone, easily becomes the diapente, that extremely pleasing harmony of the fifth note. Again, if it is connected to the diapente, it produces the diapason - the most perfect harmony
of all.
1-well-covered graphs revisited
A graph is well-covered if all its maximal independent sets are of the same size (Plummer, 1970). A well-covered graph is 1-well-covered if the deletion of any one vertex leaves a graph, which is
well-covered as well (Staples, 1975). A graph G belongs to class W[n] if every n pairwise disjoint independent sets in G are included in n pairwise disjoint maximum independent sets (Staples, 1975).
Clearly, W[1] is the family of all well-covered graphs. It turns out that G∈W[2] if and only if it is a 1-well-covered graph without isolated vertices. We show that deleting a shedding vertex does
not change the maximum size of a maximal independent set including a given independent set. Specifically, for well-covered graphs, it means that the vertex v is shedding if and only if G−v is
well-covered. In addition, we provide new characterizations of 1-well-covered graphs.
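The definition of a well-covered graph can be checked by brute force on small graphs. The sketch below is purely illustrative (it enumerates all vertex subsets, so it only scales to tiny graphs) and is not an algorithm from the paper:

```python
from itertools import combinations

def maximal_independent_set_sizes(n, edges):
    # Return the set of sizes of all maximal independent sets of the graph
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    sizes = set()
    for r in range(1, n + 1):
        for S in combinations(range(n), r):
            s = set(S)
            independent = all(adj[u].isdisjoint(s) for u in S)
            # Maximal: every vertex outside S has a neighbor inside S
            maximal = all(adj[v] & s for v in range(n) if v not in s)
            if independent and maximal:
                sizes.add(r)
    return sizes

# The 4-cycle is well-covered: every maximal independent set has size 2
assert maximal_independent_set_sizes(4, [(0, 1), (1, 2), (2, 3), (3, 0)]) == {2}
# The path on 3 vertices is not: {1} and {0, 2} are both maximal
assert maximal_independent_set_sizes(3, [(0, 1), (1, 2)]) == {1, 2}
```

A graph is well-covered exactly when this function returns a singleton set.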
Bibliographical note
Publisher Copyright:
© 2018 Elsevier Ltd
If I am standing on a mountain, Am I going faster than someone at sea level? If so, is my day shorter or will the higher vantage point of the horizon allow my day to seem longer?
Asked by: Troy S. Deen
The further away you are from the axis connecting the North and South Poles, the faster you will be moving as a result of Earth's rotation. So unless you are at either pole, climbing to a mountain
top WILL cause you to move faster than at the same latitude at sea level. Your speed alone, however, has no effect on the relative length of night and day. One rotation still takes 24 hours...you are
travelling faster because your path around the Earth is longer.
At higher altitudes you will see more sunlight (and more daylight time), though, because the higher vantage point allows you to see the Sun 'over the horizon' more than at sea level. Imagine being
many thousands of miles above sea level, where the apparently smaller Earth is able to block out even less sunlight. At an infinite 'altitude', in interplanetary space, daylight is continuous.
Answered by: Paul Walorski, B.A., Part-time Physics Instructor
Yes, you will have longer daylight. For an observer exactly at sea level, the sun comes up at the horizon, which is exactly 90° away from straight up. Their 'sky' covers 180° of overhead angle with respect to their East-West line. For an observer on a mountain, however, the curvature of the Earth allows for a larger 'sky.' Their sun doesn't set until it has gone 90° plus some angle θ (see figure).
The object of this derivation will be to calculate the angle θ as a function of R, the radius of the planet, and d, the distance above the surface of the planet. We will then assume that the sun takes the same amount of time to travel through both observers' 180° sky, plus an extra 240 seconds per degree of extra angle for the observer on the mountain.
First of all, note that the 'horizon' looks a lot like one of those tangent line problems they make you do in calculus. This is where I will start. I will treat the mean surface of the Earth as a circle of radius R centered at the origin, and the observer as a point sitting at (0, R+d). Our 'horizon' is a line that includes the observer point and is tangent to the circle at a point (x, y).
First I want to know the values of x and y. I will start with the equation for a circle, since this point is on the circle:

1)  x^2 + y^2 = R^2

Now, to find the slope of the line and the circle at that point, I will use implicit differentiation:

2)  2x + 2y (dy/dx) = 0
3)  dy/dx = -2x / 2y
4)  dy/dx = -x / y

I also know that the slope of the line at that point must be:

5)  slope = (y - (R + d)) / (x - 0)

Setting 5) equal to 4) (since the slope of the line equals the slope of the circle at the point in question), we get

6)  (y - (R + d)) / x = -x / y

And cross multiplying we get

7)  y^2 - (R + d) y = -x^2
8)  x^2 + y^2 = (R + d) y

Substituting equation 1) into the left side, we get:

9)  R^2 = (R + d) y
10) y = R^2 / (R + d)

Now to solve for our x-coordinate, which is a bit messy:

11) x^2 = R^2 - y^2 = R^2 - R^4 / (R + d)^2
12) x^2 = R^2 [ (R + d)^2 - R^2 ] / (R + d)^2
13) x = ( R / (R + d) ) sqrt( (R + d)^2 - R^2 )

Note to self: suppress R2D2 joke… D'oh!

Okay, now we have the coordinates of that point, and we can find the angle. Using the following sketch, we see that

14) cos θ = y / R
15) cos θ = R / (R + d)
16) θ = arccos( R / (R + d) )

Finally for the fun part, let's plug some numbers in. First note that if d = 0, the extra angle is also zero, which is exactly what we'd expect. Also, we see that

17) θ → 90° as d → ∞,

just as we'd expect. Using the radius of the Earth, R = 6.371 × 10^6 m, and a height of

d = 4205 m,

which is the height of Mauna Kea, one of five volcanic masses making up the 'Big Island' of Hawaii (note that 13,796 ft, or 4205 m, is its part above sea level; otherwise, from its base underneath the ocean to the top it is about 30,000 ft or 9,000 m tall, making it the tallest in the world when measured as such), I calculate

θ = arccos( 6.371 × 10^6 / (6.371 × 10^6 + 4205) ) ≈ 2.08°,

which translates into 8.32 minutes of extra time for viewing the sunset.

Taking instead d = 1.83 m, you can see that a 6 foot tall person standing up has an extra

θ ≈ 0.043°

over a man whose eyes are at sea level, which translates into approximately 10 seconds of sunset. Of course, this small an angle could easily be obstructed by small things on the surface of the Earth, e.g. waves in the ocean.
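The whole calculation condenses to a couple of lines. The sketch below uses the dip-angle relation cos θ = R / (R + d) and the 240-seconds-per-degree assumption from the answer above; the mean Earth radius 6.371 × 10^6 m is assumed:

```python
import math

def extra_sunset_seconds(height_m, radius_m=6.371e6):
    # Horizon dip angle theta satisfies cos(theta) = R / (R + d)
    theta_deg = math.degrees(math.acos(radius_m / (radius_m + height_m)))
    # The sun takes about 240 seconds to cross one degree of sky
    return 240 * theta_deg

print(extra_sunset_seconds(4205))  # Mauna Kea: ~500 s, i.e. ~8.3 minutes
print(extra_sunset_seconds(1.83))  # eyes of a 6-foot observer: ~10 s
```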
Answered by: Gregory Ogin, Physics Undergraduate Student, UST, St. Paul, MN | {"url":"https://www.physlink.com/education/askexperts/ae413.cfm","timestamp":"2024-11-08T02:47:08Z","content_type":"text/html","content_length":"48711","record_id":"<urn:uuid:f13ead1e-f3e8-4115-945b-c7ba69a87260>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00283.warc.gz"} |
ESS Open Archive - ESS Open Archive
Despite the ability to image coronal mass ejections (CMEs) from the Sun through the inner heliosphere, the forecasting of their Time-of-Arrival (ToA) to Earth does not yet meet most Space weather
users’ requirements. The main physical reason is our incomplete understanding of CME propagation in the inner heliosphere. Therefore, many ToA forecasting algorithms rely on simple empirical
relations to represent the interplanetary propagation phase using, mostly, kinematic information from coronagraphic observations below 30 solar radii (Rs) and a couple rather implying assumptions of
constant direction and speed for the transient. Here, we explore a different, yet still empirical approach. We replace the assumption of constant speed in the inner heliosphere with a two-phase
behavior consisting of a decelerating (or accelerating) phase from 20 Rs to some distance, followed by a coasting phase to Earth. In a nod towards a forecasting scheme, we consider only
Earth-directed CMEs, use kinematic measurements only from the Heliospheric Imagers aboard both STEREO spacecraft, treat each spacecraft separately to increase the event statistics, analyze the
measurements in a data-assimilative fashion and intercompare them against three popular localization schemes for single viewpoint observations (fixed-φ, harmonic mean and self-similar expansion). For
the 21 cases, we obtain the best mean absolute error (MAE) of 6.4±1.9 hours, for the harmonic mean approximation. Remarkably, the difference between predicted and observed ToA is < 52 minutes for 42%
of the cases. We find that CMEs continue to decelerate beyond even 0.7 AU, in some cases. | {"url":"https://essopenarchive.org/inst/20904?page=42&tag_filter=Solar+System+Physics","timestamp":"2024-11-08T06:12:33Z","content_type":"text/html","content_length":"148347","record_id":"<urn:uuid:a5d1828f-346c-47b8-843c-05f2e26751c4>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00184.warc.gz"} |
Elements of Geometry
Elements of Geometry: Containing the First Six Books of Euclid, with a Supplement on the Quadrature of the Circle and the Geometry of Solids ; to which are Added Elements of Plane and Spherical
John Playfair
From inside the book
Page 18 ... Spherical Trigonometry John Playfair. A B D E C. N.B. When several angles are at one point B, any one of them is expressed by three letters, of which the letter that is at the vertex of the angle, that is, at the point in which ...
Page 24 ... posite, shall be equal, each to each, viz. the angle ABC to the angle DEF, and the angle ACB to DFE. For, if the triangle ABC be applied to the triangle DEF, so that the point A may be on D ...
Page 25 ... A remainder CG; and FC was proved to be equal to GB, therefore the two sides BF, FC are equal to the two CG, GB, each to each; but the angle BFC is equal to the angle CGB; wherefore the ...
Page 26 ... B, cannot be equal to one another. Join CD, and if possible let CB be equal ... angle ACD is equal (5. 1.) to the angle ADC: But the angle ACD is greater than the angle BCD; there- ... C D ...
Page 27 ... B A D G C E F to DF; and also the base BC equal to the base EF. The angle BAC is equal to the angle EDF. For, if the triangle ABC be applied to the triangle DEF, so that the point B be on E, and ...
Section 10 236
Section 11 240
Section 12 246
Section 13 296
Section 14 298
Section 15 320
Section 16 331
Section 17
Bibliographic information | {"url":"https://books.google.com.jm/books?id=8SXPAAAAMAAJ&vq=spherical+angle&dq=editions:UOM39015063898350&lr=&output=html_text&source=gbs_navlinks_s","timestamp":"2024-11-03T21:34:46Z","content_type":"text/html","content_length":"57111","record_id":"<urn:uuid:f674b3bd-b07c-4e47-9f63-8b898d6b7a81>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00199.warc.gz"} |
Rational Expressions Worksheet Algebra 2
Rational Expressions is a “pay what you want” website that can be used by students and educators for online learning or for home study. With many ways to learn Math (and for people who love to take
it in little chunks), the Site is excellent for the student with a busy schedule.
The fundamentals of this site are presented in a clear and organized manner, and students will see a clear progression in both their textbook knowledge and understanding of the material. As the
Rational Expressions site has become more advanced, it has moved toward more content that is not directly from the textbook.
12 best Radical Expressions images on Pinterest from rational expressions worksheet algebra 2, source:pinterest.com
For the most part, the Rational Expressions site, or the new content that has come out over the years, is not terribly difficult to understand. It is certainly well worth the cost and is a great
value for the money. However, as with any resource, even with the best courses in the class, there are some things that students can miss if they do not look at the supplemental material and
workbooks that are offered by the teachers.
For example, the Problem Sets for the Rational Expressions site are often listed in the course catalog. Students and teachers alike can get a nice overview of the material in this book by using the
Problem Sets. One of the best things about these Problem Sets is that students can usually use them in conjunction with the worksheets to review any and all sections of the textbook or to even use it
as an additional tool to further review or clarify areas of their reading.
KateHo Infinite Algebra 2 Free Download algebra 2 worksheets from rational expressions worksheet algebra 2, source:kateho.com
This has also been used to supplement the Teaching and Resources module for the Rational Expressions books. Teachers can use the problem sets and the supplementary materials to review the material of
the Textbook and Practice Tests, and this can help them create problems or watch-lists that can be used in class. This is extremely valuable as it gives students a way to work through a problem or
read to review specific topics. Some students have found the Rational Expressions books are even more useful because they can learn how to solve problems and how to go through the exercises. These
eBooks also provide a great way to practice and reinforce the content of the books, and teachers can use the Question and Answers section to do this as well.
Teachers can use the interactive exercises for the Periodic Table of Elements to give their students a preview of the problems they will face when solving these problems in the book. Teachers can
also use the Portal to bring up students’ test scores so that they can determine if the students are doing well on the book. This is very helpful to teachers as it gives them a good idea of where
their students stand against the material they have to deal with.
Simplifying Expressions Worksheet from rational expressions worksheet algebra 2, source:whoisnasirelahi.com
In some instances, such as the Parenting Activity book, students may find that they are able to skip ahead and do the exercises that they need to do so that they do not have to spend so much time on
the book. Parents can also use the Portal to review the Materials chapter and the Problem Sets that they have already purchased. These are just two of the ways that the site has become a popular
resource for online resources.
There are many other ways that students can benefit from the Online Resource Book Resources for the Textbook Solutions for the Rational Expressions worksheet algebra book. Many of the worksheets and
textbooks that are available through the library and the school system are available online as well, and students can access the resources for free.
Graphing Rational Functions Worksheet Pdf Graphing Polynomial from rational expressions worksheet algebra 2, source:ning-guo.com
Students can also find free videos on the site for most of the supplementary materials for the Rational Expressions worksheet algebra course. Many of the videos are from local math camps and other
online resources, so students can benefit from the visual aid that they can use in class.
Students also have the option of watching online videos and reading the supplement guides for the Student Study Guide and Textbook Solutions for the Rational Expressions worksheet algebra course.
These books are for students who need the supplemental materials in the form of educational videos, notes, and worksheets.
Simplify Each Expression Worksheet Answers Elegant Worksheet Ideas from rational expressions worksheet algebra 2, source:thebruisers.net
Different resources are available to help students work through the material as it comes up, but not all of them are on offer at one time. As students are introduced to new subjects and concepts, the
various sites will offer supplemental materials as well, and these will likely be available at different times in different places.
Solving Radical Equations Worksheet Answers Awesome the Graph A from rational expressions worksheet algebra 2, source:therlsh.net
Multiplying rational expressions multiple variables video from rational expressions worksheet algebra 2, source:khanacademy.org
Simplifying radical expressions rational exponents radical equations from rational expressions worksheet algebra 2, source:slideshare.net
Algebra 2 Unit from rational expressions worksheet algebra 2, source:deliveryoffice.info
Rational Expressions Worksheet Algebra Worksheets Ning guo from rational expressions worksheet algebra 2 , source:ning-guo.com | {"url":"https://briefencounters.ca/18779/rational-expressions-worksheet-algebra-2/","timestamp":"2024-11-07T23:36:02Z","content_type":"text/html","content_length":"93821","record_id":"<urn:uuid:d9204657-d1e3-4e3d-bd2f-91550e9182df>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00364.warc.gz"} |
How to find the value of X in Adjacent Angles
To find the value of X in adjacent angles
Practice questions on vertical, adjacent, complementary, and supplementary angles. First say whether the angles are adjacent or vertical. Next, substitute the value of each angle into the equation.
Find unknown angles (from the adjacent angles tutorial)
What is the value of x if the sum of all adjacent angles in the illustration below is 180°?
Finding the measures of complementary, supplementary, vertical, and adjacent angles
Under "Find the measures of complementary, supplementary, vertical, and adjacent angles" we discuss complementary, supplementary, vertical, adjacent, and corresponding angles. Two angles are complementary when the sum of their measures is 90°. If, for example, ∠A = 52° and ∠B = 38°, the angles ∠A and ∠B are complementary.
Two angles are supplementary when the sum of their measures is 180°. For example, angles whose measures are 112° and 68° are supplementary. Vertical angles are the pairs of opposite angles formed when two lines cross each other. Two angles are adjacent if they have a common side and a common vertex and do not overlap.
Adjacent angles need not be equal in measure, but vertical angles are exactly equal. Since AOB is a straight line, the sum of these three angles is 180 degrees. For which value of x will AB and CD be parallel lines? ∠FOB and ∠OHD are corresponding angles, so they are congruent.
Therefore, the value of x is 30°. For which value of x will AB and CD be parallel lines? Example 4: Find the value of x. Example 5: Solution: Since AOB is a straight line, the sum of these three angles is 180 degrees.
Therefore, the value of x is 35. Having gone through the above, we sincerely hope that the student has grasped "Find the measures of complementary, supplementary, vertical, and adjacent angles".
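The angle facts used here (complementary pairs sum to 90°, supplementary pairs to 180°, angles along a straight line to 180°) translate directly into a few lines of code. This is an illustrative sketch of my own; the 90° and 55° values in the last call are hypothetical, chosen only so that the unknown comes out to the 35° found above:

```python
def is_complementary(a, b):
    """Two angles are complementary when their measures sum to 90 degrees."""
    return a + b == 90

def is_supplementary(a, b):
    """Two angles are supplementary when their measures sum to 180 degrees."""
    return a + b == 180

def angle_on_straight_line(*known):
    """Angles on a straight line sum to 180 degrees; return the remaining one."""
    return 180 - sum(known)

print(is_complementary(52, 38))        # True: 52 + 38 = 90
print(is_supplementary(112, 68))       # True: 112 + 68 = 180
print(angle_on_straight_line(90, 55))  # 35
```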
More on this topic | {"url":"http://www.iliamnaair.com/topic/how-to-find-the-value-of-x-in-adjacent-angles","timestamp":"2024-11-02T23:42:36Z","content_type":"text/html","content_length":"22800","record_id":"<urn:uuid:4641254b-2200-435c-8879-bf30f347893f>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00224.warc.gz"} |
Review of The Man of Numbers: Fibonacci's Arithmetic Revolution by Keith Devlin
The Man of Numbers:
Fibonacci's Arithmetic Revolution
by Keith Devlin
Little is known of the life of Leonardo Pisano, who also goes by the names Leonardo of Pisa and Leonardo Fibonacci. His life straddled the transition between the 12th and 13th centuries, but neither the date of his birth nor that of his death is known exactly. Leonardo wrote several books, in the very first of which, Liber Abbaci (dated 1202), he included a recreational puzzle whose solution - the present
day Fibonacci numbers - made his name famous in mathematical circles. Fibonacci numbers and the associated Golden Ratio have an uncanny ability to pop up rather unexpectedly in various circumstances.
There is even a mathematical periodical (The Fibonacci Quarterly) that is devoted entirely to the properties of the Fibonacci numbers.
Emphatically, Keith Devlin's book is not about the Fibonacci numbers.
What is known of the life of Leonardo Pisano came from scarce sources. In the Prologue to Liber Abbaci Leonardo, rather uncharacteristically, provides some autobiographical details related to the
circumstances of his becoming acquainted with the Indian system of numeration. There is also a record of Leonardo being summoned for an audience with the Holy Roman Emperor Frederick II sometime in
the mid-1220s. In 1241, the commune of Pisa granted "the discreet and learned man, Master Leonardo" an annual honorarium of twenty Pisan pounds plus expenses for services to the city. So it is known
that Leonardo was alive at the time. It is not known how and when he died. Speculation as to how Leonardo ended his days ranges from his being killed in the recurrent civil strife in Pisa to him
living out his days peacefully as a revered and honored citizen. That's about it.
So Devlin's book is far from being a biographical exposition of Leonardo's life.
Liber Abbaci was a huge volume in excess of 600 pages. The theory has been illustrated with copious examples designed to provide exercises in using the methods described in the book. Chapter 12 alone
presents 259 worked examples. All the problems were of the kind we call nowadays "word problems". For example, here's a problem included towards the end of Chapter 11:
A certain man buys 30 birds which are partridges, pigeons, and sparrows, for 30 denari. A partridge he buys for 3 denari, a pigeon for 2 denari, and 2 sparrows for 1 denaro, namely 1 sparrow for 1/2
denaro. It is sought how many birds he buys of each kind.
The author shows solutions to this and several other word problems.
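Leonardo solved such problems rhetorically, but the answer to the bird problem is easy to confirm by brute force (a modern check of my own, not Leonardo's method). Doubling the cost equation keeps everything in whole denari:

```python
# 30 birds for 30 denari: partridges cost 3, pigeons 2, sparrows 1/2 each.
# Doubling the cost equation, 6p + 4g + s = 60, avoids fractions.
solutions = [
    (p, g, s)
    for p in range(1, 11)    # at least one partridge, at most 10 (3 denari each)
    for g in range(1, 16)    # at least one pigeon, at most 15 (2 denari each)
    for s in [30 - p - g]    # the remaining birds are sparrows
    if s > 0 and 6 * p + 4 * g + s == 60
]
print(solutions)  # [(3, 5, 22)]: 3 partridges, 5 pigeons, 22 sparrows
```

The answer is unique in positive whole birds: 3 partridges (9 denari), 5 pigeons (10 denari), and 22 sparrows (11 denari).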
However, the book is not about solving word or any other kind of problems.
So, what is then the book about?
In Chapter 0, Devlin sets up the plan for the book:
The lack of biographical details makes straight chronicle of Leonardo's life impossible. ... Thus a book about Leonardo must focus on his great contribution and his intellectual legacy. ... We may
not have a detailed historical record of Leonardo the man, but we have his words and ideas. Just as we can come to understand great novelists through their books or accomplished composers through
their music - particularly if we understand the circumstances in which they created - so too we can come to understand Leonardo of Pisa.
The rest of the book is a doggedly consistent execution of that program. Books describing the Hindu-Arab numeration have been available already in the 12th century, before the appearance of Liber
Abbaci in 1202. Hundreds of similar books were written in the following 200 years. For a long time, Liber Abbaci stood apart from the pack. Not only did Leonardo manage to package the new methods in a readable and understandable form, he also managed to sell the new methods to the world, thus inducing the intellectual, commercial, and scientific revolution that Devlin compares to the
achievements of Galileo, Copernicus, and the invention of computers.
Starting with the publication of Liber Abbaci, the system that we are using nowadays kept gaining ground. However, it was not altogether an easy fight. For example, in the statutes of the Arte del Cambio (Guild of Money Changers) of Florence in 1299, a proclamation forbade its members from using the new numerals. ... In 1494, the money changers in Frankfurt attempted to prohibit the use of the Hindu-Arab system, persuading the city to issue an ordinance that proclaimed "the master calculators are to abstain from calculating with digits." ... Abacus board arithmetic was still dominant in northern Europe up to the end of the sixteenth century.
Reading the book one feels that the story told by John Allen Paulos in his Beyond Numeracy morphs from a hilarious anecdote into the realm of reality:
A German merchant of the fifteenth century asked an eminent professor where he should send his son for a good business education. The professor responded that German universities would be sufficient
to teach the boy addition and subtraction but he would have to go to Italy to learn multiplication and division.
By the end of the 15th century, mathematical authors began printing their books, while Liber Abbaci remained in manuscript form. Leonardo's fame waned and his name all but disappeared from the annals of history until the manuscript was rediscovered at the end of the 18th century by Pietro Cossali. Cossali reaffirmed the all-important contribution of Leonardo in introducing the Hindu-Arabic system into the modern world.
Devlin paints a broad swath of history from the 12th through the 20th century as related to the origins, education, life of Leonardo and spread of his ideas, history of commerce, rivalry between
various commercial centers (Pisa, Florence, North Africa, Venice, the rest of Europe), ending with a charming adventure of Leonardo's statue during WWII.
This is a marvelously informative and well written book. It's a good read that I recommend to all curious of the history of mathematical ideas in general and the number systems in particular.
The Man of Numbers: Fibonacci's Arithmetic Revolution, by Keith Devlin. Walker & Company (July 5, 2011). Hardcover, 192 pp, $14.50. ISBN 978-0802778123.
|Up| |Contact| |Front page| |Contents|
Copyright © 1996-2018
Alexander Bogomolny | {"url":"https://cut-the-knot.org/books/Reviews/DevlinsFibonacci.shtml","timestamp":"2024-11-03T15:47:55Z","content_type":"text/html","content_length":"17136","record_id":"<urn:uuid:802f6a5e-58cd-4349-9d14-36dfb5bf4936>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00898.warc.gz"} |
Basel problem, the Glossary
Table of Contents
1. 103 relations: Adele ring, Akiva Yaglom, Algebraic group, Analytic function, Antiderivative, Apéry's constant, Augustin-Louis Cauchy, Bailey–Borwein–Plouffe formula, Basel, Bernhard Riemann,
Bernoulli family, Bernoulli number, Binomial theorem, Calculus, Cauchy product, Closed-form expression, Compact space, Complex analysis, Complex number, Congruence subgroup, Constant of
integration, Continued fraction, Coprime integers, De Moivre's formula, Elementary symmetric polynomial, Euler product, Euler's constant, Euler's formula, Factorization, Fourier analysis,
Generating function, Haar measure, Harmonic number, Heuristic, Hilbert space, Hyperbolic functions, Injective function, Inner product space, Integration by parts, Integration by substitution,
Interchange of limiting operations, Irrationality, Isaak Yaglom, Karl Weierstrass, L'Hôpital's rule, Leibniz integral rule, Leonhard Euler, Limit (mathematics), Limit of a function, List of sums
of reciprocals, ...
In mathematics, the adele ring of a global field (also adelic ring, ring of adeles or ring of adèles) is a central object of class field theory, a branch of algebraic number theory.
See Basel problem and Adele ring
Akiva Moiseevich Yaglom (Аки́ва Моисе́евич Ягло́м; 6 March 1921 – 13 December 2007) was a Soviet and Russian physicist, mathematician, statistician, and meteorologist.
See Basel problem and Akiva Yaglom
In mathematics, an algebraic group is an algebraic variety endowed with a group structure that is compatible with its structure as an algebraic variety.
See Basel problem and Algebraic group
In mathematics, an analytic function is a function that is locally given by a convergent power series.
See Basel problem and Analytic function
In calculus, an antiderivative, inverse derivative, primitive function, primitive integral or indefinite integral of a function is a differentiable function whose derivative is equal to the original
See Basel problem and Antiderivative
In mathematics, Apéry's constant is the sum of the reciprocals of the positive cubes. Basel problem and Apéry's constant are zeta and L-functions.
See Basel problem and Apéry's constant
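The contrast between ζ(2), the Basel sum, and ζ(3) is easy to see numerically. A quick sketch of my own (partial sums only; the term count of 200,000 is an arbitrary choice):

```python
import math

def zeta_partial(s, terms=200_000):
    """Partial sum of the zeta series: sum of 1/n^s for n = 1..terms."""
    return sum(1.0 / n ** s for n in range(1, terms + 1))

# The Basel problem: zeta(2) = pi^2 / 6.
print(zeta_partial(2), math.pi ** 2 / 6)
# Apery's constant zeta(3) ~ 1.2020569... has no comparable known closed form.
print(zeta_partial(3))
```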
Baron Augustin-Louis Cauchy (France:, ; 21 August 1789 – 23 May 1857) was a French mathematician, engineer, and physicist.
See Basel problem and Augustin-Louis Cauchy
The Bailey–Borwein–Plouffe formula (BBP formula) is a formula for pi. Basel problem and Bailey–Borwein–Plouffe formula are pi algorithms.
See Basel problem and Bailey–Borwein–Plouffe formula
Basel, also known as Basle (Bâle; Basilea; Basileia), is a city in northwestern Switzerland.
Georg Friedrich Bernhard Riemann (17 September 1826 – 20 July 1866) was a German mathematician who made profound contributions to analysis, number theory, and differential geometry.
See Basel problem and Bernhard Riemann
The Bernoulli family of Basel was a patrician family, notable for having produced eight mathematically gifted academics who, among them, contributed substantially to the development of mathematics
and physics during the early modern period.
See Basel problem and Bernoulli family
In mathematics, the Bernoulli numbers are a sequence of rational numbers which occur frequently in analysis. Basel problem and Bernoulli number are number theory.
See Basel problem and Bernoulli number
In elementary algebra, the binomial theorem (or binomial expansion) describes the algebraic expansion of powers of a binomial.
See Basel problem and Binomial theorem
Calculus is the mathematical study of continuous change, in the same way that geometry is the study of shape, and algebra is the study of generalizations of arithmetic operations.
See Basel problem and Calculus
In mathematics, more specifically in mathematical analysis, the Cauchy product is the discrete convolution of two infinite series.
See Basel problem and Cauchy product
In mathematics, an expression is in closed form if it is formed with constants, variables and a finite set of basic functions connected by arithmetic operations (and integer powers) and function
See Basel problem and Closed-form expression
In mathematics, specifically general topology, compactness is a property that seeks to generalize the notion of a closed and bounded subset of Euclidean space.
See Basel problem and Compact space
Complex analysis, traditionally known as the theory of functions of a complex variable, is the branch of mathematical analysis that investigates functions of complex numbers.
See Basel problem and Complex analysis
In mathematics, a complex number is an element of a number system that extends the real numbers with a specific element denoted i, called the imaginary unit and satisfying the equation i^2 = -1.
See Basel problem and Complex number
In mathematics, a congruence subgroup of a matrix group with integer entries is a subgroup defined by congruence conditions on the entries.
See Basel problem and Congruence subgroup
In calculus, the constant of integration, often denoted by C (or c), is a constant term added to an antiderivative of a function f(x) to indicate that the indefinite integral of f(x) (i.e., the set
of all antiderivatives of f(x)), on a connected domain, is only defined up to an additive constant.
See Basel problem and Constant of integration
In mathematics, a continued fraction is an expression obtained through an iterative process of representing a number as the sum of its integer part and the reciprocal of another number, then writing
this other number as the sum of its integer part and another reciprocal, and so on.
See Basel problem and Continued fraction
In number theory, two integers and are coprime, relatively prime or mutually prime if the only positive integer that is a divisor of both of them is 1. Basel problem and coprime integers are number
See Basel problem and Coprime integers
In mathematics, de Moivre's formula (also known as de Moivre's theorem and de Moivre's identity) states that for any real number x and integer n it holds that (cos x + i sin x)^n = cos(nx) + i sin(nx), where i is the imaginary unit.
See Basel problem and De Moivre's formula
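De Moivre's identity can be spot-checked numerically with Python's built-in complex numbers (the values of x and n below are arbitrary):

```python
import math

x, n = 0.7, 5
lhs = complex(math.cos(x), math.sin(x)) ** n     # (cos x + i sin x)^n
rhs = complex(math.cos(n * x), math.sin(n * x))  # cos(nx) + i sin(nx)
print(abs(lhs - rhs))  # effectively zero, up to floating-point error
```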
In mathematics, specifically in commutative algebra, the elementary symmetric polynomials are one type of basic building block for symmetric polynomials, in the sense that any symmetric polynomial
can be expressed as a polynomial in elementary symmetric polynomials.
See Basel problem and Elementary symmetric polynomial
In number theory, an Euler product is an expansion of a Dirichlet series into an infinite product indexed by prime numbers. Basel problem and Euler product are zeta and L-functions.
See Basel problem and Euler product
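For the Basel problem, the Euler product over primes converges to the same value, π²/6, as the series ∑ 1/n². A small sieve makes this checkable; this is a sketch of my own, with the truncation at 100,000 chosen arbitrarily:

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes: all primes not exceeding n."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

product = 1.0
for p in primes_up_to(100_000):
    product *= 1.0 / (1.0 - p ** -2)  # Euler factor 1/(1 - p^-s) at s = 2

print(product, math.pi ** 2 / 6)  # both close to 1.6449...
```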
Euler's constant (sometimes called the Euler–Mascheroni constant) is a mathematical constant, usually denoted by the lowercase Greek letter gamma (γ), defined as the limiting difference between the harmonic series and the natural logarithm: γ = lim_{n→∞} ((1 + 1/2 + ⋯ + 1/n) − ln n).
See Basel problem and Euler's constant
Euler's formula, named after Leonhard Euler, is a mathematical formula in complex analysis that establishes the fundamental relationship between the trigonometric functions and the complex
exponential function.
See Basel problem and Euler's formula
In mathematics, factorization (or factorisation, see English spelling differences) or factoring consists of writing a number or another mathematical object as a product of several factors, usually
smaller or simpler objects of the same kind.
See Basel problem and Factorization
In mathematics, Fourier analysis is the study of the way general functions may be represented or approximated by sums of simpler trigonometric functions.
See Basel problem and Fourier analysis
In mathematics, a generating function is a representation of an infinite sequence of numbers as the coefficients of a formal power series.
See Basel problem and Generating function
In mathematical analysis, the Haar measure assigns an "invariant volume" to subsets of locally compact topological groups, consequently defining an integral for functions on those groups.
See Basel problem and Haar measure
In mathematics, the n-th harmonic number is the sum of the reciprocals of the first n natural numbers: H_n = 1 + 1/2 + 1/3 + ⋯ + 1/n. Basel problem and harmonic number are number theory.
See Basel problem and Harmonic number
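Harmonic numbers connect directly to Euler's constant defined earlier in this list; a short sketch of my own (the term counts are arbitrary choices):

```python
import math

def harmonic(n):
    """n-th harmonic number H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

print(harmonic(4))                        # 1 + 1/2 + 1/3 + 1/4 = 25/12
print(harmonic(10**6) - math.log(10**6))  # approaches Euler's constant ~0.57722
```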
A heuristic or heuristic technique (problem solving, mental shortcut, rule of thumb) is any approach to problem solving that employs a pragmatic method that is not fully optimized, perfected, or
rationalized, but is nevertheless "good enough" as an approximation or attribute substitution.
See Basel problem and Heuristic
In mathematics, Hilbert spaces (named after David Hilbert) allow the methods of linear algebra and calculus to be generalized from (finite-dimensional) Euclidean vector spaces to spaces that may be
See Basel problem and Hilbert space
In mathematics, hyperbolic functions are analogues of the ordinary trigonometric functions, but defined using the hyperbola rather than the circle.
See Basel problem and Hyperbolic functions
In mathematics, an injective function (also known as injection, or one-to-one function) is a function that maps distinct elements of its domain to distinct elements; that is, implies.
See Basel problem and Injective function
In mathematics, an inner product space (or, rarely, a Hausdorff pre-Hilbert space) is a real vector space or a complex vector space with an operation called an inner product.
See Basel problem and Inner product space
In calculus, and more generally in mathematical analysis, integration by parts or partial integration is a process that finds the integral of a product of functions in terms of the integral of the
product of their derivative and antiderivative.
See Basel problem and Integration by parts
In calculus, integration by substitution, also known as u-substitution, reverse chain rule or change of variables, is a method for evaluating integrals and antiderivatives.
See Basel problem and Integration by substitution
In mathematics, the study of interchange of limiting operations is one of the major concerns of mathematical analysis, in that two given limiting operations, say L and M, cannot be assumed to give
the same result when applied in either order.
See Basel problem and Interchange of limiting operations
Irrationality is cognition, thinking, talking, or acting without rationality.
See Basel problem and Irrationality
Isaak Moiseevich Yaglom (Исаа́к Моисе́евич Ягло́м; 6 March 1921 – 17 April 1988) was a Soviet mathematician and author of popular mathematics books, some with his twin Akiva Yaglom.
See Basel problem and Isaak Yaglom
Karl Theodor Wilhelm Weierstrass (Weierstraß; 31 October 1815 – 19 February 1897) was a German mathematician often cited as the "father of modern analysis".
See Basel problem and Karl Weierstrass
L'Hôpital's rule or L'Hospital's rule, also known as Bernoulli's rule, is a mathematical theorem that allows evaluating limits of indeterminate forms using derivatives.
See Basel problem and L'Hôpital's rule
In calculus, the Leibniz integral rule for differentiation under the integral sign, named after Gottfried Wilhelm Leibniz, states that for an integral of the form ∫_{a(x)}^{b(x)} f(x,t) dt, where −∞ < a(x), b(x) < ∞ and the integrand is a function dependent on x, the derivative of this integral is expressible as d/dx (∫_{a(x)}^{b(x)} f(x,t) dt) = f(x, b(x))·b′(x) − f(x, a(x))·a′(x) + ∫_{a(x)}^{b(x)} ∂f/∂x (x,t) dt.
See Basel problem and Leibniz integral rule
Leonhard Euler (15 April 170718 September 1783) was a Swiss mathematician, physicist, astronomer, geographer, logician, and engineer who founded the studies of graph theory and topology and made
pioneering and influential discoveries in many other branches of mathematics such as analytic number theory, complex analysis, and infinitesimal calculus.
See Basel problem and Leonhard Euler
In mathematics, a limit is the value that a function (or sequence) approaches as the input (or index) approaches some value.
See Basel problem and Limit (mathematics)
Although the function (sin x)/x is not defined at zero, as x becomes closer and closer to zero, (sin x)/x becomes arbitrarily close to 1.
See Basel problem and Limit of a function
In mathematics and especially number theory, the sum of reciprocals generally is computed for the reciprocals of some or all of the positive integers (counting numbers)—that is, it is generally
the sum of unit fractions.
See Basel problem and List of sums of reciprocals
In trigonometry, trigonometric identities are equalities that involve trigonometric functions and are true for every value of the occurring variables for which both sides of the equality are defined.
See Basel problem and List of trigonometric identities
In mathematics, a zeta function is (usually) a function analogous to the original example, the Riemann zeta function Zeta functions include. Basel problem and List of zeta functions are zeta and
See Basel problem and List of zeta functions
In mathematics, the logarithm is the inverse function to exponentiation.
See Basel problem and Logarithm
Analysis is the branch of mathematics dealing with continuous functions, limits, and related theories, such as differentiation, integration, measure, infinite sequences, series, and analytic functions.
See Basel problem and Mathematical analysis
The Mathematical Association of America (MAA) is a professional society that focuses on mathematics accessible at the undergraduate level.
See Basel problem and Mathematical Association of America
Mathematical induction is a method for proving that a statement P(n) is true for every natural number n, that is, that the infinitely many cases P(0), P(1), P(2), P(3), \dots  all hold.
See Basel problem and Mathematical induction
A mathematical proof is a deductive argument for a mathematical statement, showing that the stated assumptions logically guarantee the conclusion.
See Basel problem and Mathematical proof
A mathematician is someone who uses an extensive knowledge of mathematics in their work, typically to solve mathematical problems.
See Basel problem and Mathematician
Mathematics is a field of study that discovers and organizes abstract objects, methods, theories and theorems that are developed and proved for the needs of empirical sciences and mathematics itself.
See Basel problem and Mathematics
In mathematics, the Mercator series or Newton–Mercator series is the Taylor series for the natural logarithm: \ln(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \cdots. The series converges to the natural logarithm (shifted by 1)
whenever -1 < x \le 1.
See Basel problem and Mercator series
In mathematics, a multiplicative inverse or reciprocal for a number x, denoted by 1/x or x−1, is a number which when multiplied by x yields the multiplicative identity, 1.
See Basel problem and Multiplicative inverse
Multivariable calculus (also known as multivariate calculus) is the extension of calculus in one variable to calculus with functions of several variables: the differentiation and integration of
functions involving multiple variables (multivariate), rather than just one.
See Basel problem and Multivariable calculus
The natural logarithm of a number is its logarithm to the base of the mathematical constant e, which is an irrational and transcendental number approximately equal to 2.71828.
See Basel problem and Natural logarithm
In mathematics, the natural numbers are the numbers 0, 1, 2, 3, etc., possibly excluding 0. Basel problem and natural number are number theory.
See Basel problem and Natural number
In mathematics, Newton's identities, also known as the Girard–Newton formulae, give relations between two types of symmetric polynomials, namely between power sums and elementary symmetric polynomials.
See Basel problem and Newton's identities
Number theory (or arithmetic or higher arithmetic in older usage) is a branch of pure mathematics devoted primarily to the study of the integers and arithmetic functions.
See Basel problem and Number theory
"Über die Anzahl der Primzahlen unter einer gegebenen Größe" (usual English translation: "On the Number of Primes Less Than a Given Magnitude") is a seminal 9-page paper by Bernhard Riemann published in the
November 1859 edition of the Monatsberichte der Königlich Preußischen Akademie der Wissenschaften zu Berlin.
See Basel problem and On the Number of Primes Less Than a Given Magnitude
In mathematics, particularly linear algebra, an orthonormal basis for an inner product space V with finite dimension is a basis for V whose vectors are orthonormal, that is, they are all unit vectors
and orthogonal to each other.
See Basel problem and Orthonormal basis
In number theory, given a prime number, the -adic numbers form an extension of the rational numbers which is distinct from the real numbers, though with some similar properties; -adic numbers can be
written in a form similar to (possibly infinite) decimals, but with digits based on a prime number rather than ten, and extending to the left rather than to the right. Basel problem and p-adic number
are number theory.
See Basel problem and P-adic number
In mathematical analysis, Parseval's identity, named after Marc-Antoine Parseval, is a fundamental result on the summability of the Fourier series of a function.
See Basel problem and Parseval's identity
In algebra, the partial fraction decomposition or partial fraction expansion of a rational fraction (that is, a fraction such that the numerator and the denominator are both polynomials) is an
operation that consists of expressing the fraction as a sum of a polynomial (possibly zero) and one or several fractions with a simpler denominator.
See Basel problem and Partial fraction decomposition
In mathematics, the Riemann zeta function is a function in complex analysis, which is also important in number theory. Basel problem and Particular values of the Riemann zeta function are zeta and L-functions.
See Basel problem and Particular values of the Riemann zeta function
A periodic function or cyclic function, also called a periodic waveform (or simply periodic wave), is a function that repeats its values at regular intervals or periods.
See Basel problem and Periodic function
Sir Henry Peter Francis Swinnerton-Dyer, 16th Baronet, (2 August 1927 – 26 December 2018) was an English mathematician specialising in number theory at the University of Cambridge.
See Basel problem and Peter Swinnerton-Dyer
Pietro Mengoli (1626, Bologna – June 7, 1686, Bologna) was an Italian mathematician and clergyman from Bologna, where he studied with Bonaventura Cavalieri at the University of Bologna, and succeeded
him in 1647.
See Basel problem and Pietro Mengoli
In mathematics, a polynomial is a mathematical expression consisting of indeterminates (also called variables) and coefficients, that involves only the operations of addition, subtraction,
multiplication and exponentiation to nonnegative integer powers, and has a finite number of terms.
See Basel problem and Polynomial
In mathematics, specifically in commutative algebra, the power sum symmetric polynomials are a type of basic building block for symmetric polynomials, in the sense that every symmetric polynomial
with rational coefficients can be expressed as a sum and difference of products of power sum symmetric polynomials with rational coefficients.
See Basel problem and Power sum symmetric polynomial
A prime number (or a prime) is a natural number greater than 1 that is not a product of two smaller natural numbers.
See Basel problem and Prime number
A random number is generated by a random (stochastic) process such as throwing dice.
See Basel problem and Random number
Rationality is the quality of being guided by or based on reason.
See Basel problem and Rationality
In mathematics, a recurrence relation is an equation according to which the nth term of a sequence of numbers is equal to some combination of the previous terms.
See Basel problem and Recurrence relation
The Riemann zeta function or Euler–Riemann zeta function, denoted by the Greek letter ζ (zeta), is a mathematical function of a complex variable defined as \zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}. Basel problem and Riemann zeta
function are zeta and L-functions.
See Basel problem and Riemann zeta function
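These glossary entries all orbit the Basel problem itself — the statement that ζ(2) = Σ 1/n² = π²/6. A quick numerical look at the partial sums of the series defined above (a standalone sketch, not from the source page):

```python
import math

def zeta2_partial(n_terms):
    """Partial sum of the Basel series: sum of 1/n^2 for n = 1..n_terms."""
    return sum(1.0 / (n * n) for n in range(1, n_terms + 1))

target = math.pi ** 2 / 6  # the Basel problem's closed form, ~1.644934

for n in (10, 1_000, 100_000):
    print(n, zeta2_partial(n), target - zeta2_partial(n))
# The error shrinks roughly like 1/n, so the series converges quite slowly.
```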
In mathematics, a series is, roughly speaking, the operation of adding infinitely many quantities, one after the other, to a given starting quantity.
See Basel problem and Series (mathematics)
In mathematics, physics and engineering, the sinc function, denoted by sinc(x), has two forms, normalized and unnormalized.
See Basel problem and Sinc function
In mathematics, a square number or perfect square is an integer that is the square of an integer; in other words, it is the product of some integer with itself. Basel problem and square number are
number theory and squares in number theory.
See Basel problem and Square number
In mathematics, a square-integrable function, also called a quadratically integrable function or L^2 function or square-summable function, is a real- or complex-valued measurable function for which
the integral of the square of the absolute value is finite.
See Basel problem and Square-integrable function
In calculus, the squeeze theorem (also known as the sandwich theorem, among other names) is a theorem regarding the limit of a function that is bounded between two other functions.
See Basel problem and Squeeze theorem
In mathematics, summation is the addition of a sequence of numbers, called addends or summands; the result is their sum or total.
See Basel problem and Summation
In mathematics and statistics, sums of powers occur in a number of contexts. Basel problem and sums of powers are number theory and squares in number theory.
See Basel problem and Sums of powers
In mathematics, the Tamagawa number \tau(G) of a semisimple algebraic group G defined over a global field k is the measure of G(\mathbb{A})/G(k), where \mathbb{A} is the adele ring of k.
See Basel problem and Tamagawa number
In mathematical analysis, Tannery's theorem gives sufficient conditions for the interchanging of the limit and infinite summation operations.
See Basel problem and Tannery's theorem
In mathematics, the Taylor series or Taylor expansion of a function is an infinite sum of terms that are expressed in terms of the function's derivatives at a single point.
See Basel problem and Taylor series
The Mathematical Intelligencer is a mathematical journal published by Springer Science+Business Media that aims at a conversational and scholarly tone, rather than the technical and specialist tone
more common among academic journals.
See Basel problem and The Mathematical Intelligencer
In mathematics, a transcendental number is a real or complex number that is not algebraic – that is, not the root of a non-zero polynomial with integer (or, equivalently, rational) coefficients.
See Basel problem and Transcendental number
In mathematics, the trigonometric functions (also called circular functions, angle functions or goniometric functions) are real functions which relate an angle of a right-angled triangle to ratios of
two side lengths.
See Basel problem and Trigonometric functions
In mathematics, a trigonometric substitution replaces a trigonometric function for another expression.
See Basel problem and Trigonometric substitution
The University of Cambridge is a public collegiate research university in Cambridge, England.
See Basel problem and University of Cambridge
In mathematics, particularly in order theory, an upper bound or majorant of a subset S of some preordered set is an element that is greater than or equal to every element of S.
See Basel problem and Upper and lower bounds
In mathematics, Vieta's formulas relate the coefficients of a polynomial to sums and products of its roots.
See Basel problem and Vieta's formulas
Vladimir Petrovich Platonov (Уладзімір Пятровіч Платонаў, Uladzimir Piatrovic Platonau; Влади́мир Петро́вич Плато́нов; born 1 December 1939, Stayki village, Vitebsk Region, Byelorussian SSR) is a
Soviet, Belarusian and Russian mathematician.
See Basel problem and Vladimir Platonov
In mathematics, a volume form or top-dimensional form is a differential form of degree equal to the differentiable manifold dimension.
See Basel problem and Volume form
In mathematics, and particularly in the field of complex analysis, the Weierstrass factorization theorem asserts that every entire function can be represented as a (possibly infinite) product
involving its zeroes.
See Basel problem and Weierstrass factorization theorem
In mathematics, the Weil conjecture on Tamagawa numbers is the statement that the Tamagawa number \tau(G) of a simply connected simple algebraic group defined over a number field is 1.
See Basel problem and Weil's conjecture on Tamagawa numbers
See also
Pi algorithms
Squares in number theory
Also known as 1 + 1/4 + 1/9 + 1/16 + · · ·, Basel series, Basel sum, Basler problem, Evaluation of z(2), Evaluation of ζ(2), Riemann zeta function zeta(2), Series of reciprocal squares, Sum of the
reciprocals of the square numbers, Sum of the reciprocals of the squares, Zeta(2), Π^2/6.
IMARGUMENT Function: Definition, Formula Examples and Usage
IMARGUMENT Function
Today, we’re going to be discussing the IMARGUMENT function in Google Sheets. This function is an incredibly useful tool that allows you to find the argument of a complex number in Google Sheets. If
you’re not familiar with complex numbers or the concept of an argument, don’t worry! We’ll be explaining everything in detail so that you can confidently use this function in your own spreadsheet projects.
So, what exactly does the IMARGUMENT function do? In a nutshell, it returns the argument of a complex number in radians. A complex number is a number that can be written in the form a+bi, where a and
b are real numbers and i is the imaginary unit. The argument of a complex number is the angle between the positive x-axis and the line connecting the origin to the complex number in the complex
plane. Essentially, it’s a measure of how far a complex number is from the origin, expressed in radians.
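The same angle is easy to compute outside of Sheets, which makes it convenient to sanity-check spreadsheet results. A minimal sketch using Python's standard `cmath` module (not part of the article; `cmath.phase` returns exactly the argument described above, in radians):

```python
import cmath
import math

z = complex(3, 4)      # the complex number 3+4i
arg = cmath.phase(z)   # angle from the positive x-axis to the line reaching z
print(arg)             # ~0.927 radians

# The argument is atan2(imaginary part, real part):
assert math.isclose(arg, math.atan2(4, 3))

# A positive real number has argument 0; a negative real number has argument pi:
print(cmath.phase(complex(5, 0)), cmath.phase(complex(-5, 0)))
```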
Definition of IMARGUMENT Function
The IMARGUMENT function in Google Sheets is a built-in function that returns the argument of a complex number in radians. It takes a single argument, which is the complex number for which you want to
find the argument. The syntax for the IMARGUMENT function is: IMARGUMENT(complex_number). The function returns the argument of the complex number in radians, which is a value between -π and π. If the
complex number is a real number, the function will return 0. If the complex number is not a valid complex number, the function will return the #VALUE! error. The IMARGUMENT function is a useful tool
for working with complex numbers in Google Sheets and can be used in a variety of spreadsheet applications.
Syntax of IMARGUMENT Function
The syntax for the IMARGUMENT function in Google Sheets is:

IMARGUMENT(complex_number)
This function takes a single argument, which is the complex number for which you want to find the argument. The complex number can be entered directly into the function as a string, or it can be a
cell reference that contains a complex number.
Here’s an example of how you might use the IMARGUMENT function in a formula:

=IMARGUMENT("3+4i")
This formula would return the argument of the complex number 3+4i, which is approximately 0.93.
It’s important to note that the complex number must be entered as a string, with the real and imaginary parts separated by a “+” or “-” sign and “i” representing the imaginary unit. If the complex
number is not entered in this format, the function will return an error.
Overall, the IMARGUMENT function is a useful tool for working with complex numbers in Google Sheets and can be used in a variety of spreadsheet applications.
Examples of IMARGUMENT Function
Here are three examples of how you can use the IMARGUMENT function in Google Sheets:
1. Find the argument of a complex number:
Suppose you have a list of complex numbers in a range of cells and you want to find the argument of each one. You can use the IMARGUMENT function to do this. For example, the formula:

=IMARGUMENT(A1)
would return the argument of the complex number in cell A1, assuming that cell A1 contains a valid complex number.
2. Use the IMARGUMENT function in a conditional formula:
You can also use the IMARGUMENT function in a conditional formula to perform different actions based on the argument of a complex number. For example, the formula:
=IF(IMARGUMENT(A1)>0, "Positive", "Negative")
would return “Positive” if the argument of the complex number in cell A1 is greater than 0, and “Negative” if it is less than or equal to 0.
3. Use the IMARGUMENT function in combination with other functions:
You can use the IMARGUMENT function in combination with other functions to perform more complex calculations involving complex numbers. For example, the formula:

=IMARGUMENT(A1) + IMARGUMENT(B1)
would add the arguments of the complex numbers in cells A1 and B1.
These are just a few examples of how you can use the IMARGUMENT function in Google Sheets. There are many other ways you can use this function to work with complex numbers in your spreadsheets.
Use Case of IMARGUMENT Function
Here are a few real-life examples of how you might use the IMARGUMENT function in Google Sheets:
1. Analyzing data in the complex plane:
Suppose you have a set of data that consists of complex numbers, and you want to analyze the distribution of the data in the complex plane. You can use the IMARGUMENT function to find the
argument of each complex number, and then use this information to create a scatter plot or histogram showing the distribution of the data.
2. Solving equations involving complex numbers:
The IMARGUMENT function can also be useful when solving equations involving complex numbers. For example, suppose you want to find the values of x that satisfy the equation x^2 + 1 = 0. One
solution is x = i, which is a complex number with an argument of π/2. You can use the IMARGUMENT function to check that the argument of the complex number x is indeed π/2.
3. Creating visualizations of complex functions:
Suppose you have a complex function that you want to visualize. You can use the IMARGUMENT function to find the argument of the function at different points, and then use this information to
create a color map or contour plot showing the distribution of the argument over the domain of the function.
These are just a few examples of how you might use the IMARGUMENT function in real-life scenarios. There are many other applications for this function in fields such as engineering, physics, and mathematics.
Limitations of IMARGUMENT Function
There are a few limitations of the IMARGUMENT function in Google Sheets that you should be aware of:
1. The function only works with complex numbers: The IMARGUMENT function is designed to work with complex numbers, which are numbers that can be written in the form a+bi, where a and b are real
numbers and i is the imaginary unit. If you try to use the function with a real number or any other type of value, it will return an error.
2. The function only returns the argument in radians: The IMARGUMENT function returns the argument of a complex number in radians, which is a unit of angle measure. If you want to express the
argument in degrees, you’ll need to use an additional function or formula to convert the radians to degrees.
3. The function can return an error if the complex number is not valid: The IMARGUMENT function will return an error if the complex number is not written in the correct format. The complex number
must be entered as a string, with the real and imaginary parts separated by a “+” or “-” sign and “i” representing the imaginary unit. If the complex number is not entered in this format, the
function will return the #VALUE! error.
Overall, the IMARGUMENT function is a useful tool for working with complex numbers in Google Sheets, but it does have some limitations that you should be aware of when using it.
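The second limitation — radians only — is simple to work around: multiply by 180/π (in Sheets you can wrap the result in the DEGREES function). A small Python sketch of the conversion, for illustration only:

```python
import cmath
import math

def argument_degrees(z):
    """Argument of a complex number, converted from radians to degrees."""
    return math.degrees(cmath.phase(z))

print(argument_degrees(complex(0, 1)))   # ~90  (pure imaginary)
print(argument_degrees(complex(1, 1)))   # ~45
print(argument_degrees(complex(-1, 0)))  # ~180
```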
Commonly Used Functions Along With IMARGUMENT
Here is a list of commonly used functions that can be used along with the IMARGUMENT function in Google Sheets:
1. IMREAL: The IMREAL function returns the real part of a complex number. You can use this function in combination with the IMARGUMENT function to find both the real and imaginary parts of a complex
number. For example, a formula that combines IMREAL(A1) with IMARGUMENT(A1) would return the real part of the complex number in cell A1, followed by the argument of the complex number in radians.
2. IMAGINARY: The IMAGINARY function returns the imaginary part of a complex number. You can use this function in combination with the IMARGUMENT function to find both the real and imaginary parts
of a complex number. For example, a formula that combines IMAGINARY(A1) with IMARGUMENT(A1) would return the imaginary part of the complex number in cell A1, followed by the argument of the complex number in radians.
3. IMCONJUGATE: The IMCONJUGATE function returns the conjugate of a complex number. You can use this function in combination with the IMARGUMENT function to find the conjugate of a complex number
and its argument. For example, a formula that combines IMCONJUGATE(A1) with IMARGUMENT(A1) would return the conjugate of the complex number in cell A1, followed by the argument of the complex number in radians.
4. IMCOS: The IMCOS function returns the cosine of a complex number. You can use this function in combination with the IMARGUMENT function to find the cosine of a complex number and its argument.
For example, a formula that combines IMCOS(A1) with IMARGUMENT(A1) would return the cosine of the complex number in cell A1, followed by the argument of the complex number in radians.
These are just a few examples of how you can use these functions in combination with the IMARGUMENT function in Google Sheets. There are many other functions that you can use to work with complex
numbers in your spreadsheets.
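Two identities behind these combinations are worth keeping in mind: conjugation negates the argument, and the cosine of a complex number is itself a complex number with its own argument. A Python sketch using `cmath` (Sheets' IMCONJUGATE and IMCOS behave analogously; this code is an illustration, not the Sheets implementation):

```python
import cmath
import math

z = complex(3, 4)

# Conjugation flips the sign of the imaginary part, so it negates the argument:
assert math.isclose(cmath.phase(z.conjugate()), -cmath.phase(z))

# The cosine of a complex number is complex, so it has an argument of its own:
c = cmath.cos(z)
print(c, cmath.phase(c))
```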
In summary, the IMARGUMENT function in Google Sheets is a useful tool for working with complex numbers. It allows you to find the argument of a complex number in radians, which is the angle between
the positive x-axis and the line connecting the origin to the complex number in the complex plane. The function takes a single argument, which is the complex number for which you want to find the
argument, and returns a value between -π and π. If the complex number is a real number, the function will return 0. If the complex number is not a valid complex number, the function will return the #VALUE! error.
There are many ways you can use the IMARGUMENT function in your own Google Sheets projects, such as analyzing data in the complex plane, solving equations involving complex numbers, and creating
visualizations of complex functions. You can also use the IMARGUMENT function in combination with other functions, such as IMREAL, IMAGINARY, IMCONJUGATE, and IMCOS, to perform more complex
calculations involving complex numbers.
We encourage you to try using the IMARGUMENT function in your own Google Sheets and see how it can make working with complex numbers easier and more efficient. Whether you’re a student, researcher,
or professional, the IMARGUMENT function is a valuable tool to have in your spreadsheet toolkit.
Video: IMARGUMENT Function
In this video, you will see how to use the IMARGUMENT function. We suggest you watch the video to understand the usage of the IMARGUMENT formula.
Every irreducible element of a GCD domain is prime. A GCD domain is integrally closed, and every nonzero element is primal. In other words, every GCD domain is a Schreier domain.
For every pair of elements x, y of a GCD domain R, a GCD d of x and y and an LCM m of x and y can be chosen such that dm = xy; stated differently, if x and y are nonzero elements and d is any GCD
of x and y, then xy/d is an LCM of x and y, and vice versa. It follows that the operations of GCD and LCM make the quotient R/~ into a distributive lattice, where "~" denotes the equivalence
relation of being associate elements. The equivalence between the existence of GCDs and the existence of LCMs is not a corollary of the similar result on complete lattices, as the quotient R/~ need
not be a complete lattice for a GCD domain R.
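In the integers — the most familiar GCD domain — the identity dm = xy from the paragraph above is easy to spot-check with Python's standard library (a sketch for illustration, not part of the article; `math.lcm` requires Python 3.9+):

```python
import math

def gcd_lcm_product_holds(x, y):
    """Check that gcd(x, y) * lcm(x, y) == x * y for positive integers."""
    return math.gcd(x, y) * math.lcm(x, y) == x * y

assert gcd_lcm_product_holds(12, 18)  # gcd 6, lcm 36, and 6 * 36 == 12 * 18
assert all(gcd_lcm_product_holds(x, y)
           for x in range(1, 50) for y in range(1, 50))
```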
If R is a GCD domain, then the polynomial ring R[X₁, ..., Xₙ] is also a GCD domain.^[2]
R is a GCD domain if and only if finite intersections of its principal ideals are principal. In particular, (a) ∩ (b) = (c), where c is the LCM of a and b.
For a polynomial in X over a GCD domain, one can define its content as the GCD of all its coefficients. Then the content of a product of polynomials is the product of their contents, as expressed by
Gauss's lemma, which is valid over GCD domains.
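Gauss's lemma as stated here — the content of a product equals the product of the contents — can be spot-checked over ℤ (itself a GCD domain) by multiplying coefficient lists. A sketch with hypothetical helper names, not from the article:

```python
import math
from functools import reduce

def content(coeffs):
    """Content of a polynomial over Z: the GCD of its coefficients."""
    return reduce(math.gcd, coeffs)

def poly_mul(f, g):
    """Multiply polynomials given as coefficient lists (lowest degree first)."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

f = [6, 10]       # 6 + 10x, content 2
g = [9, 15, 21]   # 9 + 15x + 21x^2, content 3
assert content(poly_mul(f, g)) == content(f) * content(g)  # 6 == 2 * 3
```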
• A unique factorization domain is a GCD domain. Among the GCD domains, the unique factorization domains are precisely those that are also atomic domains (which means that at least one
factorization into irreducible elements exists for any nonzero nonunit).
• A Bézout domain (i.e., an integral domain where every finitely generated ideal is principal) is a GCD domain. Unlike principal ideal domains (where every ideal is principal), a Bézout domain need
not be a unique factorization domain; for instance the ring of entire functions is a non-atomic Bézout domain, and there are many other examples. An integral domain is a Prüfer GCD domain if and
only if it is a Bézout domain.^[3]
• If R is a non-atomic GCD domain, then R[X] is an example of a GCD domain that is neither a unique factorization domain (since it is non-atomic) nor a Bézout domain (since X and a non-invertible
and non-zero element a of R generate an ideal not containing 1, but 1 is nevertheless a GCD of X and a); more generally any ring R[X₁, ..., Xₙ] has these properties.
• A commutative monoid ring R[X; S] is a GCD domain iff R is a GCD domain and S is a torsion-free cancellative GCD-semigroup. A GCD-semigroup is a semigroup with the additional property that for any a and b in the semigroup S, there exists a c such that (a + S) ∩ (b + S) = c + S. In particular, if G is an abelian group, then R[X; G] is a GCD domain iff R is a GCD domain and G is torsion-free.^[4]
• The ring ℤ[√−d] is not a GCD domain for all square-free integers d ≥ 3.^[5]
G-GCD domains
Many of the properties of GCD domain carry over to Generalized GCD domains,^[6] where principal ideals are generalized to invertible ideals and where the intersection of two invertible ideals is
invertible, so that the group of invertible ideals forms a lattice. In GCD rings, ideals are invertible if and only if they are principal, meaning the GCD and LCM operations can also be treated as
operations on invertible ideals.
Examples of G-GCD domains include GCD domains, polynomial rings over GCD domains, Prüfer domains, and π-domains (domains where every principal ideal is the product of prime ideals), which generalizes
the GCD property of Bézout domains and unique factorization domains.
1. ^ Anderson, D. D. (2000). "GCD domains, Gauss' lemma, and contents of polynomials". In Chapman, Scott T.; Glaz, Sarah (eds.). Non-Noetherian Commutative Ring Theory. Mathematics and its
Application. Vol. 520. Dordrecht: Kluwer Academic Publishers. pp. 1–31. doi:10.1007/978-1-4757-3180-4_1. MR 1858155.
2. ^ Robert W. Gilmer, Commutative semigroup rings, University of Chicago Press, 1984, p. 172.
3. ^ Ali, Majid M.; Smith, David J. (2003), "Generalized GCD rings. II", Beiträge zur Algebra und Geometrie, 44 (1): 75–98, MR 1990985. P. 84: "It is easy to see that an integral domain is a Prüfer
GCD-domain if and only if it is a Bezout domain, and that a Prüfer domain need not be a GCD-domain".
4. ^ Gilmer, Robert; Parker, Tom (1973), "Divisibility Properties in Semigroup Rings", Michigan Mathematical Journal, 22 (1): 65–86, MR 0342635.
5. ^ Mihet, Dorel (2010), "A Note on Non-Unique Factorization Domains (UFD)", Resonance, 15 (8): 737–739.
6. ^ Anderson, D. (1980), "Generalized GCD domains", Commentarii Mathematici Universitatis Sancti Pauli, 28 (2): 219–233
A Baker Made 20 Pies. A Boy Scout Troop Buys One-fourth Of His Pies, A Preschool Teacher Buys One-third Of His Pies, And A Caterer Buys One-sixth Of His Pies. How Many Pies Does The Baker Have Left?
A baker made 20 pies. A Boy Scout troop buys one-fourth of his pies, a preschool teacher buys one-third of his pies, and a caterer buys one-sixth of his pies. How many pies does the baker have left?
Detailed Explanation
Convert the different denominators to a common denominator that all the denominators can divide into evenly.
4, 3, and 6 all divide evenly into 12.
To convert \(\frac{\mathrm{1} }{\mathrm{4}}\) to \(\frac{\mathrm{x} }{\mathrm{12}}\), divide 12 (the new common denominator) by 4 (the old common denominator) to get 3.
Then multiply \(\frac{\mathrm{1} }{\mathrm{4}}\) by \(\frac{\mathrm{3} }{\mathrm{3}}\) (another way of saying 1).
The product is 3/12. (1/4 = 3/12).
Do the same calculation for the other fractions: \(\frac{\mathrm{1} }{\mathrm{3}}\) = \(\frac{\mathrm{4} }{\mathrm{12}}\) and \(\frac{\mathrm{1} }{\mathrm{6}}\) = \(\frac{\mathrm{2} }{\mathrm{12}}\).
Then add the new numerators together: 3 + 4 + 2 = 9.
This gives you your new added numerator. Place the added numerator over the new denominator, and you can see that \(\frac{\mathrm{9} }{\mathrm{12}}\) of the pies have been sold.
\(\frac{\mathrm{9} }{\mathrm{12}}\) can be reduced to \(\frac{\mathrm{3} }{\mathrm{4}}\). \(\frac{\mathrm{3} }{\mathrm{4}}\) or 75% of the pies have been sold.
20 × 0.75 = 15. 15 of 20 pies have been sold. 20 – 15 = 5 pies remaining.
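The same arithmetic can be double-checked exactly with Python's `fractions` module (a quick verification sketch, not part of the explanation above):

```python
from fractions import Fraction

total_pies = 20
sold_fraction = Fraction(1, 4) + Fraction(1, 3) + Fraction(1, 6)  # 9/12 = 3/4
remaining = total_pies * (1 - sold_fraction)

print(sold_fraction)  # 3/4 of the pies were sold
print(remaining)      # 5 pies remain
```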
Bonding Curve | Flap
Since v2.4.0, Flap supports multiple bonding curves, but only one is active.
What is a Bonding Curve
Before listing on a DEX, each token launched on Flap is bonded to a bonding curve. When you buy tokens from the bonding curve, you send your ETH (or another quote token, usually the native token of that chain) to the bonding curve as a reserve, and the bonding curve mints tokens to your address. If you sell your tokens to the bonding curve, the bonding curve burns the tokens you sell and sends the ETH back to you.
A bonding curve defines the relationship between the trading token's supply and the reserve (i.e. quote tokens such as ETH or BNB). The change of the reserve with respect to the supply is the price. Our bonding curve is based on a constant product equation. You may have heard about this equation, which Uniswap popularized.
You may even wonder what is the difference between a bonding curve token launching platform like flap and Unsiwap? The answer is the liquidity. The liquidity does not change on the bonding curve.
However, anyone can add liquidity to the Uniswap pools.
Flap's Bonding Curve-Alpha
Based on the above equation, we can get the reserve of the token from its circulating supply.
Bonding Curves On Different Chains
For different deployments, the value of r is different:
Let's take the latest BSC curve for example.
The supply vs reserve graph is as follows:
When the circulating supply reaches 50% of the total supply, the reserve is 4 BNB.
When the circulating supply reaches 80% of the total supply, the reserve is 16 BNB.
On BSC chain, when the circulating supply reaches 80% of the total supply, the token will be listed on DEX. ( check List On DEX for more details ). | {"url":"https://docs.flap.sh/flap/bonding-curve","timestamp":"2024-11-06T18:34:24Z","content_type":"text/html","content_length":"351211","record_id":"<urn:uuid:859b7351-fbf0-4203-b477-f1dd31b277f5>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00763.warc.gz"} |
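The curve equation itself did not survive in this copy of the page. Purely as an illustration, both BSC data points quoted above are reproduced by a constant-product-style curve of the form R(x) = r·x/(1 − x), where x is the circulating fraction of total supply and r = 4 BNB — this fitted form is my assumption, not Flap's published formula:

```python
def reserve_bnb(x, r=4.0):
    """Hypothetical reserve curve fitted to the two quoted points.

    x is the circulating supply as a fraction of total supply; this is an
    inferred constant-product-style form, not Flap's documented equation.
    """
    return r * x / (1.0 - x)

print(reserve_bnb(0.5))  # 4.0 -> 4 BNB reserve at 50% of supply
print(reserve_bnb(0.8))  # ~16 -> 16 BNB reserve at 80%, the DEX-listing point
```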
C = 2πr, where:
C = circumference
π = the constant pi
r = the radius of the circle
What is the circumference?
The circumference of a circle is defined as the linear distance around it. In other words, if a circle is opened to form a straight line, then the length of that line will be the circle's circumference.
In geometry, the circumference is the perimeter of a circle or ellipse. That is, the circumference would be the arc length of the circle as if it were opened up and straightened out to a line
segment. More generally, the perimeter is the curve length around any closed figure. Wikipedia
How do you find a circumference?
To calculate the circumference, you need the radius of the circle: Multiply the radius by 2 to get the diameter. Multiply the result by π, or 3.14 for an estimation. That’s it; you found the
circumference of the circle.
What are circumference and pi?
Circles are all similar, and “the circumference divided by the diameter” produces the same value regardless of their radius. This value is the ratio of the circumference of a circle to its diameter
and is called π (Pi).
Is the circumference 2 pi r?
The perimeter or circumference of the circle can be found using the equation C=2π(r), where r= the radius of the circle.
What are circumference and diameter?
The circumference is the length of one complete ‘lap’ around a circle, and the diameter is the length of the line segment that cuts a circle in half. Think of circumference as an outer measurement
and diameter as an inner measurement of the circle!
What are the formulas for the area and circumference of a circle?
What are the 3 circle formulas?
What are all Circle Formulas?
• The diameter of a Circle D = 2 × r.
• Circumference of a Circle C = 2 × π × r.
• Area of a Circle A = π × r^2
Formulas Related to Circles
Diameter of a Circle D = 2 × r
Circumference of a Circle C = 2 × π × r
Area of a Circle A = π × r^2
What value is pi?
In decimal form, the value of pi is approximately 3.14. But pi is an irrational number, meaning that its decimal form neither ends (like 1/4 = 0.25) nor becomes repetitive (like 1/6 = 0.166666…). (To
only 18 decimal places, pi is 3.141592653589793238.)
What is the pi formula?
By definition, pi is the ratio of the circumference of a circle to its diameter. In other words, pi equals the circumference divided by the diameter (π = c/d).
Is the diameter half of the circumference?
Explanation: The circumference of a circle is equal to π⋅d, where d is the diameter of the circle. Since π = 3.14159…, which is approximately 3, the circumference is about 3 times the diameter.
Is the diameter bigger than the circumference?
The diameter is ALWAYS approximately 3 times smaller than the circumference! Or to put it another way, the circumference is approximately 3 times bigger than the diameter. Look at the table below and
see if you can spot the pattern.
What are the 5 properties of a circle?
Circle Properties
• The circles are said to be congruent if they have equal radii.
• The diameter of a circle is the longest chord of a circle.
• Equal chords of a circle subtend equal angles at the centre.
• The radius drawn perpendicular to the chord bisects the chord.
• Circles having different radii are similar.
Circumference of Circle
The circumference of a circle is the perimeter of the circle. It is the total length of the boundary of the circle. The circumference of a circle is the product of the constant π and the diameter of
the circle. A person walking across a circular park, or a circular table to be bordered requires this metric of the circumference of a circle. The circumference is a linear value and its units are
the same as the units of length.
A circle is a round closed figure where all its boundary points are equidistant from a fixed point called the center. The two important metrics of a circle is the area of a circle and the
circumference of a circle. Here we shall aim at understanding the formula and calculation of the circumference of a circle.
What is the Circumference of a Circle?
The circumference of a circle is its boundary or the length of the complete arc of a circle. Let us understand this concept using an example. Consider a circular park shown below.
If a boy starts running from point ‘A’ and reaches the same point after taking one complete round of the park, a distance is covered by him. This distance or boundary is called the circumference of
the park which is in the shape of a circle. The circumference is the length of the boundary.
Circumference of a Circle Definition
The circumference of a circle refers to the measure of its boundary. If we open a circle and measure the boundary just like we measure a straight line, we get the circumference of the circle in terms
of units of length like centimeters, meters, or kilometers.
Now let us learn about the elements that make up circumference. These are the three most important elements of a circle.
• Center: The center of the circle is a point that is at a fixed distance from any other point from the circumference.
• Diameter: The diameter is the distance across the circle through the center, it is a line that meets the circumference at both ends and it needs to pass through the center.
• Radius: The radius of a circle is the distance from the center of a circle to any point on the circumference of the circle.
Circumference of Circle Formula
The formula for the circumference of a circle is expressed using the radius ‘r’ of the circle and the value of ‘pi’. It is expressed as, Circumference of a circle formula = 2πr. While using this
circumference formula, if we do not have the value of the radius, we can find it using the diameter. That is, if the diameter is known, it can be divided by 2 to obtain the value of the radius
because of the diameter of a circle = 2 × radius. Another way to calculate the circumference of a circle is by using the formula: Circumference = π × Diameter. If we need to calculate the radius or
diameter, when the circumference of a circle is given, we use the formula: Radius = Circumference/2π
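The relationships in this paragraph are easy to verify in code (a short sketch; the names are illustrative):

```python
import math

def radius_from_circumference(C):
    # Radius = Circumference / (2 * pi)
    return C / (2 * math.pi)

C = 157.0
r = radius_from_circumference(C)
print(r)        # ~24.99 (not exactly 25, since 157 was computed with pi = 3.14)
print(2 * r)    # the diameter
```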
How to Find the Circumference of Circle?
Although the circumference of a circle is the length of its boundary, it cannot be calculated with the help of a ruler (scale) like it is usually done for other polygons. This is because a circle is
a curved figure. Therefore, to calculate the circumference of a circle, we apply a formula that uses the radius or the diameter of the circle and the value of Pi (π).
Pi is a special mathematical constant with a value approximated to 3.14159 or π = 22/7. The value of π = 22/7 is used in various formulas. It is the ratio of circumference to diameter, where C = πD.
Let us consider a practical illustration to understand how to calculate the circumference of a circle with the help of the circumference formula.
Example: If the radius of the circle is 25 units, find the circumference of the circle. (Take π = 3.14)
Solution: Given, radius = 25 units
Let us write the circumference formula and then we will substitute the value of r (radius) in it.
Circumference of circle formula = 2πr
C = 2 × π × 25
C = 2 × 3.14 × 25 = 157 units
Therefore, the circumference of a circle is 157 units.
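The worked example translates directly into code:

```python
import math

radius = 25
C_approx = 2 * 3.14 * radius       # using pi = 3.14, as in the example
print(C_approx)                    # 157.0 units

C_exact = 2 * math.pi * radius     # with full-precision pi
print(C_exact)                     # ~157.08 units
```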
Circumference to Diameter
The circumference to diameter of a circle is a ratio used to define the standard definition of Pi (π). If you know the diameter ‘d’ of a circle, then you can easily find the circumference C using the
relation: C = πd. So, when the circumference C is placed in a ratio with the diameter d, the answer we get is π.
Important Notes on Circumference of a Circle
• π(Pi) is a mathematical constant that is the ratio of the circumference of a circle to its diameter. It is approximated to π = 22/7 or 3.14
• If the radius of a circle is extended further and touches the boundary of the circle, it becomes the diameter of a circle. Therefore, Diameter = 2 × Radius
• The circumference is the distance around a circle or the length of a circle.
• We can find the circumference of a circle using the radius or diameter.
• Circumference formula = π× Diameter; Circumference = 2πr.
Project Euler 162 Solution: Hexadecimal numbers
Project Euler 162: how many hexadecimal numbers containing at most 16 hexadecimal digits exist with all of the digits 0, 1 and A present at least once?
Problem Description
In the hexadecimal number system numbers are represented using 16 different digits:
The hexadecimal number AF when written in the decimal number system equals 10×16+15=175.
In the 3-digit hexadecimal numbers 10A, 1A0, A10, and A01 the digits 0,1 and A are all present.
Like numbers written in base ten we write hexadecimal numbers without leading zeroes.
How many hexadecimal numbers containing at most sixteen hexadecimal digits exist with all of the digits 0,1, and A present at least once?
Give your answer as a hexadecimal number.
(A,B,C,D,E and F in upper case, without any leading or trailing code that marks the number as hexadecimal and without leading zeroes , e.g. 1A3F and not: 1a3f and not 0x1a3f and not $1A3F and not #
1A3F and not 0000001A3F)
This is a typical application of the inclusion-exclusion principle. The principle is helpful in answering questions where the number of elements of a set is required but not the elements themselves: count the elements of overlapping sets, subtract the over-counted intersections, and add back what was subtracted twice.
The equation, after combining some terms looks like: 15 x 16^(n-1) + 41 x 14^(n-1) – (43 x 15^(n-1) + 13^n) for each possible hex number containing n digits {3..16}.
Since the first digit is non-zero, the total number of possibilities without considering the constraints is 15 x 16^(n-1). Then subtract the counts of numbers missing 0, missing 1, and missing A. But we need to add back those that have neither 1 nor 0, neither 1 nor A, and neither 0 nor A. Finally, subtract once more those containing none of {0, 1, A}. There are 258 four-digit hex numbers that follow this pattern.
Project Euler 162 Solution
Runs < 0.001 seconds in Python 2.7.
Project Euler 162 Solution Python 2.7 source
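The original listing is missing from this copy of the page; a Python 3 sketch of the count described above (mine, not the author's source) is:

```python
def count_n(n):
    # n-digit hex numbers (no leading zero) containing 0, 1 and A at least once:
    # total - (missing one of them) + (missing two) - (missing all three)
    return 15 * 16**(n - 1) - 43 * 15**(n - 1) + 41 * 14**(n - 1) - 13**n

total = sum(count_n(n) for n in range(3, 17))
print(format(total, 'X'))  # the answer as an upper-case hexadecimal string
```

Note that count_n(3) gives 4, matching the four 3-digit examples (10A, 1A0, A10, A01) in the problem statement, and count_n(4) gives the 258 four-digit numbers mentioned above.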
• This code could be modified to broaden the range of counting problems it could solve.
Project Euler 162 Solution last updated
How to read EIS accuracy contour plots Electrochemistry & Battery – Application Note 54
Last updated: May 28, 2024
The impedance measurement accuracy of an impedance analyzer depends on both the impedance magnitude and the measurement frequency. The EIS specifications for potentiostats/galvanostats are given in a
graph called a contour plot. The aim of this note is to help users understand and use the EIS accuracy contour plots so as to recognize and interpret possible errors in their EIS measurements.
The impedance measurement accuracy of an impedance analyzer depends on the impedance magnitude and on the measurement frequency. Potentiostats/galvanostats/Impedancemeters specifications are given in
graphs called accuracy contour plots^1. The aim of this note is to help the user understand and use EIS accuracy contour plots.
Contour Plot
Figure 1: 3D graph of a two-variable function, level curve (blue curve) and projection in the x, y plan (red curve).
Contour plots are isovalue curves of a function of two variables. As an example, the 3D graph of the two-variable function f(x, y) = x^2 + y^2 and a level curve of this function for f(x, y) = 10 are shown in Fig. 1. The level curve is a cross-section of the 3D graph parallel to the (x, y) plane at f(x, y) = 10. The projection of this curve on the (x, y) plane is a contour plot. For any point (x, y) inside this contour, f(x, y) < 10, and for any point outside this contour, f(x, y) ≥ 10.
EIS Accuracy Contour Plot
SP-300 EIS Accuracy Contour Plot
The scheme of an EIS accuracy contour plot is shown in Fig. 2.
Figure 2: Scheme of an EIS accuracy contour plot.
The two horizontal limits (high Z (5) and low Z (1)) are related to the ability of the instrument to accurately measure/control the current. To extend the high-Z portion of the accuracy contour plot, the instrument has to be able to measure low currents (often related to the electrometer impedance in the specifications).
To extend the low-Z portion of the accuracy contour plot, the instrument has to be able to manage high currents; a current booster may be added to improve the measurement in this area. The vertical limit (3) depends on the regulation speed of the instrument: it is the maximum frequency that can be reached by the instrument with an acceptable error, and it is related to the bandwidth of the instrument. The two other limits, (4) and (2), are related to stray capacitance and stray inductance, respectively. Inductive effects are very sensitive to current levels and to the cell connections.
The contour plot of an SP-300 potentiostat is shown in Fig. 3. The shaded areas show different ranges of impedances that can be measured at various frequencies within a specified error in magnitude
and in phase. These errors refer to the borders and not to the areas as explained below.
Figure 3: SP-300 EIS accuracy contour plot. Black dot: f = 10^6 Hz, |Z| = 10^4 Ω; white dot: f = 10^4 Hz, |Z| = 10^2 Ω.
Let us consider the black dot in Fig. 3. This dot corresponds to the measurement of an impedance |Z| = 10^4 Ω at a frequency f = 10^6 Hz. As this measurement is located between the 0.3 %, 0.3° and 1 %, 1° contour plots, the accuracy is 0.3 % < Δ|Z|/|Z| < 1 % for the modulus (relative error) and 0.3° < Δφ[Z] < 1° for the phase (absolute error). The phase is given as an absolute error because the relative error could be indeterminate in the case where φ = 0. An impedance whose modulus is 100 Ω at 10^4 Hz is measured with a relative uncertainty lower than 0.3 % for the modulus and an absolute uncertainty lower than 0.3° for the phase (Fig. 3, white dot).
How to Use an EIS Contour Plot
Impedance of the R1+R2/C2 Circuit
The impedance for the R1+R2/C2 is given by
$$Z(f)=R_1+ \frac{R_2}{1+R_2C_2j2\pi f}\tag{1}$$
The modulus of the impedance as a function of the frequency is shown in Fig. 4 for R[1] = 0.1 Ω, R[2] = 3 Ω and C[2] = 10^−6 F.
Figure 4: Modulus Bode diagram of the impedance for the R[1]+R[2] /C[2] circuit. R[1] = 0.1 Ω, R[2] = 3 Ω and C[2] = 10^– 6 F.
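Equation (1) can be evaluated directly with Python's complex arithmetic, using the component values of Figure 4 (the function name is mine):

```python
import math

def impedance(f, R1=0.1, R2=3.0, C2=1e-6):
    # Eq. (1): Z(f) = R1 + R2 / (1 + j * 2*pi*f * R2 * C2)
    return R1 + R2 / (1 + 1j * 2 * math.pi * f * R2 * C2)

print(abs(impedance(0.01)))  # low-frequency limit:  ~R1 + R2 = 3.1 ohm
print(abs(impedance(1e9)))   # high-frequency limit: ~R1 = 0.1 ohm
```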
Example #1
The modulus Bode impedance diagram of this electrical circuit R[1]+R[2]/C[2] and the SP-300 contour plot are superimposed in Fig. 5. This figure allows us to predict the precision of the impedance measurements as a function of frequency. One has to determine the frequency values where the modulus diagram moves into a different precision domain. For instance, the modulus diagram moves from the high-precision domain (Δ|Z|/|Z| < 0.3 % and Δφ[Z] < 0.3°) to the mid-range precision domain (0.3 % < Δ|Z|/|Z| < 1 % and 0.3° < Δφ[Z] < 1°) for a frequency moving from f[1] ≈ 10 kHz to f[2] ≈ 60 kHz (Fig. 5).
Figure 5: Superimposition of the modulus Bode impedance diagram of the electrical circuit R[1]+R[2]/C[2 ]and the SP-300 contour plot.
The Nyquist impedance diagram of the R[1]+R[2]/C[2] circuit is shown in Fig. 6 with the impedances measured at f[1], f[2] and f[3].
Figure 6: Nyquist diagram of the impedance for the R1+R2 /C2 circuit, f1 ≈ 10 kHz, f2 ≈ 60 kHz, f3 ≈ 100 kHz.
Tab. I shows how the precision of the impedance modulus and phase evolves with the frequency for the R[1]+R[2]/C[2] electrical circuit with intermediate Rs and C values.
Table I: Impedance measurement accuracy as a function of the frequency. R[1] = 0.1 Ω, R[2] = 3 Ω and C[2] = 10^−6 F.
Frequency Impedance modulus Impedance phase
$$f \lt f_1$$ $$\Delta|Z|/|Z| \lt 0.3\% $$ $$\Delta \phi_Z \lt 0.3°$$
$$f_1 \lt f \lt f_2$$ $$0.3\% \lt \Delta|Z|/|Z| \lt 1\% $$ $$ 0.3° \lt \Delta \phi_Z \lt 1°$$
$$f_2 \lt f \lt f_3$$ $$1\% \lt \Delta|Z|/|Z| \lt 3\% $$ $$ 1° \lt \Delta \phi_Z \lt 3°$$
$$f \gt f_3 $$ $$\Delta|Z|/|Z| \gt 3\% $$ $$ \Delta \phi_Z \gt 3°$$
Example #2
Another situation can be observed for high values of R[2] and low values of C[2], for instance R[1] = 10^4 Ω, R[2] = 0.2 × 10^7 Ω and C[2] = 5 × 10^−10 F (Figs. 7 and 8).
Figure 7: Superimposition of the modulus Bode impedance diagram of the electrical circuit R[1]+R[2]/C[2] (high Rs values and low C value) and the SP-300 contour plot.
Figure 8: Nyquist diagram of the R[1]+R[2]/C[2] circuit. R[1] = 10^4 Ω, R[2] = 0.2 × 10^7 Ω and C[2] = 5 × 10^−10 F.
As the frequency reaches lower values, measurement precision moves outside of the best precision domain. Tab. II shows how the precision of the impedance modulus and phase, evolves with the frequency
for an R+R/C electrical circuit with high R values and low C values.
Tab. II: Change of impedance measurement accuracy with frequency. R[1] = 10^4 Ω, R[2] = 0.2 × 10^7 Ω and C[2] = 5 × 10^−10 F.
Frequency Impedance modulus Impedance phase
$$f \lt f_1$$ $$0.3\% \lt \Delta|Z|/|Z| \lt 1\% $$ $$0.3° \lt \Delta \phi_Z \lt 1°$$
$$f_1 \lt f \lt f_2$$ $$\Delta|Z|/|Z| \lt 0.3\% $$ $$\Delta \phi_Z \lt 0.3°$$
$$f_2 \lt f \lt f_3$$ $$1\% \lt \Delta|Z|/|Z| \lt 3\% $$ $$ 1° \lt \Delta \phi_Z \lt 3°$$
$$f \gt f_3 $$ $$\Delta|Z|/|Z| \gt 3\% $$ $$ \Delta \phi_Z \gt 3°$$
This note aims to explain how to read and understand EIS contour plots, which are provided with each of our instruments. Contour plots must be used to interpret errors made during EIS measurements
and identify the best frequencies possible to be used for a given impedance range.
Definitions [1]
Measurand:
Quantity intended to be measured; quantity subject to measurement.
Measurement accuracy:
Closeness of agreement between a measured quantity value and a true quantity value of a measurand. The concept `measurement accuracy’ is not a quantity and is not given a numerical quantity value. A
measurement is said to be more accurate when it offers a smaller measurement error.
Measurement uncertainty, uncertainty:
Non-negative parameter characterizing the dispersion of the quantity values being attributed to a measurand.
Measurement precision, precision:
Closeness of agreement between indications or measured quantity values obtained by replicate measurements on the same or similar objects under specified conditions.
1) International vocabulary of metrology. Basic and general concepts an associated term. JCGM (2008).
Revised in 07/2018
[1] Considering the definitions given in [1] and recalled in Appendix A, the word accuracy is a time-honored misnomer. Uncertainty should preferably be used. | {"url":"https://my.biologic.net/documents/eis-contour-plot-electrochemistry-application-note-54/","timestamp":"2024-11-11T13:37:08Z","content_type":"text/html","content_length":"59475","record_id":"<urn:uuid:e4f246c8-2ba9-4141-a69f-fdbb57337a2c>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00307.warc.gz"} |
Factor Factors are numbers we can multiply together to get another number piskic akitāsona
4×5=20; 4 and 5 are factors of 20
2×3×7=42; 2, 3 and 7 are factors of 42
Factoring A number or expression that is multiplied by another to yield a product (e.g., a factor of 24 is 8 because 8 × 3 = 24, and a factor of 3n is n because 3 ⋅ n = 3n). [1] pa piskicipita
[latex]5x-20=5(x-4) \\24=4×6 \\36=2×2×3×3[/latex]
Fifth Constituting number five in a sequence. mwecinîyânan
First Before anything else, constituting number one in a sequence. mwecipeyakwâw
Form The manner or style of arranging and coordinating parts. [6] kayisenakwahk
standard form: [latex]3x+2y=7[/latex]
exponential form: [latex]3×3×3×3×3=3^5[/latex]
expanded form: [latex]537=5×100+3×10+7×1[/latex]
Fourth Constituting number four in a sequence. mwecinewiyihk
Fraction A ratio of numbers or variables. pahki akihtāson
[latex]\frac{x}{2y},\space \frac{2x-1}{3x^2+7}[/latex] | {"url":"https://pressbooks.saskpolytech.ca/creemathdictionary/chapter/f/","timestamp":"2024-11-05T19:55:09Z","content_type":"text/html","content_length":"74352","record_id":"<urn:uuid:c15c0218-ae3c-4e10-87e5-5b888004afe1>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00539.warc.gz"} |
Our users:
I just wanted to first say your algebra program is awesome! It really helped me in my class! I was really worried about my Algebra class and the step-through solving really increased my understanding of Algebra and allowed me to cross-check my work and pointed out where I went wrong during my solutions. Thanks!
Waylon Summerland, TX.
I was searching for months for a piece of software that would help me improve my Algebra skills. My solution was the Algebrator, and now I can tell you how it feels to be an A student.
D.E., Kentucky
When kids who have struggled their entire lives with math, start showing off the step-by-step ability to solve complex algebraic equations, and smile while doing so, it reminds me why I became a
teacher in the first place!
D.C., Maryland
Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among them?
Search phrases used on 2013-12-22:
• elementary cubic volume worksheets
• DOWNLOAD FREE ALGEBRATOR
• free trig calculator
• how to solve quadratic equations using perfect squares
• algebra © Scott, Foresman And Company test answer key
• steps in graphing hyperbola
• creative algebra II polynomials
• factorization quadratic equations
• ratio formula
• teach yourself algebra software
• slope calculator 3/4
• negative positive number free worksheet
• free online help for 5th grade mixed fractions in a square
• free third grade equation calculator
• all printable homework worksheets
• how to calculate a square root function on a ti-84
• lesson plans on how to help students learn about percent, decimal, and fractions for fifth graders
• reducing rational expressions to lowest terms exponents
• help with algebra the balance method
• buy algebra with pizzazz
• help in algebra
• solving partial differential equation characteristic quadratic
• How to solve slope
• Prentice Hall Mathematics Algebra 1 answere key
• adding fractions and variables caculator
• fractional quadratic equation
• multiplying integers worksheet
• maths tests yr8
• solving quadratic equations by completing the square calculator
• free download software used to find square root of complex numbers
• Free Online Algebra 1 Problems
• year 8 algebra test
• addition and subtraction rational numbers worksheet
• solving ordered pairs when the equation is unknown
• free algebra equation solver
• free physics workbook
• convert standard notation using factor tree
• complex analysis quadratic factors
• investigatory in math geometry
• rearrange formulas to make a linear graph
• maths level f sheet
• advance algebra pdf
• greatest common factors of 28 34
• printable homework for 3rd grade
• basic skills practice ks3 excel maths
• soving lineal algebraic equation
• Math Algebra 1 Answer key help
• adding and multiplying inverse in algebra
• math tests ks3
• formula for factoring cubes
• how to write mixed as percents
• online statistics graphic calculator
• statistics combination problems
• solving integers free worksheet
• substitution method calculator
• fraction power
• free rational calculator
• solver simultaneous equations for three for free
• solving products radicals on a TI-83
• free online math test for year 7
• algebra proportions
• test base questions for ks2 in years 6 sats practice for free
• fourth root of 87 on calculator
• Solving Quadratic equations by extracting roots
• find the lcd caculator
• free download of attitude test materials
• factor tree in math for fourth grade
• convert fraction to mixed number calculator
• complex rational expression
• download kumon worksheets
• sample lesson plan for proving identities
• matlab nonlinear equation solver
• online factoring polynomials calculator
• adding and subtracting 21 AND 19
• formula for Least common denominator
• standard notation in the prentice hall math book
• chemical equation product online
• hardest algebra questions
• garde 11 math exams
• chemical Kinetics and Matlab + powerpoint
• graphing linear equations powerpoint
• mental strategies for adding and subtracting decimals
• gmat discrete probability
• factorising quadratics calculator
• 1998 ks2 sats papers
• Simplifying Radicals Calculator With Factor Tree | {"url":"https://mathworkorange.com/math-help-calculator/trigonometry/differential-equation-2nd.html","timestamp":"2024-11-03T03:05:51Z","content_type":"text/html","content_length":"87437","record_id":"<urn:uuid:aa2649d0-2217-42a6-b72b-856cf54ee434>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00635.warc.gz"} |