Complex Number Calculator

A complex number calculator performs calculations with complex numbers (calculations with i). It does basic arithmetic on complex numbers and evaluates expressions in the set of complex numbers, displays a complex number and its conjugate on the complex plane, and evaluates a complex number's absolute value (modulus) and the principal value of its argument. Expressions can be entered in standard (rectangular) and/or polar form, and the result is returned in all forms with the work shown. To calculate the modulus and argument of a complex number z, enter its real and imaginary parts and press "Calculate Modulus and Argument."

A historical aside: the name also belonged to an early machine. George Robert Stibitz (April 30, 1904 – January 31, 1995) was a Bell Labs researcher internationally recognized as one of the fathers of the modern digital computer, known for his work in the 1930s and 1940s on the realization of Boolean logic with electromechanical relays.

A complex number is an ordered pair of two real numbers (a, b), usually written z = a + bi, where a and b are real numbers and i is an indeterminate satisfying i^2 = -1; for example, 2 + 3i is a complex number. In the algebraic form z = a + ib, a is the real part of z and b is the imaginary part of z. When b = 0, z is real; when a = 0, z is pure imaginary. Either part can be 0, so all real numbers and imaginary numbers are also complex numbers.

With quadratic equations there is not always a real solution. For example, x^2 + 1 = 0 has no real solution, and with the number i we can define i as a solution of that equation. More generally, the square root of a negative number, sqrt(-n), can be evaluated as sqrt(-1) * sqrt(n) = sqrt(n) i, where n is a positive real number.

The calculator performs the basic arithmetic operations (addition, subtraction, multiplication, division) as well as powers, square roots, and the absolute value (modulus) of a complex number. Rational entries of the form a/b and complex entries of the form a + bi are supported; examples: -5/12, -2i + 4.5. Worked examples:

- Subtraction: enter complex_number(`1+i-(4+2*i)`); after computation, the result `-3-i` is returned.
- Multiplication: to calculate the product of the complex numbers `1+i` and `4+2*i`, enter complex_number(`(1+i)*(4+2*i)`); after calculation, the result `2+6*i` is returned.
- Division: enter complex_number(`(1+i)/(4+2*i)`); after calculation, the result `3/10+i/10` is returned.

The argument of a complex number is the direction of the number from the origin, that is, the angle to the real axis; this function is sometimes designated atan2(b, a), and the calculator reports the angle θ for both sign conventions.
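As a quick sanity check, the worked arithmetic examples above can be reproduced with Python's built-in complex type (Python writes the imaginary unit as j rather than i):

```python
import cmath

z1 = 1 + 1j   # 1 + i
z2 = 4 + 2j   # 4 + 2i

print(z1 + z2)          # (5+3j)
print(z1 - z2)          # (-3-1j)
print(z1 * z2)          # (2+6j)
print(z1 / z2)          # (0.3+0.1j), i.e. 3/10 + i/10

# Square root of a negative number: sqrt(-4) = 2i
print(cmath.sqrt(-4))   # 2j
```

Note that `cmath.sqrt` is needed for negative inputs; the real-valued `math.sqrt(-4)` would raise an error instead.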
When a single letter z = x + iy is used to denote a complex number, it is sometimes called an "affix." Using the x axis as the real number line and the y axis as the imaginary number line, you can plot the value as you would a point (x, y).

For the polar form, the idea is to find the modulus r and the argument θ of the complex number such that z = a + ib = r(cos(θ) + i sin(θ)) (trigonometric form), or equivalently z = a + ib = r e^(iθ) (exponential form). The calculator converts a complex number from one representation form to another with step-by-step solutions, for example from algebraic to trigonometric form, or from exponential back to algebraic. In some syntaxes the rectangular form is written a + b*%i and the polar form r*exp(c*%i), where r is the radius and c is the angle in radians. The maximum number of decimal places in the results can be chosen between 0 and 10.
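The modulus/argument decomposition above maps directly onto Python's standard `cmath` module; this short sketch converts between algebraic and polar form:

```python
import cmath
import math

z = 3 + 2j                       # example value a + bi with a = 3, b = 2

r, theta = cmath.polar(z)        # r = |z|, theta = principal argument in (-pi, pi]
print(r)                         # modulus, sqrt(3**2 + 2**2)
print(theta)                     # same as math.atan2(2, 3)
print(z.conjugate())             # (3-2j)

# Rebuild z from its polar form r * e^{i*theta}
back = cmath.rect(r, theta)
print(back)                      # approximately (3+2j)
```

`cmath.phase(z)` returns the argument alone when the modulus is not needed.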
The complex number calculator is also called an imaginary number calculator. A further worked example: to calculate the sum of `1+i` and `4+2*i`, enter complex_number(`1+i+4+2*i`); after calculation, the result `5+3*i` is returned.

The scientific calculator supports entering a complex number in Cartesian form: the real portion is a real number, and the imaginary portion is a real number multiplied by the imaginary unit i. For example, you can enter a number such as 3+2i. The calculator only accepts integers and decimals for the real and imaginary parts.

The number i, while well known for being the square root of -1, also represents a 90° rotation from the real number line. The field of complex numbers includes the field of real numbers as a subfield.

Square roots of a complex number come in pairs: if r1 = x + yi is one square root, the other is r2 = -x - yi. More generally, the calculator finds the n-th root of a complex number with a step-by-step solution: to find an n-th root, first choose the representation form (algebraic, trigonometric, or exponential) of the initial complex number. Example: to simplify (1+i)^8, type (1+i)^8.
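A minimal sketch of the n-th root computation via the polar form (the helper name `nth_roots` is mine, not the calculator's):

```python
import cmath

def nth_roots(z, n):
    """All n complex n-th roots of z, using the polar form z = r * e^{i*theta}."""
    r, theta = cmath.polar(z)
    return [cmath.rect(r ** (1 / n), (theta + 2 * cmath.pi * k) / n)
            for k in range(n)]

# The two square roots of i, each approximately ±(√2/2)(1 + i)
for w in nth_roots(1j, 2):
    print(w)

# And the power example from the text: (1 + i)^8 = 16
print((1 + 1j) ** 8)   # (16+0j)
```

Each root differs from the next by the angle 2π/n, which is why the loop index k steps the argument in increments of 2π/n.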
|
{"url":"http://dinpoker.se/tctux2u/complex-number-calculator-166afd","timestamp":"2024-11-05T22:37:39Z","content_type":"text/html","content_length":"35818","record_id":"<urn:uuid:e9415a38-75ea-4a52-861d-6101fb4a762c>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00846.warc.gz"}
|
Chad's OAT Physics
Carson Huynh
5 star rating
“So glad I came across these videos because I finally get physics now. ”
Zeerak Khan
5 star rating
“CHAD IS AWESOME!! I wish the professors at my school taught like him!”
dennis gotthardt
5 star rating
“Chad does a great job on making you understand all concepts especially with his use of various examples. His explanations are short and to the point which is why I enjoy his teaching methods. ”
d. OAT Physics Equations Cheat Sheet
a. Units, Vectors, & Linear Kinematics (1:16:17)
d. Displacement, Velocity, and Acceleration (12 Questions)
e. Kinematics Calculations (12 Questions)
f. Relative Motion (6 Questions)
g. Projectile Motion (9 Questions)
h. Graphs of Position vs Time and Velocity vs Time (12 Questions)
a. 2.1 Introduction to Forces, Fields, and Newton's Laws of Motion (7:08)
b. 2.2 Introduction to Gravity, Normal Force, and Friction (8:38)
c. 2.3 Examples Involving a Scale on an Elevator (11:50)
d. 2.4 Examples Involving Pulling on a Horizontal Surface (9:25)
e. Newton's Laws (31 Questions)
f. 2.5 Examples Involving Inclined Planes (16:59)
g. Inclined Planes (14 Questions)
h. 2.6 Example Involving Tension (2:53)
i. 2.7 Examples Involving Pulleys (11:26)
a. 3.1 Centripetal Force and Acceleration (7:16)
b. 3.2 Examples Involving Centripetal Force and Acceleration (12:42)
c. Centripetal Force and Acceleration (13 Questions)
d. 3.3 Gravity and Centripetal Force and Acceleration (12:23)
e. Gravity and Centripetal Force and Acceleration (2 Questions)
g. Torque and Rotational Equilibrium (17 Questions)
a. 4.1 Work and Energy (37:47)
c. Mechanical Energy (10 Questions)
e. 4.3 Momentum and Impulse (10:51)
f. Momentum and Impulse (11 Questions)
g. 4.4 Introduction to Elastic, Inelastic, and Perfectly Inelastic Collisions (3:59)
h. 4.5 Collisions Example #1 - An Inelastic Collision (5:17)
i. 4.6 Collisions Example #2 - Exploding Fragments as a Perfectly Inelastic Collision in Reverse (8:09)
j. Collisions (11 Questions)
a. Simple Harmonic Motion (33:41)
c. Simple Harmonic Motion (SHM) and Springs (9 Questions)
d. Simple Harmonic Motion (SHM) and Pendulums (7 Questions)
f. Wave Nature and Speed of Sound (18 Questions)
g. Sound Intensity and Intensity Level (8 Questions)
h. Standing Waves (7 Questions)
i. Doppler Effect (4 Questions)
About this course
• 12 hours of video content
• 40 Page Study Guide
• Over 450 Practice Questions
• Arranged by Chapter and Topic
|
{"url":"https://courses.chadsprep.com/courses/oat-physics","timestamp":"2024-11-06T21:08:41Z","content_type":"text/html","content_length":"359549","record_id":"<urn:uuid:742a4da0-5a7f-4a2d-be5f-9153ce1646ab>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00087.warc.gz"}
|
Diagrams, Nonabelian Hodge Spaces and Global Lie Theory
Whereas the exponential map from a Lie algebra to a Lie group can be viewed as the monodromy of a singular connection A dz/z on a disk, the wild character varieties are the receptacles for the
monodromy data for arbitrary meromorphic connections on Riemann surfaces. This suggests one should think of the wild character varieties (or the full nonabelian Hodge triple of spaces, bringing in
the meromorphic Higgs bundle moduli spaces too) as global analogues of Lie groups, and try to classify them. As a step in this direction I'll explain some recent joint work with D. Yamakawa that
defines a diagram for any algebraic connection on a vector bundle on the affine line. This generalises the definition made by the speaker in the untwisted case in 2008 in arXiv:0806.1050 Apx. C,
related to the « quiver modularity theorem », that a large class of Nakajima quiver varieties arise as moduli spaces of meromorphic connections on a trivial vector bundle on the Riemann sphere, proved
in the simply-laced case and conjectured in general in op.cit. (published in Pub. Math. IHES 2012), and proved in general by Hiroe-Yamakawa (Adv. Math. 2014). In particular this construction of
diagrams yields all the affine Dynkin diagrams of the Okamoto symmetries of the Painlevé equations, and recovers their special solutions upon removing one node. The case of Painlevé 3 caused the most
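For orientation, the local statement alluded to in the first sentence can be written out (sign and orientation conventions vary by author): for the connection with a simple pole at the origin of the disk, a horizontal section is z^A = exp(A log z), and continuing once around the origin sends log z to log z + 2πi, so

```latex
\nabla \;=\; d \;-\; A\,\frac{dz}{z},
\qquad
F(z) \;=\; z^{A} \;=\; \exp\!\big(A \log z\big),
\qquad
\text{monodromy of } \nabla \;=\; \exp(2\pi i\,A),
```

which exhibits the exponential map of the Lie algebra as the monodromy of the singular connection.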
Anton Alekseev (Univ. Genève), Pierre Cartier (IHES),Yvette Kosmann-Schwarzbach (Paris), Volodya Roubstov (Univ. d’Angers), Camille Laurent-Gengoux (Lorraine)
|
{"url":"https://indico.math.cnrs.fr/event/5713/?view=event","timestamp":"2024-11-06T22:21:05Z","content_type":"text/html","content_length":"96373","record_id":"<urn:uuid:e37f8860-6010-41e9-8d55-eda40b0cf50b>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00358.warc.gz"}
|
Measurement, Judgment, and Decision Making
Measurement, Judgment, and Decision Making

Handbook of Perception and Cognition, 2nd Edition
Series Editors: Edward C. Carterette and Morton P. Friedman

Edited by Michael H. Birnbaum
Department of Psychology, California State University, Fullerton
Fullerton, California

Academic Press
San Diego, London, Boston, New York, Sydney, Tokyo, Toronto
This book is printed on acid-free paper.

Copyright © 1998 by Academic Press
All Rights Reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher.

Academic Press
A division of Harcourt Brace & Company
525 B Street, Suite 1900, San Diego, California 92101-4495, USA
http://www.apnet.com

Academic Press Limited
24-28 Oval Road, London NW1 7DX, UK
http://www.hbuk.co.uk/ap/

Library of Congress Card Catalog Number: 97-80319
International Standard Book Number: 0-12-099975-7

Printed in the United States of America
97 98 99 00 01 02 QW 9 8 7 6
Contents

Contributors
Foreword
Preface
1. The Representational Measurement Approach to Psychophysical and Judgmental Problems
Geoffrey Iverson and R. Duncan Luce

I. Introduction
II. Probabilistic Models for an Ordered Attribute
   A. Discrimination
   B. Detection
   C. Stochastic Process Models
III. Choice and Identification
   A. Random Utility Models
   B. Identification
IV. Additive Measurement for an Ordered Attribute
   A. Ranking and Order
   B. Conjoint Structures with Additive Representations
   C. Concatenation Structures with Additive Representations
   D. Probabilistic Conjoint Measurement
   E. Questions
V. Weighted-Average Measurement for an Ordered Attribute
   A. Binary Intensive Structures
   B. Generalized Averaging
   C. Utility of Uncertain Alternatives
   D. Functional Measurement
VI. Scale Type, Nonadditivity, and Invariance
   A. Symmetry and Scale Type
   B. Nonadditivity and Scale Type
   C. Structures with Discrete Singular Points
   D. Invariance and Homogeneity
VII. Matching and General Homogeneous Equations
   A. In Physics
   B. Psychological Identity
   C. Psychological Equivalence
VIII. Concluding Remarks
References
Psychopkysical Scaling Lawrence E. Marks and Daniel A l g o m
I. Introduction II. Psychophysical Scaling and Psychophysical Theory A. What Is Measured? B. Infrastructures and Suprastructures Ill. Scaling by Distance A. Fechner's Conception" Scaling and the
Psychophysical Law B. Two Fundamental Psychophysical Theories C. Thurstonian Scaling D. Discrimination Scales, Partition Scales, and Rating Scales E. Estimation of Sensory Differences F. Response
Times for Comparative Judgment IV. Scaling by Magnitude A. Magnitude Estimation B. Methods of Magnitude Scaling C. Magnitude Scaling and sensory-Perceptual Processing D. cross-Modality Matching E.
Critiques of Stevens's Psychophysics V. Multistage Models" Magnitudes, Ratios, Differences A. Judgments of Magnitude and Scales of Magnitude B. Response Times and sensory Magnitudes
Contents VI. Contextual Effects A. Effects of Stimulus Context B. Effects of Stimulus Sequence C. Effects of Stimulus Range and Stimulus Level VII. Pragmatics and Epistemics of Psychophysical Scaling
A. Scaling Is Pragmatic B. Scaling Is Epistemic References
3 Multidimensional Scaling
J. Douglas Carroll and Phipps Arabie I. Introduction II. One-Mode Two-Way Data III. Spatial Distance Models (for One-Mode Two-Way Data) A. Unconstrained Symmetric Distance Models (for One-Mode
Two-Way Data) B. Applications and Theoretical Investigations of the Euclidean and Minkowski-p Metrics (for One-Mode Two-Way Symmetric Data) IV. Models and Methods for Proximity Data: Representing
Individual Differences in Perception and Cognition A. Differential Attention or Salience of Dimensions: The INDSCAL Model B. The IDIOSCAL Model and Some Special Cases C. Available Software for Two-
and Three-Way MDS D. The Extended Euclidean Model and Extended INDSCAL E. Discrete and Hybrid Models for Proximities F. Common versus Distinctive Feature Models G. The Primordial Model H. Fitting
Least-Squares Trees by Mathematical Programming I. Hybrid Models: Fitting Mixtures of Tree and Dimensional Structures J. Other Models for Two- and Three-Way Proximities K. Models and Methods for
Nonsymmetric Proximity Data V. Constrained and Confirmatory Approaches to MDS A. Constraining the Coordinates B. Constraining the Function Relating the Input Data to the Corresponding Recovered
Interpoint Distances C. Confirmatory MDS
VI. Visual Displays and MDS Solutions A. Procrustes Rotations B. Biplots C. Visualization VII. Statistical Foundations of MDS References
Stimulus Categorization F. Gregory Ashby and W. Todd Maddox
I. The Categorization Experiment II. Categorization Theories III. Stimulus, Exemplar, and Category Representation IV. Response Selection V. Category Access A. Classical Theory B. Prototype Theory C. Feature-Frequency
Theory D. Exemplar Theory E. Decision Bound Theory VI. Empirical Comparisons A. Classical Theory B. Prototype Theory C. Exemplar and Decision Bound Theory VII. Future Directions References
5 Behavioral Decision Research: An Overview John W. Payne, James R. Bettman, and Mary Frances Luce I. Introduction II. Decision Tasks and Decision Difficulty III. Bounded Rationality IV. Conflicting
Values and Preferences A. Decision Strategies B. Contingent Decision Behavior V. Beliefs about Uncertain Events A. Strategies for Probabilistic Reasoning B. Contingent Assessments of Uncertainty C.
Expertise and Uncertainty Judgments VI. Decisions under Risk and Uncertainty A. Generalizations of Expected-Utility Models VII. Methods for Studying Decision Making A. Input-Output Approaches B.
Process-Tracing Approaches
VIII. Emotional Factors and Decision Behavior A. Sources of Emotion during Decision Making B. Influences of Emotion on Decision Behavior IX. Summary References Index
Numbers in parentheses indicate the pages on which the authors' contributions begin.
Daniel Algom (81) Tel Aviv University Ramat-Aviv Israel Phipps Arabie (179) Faculty of Management Rutgers University Newark, New Jersey 07102 F. Gregory Ashby (251) Department of Psychology
University of California Santa Barbara, California 93106 James R. Bettman (303) Fuqua School of Business Duke University Durham, North Carolina 27706 J. Douglas Carroll (179) Faculty of Management
Rutgers University Newark, New Jersey 07102
Geoffrey Iverson (1) Institute for Mathematical Behavioral Sciences University of California Irvine, California 92697
Mary Frances Luce (303) Wharton School University of Pennsylvania Philadelphia, Pennsylvania 19104 R. Duncan Luce (1) Institute for Mathematical Behavioral Sciences University of California Irvine,
California 92697 W. Todd Maddox¹ (251) Department of Psychology Arizona State University Tempe, Arizona 85281
¹ Present address: Department of Psychology, University of Texas, Austin, Texas 78712
Lawrence E. Marks (81) John B. Pierce Laboratory and Yale University New Haven, Connecticut 06519
John W. Payne (303) Fuqua School of Business Duke University Durham, North Carolina 27706
The problem of perception and cognition is in understanding how the organism transforms, organizes, stores, and uses information arising from the world in sense data or memory. With this definition
of perception and cognition in mind, this handbook is designed to bring together the essential aspects of this very large, diverse, and scattered literature and to give a précis of the state of
knowledge in every area of perception and cognition. The work is aimed at the psychologist and the cognitive scientist in particular, and at the natural scientist in general. Topics are covered in
comprehensive surveys in which fundamental facts and concepts are presented, and important leads to journals and monographs of the specialized literature are provided. Perception and cognition are
considered in the widest sense. Therefore, the work will treat a wide range of experimental and theoretical work. The Handbook of Perception and Cognition should serve as a basic source and reference
work for those in the arts or sciences, indeed for all who are interested in human perception, action, and cognition. Edward C. Carterette and Morton P. Friedman
The chapters in this volume examine the most basic issues of the science of psychology, for measurement is the key to science. The science of psychology is the study of alternative explanations of
behavior. The study of measurement is the study of the representation of empirical relationships by mathematical structures. Can we assign numbers to represent the psychological values of stimuli so
that relations among the numbers predict corresponding relations of behavior? All the chapters in this volume build on a base of psychological measurement. In Chapter 1, Iverson and R. D. Luce
present the foundations of measurement theory. They give a thorough introduction to the representational measurement approach, and they contrast this approach with others proposed to explain human
behavior. Their chapter includes many examples of applications in psychophysics, decision making, and judgment. Judgment is the field of psychology in which the behavior of interest is the assignment
of categorical responses to stimuli. These categories might be numerical judgments of the psychological magnitudes of sensations produced by stimuli, or they might be more abstract categories such as
whether an item is edible. When the stimuli have well-defined physical measures, and the experimenter intends to study the relationships between physical and psychological values, the research domain
is called psychophysics. For example, one can examine the relationship between judgments of the heaviness of objects and
their physical weights. In Chapter 2, Marks and Algom give a careful survey of this field, considering both historical issues of psychophysics and modern controversies. There are many judgments in
which the physical dimensions are not well understood, such as the judgment of beauty. Other judgments rely on physical measures that are difficult to define, such as the likableness of a character
described in a novel. A judgment researcher might ask people to estimate the utility of receiving various prizes, to evaluate how well a student has mastered the material in a university course, to
rate how much one would like a person who is "phony," or to judge how much fault or blame should be assigned to victims of various crimes. Judgment tasks usually require the assignment of numbers to
represent the judge's introspected psychological values. To what extent are these numbers meaningful measures of the psychological value they purport to represent? The study of judgment cuts across
the usual disciplines in psychology. Social psychologists might ask people to rate others' attitudes toward minority groups, the perceived willingness of others to help someone in need, or the
likelihood that a person would conform to society's norms in a given situation. Personality psychologists often ask people to rate their own feelings and behaviors. For example, people might rate
their agreement with statements such as "I feel nervous and shy when meeting new people" or "one should always obey the law." In clinical psychology, the clinician may assign clients into diagnostic
categories of mental illness or judge the degree of improvement of clients' behavior in therapy. In marketing, the analyst may be interested in how consumers' judgments of the value of a product
depend on its component features. Although applications occur in many disciplines of psychology, the term judgment applies when the investigation involves basic principles assumed to apply across
content domains. The term scaling refers to studies in which the chief interest is in establishing a table of numbers to represent the attributes of stimuli. The term unidimensional scaling describes
studies in which stimuli may have many physical dimensions but there is only one psychological dimension of interest. For example, how does the psychological loudness of sinusoidal tones vary as a
function of their physical wavelengths and amplitudes? The tones differ in the psychological dimensions of pitch, loudness, and timbre, but the experimenter has chosen to study the effects of two
physical dimensions on one psychological dimension, loudness, so the study would be classified as unidimensional. Similarly, one might study the judged beauty of people in a contest or the quality of
different varieties of corn. The beauty of the contestants and the quality of corn depend on many physical dimensions, and they also may be composed of many psychological features or dimensions;
however, the term unidimensional is applied when the investigator has restricted the problem to study one psychological dimension. The first two chapters present many examples of unidimensional research. The term multidimensional scaling refers to investigations in which stimuli are represented by psychological values on
more than one dimension or attribute. For example, beauty contestants may differ not only in beauty but also in congeniality, intelligence, and sincerity. Beauty itself may be composed of dimensions
of perhaps face and figure, each of which might be further analyzed into psychological components, which might be features or dimensions. Sometimes, a single physical dimension appears to produce two
or more psychological dimensions. For example, variation in the physical wavelength of light appears to produce two psychological dimensions of color (red-green and blue-yellow), on which individuals
may judge similarities of colors differently according to their degrees of color blindness on the dimensions. Investigators use multidimensional scaling to analyze judgment data, such as judgments of
similarity, and also to analyze other behavioral data. In Chapter 3, Carroll and Arabie introduce not only traditional, geometric multidimensional scaling but also theories of individual differences
and more general models, of which feature and geometric models are special cases. In geometric models, stimuli are represented as points in a multidimensional space; similarity between two stimuli in
these models is a function of how close the stimuli are in the space. In feature models, stimuli are represented as lists of features, which may be organized in a tree structure. Similarity in
feature models depends on the features that the stimuli have in common and those on which they differ. Carroll and Arabie discuss relationships between these models and empirical investigations of
them. Judgment, multidimensional scaling, and decision making are all fundamental in the study of categorization. How is it that people can recognize an item as a chair, even though they have never
previously seen it? Even a 3-year-old child can identify a distorted cartoon as a cat, despite never having seen the drawing before. Knowing when two stimuli are the same or different constitutes the
twin problems of stimulus generalization and discrimination. These topics are important to the history of psychology and were the subject of much research in psychophysics using human participants and
also using animals, whose life experiences could be controlled. In Chapter 4, Ashby and Maddox summarize research on categorization, conducted with humans. In addition to the problems of stimulus
representation and category selection, the study of categorization tries to explain how the dimensions or features of a stimulus are combined to determine in what category the stimulus belongs. Ashby
and Maddox present classical and current models of categorization and discuss experimental investigations of these models. Decision making is such a general idea that it provides an approach to all
of psychology. Whereas a personality psychologist may study how behavior
depends on an individual's traits and a social psychologist may study behavior as a function of conformity to society's expectations for a situation, theorists in decision making analyze behavior as
the consequence of a decisional process of what to do next. Decision making is broad enough to include all judgment studies, for one can consider any judgment experiment as a decision problem in
which the judge decides what response to assign to each stimulus. However, the term decision making is often employed when the subject's task is to choose between two or more stimulus situations
rather than to select one of a set of categorical responses. In a decision-making task, a person might be asked to choose a car based on price, safety, economy of operation, and aesthetics. How does
a person combine these factors and compare the available cars to make such a choice? Decisions under risk and uncertainty have also been explored in the behavioral decision-making literature. In
risk, outcomes of varying utility occur with specified probabilities. A judge might be asked, "Would you prefer $40 for sure or $100 if you correctly predict the outcome of a coin to be tossed and $0
if you fail?" This is a risky decision, because the probability of correctly predicting the coin toss is presumed to be ½. In decision making under uncertainty, probabilities are unknown. "Would you
prefer $100 for sure or $800 only if you can successfully predict (to the nearest dollar) the price that a given stock, now worth $9 per share, will have when the market closes one week from now?"
Because stock prices are uncertain, it is difficult to know how to use the past to predict the future. People may have subjective probabilities concerning the likelihoods of future events, and it is
a topic of great importance to understand how people use such subjective probabilities to form and revise beliefs and to make decisions. In Chapter 5, Payne, Bettman, and M. F. Luce summarize the
literature of behavioral decision making that attempts to address these issues. Iverson and R. D. Luce also consider decision-making topics from the measurement perspective, and Marks and Algom
discuss influences of decision making on psychophysics. The authors of these chapters have provided excellent introductions to active research programs on the most basic problems of psychology. These
chapters not only consider the major ideas in their fields but also relate the history of ideas and draw connections with topics in other chapters. Each chapter unlocks the door for a scholar who
desires entry to that field. Any psychologist who manipulates an independent variable that is supposed to affect a psychological construct or who uses a numerical dependent variable presumed to
measure a psychological construct will want to open these doors. And the key is measurement. Michael H. Birnbaum
The Representational Measurement Approach to Psychophysical and Judgmental Problems Geoffrey Iverson and R. Duncan Luce
I. INTRODUCTION
This chapter outlines some of the main applications to psychophysical and judgmental modeling of the research called the representational theory of measurement. Broadly,
two general classes of models have been proposed in studying psychophysical and other similar judgmental processes: information processing models and phenomenological models. The former, currently
perhaps the most popular type in cognitive psychology, attempt to describe in more or less detail the mental stages of information flow and processing; usually these descriptions are accompanied by
flow diagrams as well as mathematical postulates. The latter, more phenomenological models attempt to summarize aspects of observable behavior in a reasonably compact fashion and to investigate
properties that follow from the behavioral properties. Representational measurement theory is of the latter genre. We are, of course, identifying the end points of what is really a continuum of model
types, and although we will stay mainly at the phenomenological end of the spectrum, some models we discuss certainly contain an element of information processing. Two features of the behavior of
people (and other mammals) need to be taken into account: (1) responses are variable in the sense that when a subject is confronted several times with exactly the same stimulus situation, he or
Measurement, Judgment, and Decision Making. Copyright © 1998 by Academic Press. All rights of reproduction in any form reserved.
she may not respond consistently, and (2) stimuli typically have somewhat complex internal structures that influence behavior. Indeed, stimuli often vary along several factors that the experimenter
can manipulate independently. Thus, behavioral and social scientists almost always must study responses to complex stimuli in the presence of randomness. Although somewhat overstated, the following
is close to true: We can model response variability when the stimuli are only ordered and we can model "average" responses to stimuli having some degree of internal structure, but we really cannot
model both aspects simultaneously in a fully satisfactory way. In practice we proceed either by minimizing the problems of structure and focusing on the randomness--as is common in statistics--or by
ignoring the randomness and focusing on structure--as is done in the representational theory of measurement. Each is a facet of a common problem whose complete modeling seems well beyond our current
understanding. This chapter necessarily reflects this intellectual hiatus. Section II reports probabilistic modeling with stimuli that vary on only one dimension. Section III extends these
probabilistic ideas to more complex stimuli, but the focus on structure remains a secondary concern. Sections IV through VII report results on measurement models of structure with, when possible,
connections and parallels drawn to aspects of the probability models. The domain of psychophysical modeling is familiar to psychologists, and its methods include both choice and judgment paradigms.
Although not often acknowledged, studies of preferences among uncertain and risky alternatives--utility theories--are quite similar to psychophysical research in both experimental methods and modeling
types. Both areas concern human judgments about subjective attributes of stimuli that can be varied almost continuously. Both use choices--usually among small sets of alternatives--as well as judgment
procedures in which stimuli are evaluated against some other continuous variable (e.g., the evaluation of loudness in terms of numerals, as in magnitude estimation, and the evaluation of gambles in
terms of monetary certainty equivalents). Our coverage is limited in two ways. First, we do not attempt to describe the large statistical literature called psychometrics or, sometimes, psychological
measurement or scaling. Both the types of data analyzed and models used in psychometrics are rather different from what we examine here. Second, there is another, small but important, area of
psychological literature that focuses on how organisms allocate their time among several available alternatives (for fairly recent summaries, see Davison & McCarthy, 1988; Loewenstein & Elster, 1992).
Typically these experiments have the experimenter-imposed property that the more time the subject attends to one alternative, the less is its objective rate of payoff. This is not a case of
diminishing marginal utility on the part of the subject but of diminishing replenishment of resources. It is far more realistic for many situations than is
the kind of discrete choice/utility modeling we examine here. Space limitations and our hope that these models and experiments are covered elsewhere in this book have led us to omit them.
II. PROBABILISTIC MODELS FOR AN ORDERED ATTRIBUTE
Most experimental procedures used to investigate the ability of an observer to discriminate the members of a set of stimuli are variants of the following two:
1. Choice Paradigm. Here the observer is asked to choose, from an offered set of n stimuli, the one that is most preferred or the one that possesses the most of some perceived attribute shared by the stimuli.
2. Identification Paradigm. Here the observer is required to identify which of the n stimuli is presented.
The simplest case of each paradigm occurs when n = 2. The choice paradigm requires an observer to discriminate stimuli offered in pairs, whereas the identification paradigm forms the basis of yes-no detection. We focus on discrimination and
detection in this section; see section III for the general case of each paradigm. In this section we are concerned primarily with psychophysical applications in which stimuli are ordered along a
single physical dimension such as line length, frequency, intensity, and so on. Accordingly we use positive real numbers x, y, s, n, and so on to label stimuli. For a discrimination task to be
nontrivial, the members of an offered pair x, y of stimuli must be close in magnitude. An observer's ability to decide that x is longer, or louder, or of higher pitch, and so on than y is then
difficult, and over repeated presentations of x and y, judgments show inconsistency. The basic discrimination data are thus response probabilities Px,y, the probability that x is judged to possess
more of the given attribute than y. For a fixed standard y, a plot of Px,y against x produces a classical psychometric function. A typical psychometric function, estimated empirically, is shown in
Figure 1. The data were collected by the method of constant stimuli: subjects were presented a standard line of length 63 mm together with one of five comparison lines that varied in length from 62
to 65 mm. Note that, in these data, the point of subjective equality is different from (here it is smaller than) the 63 mm standard length--not an uncommon feature of this
psychophysical method. The family of psychometric functions generated by varying the standard affords one way to study the response probabilities Px,y. It is often more revealing, however, to study
Px,y as a family of isoclines, that is, curves on which Px,y is constant. Fixing the response probability at some value π and trading off x and y so as to maintain this fixed response probability
FIGURE 1 A typical psychometric function. The proportion of "larger" judgments is plotted as a function of a physical measure (length, in millimeters) of the comparison stimulus. In this case the standard was a 63 mm length. From Figure 6.2 of Elements of Psychophysical Theory, by J.-C. Falmagne, New York: Oxford University Press, 1985, p. 151; redrawn from the original source, Figure 2.5 of "Psychophysics: Discrimination and Detection," by T. Engen, which appeared as chapter 2 of Experimental Psychology, by J. W. Kling and L. A. Riggs, New York: Holt, Rinehart and Winston, 1971. Reprinted with permission.
defines a sensitivity
function ξπ: Px,y = π if and only if ξπ(y) = x. Writing ξπ(x) = x + Δπ(x) defines the Weber function, or π-jnd, of classical psychophysics. For instance, the symbol Δ.75(x) denotes the increment
added to a stimulus x so as to render the sum detectable from x 75% of the time; in classical psychophysics the arbitrary choice π = 0.75 served to define the just-noticeable difference. Trade-off
functions offer a convenient way to organize and study a wide range of psychophysical data. Equal loudness contours, intensity-duration trading relations, speed-accuracy trade-offs are but three of
many examples that could be mentioned. For the detection task, the fundamental trade-off involves two kinds of error: one, an error of omission, arises when an observer fails to recognize the
presence of a signal as such; the other, an error of commission, arises when an observer incorrectly reports the presence of the signal. The trade-off between these two sorts of error underlies the
receiver operating characteristic (ROC), the basic object of study in yes-no detection. In this section we provide an overview of models describing families of psychometric functions, sensitivity
functions, ROCs, and other trade-offs
such as speed-accuracy. Link (1992) is a modern text that covers much of the same material in detail, albeit from a different point of view.
A. Discrimination
1. Fechner's Problem The simplest class of models for response probabilities involves an idea proposed by Fechner (1860/1966); namely, that a comparison of stimuli x, y is based on the difference u
(x) - u(y) of internal "sensations" evoked by x and y. Here the numerical scale u is assumed to be a strictly increasing function of the physical variable. Confusion between x, y arises because the
difference u(x) - u(y) is subject to random error (which Fechner took to be normally distributed1). In other words, Px,y = Prob[u(x) - u(y) + random error ≥ 0]. In terms of the distribution function
F of the error, this amounts to
Px,y = F[u(x) - u(y)], (1)
where each of the functions F and u is strictly increasing on its respective domain. The form for the response probabilities given in Eq. (1) is called a Fechner representation for those
probabilities. Fechner's problem (Falmagne, 1985) is to decide, for a given system of response probabilities, if a Fechner representation is appropriate and, if so, to determine how unique the
representation is. These and other related matters have received much attention in the theoretical literature (Falmagne, 1985; Krantz, Luce, Suppes, & Tversky, 1971; Levine, 1971, 1972; Suppes,
Krantz, Luce, & Tversky, 1989). A key observable property enjoyed by all systems of response probabilities conforming to the Fechnerian form of Eq. (1) is known as the quadruple condition (Marschak,
1960): for all x, y, x', y', if Px,y ≥ Px',y', then Px,x' ≥ Py,y'. (2)
This property is easily seen to be necessary for a Fechner representation. In terms of scale differences, the left-hand inequality of Eq. (2) asserts that u(x) - u(y) ≥ u(x') - u(y'), which rearranges
to read u(x) - u(x') ≥ u(y) - u(y'), and in turn that inequality implies the right-hand inequality of Eq. (2). It is a far more remarkable fact that the quadruple condition is, in the
presence of natural side conditions, also sufficient for a Fechner representation; see Falmagne (1985) for a precise statement and proof of this fact. The uniqueness part of Fechner's problem is
readily resolved: the scale u is unique up to positive linear transformations, that is, u is an interval scale (see section IV.B.3). This is not surprising in view of the fact that positive scale
differences behave like lengths and can be added:
1 Also called Gaussian distributed.
[u(x) - u(y)] + [u(y) - u(z)] = u(x) - u(z).
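Because both u and F in Eq. (1) are strictly increasing, the quadruple condition follows from the rearrangement of scale differences just given. The short sketch below is a numerical illustration only, not anything from the chapter: it assumes u = log and F = the standard normal CDF (both illustrative choices), verifies the quadruple condition over a grid of stimuli, and, because u is logarithmic, also exhibits the ratio dependence Pcx,cy = Px,y discussed later in connection with Weber's law.

```python
# Illustrative Fechner representation P[x,y] = F[u(x) - u(y)].
# The choices u = log and F = standard normal CDF are assumptions
# made for this sketch; any strictly increasing pair would do.
import math

def F(t):
    # Standard normal cumulative distribution function.
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def u(x):
    # Hypothetical sensation scale; logarithmic, as under Weber's law.
    return math.log(x)

def P(x, y):
    # Probability that x is judged to possess more of the attribute than y.
    return F(u(x) - u(y))

stimuli = [1.0, 1.5, 2.0, 3.0, 5.0]

# Quadruple condition: if P(x,y) >= P(x',y'), then P(x,x') >= P(y,y').
for x in stimuli:
    for y in stimuli:
        for xp in stimuli:
            for yp in stimuli:
                if P(x, y) >= P(xp, yp):
                    assert P(x, xp) >= P(y, yp) - 1e-12

# With u = log, P depends only on the ratio x/y: P(2,1) equals P(6,3).
assert abs(P(2.0, 1.0) - P(6.0, 3.0)) < 1e-12
print("quadruple condition and ratio dependence verified for this model")
```

Any other strictly increasing u and F would pass the quadruple-condition check equally well; the logarithmic u is what ties this particular model to Weber's law.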
See section IV.C.6 for an account of the theory of measurement of length and other extensive attributes. Although the quadruple condition presents an elegant solution to Fechner's problem, it is not
easily tested on fallible data; for a general approach to testing order restrictions on empirical frequencies, see Iverson and Falmagne (1985).
2. Weber Functions
Another approach to Fechner's problem is afforded by the study of the sensitivity functions ξπ (or equivalently the Weber functions Δπ). To assume the validity of the representation Eq. (1) is equivalent to assuming the following representation for sensitivity functions:
ξπ(x) = u⁻¹[u(x) + g(π)], (3)
where g = F⁻¹. In these terms an alternative formulation of Fechner's problem can be framed as follows: What properties of sensitivity functions guarantee a representation of the form Eq. (3) for
these functions? A condition that is clearly necessary is that two distinct sensitivity functions cannot intersect--sensitivity functions are ordered by the index π. Moreover, sensitivity functions
of the desired form can be concatenated by the ordinary composition of functions, and this concatenation is commutative (i.e., the order of the composition makes no difference):
ξπ[ξπ'(x)] = u⁻¹[u(x) + g(π) + g(π')] = ξπ'[ξπ(x)].
These two properties allow the collection of sensitivity functions to be recognized as an ordered abelian group,² and the machinery of extensive measurement applies (see section IV.C). For a detailed discussion, see Krantz et al. (1971), Suppes et al. (1989), Levine (1971, 1972), and Falmagne (1985). Kuczma (1968) discussed the
problem from the viewpoint of iterative functional equations. The previous remarks reflect modern ideas and technology. Fechner studied the functional equation
u[x + Δ(x)] - u(x) = 1,
2 A mathematical group G is a set of objects (here functions) together with a binary operation ∗ (here composition of functions), which is associative: for all objects x, y, z in G, x ∗ (y ∗ z) = (x ∗ y) ∗ z. There is an identity element e (here the identity function) and each element x in G possesses an inverse x⁻¹ such that x ∗ x⁻¹ = x⁻¹ ∗ x = e. The group is abelian when ∗ is commutative, i.e., x ∗ y = y ∗ x.
FIGURE 2 Weber functions for loudness of pure tones in which the logarithm of the Weber fraction is plotted against the sound pressure level in decibels above threshold for eight frequencies from 200 to 8000 Hz. In such a plot, Weber's law would appear as a horizontal line. From Figure 1 of "Intensity Discrimination as a Function of Frequency and Sensation Level," by W. Jesteadt, C. C. Wier, and D. M. Green, 1977, Journal of the Acoustical Society of America, 61, p. 171. Reprinted with permission.
called Abel's equation, and incorrectly reasoned that it could be replaced by a differential equation (Luce & Edwards, 1958). He did, however, correctly perceive that Weber's law--namely, the
assertion that Δπ(x) is proportional to x for any value of π--provides a rapid solution to Fechner's problem. In our notation Weber's law is equivalent to the assertion Pcx,cy = Px,y
for any positive real number c and all x, y. It follows at once that Px,y depends only on the ratio x/y of physical measures and that the scale u(x) is logarithmic in form. Although Weber's law
remains a source of useful intuition in psychophysics, it provides at best an approximation to the empirical data. For example, in psychoacoustics, pure tone intensity discrimination exhibits the
"near-miss" to Weber's law (Figure 2). On the other hand, Weber's law holds up remarkably well for intensity discrimination
FIGURE 3 Weber function for loudness of white noise (data for two subjects, SM and GM) in which the Weber fraction is presented in decibel terms versus the sound pressure level in decibels relative to threshold. Again, in such a plot, Weber's law would appear as a horizontal line, which is true for most (recall, this is a logarithmic scale) of the stimulus range. From Figure 5 of "Discrimination," by R. D. Luce and E. Galanter, in R. D. Luce, R. R. Bush, and E. Galanter, Handbook of Mathematical Psychology (Vol. 1), New York: John Wiley & Sons, 1963, p. 203. Reprinted with permission.
of broadband noise (Figure 3). For further remarks on Weber's law and the near-miss, see section VII.B.1.

3. Random Variable Models³

Suppose that a stimulus x elicits an internal representation as a random variable Ux. In these terms, the response probabilities can be written

Px,y = Prob(Ux ≥ Uy). (4)

Such a representation was first proposed and studied in the literature on individual choice, where the term random utility model has become standard (Block & Marschak, 1960; Luce & Suppes, 1965; Marschak, 1960). Although this representation does impose constraints on the response probabilities, for example, the triangle condition Px,y + Py,z + Pz,x ≥ 1, it is not well understood (see Marley, 1990, for a review) and is clearly very weak. For these reasons it is useful to explore the consequences of specific distributional assumptions on the random variables involved in Eq. (4).

³ We follow the convention of using uppercase bold letters such as X, Y, and Z to denote random variables. We shall write vectors as bold lowercase letters such as x, y, and z. A vector-valued random variable is not distinguished notationally but rather by context.
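As an illustrative sketch (not from the text), the random utility representation Px,y = Prob(Ux ≥ Uy) of Eq. (4) and its triangle condition can be checked by Monte Carlo, here assuming independent unit-variance normal utilities with invented means:

```python
import random

random.seed(0)

def p_choose(mu_a, mu_b, n=20000):
    # Monte Carlo estimate of Px,y = Prob(Ux >= Uy) for independent
    # normal utilities with the given (invented) means, unit variance.
    wins = sum(random.gauss(mu_a, 1.0) >= random.gauss(mu_b, 1.0)
               for _ in range(n))
    return wins / n

mu_x, mu_y, mu_z = 0.5, 0.0, -0.4  # illustrative stimulus means

p_xy = p_choose(mu_x, mu_y)
p_yz = p_choose(mu_y, mu_z)
p_zx = p_choose(mu_z, mu_x)

# Any random utility representation implies the triangle condition.
assert p_xy + p_yz + p_zx >= 1.0
```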
In a trio of seminal papers, Thurstone (1927a, b, c) made the assumption that Ux, Uy are jointly normal. Doing so gives rise to a relation known as Thurstone's law of comparative judgment; see Eq. (5) for a special, but important case. In many circumstances it is reasonable to suppose that Ux and Uy are not only normal but independent. The stability of the normal family, the fact that the sum (or difference) of two independent normally distributed random variables remains normally distributed, allows Eq. (4) to be developed in terms of the means μ(x), μ(y) and variances σ²(x), σ²(y) of Ux and Uy:

Px,y = Φ([μ(x) − μ(y)]/√(σ²(x) + σ²(y))), (5)

where Φ is the distribution function of the unit normal (mean zero, variance unity). This representation is Case III in Thurstone's classification. When σ(x) is constant across stimuli, one obtains the simple Case V representation:

Px,y = Φ[u(x) − u(y)], (6)

where u(x) = μ(x)/(σ√2). This model is a special case of a Fechnerian representation; compare it with Eq. (1). Thurstone offered little to justify the assumption of normality; indeed, he
admitted it might well be wrong. However, in many stochastic process models, information about a stimulus arises as a sum of numerous independent contributions. Such sums are subject to the central limit theorem, which asserts that their limiting distribution is normal; an explicit example of this sort of model is discussed in section II.C. Other authors, for example, Thompson and Singh (1967) and Pelli (1985), have proposed models in which discriminative information is packaged not as a sum but as an extreme value statistic. Invoking a well-known limit law for maxima (Galambos, 1978/1987) leads to a model in which the random variables Ux and Uy of Eq. (4) are independent, double-exponential variates with means μ(x), μ(y). The following expression for the response probabilities results:

Px,y = 1/[1 + exp(μ(y) − μ(x))] = v(x)/[v(x) + v(y)], (7)

where v(x) = exp[μ(x)]. The expression given in Eq. (7) is often called a Bradley-Terry-Luce representation for the response probabilities. The expression in Eq. (7) also arises in choice theory (section III.A.2), but is based on quite different considerations (Luce, 1959a). The following product rule is a binary property that derives from the more general choice theory: for any choice objects a, b, c,

(Pa,b/Pb,a) · (Pb,c/Pc,b) · (Pc,a/Pa,c) = 1. (8)

The product rule in Eq. (8) is equivalent to the representation in Eq. (7).
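A quick numerical check, with invented scale values, that the Bradley-Terry-Luce form of Eq. (7) satisfies the product rule of Eq. (8):

```python
import math

def btl(mu_a, mu_b):
    # Binary choice probability under Eq. (7):
    # P(a,b) = v(a)/(v(a) + v(b)) with v = exp(mu).
    va, vb = math.exp(mu_a), math.exp(mu_b)
    return va / (va + vb)

mu = {"a": 1.2, "b": 0.3, "c": -0.5}  # invented scale values

p_ab, p_ba = btl(mu["a"], mu["b"]), btl(mu["b"], mu["a"])
p_bc, p_cb = btl(mu["b"], mu["c"]), btl(mu["c"], mu["b"])
p_ca, p_ac = btl(mu["c"], mu["a"]), btl(mu["a"], mu["c"])

# Product rule, Eq. (8): the product of the three odds ratios is 1,
# because each odds ratio is exp of a difference of scale values and
# the differences cancel around the cycle a -> b -> c -> a.
product = (p_ab / p_ba) * (p_bc / p_cb) * (p_ca / p_ac)
assert abs(product - 1.0) < 1e-9
```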
B. Detection
1. Receiver Operating Characteristics

The basic detection task requires an observer to detect the presence or absence of a signal embedded in noise. On some trials the signal accompanies the noise; on other trials noise alone is presented. On each trial the observer makes one of two responses: signal present, "yes," or signal not present, "no." Two kinds of error can be made in this task. One, called a miss, occurs when a no response is made on a signal trial; the other, called a false alarm, occurs when a yes response is made on a noise-alone trial. Corresponding correct responses are called hits (yes responses on signal trials) and correct rejections (no responses on noise trials). Because hits and misses are complementary events, as are correct rejections and false alarms, the yes-no task involves only two independent response rates, and it is conventional to study the pair of conditional probabilities PH = Prob(yes|signal) and PFA = Prob(yes|noise). The two probabilities move together as a function of an observer's tendency to respond yes. Such biases can be brought under experimental control by employing an explicit payoff schedule: punishing false alarms depresses the frequency of yes responses, and PH and PFA each decrease; rewarding hits has the opposite effect. By varying the payoff structure, the pair (PH, PFA) traces out a monotonically increasing curve from (0, 0) to (1, 1) in the unit square. Such a curve is called a receiver operating characteristic, abbreviated ROC. An alternative method of generating an ROC involves varying the probability of a signal trial. Figure 4 shows typical ROCs generated by both methods. A single ROC is characterized by fixed signal and noise parameters; only an observer's bias changes along the curve. By varying signal strength, a family of ROCs is obtained, as in Figure 5. For a wealth of information on detection tasks and the data they provide, consult Green and Swets (1966/1974/1988) and Macmillan and Creelman (1991).

2. Psychometric Functions

In classical psychophysics, it was common practice to study the psychometric function obtained by measuring the hit rate PH as signal intensity was varied. Instructions were intended to practically forbid the occurrence of false alarms. This strategy is fraught with difficulties of estimation: PH must be estimated on the most rapidly rising part of an ROC, so that small errors in PFA become magnified in the determination of PH.

3. Statistical Decision Making

Detection can be modeled as a problem of statistical decision making. In this view, evidence for the signal is represented as a random variable whose
FIGURE 4 A typical receiver operating characteristic (ROC) in which the probability of a hit is plotted against the probability of a false alarm. The stimulus was a tone burst in a background of white noise. The data were generated by varying signal probability (solid square symbols) and payoffs (solid diamonds). From Figure 7.6 of The Wave Theory of Difference and Similarity, by S. W. Link, Hillsdale, NJ: Erlbaum, 1992, p. 121. Redrawn from Signal Detection Theory and Psychophysics (Figures 4-1 and 4-2, pp. 88-89) by D. M. Green and J. A. Swets, Huntington, NY: Robert E. Krieger, 1974. Reprinted with permission.
values are distributed on a one-dimensional "evidence" axis. On signal trials, the evidence for the signal is a value of a random variable Us; on noise trials, evidence is a value of a random variable Un. Large values of evidence arise more frequently on signal trials and thus favor the presence of the signal. An observer selects a criterion value β on the evidence axis, which is sensitive to payoff structure and signal probability, such that whenever the evidence sampled on a trial exceeds β, the observer responds yes, indicating a belief that the signal was presented. Of the various candidates for the evidence axis, one deserves special mention. According to the Neyman-Pearson lemma of statistical decision
FIGURE 5 ROCs obtained using a five-point rating scale and varying signal strength over seven levels (the weakest and strongest levels are omitted in the plot). The stimuli were 60 Hz vibrations to the fingertip; the curves are identified by the amplitude of the stimulus in microns. The procedure involved two conditions, represented by the open and closed symbols. In each, the probability of no signal was 0.33 and of a signal, 0.67. In the case of the open symbols, the three weaker signals were more likely than the four stronger ones (signal probabilities of 0.158 and 0.066, respectively), whereas for the closed symbols the three stronger signals were more likely than the four weaker ones (again, 0.158 and 0.066). Thus, there was a single false alarm estimate for all seven intensities corresponding to each of the rating levels. From Figure 3.24 of Psychophysics: Methods and Theory, by G. A. Gescheider, Hillsdale, NJ: Erlbaum, 1976, p. 79. Redrawn from the original source "Detection of Vibrotactile Signals Differing in Probability of Occurrence," G. A. Gescheider, J. H. Wright, and J. W. Polak, 1971, The Journal of Psychology, 78, Figure 3, p. 259. Reprinted with permission.
theory, the optimal way to package evidence concerning the signal is to use the likelihood ratio, the ratio of the density of sensory data assuming a signal trial to the density of the same data assuming a noise-alone trial. Large values of the likelihood ratio favor the presence of the signal. However, there remains considerable flexibility in the choice of a decision statistic: any strictly increasing function of the likelihood ratio produces an equivalent decision rule and leads to identical detection performance. A common choice of such a transformation is the logarithm, so that evidence can take on any real value. It is worthy of note that ROCs that are concave (as are those of Figures 4 and 5) are compatible with the use of the likelihood ratio as a decision statistic (cf. Falmagne, 1985). On the other hand, there is little reason to suppose human observers can
behave as ideal observers, except in the simplest of circumstances (see Green & Swets, 1966/1974/1988, for further discussion). More likely than not, human observers use simple, easy-to-compute decision statistics that will not, in general, be monotonically related to the likelihood ratio (see, e.g., section II.C).

4. Distributional Assumptions and d′

It should be noted that the representation of an ROC in terms of decision variables Us, Un, namely,

PH = Prob(Us > β), PFA = Prob(Un > β), (9)

is not at all constraining, despite the rather heavy background imposed by statistical decision theory. If one chooses Un to possess a strictly increasing, but otherwise arbitrary distribution function, it is always possible to find a random variable Us such that a given ROC is represented in the form of Eq. (9) (cf. Iverson & Sheu, 1992). On the other hand, empirical families of ROCs obtained by varying some aspect of the signal (such as intensity or duration) often take on a simple, visually compelling form. The ROCs given in Figure 5 are, above all, clearly ordered by varying stimulus amplitude. This suggests, at least in such examples, that ROCs are isoclines of some function monotonically related to signal strength;⁴ moreover, because these isoclines do not intersect, a Fechnerian representation may hold (recall the discussion in section II.A):

Stimulus strength = F[u(PH) − u(PFA)].

In other words, there exists the possibility of transforming an ROC into a line of unit slope by adopting u(PH), u(PFA) as new coordinates. It is not difficult to show that this possibility does occur if Us, Un are members of a location family of random variables, differing only in their mean values. Based on explicit examples and, above all else, on simplicity, it is commonly assumed that Us, Un are normally distributed, with a common variance. This assumption is responsible for the custom of plotting ROCs on double-probability paper (with inverse normals along the axes). If the normal assumption is correct, ROCs plot as parallel lines of unit slope, with intercepts

d′ = zH − zFA = [μ(s) − μ(n)]/σ, (10)

⁴ Suppose the value of a real function F of two real variables is fixed: F(x,y) = constant. Then such pairs (x,y) trace out a curve called an isocline or level curve of F. Different isoclines correspond to different values of the function F.
where z = Φ⁻¹(probability) and Φ is the distribution function of the unit normal. The measure d′ depends only on stimulus parameters and is thus a measure of detectability uncontaminated by subjective biases. The remarkable fact is that when empirical ROCs are plotted in this way, they do more or less fall on straight lines, though often their slopes are different from unity. This empirical fact can be accommodated by retaining the normality assumption but dropping the constant variance assumption. Using the coordinate transformation z = Φ⁻¹(probability), the following prediction emerges:

σ(s)zH − σ(n)zFA = μ(s) − μ(n), (11)

which is the equation of a line of slope σ(n)/σ(s) and intercept [μ(s) − μ(n)]/σ(s). Unlike the case discussed earlier for which σ(s) = σ(n), that is, Eq. (10), there is now some freedom in defining an index of detectability, and different authors emphasize different measures. Those most commonly employed are the following three:

[μ(s) − μ(n)]/σ(n), [μ(s) − μ(n)]/σ(s), and [μ(s) − μ(n)]/√(σ²(s) + σ²(n)).
Note that the latter index, the perpendicular distance to the line (Eq. 11) from the origin, is closely related to performance in a discrimination (two-alternative/interval forced-choice) paradigm using the same signal and noise sources as employed in the detection task. Formally, the prediction for the forced-choice paradigm is given by Eq. (5) with s, n replacing x, y, respectively. One obtains

zc = [μ(s) − μ(n)]/√(σ²(s) + σ²(n)), (12)

where zc is the transformed probability of a correct response in the two-alternative task. The ability to tie together the results of different experimental procedures is an important feature of signal detection theory, one that has been exploited in many empirical studies. For additional results of this type, see Noreen (1981) and Macmillan and Creelman (1991), who confine their developments to the constant variance assumption, and Iverson and Sheu (1992), who do not. In section II.C we sketch a theory that unites detection performance and speed-accuracy trade-off behavior under a single umbrella.

5. Sources of Variability

A question first raised by Durlach and Braida (1969), Gravetter and Lockhead (1973), and Wickelgren (1968) concerns the locus of variability in this class of signal detection models. Eq. (9) is written as if all of the variability lies in the representation of the stimuli, and the response criterion β is
treated as a deterministic numerical variable. For the case of location families of random variables, the data would be fit equally well if all the variability were attributed to β and none to the stimuli. Indeed, because variances of independent random variables add, any partition between stimulus variability and criterion variability is consistent with both yes-no and forced-choice data. The problem, then, is to design a method that can be used to estimate the partition that actually exists. Perhaps Nosofsky (1983) provided the cleanest answer. His idea was to repeat the stimulus presentation N times with independent samples of noise and have the subject respond to the entire ensemble. If subjects average the N independent observations, the mean is unaffected but the variance decreases as σ²(s)/N. On the other hand, there is no reason why the criterion variance σ²(β) should vary with N. Substituting into Eq. (10), we obtain

1/(d′N)² = [σ²(β) + σ²(s)/N]/[μ(s) − μ(n)]², (13)

which represents a linear trade-off between the variables 1/(d′N)² and 1/N. Nosofsky carried out an auditory intensity experiment in which four signals were to be identified. The two middle signals were always at the same separation, but the end signals were differently spaced, leading to a wide and a narrow condition. The quantity d′N was computed for the pair of middle stimuli of fixed separation. Figure 6 shows 1/(d′N)² versus 1/N for both the wide and narrow conditions. The predicted linearity was confirmed. The value (slope/intercept)^(1/2), which estimates σ(s)/σ(β), is 3.96 and 3.14 in the wide and narrow conditions, respectively; the ratio σ(s, wide)/σ(s, narrow) is 7.86, and that of σ(β, wide)/σ(β, narrow) is 6.23. Thus it appears that the standard deviations for stimulus and criterion partition about 3 or 4 to 1. Nosofsky also reanalyzed data of Ulehla, Halpern, and Cerf (1968) in which subjects identified two tilt positions of a line; again the model fit well. The latter authors varied signal duration, and that manipulation yielded estimates of σ(s)/σ(β) of 14.22 in the shorter duration and 4.57 in the longer one. The criterion variance was little changed across the two conditions.

6. COSS Analysis

A far-reaching generalization of Nosofsky's ideas was recently proposed by Berg (1989) under the name of COSS analysis (conditional on a single stimulus). Rather than assume an observer gives equal weight to all sources of information relevant to detecting a signal, Berg's theory calls for a system of differential weights. COSS analysis provides an algorithm for estimating these weights in empirical data. There is a growing body of evidence that
observers do not usually employ equal weights, even when, as in Nosofsky's paradigm, they should; rather, the pattern of weights takes on a variety of shapes depending on the structure of stimuli and the demands of a particular task. Berg (1989), Berg (1990), and Berg and Green (1990) discuss tasks that produce rather different weight patterns. Since its inception about eight years ago, COSS analysis has had a major impact in psychoacoustics, where it was first applied. However, the technique is very flexible, and one can expect it will find application to any task calling for the detection of complex stimuli that vary on many dimensions.
C. Stochastic Process Models

The models we have considered thus far are largely phenomenological. They allow for useful interpretations of data, but they do not attempt to capture the complexity of stimulus encoding as revealed by physiological studies. Yet efforts to create more realistic models of information transmission, however crude and incomplete, seem to be of considerable merit. We now sketch the results of one such enterprise. Physiological studies conducted in the 1960s and 1970s (summarized in Luce, 1986, 1993) of the temporal coding of simple tones by individual fibers of the eighth nerve revealed that histograms of interpulse times were roughly exponential in their gross shape. (This rough exponential shape ignores fine structure: There is refractoriness, and the actual distribution is spiky, with successive peaks displaced at intervals of 1/T, T being the period of the input tone.) Assuming independence of times between successive pulses, such exponential histograms suggest that the encoding of simple auditory stimuli can be modeled as Poisson processes of neural pulses,⁵ with rates determined by stimulus intensity (see Green & Luce, 1973); however, more recent work casts doubt on the independence of successive pulse durations (Lowen & Teich, 1992).

⁵ A Poisson process can be thought of as a succession of points on a line, the intervals between any two consecutive points being distributed independently and exponentially. The reciprocal of the mean interval between successive events defines the rate parameter of the process.
FIGURE 6 Plot of estimated 1/d′² versus 1/N, where N is the number of independent repetitions of a pure tone that was to be absolutely identified from one of four possible intensities. The middle two stimuli had the same separation in both conditions, which were determined by the separation, wide or narrow, of the two end stimuli. d′ was calculated for the two middle stimuli for each of eight subjects and then averaged; an average of 187.5 observations underlie each point of the wide condition and an average of 150 for the narrow one. The least-squares fits are shown. From Figure 2 of "Information Integration and the Identification of Stimulus Noise and Criterial Noise in Absolute Judgment," by R. M. Nosofsky, 1983, Journal of Experimental Psychology: Human Perception and Performance, 9, p. 305. Copyright 1983 by the American Psychological Association. Reprinted by permission.
A Poisson process allows two basic ways for estimating the rate parameter:

1. Count the number of events in a fixed time interval (the counting strategy).
2. Compute the reciprocal of the mean interarrival time between pulses (the timing strategy).

In a simple detection task involving pure tones in noise, Green and Luce (1973) argued that if an observer can use these two decision strategies, then it should be possible to induce the observer to switch from one to the other. An observer whose brain counts pulses over a fixed time interval may be expected to perform differently from one whose brain calculates the (random) time required to achieve a fixed number of events. Indeed, the counting strategy predicts that ROCs will plot as (approximate) straight lines in Gaussian coordinates with slopes σ(n)/σ(s) less than unity, whereas for the timing strategy the ROCs are again predicted to be (approximately) linear on double-probability paper but with slopes exceeding 1. Green and Luce found that observers could be induced to switch by imposing different deadline conditions on the basic detection task: when observers were faced with deadlines on both signal and noise trials, they manifested counting behavior (see Figure 7, top); when the deadline was imposed only on signal trials, observers switched to the timing strategy (see Figure 7, bottom). The very nature of these tasks calls for the collection of response times. Green and Luce developed response time predictions for the two types of strategy. Predictions for the counting strategy are trivial because such observers initiate a motor response after the fixed counting period: mean latencies should thus show no dependence on stimulus or response, in agreement with observation. For the timing strategy, however, different speed-accuracy trade-offs are predicted on signal trials and on noise trials. Again, the data bore out these predictions (see Figure 11 of Green and Luce, 1973). The issue of averaging information versus extreme values was also studied in vision; see Wandell and Luce (1978).

III. CHOICE AND IDENTIFICATION
A. Random Utility Models

1. General Theory

A participant in a choice experiment is asked to select the most preferred alternative from an offered set of options. Such choice situations are commonly encountered in everyday life: selecting an automobile from the host of makes and models available, choosing a school or a house, and so on. To account for the uncertainties of the choice process, which translate into data
FIGURE 7 ROC curves (in z-score coordinates) with estimated slope shown. The upper panel shows the data when a time deadline of 600 ms was imposed on all trials. The lower panel shows the comparable data when the deadline was imposed only on signal trials. Adapted from Figures 4 and 9 of "Speed Accuracy Trade-off in Auditory Detection," by D. M. Green and R. D. Luce, in S. Kornblum (Ed.), Attention and Performance (Vol. IV), New York: Academic Press, 1973, pp. 557 and 562. Reprinted with permission.
inconsistencies, choice models are typically framed in terms of choice probabilities Pa,A, the probability of selecting an option a from a set A of alternatives. A random utility model for the choice probabilities involves the assumption that each alternative a is associated with a random variable Ua that measures the (uncertain) value or utility of that alternative. In these terms it is natural to assume that

Pa,A = Prob(Ua ≥ Ub, all b in A), (14)

generalizing the binary choice situation discussed earlier in section II.A.3 [see Eq. (4)].
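A Monte Carlo sketch of Eq. (14) under one specific, purely illustrative distributional assumption (independent normal utilities with invented means):

```python
import random

random.seed(1)

means = {"a": 1.0, "b": 0.0, "c": -1.0}  # invented mean utilities
n = 20000
wins = {a: 0 for a in means}
for _ in range(n):
    # One realization of the random utilities Ua, Ub, Uc:
    draws = {a: random.gauss(m, 1.0) for a, m in means.items()}
    wins[max(draws, key=draws.get)] += 1

probs = {a: wins[a] / n for a in means}

# Exactly one alternative wins on each trial, so the estimates of
# Pa,A in Eq. (14) sum to 1 and are ordered by the mean utilities.
assert abs(sum(probs.values()) - 1.0) < 1e-12
assert probs["a"] > probs["b"] > probs["c"]
```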
Without specific assumptions on the family of random variables {Ua | a in A} appearing in a random utility representation (for example, that they are independent or that their joint distribution is known up to the values of parameters), it would appear that Eq. (14) does little to constrain observed choice probabilities. However, following Block and Marschak (1960), consider the following chain of expressions involving linear combinations of choice probabilities:

Pa,A−{b,c} − (Pa,A−{b} + Pa,A−{c}) + Pa,A,
Pa,A−{b,c,d} − (Pa,A−{b,c} + Pa,A−{b,d} + Pa,A−{c,d}) + (Pa,A−{b} + Pa,A−{c} + Pa,A−{d}) − Pa,A,

and so on, where A = {a, b, c, d, . . .} and where the notation A − B, B a subset of A, represents the set of members of A that are not also members of B. It can be shown that Eq. (14) requires each of these so-called Block-Marschak functions to be nonnegative. In other words, the nonnegativity of Block-Marschak functions is a necessary condition for the existence of a random utility representation of choice probabilities. A remarkable result of Falmagne (1978) shows the same condition to be sufficient for a random utility representation.

A random utility representation of choice probabilities is far from unique: Any strictly increasing function applied to the random variables {Ua | a in A} provides another, equivalent, random representation of the same choice probabilities [see Eq. (14)]. To address this lack of uniqueness, consider a variant of the choice paradigm in which the task is to rank order the alternatives from most preferred to least preferred. Define the random variable Ua* = k if alternative a is assigned rank k, k = 1, 2, . . . . Following the earlier work of Block and Marschak (1960), Falmagne established three results:

1. The random variables {Ua* | a in A} provide a random utility representation (whenever one exists).
2. All random utility representations for a given system of choice probabilities yield identical ranking variables Ua*.
3. The joint distribution of the ranking variables can be constructed from the choice probabilities.

For a detailed discussion of these facts, see Falmagne (1978). Recently Regenwetter (1996) generalized the concept of a random utility representation to m-ary relations. The applications of his theory include a model of approval voting and an analysis of political ranking data. Despite this impressive theoretical analysis of Eq. (14), very little in the
way of empirical application has been attempted; Iverson and Bamber (1997) discuss the matter in the context of signal detection theory, where the random variables appearing in Eq. (14) can be assumed independent. Rather, the impact of specific distributional and other assumptions on Eq. (14) has dominated the field.

2. Luce's Choice Model

The assumption that the random variables Ua appearing in Eq. (14) are jointly normal (following Thurstone, see section II.A.3) does not lend itself to tractable analysis, except in special cases such as pair-comparison tasks. This circumstance arises from the fact that the maximum of two or more normal random variables is no longer normally distributed. Only three families of distributions are "closed" under the operation of taking maxima, and of these the double-exponential family is the most attractive. We mentioned in section II.A.3 that the assumption of double-exponentially distributed random variables mediating discrimination of two stimuli leads to the Bradley-Terry-Luce model for pair-comparison data [see Eq. (7)]. If one assumes that the random utilities in Eq. (14) are members of a location family, that is, of the form u(a) + U, u(b) + U′, u(c) + U″, . . . , where U, U′, U″, . . . are independent with a common double-exponential distribution, namely, Prob(U < t) = exp[−e⁻ᵗ] for all real t, it follows from Eq. (14) that, for a set A of alternatives a, b, c, . . . ,

Pa,A = v(a) / Σ_{b in A} v(b). (16)

This expression also arises from Luce's (1959a) theory of choice. Yellott (1977) has given an interesting characterization of the double-exponential distribution within the context of Eq. (14). He considered all choice models involving random utilities of an unspecified location family, and he inquired as to the effect on choice probabilities of uniformly expanding the choice set by replicating each alternative some fixed but arbitrary number of times. Thus, for example, if a choice set comprises a glass of milk, a cup of tea, and a cup of coffee, a uniform expansion of that set would contain k glasses of milk, k cups of tea, and k cups of coffee for some integer k ≥ 2. Yellott showed that if choice probabilities satisfying Eq. (14) were unchanged by any uniform expansion of the choice set, then, given that the (independent) random utilities were all of the form u(a) + U, the distribution of the random variable U is determined: It must be double-exponential.
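Yellott's characterization is easy to check numerically: with independent double-exponential (Gumbel) utilities of the form u(a) + U, maximization as in Eq. (14) reproduces Luce's ratio rule, Eq. (16). The scale values below are invented; v = exp(u) follows the convention of Eq. (7):

```python
import math
import random

random.seed(2)

def gumbel():
    # Standard double-exponential variate, Prob(U < t) = exp(-exp(-t)),
    # sampled by inverting the distribution function.
    return -math.log(-math.log(random.random()))

u = {"milk": 0.8, "tea": 0.2, "coffee": -0.3}  # invented scale values
n = 50000
wins = {a: 0 for a in u}
for _ in range(n):
    draws = {a: ua + gumbel() for a, ua in u.items()}
    wins[max(draws, key=draws.get)] += 1

# Luce's rule, Eq. (16): Pa,A = v(a) / sum of v(b) over A, v = exp(u).
v = {a: math.exp(ua) for a, ua in u.items()}
total = sum(v.values())
for a in u:
    assert abs(wins[a] / n - v[a] / total) < 0.015
```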
3. Elimination Models The choice model Eq. (16) has been the subject of various criticisms on the basis of which new theories have been proposed. Suppose, for example, that one is indifferent when it
comes to choosing between a cup of tea and a cup of coffee. Intuitively, the addition of a further cup of tea should not affect the odds of choosing coffee. Yet the model Eq. (16) predicts that the
probability of choosing coffee drops to one third unless, of course, equivalent alternatives are collapsed into a single equivalence class. Tversky (1972a, b) offered a generalization of Luce's
choice model that escapes this and other criticisms. In his theory of choice by elimination, each choice object is regarded as a set of features or aspects to which weights are attached. The choice
of an alternative is determined by an elimination process in which an aspect is selected with a probability proportional to its weight. All alternatives not possessing the chosen aspect are
eliminated from further consideration. The remaining alternatives are subject to the same elimination process until a single alternative remains. This theory reduces to Luce's choice model in the
very special case in which alternatives do not share common aspects. Practical implementation of the elimination-by-aspects model is made difficult by the large number of unknown parameters it
involves. This difficulty is alleviated by imposing additional structure on the alternatives. Tversky and Sattath (1979) developed an elimination model in which the choice objects appear as the end
nodes of a binary tree, whose interior branches are labeled by aspect weights. That model requires the tree structure to be known in advance, however. The additive tree model of Carroll and DeSoete
(1990) allows the tree structure to be estimated from data, which are restricted to pairwise choices. 4. Spatial Models The tree structures assumed by Tversky and Sattath (1979) are not the only
means for coordinating choice objects geometrically. It is often sensible to represent choice alternatives as points x in an n-dimensional space. Pruzansky, Tversky, and Carroll (1982) surmised that
perceptual stimuli are adequately represented by multidimensional spatial models, whereas conceptual stimuli are better represented in terms of more discrete structures such as trees. Böckenholt (1992) and DeSoete and Carroll (1992) have given excellent reviews of probabilistic pair comparison models in which a spatial representation is fundamental. To give a flavor of such models we sketch
the wandering vector model presented by DeSoete and Carroll (1986), which is based on earlier ideas suggested by Tucker (1960) and Slater (1960).
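Before turning to spatial models in detail, the elimination-by-aspects process described in section 3 is easy to simulate. The sketch below is a minimal Python rendering with invented alternatives and aspect weights: on each pass an aspect is sampled with probability proportional to its weight among aspects that discriminate between the remaining alternatives, and every alternative lacking that aspect is eliminated.

```python
import random

def eba_choose(alternatives, weights, rng=random):
    """One trial of choice by elimination by aspects.

    alternatives: dict mapping name -> set of aspects possessed.
    weights: dict mapping aspect -> positive weight.
    """
    remaining = dict(alternatives)
    while len(remaining) > 1:
        shared = set.intersection(*remaining.values())
        # Only aspects possessed by some, but not all, remaining
        # alternatives can discriminate among them.
        live = [a for a in weights
                if a not in shared and any(a in s for s in remaining.values())]
        if not live:
            return rng.choice(sorted(remaining))
        total = sum(weights[a] for a in live)
        r = rng.random() * total
        for aspect in live:
            r -= weights[aspect]
            if r <= 0:
                break
        remaining = {k: s for k, s in remaining.items() if aspect in s}
    return next(iter(remaining))

# Invented example: two equivalent teas and one coffee.  Because the teas
# share every aspect, the second tea does not change the odds of coffee.
alts = {"tea1": {"hot", "tea"}, "tea2": {"hot", "tea"},
        "coffee": {"hot", "coffee"}}
wts = {"hot": 1.0, "tea": 1.0, "coffee": 1.0}
```

Simulating many trials with these invented weights gives coffee a probability near 1/2, as the theory requires, whereas the unmodified choice model Eq. (16) would predict 1/3.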
1 The Representational Measurement Approach to Problems
The wandering vector model represents choice objects in an n-dimensional Euclidean space, together with a random vector V which fluctuates from trial to trial in a pair comparison experiment and which constitutes an "ideal" direction in the sample space. The vector V is assumed to be distributed normally with mean vector μ and covariance matrix Σ. A comparison of options i and j is determined by three vectors: x_i and x_j, the vector representatives of options i and j, and v, a realization of the "wandering" vector V. Option i is preferred to option j whenever the "similarity" of i to v as measured by the orthogonal projection of x_i on v exceeds the corresponding projection of x_j on v. The binary choice probabilities are thus given by

P_ij = Prob(x_i·V > x_j·V),

where for any vectors x = (x_1, x_2, . . . , x_n), y = (y_1, y_2, . . . , y_n), the inner product x·y is the number x_1y_1 + x_2y_2 + · · · + x_ny_n. Using standard theory of the multivariate normal distribution (Ashby, 1992b), one obtains

P_ij = Φ((u_i − u_j)/δ_ij). (17)

Here u_i = x_i·μ is a (constant) utility associated with option i, and δ_ij² = (x_i − x_j)'Σ(x_i − x_j). Note that the form of Eq. (17) is identical to that of Thurstone's law of comparative judgment (see section II.A.3). The quantity δ_ij = δ_ji is a metric that can be interpreted to measure, at least partially, the dissimilarity of options i and j; Sjöberg (1980) has given some empirical
support for this interpretation. On the other hand, there is a considerable body of evidence that empirical judgments of dissimilarity violate the properties required of a metric (Krumhansl, 1978;
Tversky, 1977). A multidimensional similarity model, which appears to address the various shortcomings of the choice models sketched here, is based on the general recognition theory presented by
Ashby, Townsend, and Perrin (Ashby & Perrin, 1988; Ashby & Townsend, 1986; Perrin, 1986, 1992). We encounter that theory next in the context of identification.

B. Identification

1. Ordered Attributes

For stimuli ordered on a one-dimensional continuum, an observer can distinguish perfectly only about seven alternatives spaced equally across the full dynamic range (Miller,
1956). This fact, which is quite robust over different continua, is in sharp contrast to the results of local discrimination experiments of the sort discussed earlier in section II. For example, jnds
measured in loudness discrimination experiments employing pure tones
Geoffrey Iverson and R. Duncan Luce
vary from a few decibels at low intensities to a fraction of a decibel at high intensities suggesting, quite contrary to the evidence, that an observer should be able to identify 40 or more tones of
increasing loudness spaced evenly over an 80 dB range. Such puzzling phenomena have prompted a number of authors to study the identification of one-dimensional stimuli as a function of stimulus range
(Berliner, Durlach, & Braida, 1977; Braida & Durlach, 1972; Durlach & Braida, 1969; Luce, Green, & Weber, 1976; Luce, Nosofsky, Green, & Smith, 1982; Weber, Green, & Luce, 1977). The data from these
studies are accompanied by pronounced sequential effects, first noted by Holland and Lockhead (1968) and Ward and Lockhead (1970, 1971), implicating shifts in response criteria over successive trials
and, to a lesser extent, shifts in sensitivity as well (Lacouture & Marley, 1995; Luce & Nosofsky, 1984; Marley & Cooke, 1984; Marley & Cooke, 1986; Nosofsky, 1983; Treisman, 1985; Treisman &
Williams, 1984). Despite the difficulties of interpretation posed by these sequential effects, a robust feature of identification data is the presence of a prominent "edge" effect: Stimuli at the
edges of an experimental range are much better identified than stimuli in the middle. As the stimulus range is allowed to increase so that successive stimuli grow farther apart, performance improves,
but the edge effect remains. This finding, among others, illustrates that the hope of tying together the data from local psychophysics with those of more global tasks remains an unsettled matter.

2. Multidimensional Stimuli

The basic identification task generates data in the form of a confusion matrix, whose typical entry is the probability P_ji of responding stimulus j to the actual presentation
of stimulus i. A model that has enjoyed considerable success in accounting for such data is the biased choice model (Luce, 1963):

P_ji = η_ij β_j / Σ_k η_ik β_k.

Here η_ij is a measure of the similarity of stimuli i and j, whereas β_j represents a bias toward responding stimulus j. Shepard (1957, 1987) has argued that η_ij = exp(−d_ij), where d_ij is the distance
between alternatives i and j regarded as points in a multidimensional vector space. We have already mentioned that some literature speaks against similarity judgments being constrained by the axioms
of a metric (Keren & Baggen, 1981; Krumhansl, 1978; Tversky, 1977). Ashby and Perrin (1988), who favor the general recognition theory (which attempts to account for identification and similarity data
within a common multidimensional statistical decision framework), provided additional evidence against the biased choice model.
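For concreteness, the biased choice model P_ji = η_ij β_j / Σ_k η_ik β_k is easy to compute. The sketch below uses Shepard's proposal η_ij = exp(−d_ij) with invented distances and biases:

```python
import numpy as np

def biased_choice(d, beta):
    """Confusion matrix of Luce's biased choice model.

    P[i, j] = eta[i, j] * beta[j] / sum_k eta[i, k] * beta[k],
    with similarity eta[i, j] = exp(-d[i, j]) (Shepard, 1957, 1987).
    d: matrix of inter-stimulus distances; beta: response biases.
    """
    eta = np.exp(-np.asarray(d, dtype=float))
    weighted = eta * np.asarray(beta, dtype=float)   # row i, column j
    return weighted / weighted.sum(axis=1, keepdims=True)

# Invented one-dimensional configuration: stimuli at coordinates 0, 1, 3.
coords = np.array([0.0, 1.0, 3.0])
d = np.abs(coords[:, None] - coords[None, :])
P = biased_choice(d, beta=[1.0, 1.0, 1.0])
```

Each row sums to one, and with equal biases the nearby pair (at 0 and 1) is confused far more often than the distant pair (at 0 and 3).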
FIGURE 8 Examples of optimal decision boundaries for three types of stimuli. From Figure 1.5 of Multidimensional Models of Perception and Cognition, by F. G. Ashby, Hillsdale, NJ: Erlbaum, 1992, p. 29. Reprinted with permission.
General recognition theory (GRT) identifies each alternative in an identification experiment with a random vector that takes values in a fixed multidimensional vector space. This vector space is
partitioned into disjoint regions, each of which is characteristic of a single response. For illustration, consider the simplest case involving a pair of two-dimensional stimuli, say, A and B, with
densities f_A(x) and f_B(x) governing their respective perceptual effects. Statistical decision theory suggests partitioning the two-dimensional sample space on the basis of the likelihood ratio f_A/f_B. When the perceptual effects of A and B are jointly normal, curves of constant likelihood ratio are quadratic functions that simplify to lines when the covariance structure of A is the same as that of B (i.e., Σ_A = Σ_B); Figure 8 shows examples. The construction generalizes to any number of stimuli varying on any number of dimensions. Figure 9 depicts hypothetical response boundaries for four two-dimensional stimuli
labeled by their components: (A1, B1), (A1, B2), (A2, B1), (A2, B2). The response boundaries are chosen so as to maximize accuracy. General recognition theory yields a conceptually simple expression (though one that is often analytically intractable) for the confusions P_ji. If R_j
FIGURE 9 Contours of equal probability and decision boundaries for a four-stimulus recognition task. From Figure 6.2 of "Uniting Identification, Similarity, and Preference: General Recognition Theory," by N. A. Perrin, in F. G. Ashby (Ed.), Multidimensional Models of Perception and Cognition, Hillsdale, NJ: Erlbaum, 1992, p. 128. Reprinted with permission.
is the region of the sample space associated with response stimulus j and f(x) is the density governing the perceptual effect of stimulus i, then

P_ji = ∫_{R_j} f(x) dx.
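The integral over R_j is easy to approximate by Monte Carlo: sample from the perceptual distribution of stimulus i and count the fraction of samples landing in R_j. A sketch for two bivariate normal stimuli with equal covariance (all parameters invented), in which case the optimal boundary is the line equidistant from the two means:

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented perceptual distributions for stimuli A and B.
mu_A, mu_B = np.array([0.0, 0.0]), np.array([1.5, 0.5])
cov = np.eye(2)                      # equal covariances: linear boundary

def region_A(x):
    """Respond 'A' where f_A(x) >= f_B(x); with equal covariances this is
    equivalent to being closer to mu_A than to mu_B."""
    return np.sum((x - mu_A) ** 2, axis=1) <= np.sum((x - mu_B) ** 2, axis=1)

# P('A' | stimulus A) = integral of f_A over R_A,
# approximated by the fraction of samples from f_A that fall in R_A.
samples = rng.multivariate_normal(mu_A, cov, size=20000)
p_A_given_A = region_A(samples).mean()
```

With these invented means, which lie √2.5 apart, the predicted accuracy is Φ(√2.5/2) ≈ 0.79, and the sample fraction comes out close to that value.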
Numerical methods are normally needed to evaluate such expressions (Ashby, 1992b), in which f(x) is multivariate normal. A competitor to GRT is Nosofsky's generalized context model (GCM), which is an
outgrowth of an earlier model of classification proposed by Medin and Schaffer (1978). Unlike GRT, which has its roots in multidimensional statistical decision theory, GCM is based on the idea that
people store exemplars in memory as points in a multidimensional space and classify stimuli by proximity in that space to the various exemplars. Nosofsky (1984, 1986) elaborates the model and its
assumptions, and in a sequence of articles extends it to take into account phenomena bearing on selective attention (Nosofsky, 1987, 1989, 1991). These two models, GRT and GCM, seem to account about
equally well for a large class of identification and classification data. Because of the different ways each model interprets the same data, a certain amount of scientific controversy has arisen over
these interpretations. However, despite their differences in detail, the two models retain much in common,
and one hopes that this fact will promote a third class of models that retains the best features of both GRT and GCM, putting an end to the current disputes. It has long been thought useful to
maintain a distinction between "integral" stimuli--stimuli that are processed as whole entities--and "separable" stimuli--stimuli that are processed in terms of two or more dimensions (see, e.g.,
Garner, 1974; Lockhead, 1966). Taking such distinctions into account within the framework just presented provides additional and testable constraints on identification data. For a detailed discussion
of this and related matters, see Ashby and Townsend (1986), Maddox (1992), and Kadlec and Townsend (1992).

IV. ADDITIVE MEASUREMENT FOR AN ORDERED ATTRIBUTE

In this and the following
sections, we shift our focus from models designed to describe the variability of psychophysical data to models that explore more deeply the impact of stimulus structure on behavior. To do so, we
idealize response behavior, treating it as if responses exhibit no variability. With few exceptions (e.g., section IV.D), current models do not attempt to combine significant features of both
stimulus structure and variable response behavior. It has proved very difficult to combine both phenomena in a single approach due to, in our opinion, the lack of a qualitative theory of randomness.
A. Ranking and Order

Stimuli can be ordered in a variety of ways ranging from standard physical procedures (ordering masses by, say, an equal-arm pan balance, or tones by physical intensity, e.g., decibels) to subjective attributes (perceived weight, perceived loudness, preference among foods, and so on). In each case, the information that is presumed to exist or to be obtainable with some effort is the order between any two objects in the domain that is established by the attribute. Let A denote the domain of stimuli and let a and b be two elements of A, often written a, b ∈ A. Then
we write a ≿ b whenever a exhibits at least as much of the attribute as does b. The order ≿ can be established either by presenting pairs and asking a subject to order them, by having the subject rank order the entire set of stimuli, by rating them in some fashion, or by indirect methods, some of which we describe shortly. Of course, as we observed in section II, for most psychological
attributes such consistency is, at best, an idealization. If you ask a subject to order a and b more than once, the answer typically changes. Indeed, one assumes
that, in general, a probability P_a,b describes the propensity of a subject to order a and b as a ≻ b. There are ways to induce an order from such probabilities. One is simply to use the estimated propensity as the source of ordering, namely,

a ≿ b holds if and only if P_a,b ≥ 1/2. (20)

If a Fechnerian model holds (section II.A.1), this is the order established by the underlying subjective scale u. Another order, which often is of considerable importance, is not on the stimuli
themselves, but on pairs of them:

(a, b) ≿ (a', b') whenever P_a,b ≥ P_a',b'. (21)
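Both induced orders are mechanical to read off from an estimated choice matrix. A small sketch with invented probabilities P[(a, b)] = Prob(a is placed above b):

```python
# Invented pairwise choice probabilities.
P = {("a", "b"): 0.70, ("b", "a"): 0.30,
     ("a", "c"): 0.90, ("c", "a"): 0.10,
     ("b", "c"): 0.65, ("c", "b"): 0.35}

def above(a, b):
    """Order on stimuli, Eq. (20): a is weakly above b iff P_ab >= 1/2."""
    return P[(a, b)] >= 0.5

def pair_above(pair1, pair2):
    """Order on pairs, Eq. (21): (a, b) dominates (a', b') iff P_ab >= P_a'b'."""
    return P[pair1] >= P[pair2]
```

With these invented numbers, a is above b and b above c, and the pair (a, c) dominates (a, b), reflecting the easier discrimination.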
Still another way to establish a psychological order is by measuring the time it takes a subject to decide whether a has more of the attribute than b. If L_a,b denotes the mean response time for that judgment, then replacing P in Eq. (21) with L yields a potentially new order that is well defined. In practice, these two orders are not wholly independent; witness the existence of speed-accuracy trade-offs. Some authors have conjectured that L_a,b may be a decreasing function of |P_a,b − 1/2|; however, nothing really simple seems to hold (Luce, 1986). The purpose of this section and the
subsequent two sections is to study some of the properties of orders on structures and certain numerical representations that can arise. This large and complex topic has been treated in considerable
detail in several technical sources: Falmagne (1985); Krantz, Luce, Suppes, and Tversky (1971); Luce, Krantz, Suppes, and Tversky (1990); Narens (1985); Pfanzagl (1971); Roberts (1979); and Suppes,
Krantz, Luce, and Tversky (1989); and Wakker (1989). For philosophically different approaches and commentary, see Decoene, Onghena, and Janssen (1995), Ellis (1966), Michell (1990, 1995), Niederée
(1992, 1994), and Savage and Ehrlich (1992).

1. Transitivity and Connectedness

To the extent that an order reflects an attribute that can be said to exhibit "degree of" or "amount of," we expect it to exhibit the following property, known as transitivity:

For all a, b, c ∈ A, if a ≿ b and b ≿ c, then a ≿ c. (22)
Transitivity is a property of numbers: 12 ≥ 8 and 8 ≥ 5 certainly means that 12 ≥ 5. At various times we will focus on whether transitivity holds. A second observation is that for very many attributes it is reasonable to assume the following property, which is known as connectedness:

For all a and b ∈ A, either a ≿ b or b ≿ a or both. (23)
The orders defined by Eqs. (20) and (21) obviously satisfy connectedness.
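On a finite domain these two properties are mechanical to verify. A sketch, encoding ≿ as a boolean predicate (the example scores and the cyclic relation are invented):

```python
from itertools import product

def is_weak_order(items, geq):
    """Return True iff the relation geq is connected and transitive on items."""
    connected = all(geq(a, b) or geq(b, a) for a, b in product(items, items))
    transitive = all(geq(a, c)
                     for a, b, c in product(items, items, items)
                     if geq(a, b) and geq(b, c))
    return connected and transitive

# Invented scores induce a weak order (ties are genuine indifferences).
score = {"x": 2, "y": 1, "z": 2}
ok = is_weak_order(list(score), lambda a, b: score[a] >= score[b])

# A cyclic relation (rock-paper-scissors style) is connected but intransitive.
beats = {("r", "p"): False, ("p", "r"): True, ("p", "s"): False,
         ("s", "p"): True, ("r", "s"): True, ("s", "r"): False}
cyclic = is_weak_order(["r", "p", "s"],
                       lambda a, b: a == b or beats[(a, b)])
```

The first relation passes; the cyclic one is connected yet fails transitivity, so it is not a weak order.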
When both a ≿ b and b ≿ a hold, we write a ∼ b, meaning that a and b are indifferent with respect to the attribute of the ordering. Indifference does not usually correspond to equality; two objects can have the same weight without being identical. If a ≿ b but not a ∼ b, then we write a ≻ b. Whatever the attribute corresponding to ≿ is called, the attribute corresponding to ≻ receives the same name modified by the adjective strict. Equally, if ≻ has a name, ≿ is prefixed by weak. So, for example, if ≻ denotes preference, ≿ denotes weak preference. Should one confront an attribute for which connectedness fails, so that for some a and b neither a ≿ b nor b ≿ a, we usually speak of a and b as being noncomparable in the attribute and the order as being partial. For example, suppose one were ordering a population of people by the attribute "ancestor of." This is obviously transitive and equally obviously not connected. All of the attributes discussed in this chapter are assumed to be connected. A connected and transitive order is called a weak order. When indifference, ∼, of a weak order is actually equality, that is, a ∼ b is equivalent to a = b, the order is called simple or total. The numerical relation ≥ is the most common example of a simple order, but very few orders of scientific interest are stronger than weak orders unless one treats classes of equivalent
elements as single entities.

2. Ordinal Representations

One major feature of measurement in the physical sciences and, to a lesser degree, in the behavioral and social sciences is the convenience of representing the order information numerically. In particular, it is useful to know when an empirical order has an order-preserving numerical representation, that is, when a function φ from A into the real numbers R (or the positive real numbers R+) exists such that for all a, b ∈ A,

a ≿ b is equivalent to φ(a) ≥ φ(b). (24)
Because ≥ is a total order, it is not difficult to see that a necessary condition is that ≿ be a weak order. When A is finite, being a weak order is also sufficient because one can simply take φ to be the numerical ranking: assign 1 to the least element, 2 to the next, and so on. For infinite structures, another necessary condition must be added to achieve sufficiency; it says, in effect, that (A, ≿) must contain a subset that is analogous to the rational numbers in the reals, that is, a countable order-dense subset. The details, listed as the Cantor or Cantor-Birkhoff theorem, can be found in any book on the theory of measurement (e.g., Krantz et al., 1971, section 2.1, or Narens, 1985, p. 36). One feature of Eq. (24) is that if f is any strictly increasing function6 from R to R, then f(φ) is an equally good representation of (A, ≿). When a

6 If x > y, then f(x) > f(y).
representation has this degree of nonuniqueness, it is said to be of ordinal scale type. One drawback of this nonuniqueness is that little of arithmetic or calculus can be used in a way that remains invariant under admissible scale changes. For example, if φ is defined on the positive real numbers, then f(φ) = φ² is an admissible transformation. If φ(a) = 5, φ(b) = 4, φ(c) = 6, and φ(d) = 3, then φ(a) + φ(b) ≥ φ(c) + φ(d), but φ² reverses the order of the inequality. Therefore great care must be taken in combining and compressing information that is represented ordinally (see section VI).

3. Nontransitivity of Indifference and Weber's Law

A very simple consequence of ≿ being a weak order is that both the strict part, ≻, and the indifference part, ∼, must also be transitive.
Although the transitivity of ≻ seems plausible for many attributes, such may not be the case for ∼, if for no other reason than our inability to discriminate very small differences. The measurement literature includes a fair amount of material on orderings for which ≻ is transitive and ∼ is not. Conditions relating them are known that lead to a representation in terms of two numerical functions φ and δ, where δ > 0 is thought of as a threshold function:

a ≻ b is equivalent to φ(a) ≥ φ(b) + δ(b), (25)
a ∼ b is equivalent to φ(a) − δ(a) < φ(b) < φ(a) + δ(a).
Orders exhibiting such a threshold representation are known as semiorders and interval orders, the latter entailing different upper and lower threshold functions (Fishburn, 1985; Suppes et al., 1989, chap. 16). One major question that has been studied is, when is it possible to choose φ in such a way that δ is a constant? This question is very closely related to the psychophysical question of when Weber's law (just detectable differences are proportional to stimulus intensity) holds in discrimination. To be specific, in a context of probabilistic responses, suppose a probability criterion λ, 1/2 < λ < 1 (e.g., 0.75 is a common choice), is selected to partition the discriminable from the indiscriminable. This defines the algebraic relation ≿_λ in terms of the probabilities by

a ≻_λ b is equivalent to P_a,b ≥ λ,
a ∼_λ b is equivalent to 1 − λ < P_a,b < λ.

It can be shown that for ≿_λ to have a threshold representation, Eq. (25), with a constant threshold, is equivalent to the Weber function of P satisfying Weber's law (see sections II.A.2 and VII.B).
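A constant-threshold representation, and the nontransitive indifference it produces, can be illustrated directly with invented scale values:

```python
DELTA = 1.0  # constant threshold

def strict(pa, pb):
    """a is strictly above b in the sense of Eq. (25): phi(a) >= phi(b) + delta."""
    return pa >= pb + DELTA

def indiff(pa, pb):
    """Neither element is strictly above the other."""
    return not strict(pa, pb) and not strict(pb, pa)

# Invented values: a ~ b and b ~ c, yet a > c, so indifference is intransitive.
phi = {"a": 2.0, "b": 1.2, "c": 0.4}
```

Adjacent values differ by less than the threshold, so each adjacent pair is indifferent, yet the endpoints differ by more than the threshold and are strictly ordered.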
B. Conjoint Structures with Additive Representations

Let us return to the simpler case of a weak order. The ordinal representation is rather unsatisfactory because of its high degree of nonuniqueness, so one is led to consider situations exhibiting further empirical information that is to be represented numerically. That consideration is the general topic of this subsection and section IV.C.

1. Conjoint Structures

The sciences, in particular psychology, are replete with attributes that are affected by several independent variables. For example, an animal's food preference
is affected by the size and composition of the food pellet as well as by the animal's delay in receiving it; the aversiveness of an electric shock is affected by voltage, amperage, and duration;
loudness depends on both physical intensity and frequency;7 the mass of an object is affected by both its volume and the density of material from which it is made; and so forth. Each of these
examples illustrates the fact that we can and do study how independently manipulable variables trade off against one another in influencing the dependent attribute. Thus, it is always possible to
plot those combinations of the variables that yield equal levels of the attribute. Economists call these indifference curves, and psychologists have a myriad of terms depending on context: ROCs for
discrimination (section II.B.1), equal-loudness contours, and curves of equal aversiveness, among others. The question is whether these trade-offs can be a source of measurement. Let us treat the simplest case of two independent variables; call their domains A and U. Thus a typical stimulus is a pair, denoted (a, u), consisting of an A element and a U element. The set of all such pairs is denoted A × U. The attribute in question, ≿, is an ordering of A × U, and we suppose it is a weak order. One possibility for the numerical representation φ is that in addition to being order preserving, Eq. (24), it is additive over the factors A and U,8 meaning that there is a numerical function φ_A on A and another one φ_U on U such that

φ(a, u) = φ_A(a) + φ_U(u), (27a)

that is, for a, b ∈ A and u, v ∈ U,

(a, u) ≿ (b, v) is equivalent to φ_A(a) + φ_U(u) ≥ φ_A(b) + φ_U(v). (27b)
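A toy additive conjoint structure makes Eq. (27b) concrete; the component scales below are invented:

```python
# Invented component scales on a 3 x 3 conjoint domain A x U.
phi_A = {"a1": 0.0, "a2": 1.0, "a3": 2.5}
phi_U = {"u1": 0.0, "u2": 0.8, "u3": 1.1}

def weakly_above(p, q):
    """Eq. (27b): (a, u) is weakly above (b, v) iff the component sums
    are ordered."""
    (a, u), (b, v) = p, q
    return phi_A[a] + phi_U[u] >= phi_A[b] + phi_U[v]
```

Trade-offs are explicit: ("a2", "u1") beats ("a1", "u2") because the advantage on the A factor outweighs the deficit on the U factor.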
7 Witness the shape of equal-loudness contours at low intensity, which is the reason for loudness compensation as well as intensity controls on audio amplifiers.
8 The terms independent variable, factor, and component are used interchangeably in this literature, except when component refers to the level of a factor.
Such additive representations are typically used in the behavioral and social sciences, whereas the physical sciences usually employ a multiplicative representation into the positive real numbers,
R+. The multiplicative representation is obtained from Eq. (27) by applying an exponential transformation, thereby converting addition to multiplication: e^(x+y) = e^x e^y.

2. The Existence of an Additive Representation

Two questions arise: What must be true about ≿ so that an additive representation, Eq. (27), exists; and if one does exist, how nonunique is it? Mathematically precise answers to
these questions are known as representation and uniqueness theorems.9 It is clear that for such a strong representation to exist, the qualitative ordering ≿ must be severely constrained. Some constraints are easily derived. For example, if we set u = v in Eq. (27b), we see that because φ_U(u) appears on both sides of the inequality it can be replaced by any other common value, for example, by φ_U(w). Thus, a necessary qualitative condition, known as independence in this literature,10 is that for all a, b in A and u, v in U,

(a, u) ≿ (b, u) is equivalent to (a, v) ≿ (b, v). (28a)

Similarly, one can hold the first component fixed and let the second one vary:

(a, u) ≿ (a, v) is equivalent to (b, u) ≿ (b, v). (28b)
For a long time psychologists have been sensitive to the fact that Eq. (28) necessarily holds if the attribute has an additive representation, and in plots of indifference curves (often with just two
values of the factors) the concern is whether or not the curves "cross." Crossing rejects the possibility of an additive representation; however, as examples will show, the mere fact of not crossing
is insufficient to conclude that an additive representation exists. The reason is that other conditions are necessary beyond those that can be deduced from weak ordering and Eq. (28). For example,
suppose we have two inequalities holding with the property that they have a common A value in the right side of the first qualitative inequality and the left side of the second one and also a common
U value in the left side of the first and the right side of the second qualitative inequalities; then when the corresponding numerical inequalities are added and the common values canceled from the
two sides, one concludes from Eq. (27) that

if (a, w) ≿ (g, v) and (g, u) ≿ (b, w), then (a, u) ≿ (b, v). (29)
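Like independence, double cancellation reduces to a finite scan on finite domains (geq is a hypothetical predicate encoding the observed order, as before):

```python
from itertools import product

def satisfies_double_cancellation(A, U, geq):
    """Check Eq. (29): whenever (a, w) >= (g, v) and (g, u) >= (b, w),
    it must follow that (a, u) >= (b, v)."""
    for a, b, g in product(A, A, A):
        for u, v, w in product(U, U, U):
            if geq((a, w), (g, v)) and geq((g, u), (b, w)):
                if not geq((a, u), (b, v)):
                    return False
    return True
```

By the algebra in the text, any order induced by summing component scales passes; the check earns its keep on orders that are merely observed.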
9 The term uniqueness is used despite the fact that the thrust of the theorem is to tell just how nonunique the representation is.
10 Mathematically, it might better be called monotonicity, but the term independence is widely used.
This property is known as double cancellation because, in effect, two values, w and g, are canceled. In recent years it has been recognized that at least both Eqs. (28) and (29) need to be checked
when deciding if an additive representation is possible (see Michell, 1990). It is not difficult to see that we can go to three antecedent inequalities with an appropriate pattern of common elements
so that Eq. (27) leads to further, more complex conditions than Eq. (29). Do we need them all? The answer (see Chap. 9 of Krantz et al., 1971) is yes if we are dealing with a finite domain A x U. For
infinite domains,11 however, it turns out that the properties of weak order, independence, and double cancellation--that is, Eqs. (22), (23), (28), and (29)--are sufficient for those (important)
classes of structures for which the independent variables can reasonably be modeled as continuous (often physical) variables. Such continuous models are typically assumed in both psychophysics and
utility theory. The added conditions are a form of solvability of indifferences (see the next subsection) and a so-called Archimedean property, which we do not attempt to describe exactly here (see
section IV.C and chap. 6 of Krantz et al., 1971). Suffice it to say that it amounts to postulating that no nonzero interval is infinitesimal relative to any other interval; all measurements are
comparable. 3. The Uniqueness of Additive Representations The second question is, how nonunique is the & of Eq. (27)? It is easily verified that qJA = r&A + S and qJu = r&u + t, where r > 0, s, and t
are real constants, is another representation. Moreover, these are the only transformations that work. Representations unique up to such positive affine (linear) transformations are said to be of
interval scale type (Stevens, 1946, 1951). 4. Psychological Applications Levelt, Riemersma, and Bunt (1972) collected loudness judgments over the two ears and constructed an additive conjoint
representation. Later Gigerenzer and Strube (1983), using an analysis outlined by Falmagne (1976) that is described in section IV.D, concluded that additivity of loudness fails, at least when one of
the two monaural sounds is sufficiently louder than the other: the louder one dominates the judgments. To the extent that additivity fails, we need to understand nonadditive structures (section VI).
Numerous other examples can be found in both the psychological and marketing literatures. Michell (1990) gives examples with careful explanations.

11 Of course, any experiment is necessarily finite.
So one can never test all possible conditions, and it is a significant inductive leap from the confirmation of these equations in a finite data set to the assertion that the properties hold
throughout the infinite domain. For finite domains one can, in principle, verify all of the possible cancellation properties.
FIGURE 10 The construction used to create an operator ∘_A on one component of a conjoint structure that captures the trade-off information. Pairs connected by dashed lines that intersect in the middle are equivalent. Panel (a) schematically represents the components as continua with distinguished points a_0 and u_0. Panel (b) maps the interval a_0b to the interval u_0π(b). Panel (c) illustrates adding the latter interval to a_0a to get the "sum" interval a_0(a ∘_A b).
C. Concatenation Structures with Additive Representations

1. Reducing the Conjoint Case to a Concatenation Operation

It turns out that the best way to study independent conjoint structures, whether additive or not (section VI.B.2), is to map all of the information contained in (A × U, ≿) into an operation and ordering on A. Consider the following definition:

a ≿_A b is equivalent to (a, u) ≿ (b, u). (30)
Independence, Eq. (28), says that the order induced on A by Eq. (30) is unaffected by the choice of u on the second factor. Inducing an operation on A is somewhat more complex. The general procedure
is outlined in Figure 10 and details are given in Krantz et al. (1971, p. 258). It rests first on arbitrarily picking an element from each factor, say, a_0 from A and u_0 from U. Next, one maps what intuitively can be thought of as the "interval" a_0b of the A component onto an equivalent "interval" of the U component, which we will call u_0π(b). The formal definition is that π(b) satisfies the indifference

(b, u_0) ∼ (a_0, π(b)). (31)
Clearly, one must make an explicit assumption that such a solution π(b) can always be found. Such a solvability condition is somewhat plausible for continuous dimensions and far less so for discrete ones. The third step is to "add" the interval a_0b to the interval a_0a by first mapping the interval a_0b to u_0π(b), Eq. (31), and then mapping that interval back onto the interval from a to a value called a ∘_A b that is defined as the solution to the indifference

(a ∘_A b, u_0) ∼ (a, π(b)). (32)

The operation ∘_A is referred to as one of concatenation or "putting together." It turns out that studying the concatenation structure (A, ≿_A, ∘_A, a_0) is equivalent to studying (A × U, ≿), and because of its importance in physical measurement it is a well-studied mathematical object (see Krantz et al., 1971, chap. 3; Narens, 1985, chap. 2). For simplicity, let us
drop the A subscripts and just write (A, ≿, ∘, a_0).

2. Properties of ∘ and ≿

It is clear that those concatenation structures arising from additive conjoint ones will involve some constraints on ∘ and on how it and ≿ are related. It is not terribly difficult to show that independence of the conjoint structure forces the following monotonicity property:

a ≿ a' is equivalent to a ∘ b ≿ a' ∘ b, and
b ≿ b' is equivalent to a ∘ b ≿ a ∘ b'. (33)
Intuitively, these conditions are highly plausible: Increasing either factor of the operation increases the value. The double-cancellation property implies the following property of ∘, which is called associativity:

a ∘ (b ∘ c) ∼ (a ∘ b) ∘ c. (34)

A third property is that a_0 acts like a "zero" element:

a_0 ∘ a ∼ a ∘ a_0 ∼ a. (35)

Solvability ensures for every a, b ∈ A not only that a ∘ b is defined, but that each element a has an inverse element a⁻¹ with the property:

a ∘ a⁻¹ ∼ a⁻¹ ∘ a ∼ a_0. (36)
Finally, we formulate the Archimedean property of such a structure. By repeated applications of Eq. (34), it does not matter which sequence of binary groupings is used in concatenating n elements.
Let a(n) denote n concatenations of a with itself. Suppose a ≻ a_0. The Archimedean assumption says that for any b ∈ A one can always find n sufficiently large so that a(n) ≻ b. Intuitively, this means that a and b are commensurable.
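The Archimedean property is just the familiar fact that repeated copies of any positive element eventually overtake any other element. A sketch, taking concatenation to be ordinary addition for illustration:

```python
def copies_to_exceed(a, b, concat=lambda x, y: x + y, identity=0.0):
    """Return the smallest n with a(n) strictly above b, where a(n) is
    n concatenations of a with itself.  Requires a strictly above the
    identity; the Archimedean axiom guarantees that n exists."""
    assert a > identity
    n, total = 1, a
    while total <= b:
        total = concat(total, a)
        n += 1
    return n
```

With addition, seven copies of 0.3 are needed to exceed 2.0; with multiplication on the reals above 1 (identity 1), four copies of 2 are needed to exceed 10.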
3. Hölder's Theorem

In 1901 the German mathematician O. Hölder proved a version of the following very general result (Hölder, 1901/1996). Suppose a structure (A, >~, O, a0) satisfies the following properties: >~ is a weak order, monotonicity, associativity, identity, inverses, and Archimedeanness, Eqs. (22), (23), and (33) through (36). Such structures are called Archimedean ordered groups, the term group encompassing the three properties of Eqs. (34) through (36) (see footnote 2). The result is that such a structure has a representation φ into the additive real numbers, which means that φ is order preserving, Eq. (24), and additive over O: φ(a O b) = φ(a) + φ(b).
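Hölder's theorem can be made concrete with an illustrative structure of our own choosing: the positive reals ordered as usual, with multiplication as O and 1 as the identity. Then φ = log is exactly such an order-preserving, additive representation, and φ can also be recovered by counting how many copies of a fixed unit c are needed to overtake copies of a:

```python
import math

# Our illustrative instance of Hoelder's theorem (not from the text): the
# structure (R+, >=, *) with identity 1, represented additively by phi = log.
def phi(x):
    return math.log(x)

a, b = 2.5, 7.0
assert math.isclose(phi(a * b), phi(a) + phi(b))  # additive over O
assert (a >= b) == (phi(a) >= phi(b))             # order preserving

# phi can also be approximated by counting: fix a unit c > 1 and find the
# smallest n such that c(n) = c**n just exceeds a(m) = a**m; then n/m tends
# to the representation value (normalized so phi(c) = 1).
def phi_by_counting(a, c, m):
    n = math.floor(m * math.log(a) / math.log(c)) + 1
    return n / m

c = 2.0
estimate = phi_by_counting(a, c, m=10_000)
assert abs(estimate - math.log(a) / math.log(c)) < 1e-3
print(estimate)  # close to log2(2.5), about 1.322
```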
From this fact, it is fairly easy to establish that a conjoint structure satisfying independence, Eq. (28), double cancellation, Eq. (29), solvability, and a suitable Archimedean condition has an additive representation.

4. Ratio Scale Uniqueness

The nonuniqueness of Hölder's additive representation is even more restricted than that for conjoint structures: φ and ψ are two additive representations if and only if for some constant r > 0, ψ = rφ. The class of such representations is said to form a ratio scale (Stevens, 1946, 1951). A conjoint representation is interval rather than ratio because the choice of the "zero" element a0 is completely arbitrary; however, in Hölder's additive structure the concatenation of the zero element with another element leaves the latter unchanged, and so φ(a0) = 0 in all representations φ.

5. Counting Yields the Representation

The key to the construction of a representation is to fix some element c > a0 and find the number of copies of c that are required to approximate any element a. This is done by considering m copies of a and using the Archimedean axiom to find the smallest n, which is a function of m, such that c(n) just exceeds a(m), that is, c(n - 1) ~< a(m) < c(n). One shows that n/m approaches a limit as m approaches infinity and defines φ(a) to be that limit where, of course, φ(c) = 1. Next one proves that such limits are additive over O.

6. Extensive Structures

If one looks just at the positive part of A, that is, A+ = {a: a ∈ A and a > a0}, with >~ and O restricted to that part, one has the added feature that for all a, b ∈ A+, both a O b > a and a O b > b, and a > b implies there exists c > a0 such that a ~ b O c (namely, c = a O b^-1). Such a structure is called extensive (in contrast to the intensive structures discussed in section V). One can use Hölder's theorem
1 The Representational Measurement Approach to Problems
to show that any extensive structure has a ratio scale representation into (R+, ≥, +), the ordered, additive, positive real numbers. These structures were, in fact, the first to have been formalized as models of certain basic types of physical measurement. For example, if A denotes straight rods, with >~ the qualitative ordering of length obtained by direct comparison and O the operation of abutting rods along a straight line, then Eqs. (22), (23), and (33) through (36) are all elementary physical laws. Mass, charge, and several other basic physical quantities can be measured in
this fashion. Aside from their indirect use in proving the existence of additive conjoint measurement representations, extensive structures have played only a limited descriptive role in the
behavioral and social sciences, although they can serve as null hypotheses that are then disconfirmed. It is not that there is a dearth of operations but rather that one or more of Eqs. (33) through
(36) usually fail, most often associativity, Eq. (34). For example, various forms of averaging, although involving + in their representation, are not associative (see section V). Receiving two goods
or uncertain alternatives is an operation of some importance in studying decision making, and it is unclear at present whether or not it is associative.

7. Combining Extensive and Conjoint Structures
Often the components of a conjoint structure (A × U, >~) are themselves endowed with empirical operations *_A and/or *_U that form extensive structures on A and U, respectively. Many physical examples exist, for example, mass and velocity ordered by kinetic energy, as well as psychological ones such as sound intensities to the two ears. An important question is, how are the three structures
interrelated? One relation of great physical importance is the following distribution law: For all a, b, c, d ∈ A, u, v ∈ U, if (a, u) ~ (c, v) and (b, u) ~ (d, v), then (a *_A b, u) ~ (c *_A d, v), and a similar condition is true for the second component. It has been shown that if φ_A and φ_U are additive representations of the two extensive structures, then there is a constant β such that φ_A (φ_U)^β is a multiplicative representation of (A × U, >~). (Luce et al., 1990, summarize the results and provide references to the original literature.) The exponent β characterizes the trade-off between the two extensive measures. For example, in the case of kinetic energy β is 2, which simply says a change in velocity by a factor k is equivalent to a change in mass by a factor k². Such trade-off connections as given by Eq. (39) are common in physical measurement, and their existence underlies the dimensional structure of classical
physical measurement. Moreover, their existence, as embodied in Eqs. (38) and (39), is also the reason that physical units are always products of powers of several basic extensive measures; for example, the unit of energy, the erg, is g·cm²/s². (For details see chap. 10 of Krantz et al., 1971, and chap. 22 of Luce et al., 1990.) Luce (1977) also studied another possible relation between an additive conjoint structure whose components are also extensive. Let a(j) denote j concatenations of a, and suppose there exist positive integers m and n such that for all positive integers i and a, b ∈ A and u, v ∈ U,
(a, u) ~ (b, v) ⇒ (a(i^m), u(i^n)) ~ (b(i^m), v(i^n)).
Under some assumptions about the smoothness of the representations, it can be shown that the representation is of the form:
r_A (φ_A)^(1/m) + r_U (φ_U)^(1/n) + s,
where r_A > 0, r_U > 0, and s are real constants. For example, the Levelt et al. (1972) conjoint analysis of loudness judgments over the two ears supported not only additivity but the power functions of Eq. (41); however, see section IV.D. The power functions arising in both Eqs. (39) and (41) are psychologically interesting because, as is discussed in section VII.C, substantial empirical evidence exists for believing that many psychological attributes are approximately power functions of the corresponding physical measures of intensity. Of course, as noted earlier, later studies have cast doubt on the additivity of loudness between the ears.
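The trade-off exponent β of Eq. (39) is easy to check numerically for the kinetic-energy example; the numbers below are invented for illustration:

```python
import math

# Illustrative check of the trade-off exponent beta in the multiplicative
# representation phi_A * phi_U**beta: for kinetic energy (1/2) m v**2 we have
# beta = 2, so scaling velocity by k is equivalent to scaling mass by k**2.
def kinetic_energy(m, v):
    return 0.5 * m * v ** 2

m, v, k = 3.0, 4.0, 1.7
assert math.isclose(kinetic_energy(m * k ** 2, v), kinetic_energy(m, v * k))
```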
D. Probabilistic Conjoint Measurement

The variability that accompanies psychophysical data rules out the possibility of direct empirical tests of algebraic measurement axioms. Probabilistic versions of both extensive measurement (Falmagne, 1980) and conjoint measurement (Falmagne, 1976) have been proposed, although as we shall see they often exhibit the difficulty alluded to in section I. We treat only the conjoint case here. Consider the discrimination of pure tones (a, u) presented binaurally: a denotes the intensity of a pure tone presented to the left ear of an observer, and u is the
intensity of the same frequency presented, in phase, to the right ear. The data are summarized in terms of the probability P_{au,bv} that the binaural stimulus (a, u) is judged at least as loud as stimulus (b, v). One class of general theories for such data that reflects the idea that the stimuli can be represented additively asserts that
P_{au,bv} = H[l(a) + r(u), l(b) + r(v)]
for some suitable functions l, r, and H with H(x, x) = 1/2.
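For intuition, here is a minimal numerical sketch of this class of models. The logistic form of H and the particular l and r below are our illustrative assumptions, constrained only by the requirement H(x, x) = 1/2:

```python
import math

# A sketch of the additive class in Eq. (42), with an assumed logistic H
# (our illustrative choice; the text requires only that H(x, x) = 1/2).
def H(x, y, sensitivity=2.0):
    return 1.0 / (1.0 + math.exp(-sensitivity * (x - y)))

# Assumed additive contributions of the left- and right-ear intensities.
def l(a): return math.log(a)
def r(u): return 0.8 * math.log(u)

def p_louder(a, u, b, v):
    """P_{au,bv}: probability that (a, u) is judged at least as loud as (b, v)."""
    return H(l(a) + r(u), l(b) + r(v))

assert p_louder(10.0, 10.0, 10.0, 10.0) == 0.5  # equal stimuli: H(x, x) = 1/2
assert p_louder(20.0, 10.0, 10.0, 10.0) > 0.5   # more intense left tone
```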
To generate a loudness match between two binaural tones, one fixes three of the monaural components, say, a, u, and v, and seeks b such that P_{au,bv} = 1/2. Of course, this must be an estimate and is therefore subject to variability. This suggests replacing the deterministic, but empirically unattainable, prediction from Eq. (42) that
l(a) + r(u) = l(b) + r(v)
by a more realistic one that substitutes a random variable U_{auv} for b:
l(U_{auv}) = l(a) + r(u) - r(v) + E_{auv},
where E_{auv} is a random error. This proposal illustrates the difficulty in simultaneously modeling structure and randomness. The assumption that the error is additive in the additively transformed data seems arbitrary and is, perhaps, unrealistic. Certainly, no justification has been provided. One would like to see Eq. (44) as the conclusion of a theorem, not as a postulate. Of course, writing equations like Eq. (44) is a widespread, if dubious, tradition in statistics. If the random error E_{auv} is assumed to have median zero, then the random representation Eq. (44) simplifies to the deterministic Eq. (43) upon taking medians over the population. This suggests studying the properties of the function m_{uv}(a) = Median(U_{auv}). Falmagne (1976) showed, in the context of natural
side conditions, that if the medians satisfy the following property of cancellation,
m_{uv}[m_{vw}(a)] = m_{uw}(a),
then they can be represented in the additive form of Eq. (43): l(a) + r(u) = l[m_{uv}(a)] + r(v). The linkage between the median functions and algebraic conjoint measurement is provided by a relation >~ over the factor pairs: au >~ bv if and only if P_{au,bv} >= 1/2.

The result of an n-ary operation F: A^n -> A is written a = F(a_1, . . . , a_n). It is not difficult to see that monotonicity generalizes to saying that for any component i, a_i >~ a_i' is equivalent to F(a_1, . . . , a_i, . . . , a_n) >~ F(a_1, . . . , a_i', . . . , a_n). Idempotence becomes F(a, . . . , a) ~ a. There is no obvious generalization of bisymmetry, but we do not really need one because it suffices to require that bisymmetry hold on any pair of components for any arbitrary, but fixed, choice of the remaining n - 2. The resulting representation, on assuming weak order, monotonicity, idempotence, and suitable solvability and Archimedean properties, is the existence of φ: A -> R and n numerical constants μ_i >= 0, i = 1, . . . , n, with Σ_{i=1}^{n} μ_i = 1, such that
φ[F(a_1, . . . , a_n)] = Σ_{i=1}^{n} μ_i φ(a_i).
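The weighted-average representation can be checked numerically. The following sketch, with made-up φ values and weights μ_i, verifies idempotence and monotonicity:

```python
# Sketch of the weighted-average representation for an n-ary intensive
# operation F; the phi values and weights below are invented for illustration.
def F_repr(phi_values, weights):
    """phi[F(a_1, ..., a_n)] = sum_i mu_i * phi(a_i), mu_i >= 0 summing to 1."""
    assert abs(sum(weights) - 1.0) < 1e-12 and all(w >= 0 for w in weights)
    return sum(w, x in zip(weights, phi_values)) if False else sum(
        w * x for w, x in zip(weights, phi_values))

mu = [0.5, 0.3, 0.2]
# Idempotence: F(a, ..., a) ~ a, since a weighted average of equal values is that value.
assert abs(F_repr([4.0, 4.0, 4.0], mu) - 4.0) < 1e-9
# Monotonicity in each component: raising one phi(a_i) raises the whole value.
assert F_repr([4.0, 5.0, 6.0], mu) < F_repr([4.0, 5.5, 6.0], mu)
```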
C. Utility of Uncertain Alternatives

1. Subjective Expected Utility (SEU)

For decision making, the model is somewhat more complex and developing proofs is considerably more difficult. But the representation arrived at is easy enough to state. A typical uncertain alternative g assigns consequences to events; that is, it is an assignment g(E_i) = g_i, i = 1, . . . , n, where the E_i are disjoint events taken from a family of events and the g_i are consequences from a set of possible consequences (e.g., amounts of money). Lottery examples abound in which the events are often sets of ordered k-tuples of numbers, where k usually is between 3 and 10, to a few of which award amounts are assigned.14 The SEU theories in their simplest form assert that for a sufficiently rich collection of events and uncertain alternatives there is a (subjective) probability measure S over the family of events,15 and there is
14 For example, in California the "Daily 3" requires that the player select a three-digit number; if it agrees with the random number chosen, the player receives a $500 payoff. Clearly there is 1 chance in 1000 of being correct and at $1 a ticket, the expected value is $.50.
15 The term subjective arises from the fact that the probability S is inferred for each decision maker from his or her choices; it is not an objective probability.
an interval scale utility function U over uncertain alternatives,16 including the consequences, such that the preference order over alternatives is preserved by
U(g) = Σ_{i=1}^{n} S(E_i)U(g_i).
This is the expectation of U of the consequences relative to the subjective probability measure S. The first fully complete axiomatization of such a representation was by Savage (1954). There have
been many subsequent versions, some of which are summarized in Fishburn (1970, 1982, 1988) and Wakker (1989).

2. Who Knows the SEU Formula?

Note the order of information underlying the SEU formulation, Eq. (56): The patterns of preferences are the given empirical information; if they exhibit certain properties (formulated below), they determine the existence of the numerical representation of Eq. (56). It is not assumed that the representation drives the preferences. The representation is a creature of the theorist, and there is no imputation whatsoever that people know U and S and carry out, consciously or otherwise, the arithmetic computations involved in Eq. (56). These remarks are true in the same sense that the differential equation describing the flight path of a ballistic missile is a creation of the physicist, not of the missile. The mechanisms underlying the observed process--decision or motion--simply are not dealt with in such a theory.
Many cognitive psychologists are uncomfortable with such a purely phenomenological approach and feel a need to postulate hypothetical information processing mechanisms to account for what is going
on. Busemeyer and Townsend (1993) illustrate such theorizing, and certainly, to the degree that psychology can be reduced to biology, such mechanisms will have to be discovered. At present there is a
wide gulf between the mechanisms of cognitive psychologists and biologists. It should be remarked that throughout the chapter the causal relation between behavior and representation is that the
latter, which is for the convenience of the scientist, derives from the former and is not assumed to be a behavioral mechanism. We did not bring up this fact earlier mainly because there has not been
much tendency to invert the causal order until one comes to decision theory.
16 This use of the symbol U as a function is very different from its earlier use as the second component of a conjoint structure. To be consistent we should use φ, but it is fairly common practice to use U for utility functions.
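As a purely numerical illustration of the SEU representation, Eq. (56) (the weights S and utilities U below are invented for the example, not inferred from anyone's choices):

```python
# A numerical sketch of Eq. (56): U(g) = sum_i S(E_i) * U(g_i), computed for
# two made-up uncertain alternatives over the same pair of disjoint events.
def seu(alternative, S, U):
    """Subjective expected utility of a list of (event, consequence) pairs."""
    return sum(S[event] * U[consequence] for event, consequence in alternative)

S = {"rain": 0.3, "shine": 0.7}           # assumed subjective probabilities
U = {"$0": 0.0, "$50": 0.6, "$100": 1.0}  # assumed utilities of consequences

g = [("rain", "$100"), ("shine", "$0")]
h = [("rain", "$0"), ("shine", "$50")]

# Preference between g and h is represented by comparing their SEU values;
# here h has the higher SEU, so h is preferred.
print(seu(g, S, U), seu(h, S, U))
```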
3. Necessary Properties Underlying SEU

The most basic, and controversial, property underlying SEU and any other representation that says there is an order-preserving numerical representation is that >~ is a weak order. Although many accept this as a basic tenet of rationality, others question it both conceptually and empirically. For a thorough summary of the issues and an extensive list of references, see van Acker (1990). Aside from >~ being a weak order, the most important necessary properties leading to Eq. (56) are two forms of monotonicity, which can be described informally as follows. Consequence monotonicity means that if any g_i is replaced by a more preferred g_i', with all other consequences and the events fixed, the resulting g' is preferred to g. Event monotonicity means that if, among the consequences of g, g_1 is the most preferred and g_n is the least, and if E_1 is augmented at the expense of E_n, then the modified uncertain alternative will be preferred to the original one. A third property arises from the linear nature of Eq. (56). It is most easily stated for the case in which the chance events are characterized in terms of given probabilities and the representation has the simplifying feature that S(p_i) = p_i. In this case we speak of the alternatives as lotteries and the representation obtained from Eq. (56) with S(E_i) replaced by p_i as expected utility (EU). Suppose g and h are lotteries from which a new lottery (g, p; h, 1 - p) is composed. The interpretation is that with probability p one gets to play lottery g, and with probability 1 - p
one gets to play h. Then, chance picks one of g and h, which is then played independently of the preceding chance decision. When g is run, the consequence g_i occurs with probability p_i, i = 1, . . . , n, and when h is run, the consequence h_i occurs with probability q_i, i = 1, . . . , m. Assuming EU, we see that
U(g, p; h, 1 - p) = pU(g) + (1 - p)U(h)
= p Σ_{i=1}^{n} U(g_i)p_i + (1 - p) Σ_{i=1}^{m} U(h_i)q_i
= U[g_1, p_1p; . . . ; g_n, p_np; h_1, q_1(1 - p); . . . ; h_m, q_m(1 - p)].
Thus, according to the EU representation,
(g, p; h, 1 - p) ~ [g_1, p_1p; . . . ; g_n, p_np; h_1, q_1(1 - p); . . . ; h_m, q_m(1 - p)].
This property is known as reduction of compound lotteries. Combining consequence monotonicity with the (often implicit) reduction of compound
gambles is known among economists as independence. 17 The use of the reduction-of-compound-gambles principle is implicit when, for example, one assumes, as is common in economics, that the lotteries
can be modeled as random variables, in which case Eq. (57) is actually an equality because no distinction is made among various alternative realizations of a random variable. For uncertain
alternatives, a principle, similar in spirit to the reduction of compound lotteries, reads as follows: If two alternatives are identical except for the sequence in which certain events are realized,
then the decision maker treats them as equivalent. These are called accounting equivalences (see, e.g., Luce, 1990b). When all conceivable equivalences hold, we speak of universal accounting.
Consider the following important specific equivalence. Let a ○_E b denote the alternative in which a is the consequence if E occurs and b otherwise. Then we say event commutativity holds if
(a ○_E b) ○_D b ~ (a ○_D b) ○_E b.
The left term is interpreted to mean that two independent experiments are run and a is received only if event D occurs in the first and E in the second. Otherwise, the consequence is b. The right term is identical except that E must occur in the first and D in the second. Consequence monotonicity and the reduction of compound lotteries are necessary for EU, and they go a long way toward
justifying the representation. Similarly, consequence and event monotonicity and universal accounting equivalences are necessary for SEU and they, too, go a long way toward justifying SEU. For this
reason, they have received considerable empirical attention.

4. Empirical Violations of Necessary Properties

Perhaps the most basic assumption of these decision models is that preferences are context independent. It is implicitly assumed whenever we attach a utility to an alternative without regard to the set of alternatives from which it might be chosen. To the extent this is wrong, the
measurement enterprise, as usually cast, is misguided. MacCrimmon, Stanbury, and Wehrung (1980) have presented very compelling evidence against context independence. They created two sets of
lotteries, each with four binary lotteries plus a fixed sum, s. The sum s and one lottery, I, were common to both sets. Medium level executives, at a business school for midcareer training, were
asked (among other things) to rank order each set by preference. A substantial fraction ordered s and I differently in the two sets. 17 The word independence has many different but related meanings
in these areas, so care is required to keep straight which one is intended.
The next most basic assumption to the utility approach is transitivity of preference. To the degree failures have been established, they appear to derive from other considerations. Context effects
are surely one source. A second is demonstrated by the famed preference reversal phenomena in which lottery g is chosen over h but, when asked to assign monetary evaluations, the subject assigns less value to g than to h (Lichtenstein & Slovic, 1971; Luce, 1992b, for a list of references; Slovic & Lichtenstein, 1983). This intransitivity probably reflects a deep inconsistency between judged and choice certainty equivalents rather than being a genuine intransitivity. Long before intransitivity or context effects were seriously examined, independence and event monotonicity were cast in
serious doubt by, respectively, Allais (1953; see Allais & Hagen, 1979) and Ellsberg (1961), who both formulated thought experiments in which reasonable people violate these conditions. These are
described in detail in various sources including Luce and Raiffa (1957/1989) and Fishburn (1970). Subsequent empirical work has repeatedly confirmed these results; see Luce (1992b) and Schoemaker
(1982, 1990). The major consequence for theory arising from the failure of event monotonicity is that Eq. (56) can still hold, but only if S is a weight that is not a probability. In particular,
additivity, that is, for disjoint events D, E, S(D ∪ E) = S(D) + S(E)--although true of probability--cannot hold if event monotonicity is violated. This has led to the development of models leading
to the representation of Eq. (56), but with S a nonadditive weight, not a probability. The failure of independence is less clear-cut in its significance: Is the difficulty with consequence
monotonicity, with the reduction of compound gambles, or both? As Luce (1992b) discussed, there has been an unwarranted tendency to attribute it to monotonicity. This has determined the direction
most (economist) authors have taken in trying to modify the theory to make it more descriptive. Data on the issue have now been gathered. Kahneman and Tversky (1979) reported studies, based on fairly
hypothetical judgments, of both independence and monotonicity, with the former rejected and the latter sustained. Several studies (Birnbaum, 1992; Birnbaum, Coffey, Mellers, & Weiss, 1992; Mellers,
Weiss, & Birnbaum, 1992) involving judgments of certainty equivalents have shown what seem to be systematic violations of monotonicity. Figure 11 illustrates a sample data plot. In an experimental setting, von Winterfeldt, Chung, Luce, and Cho (1997), using both judgments and a choice procedure, questioned that conclusion, especially when choices rather than judgments are involved.18 They also argued that even
18 This distinction is less clear than it might seem. Many methods exist for which the classification is obscure, but the studies cited used methods that lie at the ends of the
FIGURE 11. Certainty equivalents for binary gambles (x, p; $96) versus 1 - p for two values of x, $0 and $24. Note the nonmonotonicity for the two larger values of 1 - p. From Figure 1 of "Violations of Monotonicity and Contextual Effects in Choice-Based Certainty Equivalents," by M. H. Birnbaum, 1992, Psychological Science, 3, p. 312. Reprinted with the permission of Cambridge University Press.
with judgments, the apparent violations seem to be within the noise level of the data. Evidence is accumulating that judged and choice-determined CEs of gambles simply are not in general the same (Bostic, Herrnstein, & Luce, 1990; Mellers, Chang, Birnbaum, & Ordóñez, 1992; Tversky, Slovic, & Kahneman, 1990). This difference is found even when the experimenter explains what a choice certainty equivalent is and asks subjects to report them directly. Evidence for the difference is presented in Figure 12. A question of some interest is whether, throughout psychophysics, judged indifferences, such as curves of equal brightness, fail to predict accurately comparisons between pairs of stimuli. Surprisingly, we know of no systematic study of these matters; it has simply been taken for granted that they should be the same. Among decision theorists, the most common view is that, to the degree a difference exists, choices are the more basic, and most theoretical approaches have accepted that. One exception is Luce, Mellers, and Chang (1993) who have shown that the preceding data anomalies, including Figure 12, are readily accounted for by assuming that certainty equivalents are basic and the choices are derived from them somewhat indirectly by establishing a reference level that is determined by the choice context, recoding alternatives as gains and losses relative to the reference level, and then using a sign-dependent utility model of a type discussed in section VI.D.2. Indeed, in that section we take up a variety of generalized utility models.
FIGURE 12 (panels: observed selling prices and choice proportions). Rank orders of 36 gambles established from choices and from selling prices (a form of certainty equivalent). The stimuli on the negative diagonal are approximately equal in expected value. Note the sharply different patterns. Adapted from Figure 5 of "Is Choice the Correct Primitive? On Using Certainty Equivalents and Reference Levels to Predict Choices among Gambles," by R. D. Luce, B. Mellers, and S. J. Chang, 1992, Journal of Risk and Uncertainty, 6, p. 133. Reprinted with permission.
D. Functional Measurement

Anderson (1981, 1982, 1991a, b, c) has provided detailed and comprehensive summaries, along with numerous applications to a wide range of psychological phenomena--including psychophysical, personality, and utility judgments--of a method that he and others using the same approach call functional measurement, presumably with the ambiguity intentional. The method begins with a particular experimental procedure and uses, primarily, three types of representations: additive, multiplicative, and averaging. These are described as "psychological laws" relating how the independent variables influence the dependent one. Stimuli are ordered n-tuples of (often discrete) factors, where n usually varies within an experiment. This differs from conjoint measurement in which the number of factors, n, is fixed. For example, a person may be described along various subsets of several dimensions, such as physical attractiveness, morality, honesty, industry, and so on. Subjects are requested to assign ratings (from a prescribed rating scale) to stimuli that are varied according to some factorial design on the factors. The assigned ratings are viewed as constituting a psychophysical law relating measures of the stimulus to subjective contributions. Then, assuming that one of the three representations--additive, multiplicative, or averaging--describes the data (usually without any nonlinear transformation of them), Anderson developed computational schemes for estimating the parameters of the representation and for evaluating goodness of fit.
As a simple example, he readily distinguished the additive from the averaging representation as follows. Suppose A1 and A2 are two stimulus factors and that a1 and a2 are both desirable attributes but with a1 more desirable than a2. Thus, in an additive representation,
φ1(a1) < φ1(a1) + φ2(a2) = φ(a1, a2),
whereas in an averaging one,
φ1(a1) = wφ1(a1) + (1 - w)φ1(a1) > wφ1(a1) + (1 - w)φ2(a2) = φ(a1, a2).
This observable distinction generalizes to more than two factors. Much judgmental data in which the number of factors is varied favors the averaging model. For example, data on person perception make clear, as seems plausible, that a person who is described only as "brilliant" is judged more desirable than one who is described as both "brilliant and somewhat friendly." Anderson's books and papers are replete with examples and experimental detail.
VI. SCALE TYPE, NONADDITIVITY, AND INVARIANCE

In the earlier sections we encountered two apparently unrelated, unresolved issues: the possible levels of nonuniqueness, called scale types, and the existence of nonadditive structures. We examine these issues now. As we shall see, a close relation exists between them and another topic, invariance, only briefly mentioned thus far.
A. Symmetry and Scale Type

1. Classification of Scale Types

As we have noted, numerical representations of a qualitative structure usually are not unique. The nonuniqueness is characterized in what are called uniqueness theorems. We have already encountered three scale types of increasing strength: ordinal, interval, and ratio. The reader may have noted that we said nothing about the uniqueness of threshold structures (section IV.A.3). This is because no concise characterization exists. S. S. Stevens (1946, 1951), a famed psychophysicist, first commented on the ubiquity of these three types
of scales. In a transatlantic debate with members of a commission of the British Association for the Advancement of Science, Stevens argued that what is crucial in measurement is not, as was claimed
by the British physicists and philosophers of science, extensive
structures as such but rather structures of any type that lead to a numerical representation of either the interval or, better yet,19 ratio level.

2. The Automorphism Formulation

More than 30 years later, Narens (1981a, b) posed and formulated the following questions in an answerable fashion: Why these scale types? Are there others? The key to approaching the problem is to describe at the qualitative level what gives rise to the nonuniqueness. It is crucial to note that at its source are the structure-preserving transformations of the structure onto itself--the so-called automorphisms of the structure.20 These automorphisms describe the symmetries of the structure in the sense that everything appears to be the same before and after the mapping. The gist of the uniqueness theorems really is to tell us about the symmetries of the structure. Thus, the symmetries in the ratio case form a one-parameter family; in the interval case they form a two-parameter family; and in the ordinal case they form a countable family. Moreover, Narens observed that these three families of automorphisms are all homogeneous: each point of the structure is, structurally, exactly like every other point in the sense that, given any two points, some automorphism maps the one into the other. A second fact of the ratio and interval cases, but not of the ordinal cases, is that when the values of an automorphism are specified at N points (where N is 1 in the ratio case and 2 in the interval one), then it is specified completely. This he called finite uniqueness. An ordinal structure
is not finitely unique; it requires countably many values to specify a particular automorphism. Narens attacked the following question: For the class of ordered structures that are finitely unique and homogeneous and that have representations on the real numbers, what automorphism groups can arise?21 He developed partial answers and Alper (1987) completed the program.22 Such structures have automorphism groups of either ratio or interval type or something in between the two. Examples of the latter kind are the sets of numerical transformations x -> k^n x + s, where x is any real number, k is a fixed positive number, n varies over all of the integers, positive and
19 Ratio is better in that it admits far more structures than does the interval form. We will see this when we compare Eqs. (59) and (60). Ratio is stronger than interval in having one less degree of freedom, but it is weaker in the sense of admitting more structures.
20 The term automorphism means "self-isomorphism."
Put another way, an automorphism is an isomorphic representation of the structure onto itself. 21 The set of automorphisms forms a mathematical group under the operation * of function composition,
which is associative and has an inverse relative to the identity automorphism (see footnote 2). 22 Alper also gave a (very complex) characterization of the automorphism groups of structures that are
finitely unique but not homogeneous.
negative, and s is any real number; these are 1-point homogeneous and 2-point unique. The key idea in Alper's proof is this. An automorphism is called a translation if either it is the identity or no
point of the structure stays fixed under the automorphism.²³ For example, all the automorphisms of a ratio-scale structure are translations, but only some are translations in the case of interval scales. The ordering in the structure induces an ordering on the translations: If τ and σ are two translations, define τ ≿′ σ if and only if τ(a) ≿ σ(a) for all a ∈ A. The difficult parts of the proof are, surprisingly, in showing that the composition of two translations, τ ∗ σ(a) = τ[σ(a)], is also a translation, so ∗ is an operation on the translations, and that this group of translations is itself homogeneous. The ordered group of translations is also shown to be Archimedean, so by Hölder's theorem (section IV.C.3) it can be represented isomorphically in ⟨ℝ⁺, ≥, ·⟩. Moreover, using the homogeneity of the translations one can map the structure itself isomorphically into the translations and thus into ⟨ℝ⁺, ≥, ·⟩. This is a numerical representation in which the translations appear as multiplication by positive constants. In that representation, any other automorphism is proved to be a power transformation x → sxʳ, r > 0, s > 0. The upshot of the
Narens-Alper development is that as long as one deals with homogeneous structures that are finitely unique and have a representation onto the real numbers,²⁴ none lie between the interval and the ordinal-like cases. Between the ratio and interval cases are other possibilities. We know of mathematical examples of these intermediate cases that exhibit certain periodic features (Luce & Narens, 1985), but so far they seem to have played no role in science. When it comes to nonhomogeneous structures, little that is useful can be said in general, but some that are nearly homogeneous are quite important (see section VI.D).
B. Nonadditivity and Scale Type

1. Nonadditive Unit Concatenation Structures

The preceding results about scale type are not only of interest in understanding the possibilities of measurement, but they lead to a far more complete understanding of specific systems. For example, suppose 𝒜 = ⟨A, ≿, ∘⟩ is a homogeneous and finitely unique concatenation structure (section IV.C.2) that is isomorphic to a real concatenation structure ℜ = ⟨ℝ⁺, ≥, ⊕⟩. By the Narens-Alper theorem, we may assume that ℜ has been chosen so that the translations are multiplication by positive constants. From that, Luce and Narens (1985), following more specific results in Cohen and Narens (1979), showed, among other things, that ⊕ has a simple numerical form, namely, there is a strictly increasing function f: ℝ⁺ → ℝ⁺ that also has two properties: f(x)/x is a strictly decreasing function of x and

x ⊕ y = y f(x/y).   (59)

23. Structures with singular points that are fixed under all automorphisms--minimum, maximum, or zero points--are excluded in Alper's work. They are taken up in section VI.C.
24. Luce and Alper (in preparation) have shown that the following conditions are necessary and sufficient for such a real representation: The structure is homogeneous, any pair of automorphisms cross back and forth only finitely many times, the set of translations is Archimedean, and the remaining automorphisms are Archimedean relative to all automorphisms.

1 The Representational Measurement Approach to Problems
Such structures are referred to as unit concatenation structures. The familiar extensive case is f(z) = z + 1, that is, x ⊕ y = x + y. Equation (59) shows that unit concatenation structures have a very simple structure and that if one is confronted with data that appear to be nonadditive, one should attempt to estimate f. The function f can be constructed as follows. For each natural number n, define θ(a, n) = θ(a, n - 1) ∘ a and θ(a, 1) = a. This is an inductive definition of one sense of what it means to make n copies of the element a of the structure.²⁵ These are called n-copy operators, and it can be shown that they are in fact translations of the structure. In essence, they act like the equally spaced markers on a ruler, and the isomorphism φ into the multiplicative reals can be constructed from them exactly as in extensive measurement. Once φ is constructed, one constructs f as follows: For any positive number z, find elements a and b such that z = φ(a)/φ(b); then

f(z) = φ(a ∘ b)/φ(b).

Note that an empirical check is implicit in this, namely that

φ(a)/φ(b) = φ(c)/φ(d)  implies  φ(a ∘ b)/φ(b) = φ(c ∘ d)/φ(d).
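As a concrete illustration, Eq. (59) is easy to experiment with numerically. In the sketch below (our own; the nonadditive f is a made-up example, not one from the text), f(z) = z + 1 recovers ordinary addition, while another admissible f yields a nonadditive operation.

```python
def unit_concat(x, y, f):
    """Unit concatenation structure of Eq. (59): x (+) y = y * f(x / y)."""
    return y * f(x / y)

# Extensive (additive) case: f(z) = z + 1 gives x (+) y = x + y.
additive = lambda z: z + 1.0
assert unit_concat(3.0, 4.0, additive) == 7.0

# A hypothetical nonadditive case: f(z) = z + 1 + z**0.5 is strictly
# increasing with f(z)/z strictly decreasing, as required, and yields
# x (+) y = x + y + (x * y)**0.5.
nonadd = lambda z: z + 1.0 + z**0.5
assert abs(unit_concat(4.0, 9.0, nonadd) - 19.0) < 1e-9
```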
Because a structural property satisfied by any element of a homogeneous structure is also satisfied by all other elements of the structure, such structures must necessarily satisfy, for all a ∈ A, one of the following: (positive) a ∘ a ≻ a; (negative) a ∘ a ≺ a; or (idempotent) a ∘ a ~ a. It turns out that only the latter can be an interval scale case, and its representation on ℝ (not ℝ⁺) has the following simple rank-dependent form: For some constants c, d ∈ (0, 1) and all x, y ∈ ℝ,

x ⊕ y = cx + (1 - c)y,  if x ≥ y,
        dx + (1 - d)y,  if x < y.   (60)

25. Recall that the structure is not associative, so a binary operation can be grouped in a large number of ways to form n copies of a single element. We have simply selected one of these, a so-called right-branching one.
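A numerical sketch of the rank-dependent form, Eq. (60); the constants c = 0.75 and d = 0.5 are our own illustrative choices:

```python
def rank_dep(x, y, c=0.75, d=0.5):
    """Idempotent rank-dependent form, Eq. (60): the weight given to x
    depends on whether x is the larger or the smaller argument."""
    if x >= y:
        return c * x + (1.0 - c) * y
    return d * x + (1.0 - d) * y

# Idempotent: combining x with itself returns x.
assert rank_dep(5.0, 5.0) == 5.0
# The operation depends on rank order whenever c + d != 1:
assert rank_dep(10.0, 0.0) == 7.5
assert rank_dep(0.0, 10.0) == 5.0
```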
This was the form mentioned in section V.C.4. It was first suggested in a psychological application by Birnbaum, Parducci, and Gifford (1971) and used in later papers (Birnbaum, 1974; Birnbaum &
Stegner, 1979). As we shall see, it subsequently was rediscovered independently by economists and has been fairly widely applied in utility theory (see Quiggin, 1993, and Wakker, 1989).

2. Homogeneous Conjoint Structures

Without going into much detail, the construction outlined in section IV.C.1 for going from a conjoint to a concatenation structure did not depend on double
cancellation being satisfied, so one can use that definition for more general conjoint structures. Moreover, the concept of homogeneity is easily formulated for the general case, and it forces
homogeneity to hold in the induced concatenation structure. This reduction makes possible the use of the representation Eq. (59) to find a somewhat similar one for these nonadditive conjoint
structures. The details are presented in Luce et al. (1990). Therefore, once again we have a whole shelf of nonadditive representations of ratio and interval types. Of these, only the rank-dependent cases have thus far found applications, but we anticipate their more widespread use once psychologists become aware of these comparatively simple possibilities.

3. Combining Concatenation and Conjoint Structures

Recall that we discussed additive conjoint structures with extensive structures on the components as a model of many simple physical laws and as the basis of the units of physics
(section IV.C.7). This result has been generalized to unit concatenation structures, Eq. (59). Suppose a (not necessarily additive) conjoint structure has unit concatenation structures on its
components and that the distribution property, Eq. (38), holds. Then one can show that the conjoint structure must in fact be additive and that the representation is that of Eq. (39). Indeed, if the
components are endowed with ratio scale structures of any type, not necessarily concatenation structures, then a suitable generalization of distribution is known so that Eq. (39) continues to hold
(Luce, 1987). The upshot of these findings is that it is possible, in principle, to extend the structure of physical quantities to incorporate all sorts of measurements in addition to extensive ones
without disturbing the pattern of units being products of powers of base units. Such an extension has yet to be carried out, but we now know that it is not precluded just because an attribute fails
to be additive. For additional detail, see Luce et al. (1990, pp. 124-126).
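The pattern of units as products of powers corresponds to representations that are additive after a logarithmic transformation. A minimal numerical sketch (exponents chosen arbitrarily for illustration):

```python
import math

# A multiplicative conjoint representation F(x, y) = x**a * y**b, the
# product-of-powers form typical of physical units (exponents illustrative,
# e.g., acceleration ~ length * time**-2).
a_exp, b_exp = 1.0, -2.0
F = lambda x, y: x**a_exp * y**b_exp

# Taking logs turns the product of powers into an additive form.
x, y = 3.0, 5.0
assert abs(math.log(F(x, y))
           - (a_exp * math.log(x) + b_exp * math.log(y))) < 1e-12
```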
C. Structures with Discrete Singular Points

1. Singular Points

As was remarked earlier, the class of nonhomogeneous structures is very diverse and ill understood.²⁶ However, one class is quite fully understood, and it plays a significant role in measurement. We discuss that class next. A point of a structure is called singular if it stays fixed under every automorphism of the structure. The many familiar examples include any minimum point, such as zero length or zero mass; any maximum point, such as the velocity of light; and certain interior points, such as the status quo in
utility measurement. Still another example arises in a class of preference models proposed by Coombs and summarized in his 1964 book (see also Coombs, 1975). He postulated that an individual's
preferences arise from a comparison of the relative "distances" between each alternative and that individual's ideal point on the attribute for which preference is being expressed. Clearly, such an
ideal point plays a distinctive role, namely, it is the zero point of dis-preference for that person. Coombs developed algorithms that use the data from a number of subjects to infer simultaneously
the location of objects and ideal points in the space of preferences. However, the mathematical theory was never very fully developed. In contrast, the role of the status quo in utility theory is better analyzed.

2. Homogeneity Between Discrete Singular Points

Singular points have properties that render them unlike any other point in the structure; indeed, they keep the structure from being homogeneous. However, if a finite number of singular points exist, as is true in the applications just mentioned, the structure can be homogeneous between adjacent points. That is, if a and b are two points of the structure not separated by a singular point, then some automorphism of the structure takes a into b. Furthermore, if the structure has a generalized monotonic operation,²⁷ and if it is finitely unique, it can be shown (Luce, 1992a) that there are at most three singular points: a minimum, a maximum, and an interior one. Moreover they exhibit systematic properties. One then uses the results on unit structures to derive a numerical representation of this class of structures. Results about such structures underlie some of the developments in the next subsection.

26. As a rough analogy, homogeneous and nonhomogeneous stand in the same relation as do linear and nonlinear equations: The former is highly special, and the latter highly diverse.
27. This is a function of two or more variables that is monotonic in each. One must be quite careful in formulating the exact meaning of monotonicity at minima and maxima.
3. Generalized Linear Utility Models

A growing literature is focused on exploring ways to modify the EU and SEU models (sections V.C.1 and 2) so as to accommodate some of the anomalies described in section V.C.3.²⁸ One class of models, which includes Kahneman and Tversky's (1979; Tversky & Kahneman, 1992) widely cited representation called prospect theory, draws on generalized concatenation structures with singular points, identifying the status quo as a singular point. The resulting representation modifies SEU, Eq. (56), to the extent of making S(Eᵢ) depend on one or both of two things beside the event Eᵢ, namely, the sign of the corresponding consequence gᵢ--that is, whether it is a gain or loss relative to the status quo--and also the rank-order position of gᵢ among all of the consequences, g₁, . . ., gₙ, that might arise from the gamble g. These models go under several names, including rank- and sign-dependent utility (RSDU) and cumulative prospect theory. In the binary case, such models imply event commutativity, Eq. (58), but none of the more complex accounting equivalences that hold for SEU (Luce & von Winterfeldt, 1994). Measurement axiomatizations of the most general RSDU are given by Luce and Fishburn (1991, 1995) and Wakker and Tversky (1993). The former is unusual in this literature because it introduces a primitive beyond the preference ordering among
gambles, namely, the idea of the joint receipt of two things. Therefore, if g and h are gambles, such as two tickets in different state lotteries or stock certificates in two corporations, a person may receive (e.g., as a gift) both of them, which is denoted g ⊕ h. This operation plays two useful roles in the theory. One, which Tversky and Kahneman (1986) called segregation and invoked in pre-editing gambles, states that if g is a gamble and s is a certain outcome, with the consequences being either all gains or all losses, then g ⊕ s is treated as the same as the gamble g′, which is obtained by replacing each gᵢ by gᵢ ⊕ s. This appears to be completely rational. The second feature, called decomposition, formulates the single nonrationality of the theory: Let g be a gamble having both gains and losses, let g⁺ denote the gamble resulting from g by replacing all of the losses by the status quo, and g⁻ that by replacing the gains by the status quo. Then,

g ~ g⁺ ⊕ g⁻,

where the two gambles on the right are realized independently. This is, in reality, a formal assertion of what is involved in many cost-benefit analyses, the two components of which are often carried out by independent groups of analysts, and their results are combined to give an overall evaluation of the situation.

28. Most of the current generalizations exhibit neither context effects, as such, nor

In fact, Slovic and Lichtenstein (1968), in a study with other goals, tested decomposition in a laboratory setting and found it sustained. More recently Cho, Luce, and von Winterfeldt
(1994) carried out a somewhat more focused study, again finding good support for the segregation and decomposition assumptions. Within the domain of lotteries,²⁹ economists have considered other quite different representations. For example, Chew and Epstein (1989) and Chew, Epstein, and Segal (1991) have explored a class of representations called quadratic utility that takes the form

U(g) = Σ_{i=1}^{n} Σ_{j=1}^{n} φ(gᵢ, gⱼ) pᵢ pⱼ.   (62)

A weakened form of independence is key to this representation. It is called mixture symmetry and is stated as follows: If g ~ h, then for each α ∈ (0, ½) there exists β ∈ (½, 1) such that

(g, α; h, 1 - α) ~ (h, β; g, 1 - β).   (63)

Equation (63) and consequence monotonicity together with assumptions about the richness and continuity of the set of lotteries imply that β = 1 - α and that Eq. (62) is order preserving. We are unaware of any attempts to study this structure empirically.
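Eq. (62) is straightforward to compute. In the sketch below the kernel φ is our own illustrative choice, and the final assertion checks a known degenerate case: when φ(x, y) = [u(x) + u(y)]/2, quadratic utility collapses to expected utility.

```python
def quadratic_utility(consequences, probs, phi):
    """Quadratic utility, Eq. (62): U(g) = sum_i sum_j phi(g_i, g_j) p_i p_j."""
    return sum(
        phi(gi, gj) * pi * pj
        for gi, pi in zip(consequences, probs)
        for gj, pj in zip(consequences, probs)
    )

u = lambda x: x**0.5                      # illustrative utility function
phi = lambda x, y: (u(x) + u(y)) / 2.0    # symmetric, degenerate kernel

g, p = [1.0, 4.0, 9.0], [0.2, 0.5, 0.3]
expected_utility = sum(pi * u(gi) for gi, pi in zip(g, p))
assert abs(quadratic_utility(g, p, phi) - expected_utility) < 1e-12
```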
D. Invariance and Homogeneity

1. The General Idea

A very general scientific meta-principle asserts that when formulating scientific propositions one should be very careful to specify the domain within which the proposition is alleged to hold. The proposition must then be formulated in terms of the primitives and defining properties of that domain. When the domain is rich in automorphisms, as in homogeneous cases or in the special singular cases just discussed, this means that the proposition must remain invariant with respect to the automorphisms, just as is true--by definition--of the primitives of the domain.

2. An Example: Bisection

Let 𝒜 = ⟨A, ≿, ∘⟩ be an extensive structure such as the physical intensity of monochromatic lights. It has a representation φ that maps it into ℜ = ⟨ℝ⁺, ≥, +⟩, and the automorphisms (= translations) become multiplication by positive constants. Now, suppose a bisection experiment is performed such that when stimuli x = φ(a) and y = φ(b) are presented, the subject reports the stimulus z = φ(c) to be the bisection point of x and y. We may think of this as an operation defined in ℜ, namely, z = x ⊕ y. If this operation is expressible within the structure 𝒜, then invariance requires that for real r > 0,

r(x ⊕ y) = rx ⊕ ry.   (64)

29. Money gambles with known probabilities for the consequences.
A numerical equation of this type is said to be homogeneous of degree 1, which is a classical concept of homogeneity.³⁰ It is clearly very closely related to the idea of the structure being homogeneous. In section VII we will see how equations of homogeneity of degree different from 1 also arise. Plateau (1872) conducted a bisection experiment using gray patches, and his data supported the idea that there is a "subjective" transformation U of physical brightness that maps the bisection operation into a simple average, that is,

U(x ⊕ y) = [U(x) + U(y)] / 2.   (65)

If we put Eqs. (64) and (65) together, we obtain the following constraint on the function U: For all x, y, r ∈ ℝ⁺,

U(r U⁻¹[(U(x) + U(y)) / 2]) = [U(rx) + U(ry)] / 2.   (66)

An equation of this type in which a function is constrained by its values at several different points is called a functional equation (Aczél, 1966, 1987). Applying the invariance principle to numerical representations quite typically leads to functional equations. In this case, under the assumption that U is strictly increasing, it can be shown to have one of two forms:

U(x) = k log x + c  or  U(x) = cxᵏ + d.   (67)
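Both candidate forms in Eq. (67) can be checked numerically against the functional equation (66); the constants below are arbitrary illustrative choices.

```python
import math

def satisfies_66(U, U_inv, x, y, r, tol=1e-9):
    """Check the functional equation (66) at one triple (x, y, r)."""
    lhs = U(r * U_inv((U(x) + U(y)) / 2.0))
    rhs = (U(r * x) + U(r * y)) / 2.0
    return abs(lhs - rhs) < tol

# Fechner-type form: U(x) = k log x + c, with inverse x = exp((u - c)/k).
k, c = 2.0, 0.5
U_log = lambda x: k * math.log(x) + c
U_log_inv = lambda u: math.exp((u - c) / k)

# Stevens-type form: U(x) = c2 x**k2 + d, with inverse x = ((u - d)/c2)**(1/k2).
c2, k2, d = 3.0, 0.4, 1.0
U_pow = lambda x: c2 * x**k2 + d
U_pow_inv = lambda u: ((u - d) / c2) ** (1.0 / k2)

for x, y, r in [(1.0, 8.0, 2.5), (0.3, 5.0, 0.7)]:
    assert satisfies_66(U_log, U_log_inv, x, y, r)
    assert satisfies_66(U_pow, U_pow_inv, x, y, r)
```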
These are of interest because they correspond to two of the major proposals for the form of subjective intensity as a function of physical intensity. The former was first strongly argued for by
Fechner, and the latter, by Stevens. Falmagne (1985, chap. 12) summarized other, somewhat similar, invariance arguments that lead to functional equations. We discuss related, but conceptually quite distinct, arguments in section VII.

30. The degree of the homogeneity refers to the exponent of the left-hand r, which as written is 1.

3. Invariance in Geometry and Physics

In 1872, the German mathematician F. Klein, in his famous Erlangen address, argued that within an axiomatic formulation of geometry, the only entities that should be called "geometric" are those that are invariant under the automorphisms of the geometry (for a recent appraisal, see Narens, 1988). Klein used this to good effect; however, a number of geometries subsequently arose that were not homogeneous and, indeed,
in which the only automorphism was the trivial one, the identity. Invariance in such cases establishes no restrictions whatsoever. This illustrates an important point--namely, that invariance under
automorphisms is a necessary condition for a concept to be formulated in terms of the primitives of a system, but it is by no means a sufficient condition. During the 19th century, physicists used,
informally at first, invariance arguments (in the form of dimensional consistency) to ensure that proposed laws were consistent with the variables involved. Eventually this came to be formulated as
the method of dimensional analysis in which numerical laws are required to be dimensionally homogeneous of degree 1 (dimensional invariance). Subsequently, this method was given a formal axiomatic
analysis (Krantz et al., 1971, chap. 10; Luce et al., 1990, chap. 22), which showed that dimensional invariance is, in fact, just automorphism invariance. Again, it is only a necessary condition on a
physical law, but in practice it is often a very restrictive one. Dzhafarov (1995) presented an alternative view that is, perhaps, closer to traditional physical presentations. Nontrivial examples of
dimensional analysis can be found in Palacios (1964), Sedov (1959), and Schepartz (1980). For ways to weaken the condition, see section VII.

4. Invariance in Measurement Theory and Psychology

In attempting to deal with general structures of the type previously discussed, measurement theorists became very interested in questions of invariance, for which they invented a new term. A proposition formulated in terms of the primitives of a system is called meaningful only if it is invariant under the automorphisms of the structure. Being meaningful says nothing, one way or the other, about the
truth of the proposition in question, although meaningfulness can be recast in terms of truth as follows: A proposition is meaningful if it is either true in every representation (within the scale
type) of the structure or false in every one. Being meaningless (not meaningful) is not an absolute concept; it is entirely relative to the system in question, and something that is meaningless in
one system may become meaningful in a more complete one. As noted earlier, the concept has bite only when there are nontrivial automorphisms. Indeed, in a very deep and thorough analysis of the
concept of meaningfulness, Narens (in preparation) has shown that it is equivalent to invariance only for homogeneous structures. In addition to meaningfulness arguments leading to various
psychophysical equations, such as Eqs. (65) through (67), some psychologists, beginning with Stevens (1951), have been involved in a contentious controversy about applying invariance principles to
statistical propositions. We do
not attempt to recapitulate the details. Suffice it to say that when a statistical proposition is cast in terms of the primitives of a system, it seems reasonable to require that it be true (or false) in every representation of the system. Thus, in the ordinal case it is meaningless to say (without further specification of the representation) that the mean of one group of subjects is less than the mean of another because the truth is not invariant under strictly increasing mappings of the values. In contrast, comparison of the medians is invariant. A list of relevant references can be found in Luce et al. (1990). See also Michell (1986) and Townsend and Ashby (1984).

VII. MATCHING AND GENERAL HOMOGENEOUS EQUATIONS
A. In Physics

Many laws of physics do not derive from the laws that relate basic physical measures but are, nonetheless, expressed in terms of these measures. This section describes two such cases, which are handled differently.

1. Physically Similar Systems

Consider a spring. If one holds the ambient conditions fixed and applies different forces to the spring, one finds, within the range of forces that do not destroy the spring, that its change in length, Δl, is proportional to the force, F, applied: Δl = kF. This is called Hooke's law. Note that such a law, as stated, is not invariant under automorphisms of the underlying measurements. This is obvious because Δl has the dimension of length, whereas force involves mass, length, and time. This law is expressible in terms of the
usual physical measures, but it is not derivable from the underlying laws of the measurement structure. A law of this type can be recast in invariant form by the following device. The constant k,
called the spring constant, is thought of as characterizing a property of the spring. It varies with the shape and materials of the spring and has units that make the law dimensionally invariant, namely, [T]²/[M], where [L] denotes the unit of length, [T] of time, and [M] of mass. This is called a dimensional constant. Such constants play a significant role in physics and can, in
general, be ascertained from the constants in the differential equations characterizing the fundamental physical relations underlying the law when these equations are known. The set of entities
characterized in this fashion, such as all springs, are called physically similar systems.

2. Noninvariant Laws

In addition to invariant laws or laws that can be made invariant by the inclusion of dimensional constants that are thought to be characteristic of the system involved, there are more complex laws of which the following is
an example taken from rheology (material science) called a Nutting law (Palacios, 1964, p. 45).³¹ For solids not satisfying Hooke's law (see the preceding), the form of the relationship is

d = k(F/A)^β t^γ,

where d = Δl/l is the deformation and F the applied force. New are the area A to which F is applied, the time duration t of the application, and the two exponents β and γ, both of which depend on the particular material in question. Thus, the dimension of k must be [L]^β[M]^(-β)[T]^(2β-γ), which of course varies depending on the values of β and γ. To understand the difficulty here, consider the simplest such case, namely y = kx^β, where x and y are both measured as (usually, distinct) ratio scales. Thus, the units of k must be [y]/[x]^β. There is no problem so long as all systems governed by the law have the same exponent β; the situation becomes quite troublesome, however, if not only the numerical value of k depends on the system but, because of changes in the value of β, the units of k also depend on the particular system involved. Such laws, which are homogeneous of degree β, cannot be made homogeneous of degree 1 by introducing a dimensional constant with fixed dimensions. In typical psychology examples, several of which follow, the value of β appears to vary among individuals just as it varies with the substance in rheology. Both Falmagne and Narens
(1983) and Dzhafarov (1995) have discussed different approaches to such problems. There is a sense, however, in which invariance is still nicely maintained. The ratio scale transformation (translation) τ on the dimension x is taken into a ratio scale transformation σ on the dimension y. Put another way, the law is compatible with the automorphism structures of dimensions x and y even if it is not invariant with respect to the automorphisms (Luce, 1990a). Such homogeneous laws are very useful in psychophysics because they narrow down to a limited number of possibilities the mathematical form of the laws (see Falmagne, 1985, chap. 12).
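The compatibility property is easy to verify for the power law y = kx^β: multiplying x by r multiplies y by r^β, which is a ratio scale change on y even though r^β ≠ r (constants below are illustrative).

```python
# Compatibility of the law y = k * x**beta with ratio scale changes:
# the translation x -> r*x on the x dimension induces the translation
# y -> r**beta * y on the y dimension (constants are illustrative).
k, beta = 2.0, 0.6
law = lambda x: k * x**beta

r, x = 10.0, 3.7
assert abs(law(r * x) - r**beta * law(x)) < 1e-9
# But the law is not invariant: the induced factor differs from r itself.
assert r**beta != r
```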
B. Psychological Identity

Some psychophysical laws formulate conditions under which two stimuli are perceived as the same. We explore two illustrations.

31. We thank E. N. Dzhafarov (personal communication) for bringing these rheology examples to our attention. See Scott Blair (1969) for a full treatment.
32. The subscript ψ is intended as a reminder that this ordering is psychological and quite distinct from ≿, which is physical.

1. Weber-Type Laws

Consider a physical continuum of intensity that can be modeled as an extensive structure ⟨A, ≿, ∘⟩ with the ratio scale (physical) representation φ onto ⟨ℝ⁺, ≥, +⟩. In addition, suppose there is a psychological ordering ≻ψ³² on A that arises from a discrimination experiment where, for a, b ∈ A, b ≻ψ a means that b is perceived as more intense than a. In practice, one estimates a psychometric function and uses a probability cutoff to define ≻ψ; see Eq. (20) of section IV. Not surprisingly, such
orderings are usually transitive. However, if we define b ~ψ a to mean neither b ≻ψ a nor a ≻ψ b, then in general ~ψ is not transitive: The failure to discriminate a from b and b from c does not necessarily imply that a cannot be discriminated from c, although that may happen. It is usually assumed that ≻ψ satisfies the technical conditions of a semiorder or an interval order (see section IV.A.3), but we do not need to go into those details here. Narens (1994) proved that one cannot define the structure ⟨A, ≿, ∘⟩ in terms of ⟨A, ≻ψ⟩; however, one can formulate the latter in terms of the former in the usual way. One defines T(a) to be the smallest (inf) b such that b ≻ψ a. Then T establishes a law that maps A into A, namely, the upper threshold function. Typically, this is converted into a statement involving increments. Define Δ(a) to be the element such that T(a) = a ∘ Δ(a). Auditory psychologists (Jesteadt, Wier, & Green, 1977; McGill & Goldberg, 1968) have provided evidence (see Figure 2) that intensity discrimination of pure tones exhibits the property that the psychologically defined Δ(a) is compatible with the physics in the sense that for each translation τ of the physical structure, there is another translation στ, dependent on τ, such that

Δ[τ(a)] = στ[Δ(a)].   (68)
When recast as an equivalent statement in terms of the representations, Eq. (68) asserts the existence of constants c₁ > 0 and β > 0 such that

φ[Δ(a)] = c₁φ(a)^(1-β),   (69)

which again is a homogeneous equation of degree 1 - β (Luce, 1990a). The latter formulation is called the near miss to Weber's law because when φ is the usual extensive measure of sound intensity, β is approximately 0.07, which is "close to" β = 0, the case called Weber's law after the 19th-century German physiologist E. H. Weber. Note that Weber's law itself is special because it is dimensionally invariant, that is, in Eq. (68) στ = τ, but the general case of Eq. (68) is not. It is customary to rewrite Weber's law as

Δφ(a)/φ(a) = φ[Δ(a)]/φ(a) = α.   (70)

This ratio, called the Weber fraction, is dimensionless, and some have argued that, to the extent Weber's law is valid, the fraction α is a revealing parameter of the organism and, in particular, that it is meaningful to compare Weber fractions across modalities. This common practice has recently been questioned by measurement theorists, as we now elaborate.
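A quick numerical sketch of the near miss, Eq. (69); the constant c₁ is our own illustrative choice, while β = 0.07 is the pure-tone value reported in the text.

```python
# Near miss to Weber's law, Eq. (69): phi[Delta(a)] = c1 * phi(a)**(1 - beta).
c1, beta = 0.2, 0.07
delta = lambda phi_a: c1 * phi_a**(1.0 - beta)
frac = lambda phi_a: delta(phi_a) / phi_a   # Weber fraction at intensity phi(a)

# With beta > 0 the fraction declines slowly with intensity (the near miss):
assert frac(10.0) > frac(100.0) > frac(1000.0)

# With beta = 0 it is the constant alpha of Weber's law proper, Eq. (70):
weber_delta = lambda phi_a: 0.2 * phi_a
assert abs(weber_delta(10.0) / 10.0 - weber_delta(1000.0) / 1000.0) < 1e-12
```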
2. Narens-Mausfeld Equivalence Principle

Narens (1994) and Narens and Mausfeld (1992) have argued that one must be careful in interpreting the constants in laws like Eqs. (69) and (70). They note that the purely psychological assertion about discriminability has been cast in terms of one particular formulation of the qualitative physical structure, whereas there are an infinity of concatenation operations all of which are equally good in the following sense. Two qualitative formulations ⟨A, ≿, ∘⟩ and ⟨A, ≿, ∗⟩ are equivalent if each can be defined in terms of the other. This, of course, means that they share a common set of automorphisms: If τ is an automorphism of one, then it is of the other; that is, both τ(a ∘ b) = τ(a) ∘ τ(b) and τ(a ∗ b) = τ(a) ∗ τ(b) hold. Indeed, Narens (1994) has shown that if the former has the ratio scale representation φ, then the latter must have one that is a power function of φ. Thus, if the former structure is replaced by the latter, then Eq. (70) is transformed into

φ^γ[Δ∗(a)]/φ^γ(a) = (1 + α)^γ - 1,   (71)

where γ is chosen so φ^γ is additive over ∗. Thus, the fact that Weber's law holds is independent of which physical primitives are used to describe the domain, and so within one modality one can compare individuals as to their discriminative power. Across modalities, no such comparison makes sense because the constant (1 + α)^γ - 1 is not invariant with the choice of the concatenation operation, which alters the numerical value of γ. If one reformulates the law in terms of T(a) = a ∘ Δ(a), Weber's law becomes

φ[T(a)]/φ(a) = 1 + α.   (72)

Note that this formulation does not explicitly invoke a concatenation operation, except that choosing φ rather than φ^γ does, and so the same strictures of interpretation of the ratios remain.³³
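The transformation behind Eq. (71) can be checked numerically: if φ satisfies Weber's law with fraction α, the alternative representation ψ = φ^γ yields the constant (1 + α)^γ - 1. The values of α, γ, and φ(a) below are illustrative.

```python
# Check of the transformation behind Eq. (71). Values are illustrative.
alpha, gamma = 0.25, 2.0
phi_a = 7.3
phi_T = (1 + alpha) * phi_a          # Weber's law: phi[T(a)] = (1 + alpha) phi(a)

# Pass to the power representation psi = phi**gamma:
psi_a, psi_T = phi_a**gamma, phi_T**gamma
transformed_constant = (psi_T - psi_a) / psi_a
assert abs(transformed_constant - ((1 + alpha)**gamma - 1)) < 1e-12
```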
Carrying out a similar restatement of the near miss, Eq. (69) yields

φ^γ[T(a)]/φ^γ(a) = (1 + α[φ^γ(a)]^(-β/γ))^γ.   (73)

Here the choice of a concatenation operation clearly affects what one says about the "near-miss" exponent because the value β/γ can be anything. The principle being invoked is that psychologically significant propositions can depend on the physical stimuli involved, but they should not depend on the specific way we have chosen to formulate the physical situation.

33. This remark stands in sharp contrast to Narens' (1994) claim that α + 1 is meaningful under circumstances when α is not.
We should be able to replace one description of the physics by an equivalent one without disturbing a psychologically significant proposition. This principle is being subjected to harsh criticism,
the most completely formulated of which came from Dzhafarov (1995) who argued that its wholesale invocation will prove far too restrictive not only in psychology but in physics as well. It simply may
be impossible to state psychological laws without reference to a specific formulation of the physics, as appears likely to be the case in the next example. 3. Color Matching A far more complex and
interesting situation arises in color vision. The physical description of an aperture color is simply the intensity distribution over the wave lengths of the visible spectrum. A remarkable empirical
conclusion is that there are far fewer color percepts than there are intensity distributions: The latter form an infinite dimensional space that, according to much psychological data and theory,
human vision collapses into a much lower dimensional one--under some circumstances to three dimensions. 34 The experimental technique used to support this hypothesis is called metameric matching in
which a circular display is divided into two half-fields, each with a different intensity distribution. When a subject reports no perceived difference whatsoever in the two distributions, which may
be strikingly different physically, they are said to match. One possible physical description of the stimuli is based on two easily realized operations. Suppose a and b denote two intensity
distributions over wave length. Then a ⊕ b denotes their sum, which can be achieved by directing two projectors corresponding to a and b on the same aperture. For any real r > 0, r·a denotes the distribution obtained from a by increasing every amplitude by the same factor r, which can be realized by changing the distance of the projector from the aperture. In terms of this physical structure and the psychological matching relation, denoted ~, Krantz (1975a, b; Suppes et al., 1989, chap. 15) has formulated axiomatically testable properties of ~, ⊕, and ·, and of their interactions that, if satisfied, result in a three-dimensional vector representation of these matches. Empirical data provide partial, but not full, support for these so-called Grassman laws. The dimension of the
representation is an invariant, but there are infinitely many representations into the vector space of that dimension. A substantial portion of the literature attempts to single out one or another as
having special physiological or psychological significance. These issues are described in considerable detail in chapter 15 of Suppes et al. (1989), but as yet they are not fully resolved. 34 It is worth noting that sounds also are infinite dimensional, but no such reduction to a finite dimensional perceptual space has been discovered.
1 The Representational Measurement Approach to Problems
To our knowledge no attempt has been made to analyze these results from the perspective of the Narens-Mausfeld principle. It is unclear to us what freedom exists in providing alternative physical
formulations in this case.
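The linear-algebra skeleton of metameric matching can be sketched in a few lines. In this toy sketch the 3×N matrix M is a random stand-in for unspecified receptor sensitivities, and the "spectra" are random vectors rather than colorimetric data:

```python
import numpy as np

rng = np.random.default_rng(0)

# A fixed linear map M from an N-point sampled spectrum to a 3-vector stands
# in for the visual system's collapse to three dimensions. M is random here:
# a pure assumption of this sketch, not real receptor sensitivities.
N = 40
M = rng.random((3, N))

def match(a, b):
    """Metameric match: identical 3-dimensional images under M."""
    return np.allclose(M @ a, M @ b)

# Two physically different "spectra" with identical projections:
a = rng.random(N)
_, _, Vt = np.linalg.svd(M)      # rows 3..N-1 of Vt span the null space of M
b = a + 0.5 * Vt[-1]             # differs from a, yet M @ a == M @ b

assert not np.allclose(a, b) and match(a, b)

# Grassman-style closure of the match under the two physical operations:
c = rng.random(N)                # an arbitrary third spectrum
r = 2.7                          # an arbitrary positive scalar
assert match(a + c, b + c)       # additivity: a (+) c matches b (+) c
assert match(r * a, r * b)       # scalar invariance: r.a matches r.b
```

Real metamers also respect nonnegativity of intensity distributions, which this sketch ignores.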
C. Psychological Equivalence

1. Matching across Modalities

As was discussed in section IV.B.1, psychologists often ask subjects to characterize stimuli that are equivalent on some subjective
dimension even though they are perceptually very distinct. Perhaps the simplest cases are the construction of equal-X curves, where X can be any suitable attribute: brightness, loudness,
aversiveness, and so on. Beginning in the 1950s, S. S. Stevens (1975) introduced three new methods that went considerably beyond matching within an attribute: magnitude estimation, magnitude
production, and cross-modal matching. Here two distinct attributes--a sensory attribute and a numerical attribute in the first two and two sensory attributes in the third--are compared and a "match"
is established by the subject. The main instruction to subjects is to preserve subjective ratios. Therefore if M denotes the matching relation and aMs and bMt, then the instruction is that stimuli a
and b from modality A should stand in the same (usually intensity) subjective ratio as do s and t from modality S. In developing a theory for such matching relations, the heart of the problem is to
formulate clearly what it means "to preserve subjective ratios." In addition, of course, one also faces the issue of how to deal with response variability, which is considerable in these methods, but
we ignore that here. Basically, there are three measurement-theoretic attempts to provide a theory of subjective ratios. The first, due to Krantz (1972) and Shepard (1978, 1981), explicitly
introduced as a primitive concept the notion of a ratio, formulated plausible axioms, and showed that in terms of standard ratio scale representations of the physical attributes, φ_A and φ_S, the following is true for some unspecified monotonic function F and constant β > 0:

aMs if and only if φ_A(a) = F[φ_S(s)^β]. (75)
Although the power function character is consistent with empirical observations, the existence of the unknown function F pretty much obviates that relationship. A second attempt, due to Luce (1990a,
and presented as an improved formulation of Luce, 1959b), stated that ratios are captured by translations. In particular, he defined a psychological matching law M to be translation consistent (with the physical domains A and S) if for each translation τ of the
domain A there exists a corresponding translation σ_τ of the domain S such that for all a ∈ A and s ∈ S,

aMs if and only if τ(a)Mσ_τ(s).

From this it follows that if φ_A and φ_S are ratio scale representations of the two physical domains, then there are constants α > 0 and β > 0 such that aMs is equivalent to

φ_A(a) = αφ_S(s)^β. (76)
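Translation consistency and its power-law consequence can be checked numerically. A sketch under stated assumptions: each stimulus is identified with its ratio-scale value, and α = 2.0, β = 0.6 are illustrative constants, not estimates from the text:

```python
# Identify each stimulus with its ratio-scale value phi_A(a) (resp. phi_S(s)).
# alpha and beta are illustrative constants.
alpha, beta = 2.0, 0.6

def M(a, s, tol=1e-9):
    """The matching relation of Eq. (76): phi_A(a) = alpha * phi_S(s)**beta."""
    return abs(a - alpha * s ** beta) < tol

def consistent(a, s, t):
    """Translation by t on A is mirrored by translation by t**(1/beta) on S."""
    sigma_t = t ** (1 / beta)
    return M(a, s) == M(t * a, sigma_t * s)

s = 3.0
a = alpha * s ** beta                  # choose a so that a M s holds
assert M(a, s)
assert all(consistent(a, s, t) for t in (0.5, 1.7, 9.0))
assert not M(a + 1.0, s)               # a non-matching pair...
assert all(consistent(a + 1.0, s, t) for t in (0.5, 1.7, 9.0))  # ...stays so
```

The choice σ_τ(s) = t^(1/β)·s is what makes the power law translation consistent; any other exponent would break the equivalence.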
Observe that Eqs. (68) and (69) are special cases of (75) and (76). The third attempt, due to Narens (1996), is far deeper and more complex than either of the previous attempts. He carefully
formulated a plausible model of the internal representation of the stimuli showing how the subject is (in magnitude estimation) constructing numerals to produce responses. It is too complex to
describe briefly, but any serious student of these methods should study it carefully.

2. Ratios and Differences

Much of the modeling shown in section II was based on functions of differences of
subjective sensory scales. Similarly, methods such as bisection and fractionation more generally seem to rest on subjects evaluating differences. By contrast, the discussion of cross-modal matching
(and of magnitude estimation and production) emphasizes the preservation of ratios. Torgerson (1961) first questioned whether subjects really have, for most dimensions, independent operations
corresponding to differences and to ratios, or whether there is a single operation with two different response rules depending on the instructions given. Michael Birnbaum, our editor, has vigorously
pursued this matter. The key observation is that if there really are two operations, response data requiring ratio judgments cannot be monotonically related to those requiring difference judgments.
For example, 3 - 2 < 13 - 10 but 1.5 = 3/2 > 13/10 = 1.3. On the other hand, if ratio judgments are found to covary with difference judgments in a monotonic fashion, a reasonable conclusion is that
both types of judgments are based on a single underlying operation. A series of studies in a variety of domains ranging from physical manipulable attributes (such as weight and loudness) to highly
subjective ones (such as job prestige) has been interpreted as showing no evidence of nonmonotonicity and to provide support for the belief that the basic operation is really one of differences. The
work is nicely summarized by Hardin and Birnbaum (1990), where one finds copious references to earlier work. Hardin and Birnbaum conclude that the data support a single operation that involves
subtracting values of a subjective real mapping s, and that
depending on the task required of the subject, different response functions are employed for the two judgments, namely,

Response = JD[s(a) - s(b)] for difference judgments and
Response = JR[s(a) - s(b)] for ratio judgments. (77)

Moreover, the evidence suggests that approximately JR(x) = exp JD(x). They do point out that, for a few special modalities, ratios and
differences can be distinct. This is true of judgments of length: most people seem to understand reasonably clearly the difference between saying two height ratios are equal and that two differences
in length are equal. The empirical matter of deciding if ratio and difference judgments are or are not monotonically related is not at all an easy one. One is confronted by a data figure such as that
reproduced in Figure 13 and told that it represents a single monotonic function. But are the deviations from a smooth monotone curve due to response error, or "noise," or are they small but
systematic indications of a failure of monotonicity? It is difficult to be sure in average data such as these. A careful analysis of the data from individual subjects might be more convincing,
however; and Birnbaum and Elmasian (1977) carried out such an analysis, concluding that a single operation does give a good account of the data.
FIGURE 13 Geometric mean estimates of ratio judgments versus mean difference judgments of the same stimulus pairs of occupations. From Figure 1 of "Malleability of 'Ratio' Judgments of Occupational Prestige," by C. Hardin and M. H. Birnbaum, 1990, American Journal of Psychology, 103, p. 6. Copyright 1990 by the Board of Trustees of the University of Illinois. Used with the permission of the University of Illinois Press.
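One way to probe the monotonicity question quantitatively is to fit the best monotone (isotonic) curve to the paired judgments and examine the residuals. A sketch using the pool-adjacent-violators algorithm on simulated data (the data here are synthetic, generated from a single-operation model; they are not Hardin and Birnbaum's):

```python
import math
import random

def isotonic_fit(y):
    """Pool-adjacent-violators: least-squares nondecreasing fit to y."""
    blocks = []                        # each block is (sum, count)
    for v in y:
        s, n = v, 1
        while blocks and blocks[-1][0] / blocks[-1][1] > s / n:
            ps, pn = blocks.pop()      # merge backward while order is violated
            s, n = s + ps, n + pn
        blocks.append((s, n))
    fit = []
    for s, n in blocks:
        fit.extend([s / n] * n)
    return fit

random.seed(1)
# Synthetic single-operation data: "ratio" responses are a monotone
# (exponential) function of "difference" responses, plus response noise.
diff = sorted(random.uniform(-1.0, 1.0) for _ in range(50))
ratio = [math.exp(d) + random.gauss(0, 0.05) for d in diff]

fit = isotonic_fit(ratio)
rmse = (sum((r - f) ** 2 for r, f in zip(ratio, fit)) / len(ratio)) ** 0.5
print(f"isotonic RMSE = {rmse:.3f}")
```

Systematic nonmonotonicity would show up as residuals larger than the response noise; distinguishing the two is exactly the difficulty the text describes for averaged data.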
Assuming that the issue of monotonicity has been settled, there remains the question whether the underlying operation is one of differences or ratios. For, as is well known, we can replace the right-hand terms in Eq. (77) by corresponding expressions involving ratios rather than differences, namely

J'D[s'(a)/s'(b)] and J'R[s'(a)/s'(b)],

where s'(a) = exp[s(a)], J'D(x) = JD[ln(x)], and J'R(x) = JR[ln(x)].
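Two consequences of the subtractive model are easy to check numerically: ratio and difference judgments must be monotonically related, and the very same responses are reproduced by a ratio model on the rescaled s' = exp s. A sketch under stated assumptions (taking JD as the identity and JR as exp, with random scale values; both choices are illustrations, not claims about data):

```python
import math
import random

random.seed(2)

# Illustrative single-operation model: one subjective scale s, two response
# rules. Taking J_D as the identity and J_R as exp is an assumption of this
# sketch; the text only requires J_R(x) to be approximately exp(J_D(x)).
s = {name: random.uniform(1.0, 5.0) for name in "abcdefgh"}

pairs = [(x, y) for x in s for y in s if x != y]
D = {p: s[p[0]] - s[p[1]] for p in pairs}             # "difference" response
R = {p: math.exp(s[p[0]] - s[p[1]]) for p in pairs}   # "ratio" response

# 1. The two kinds of judgments are monotonically related:
order = sorted(pairs, key=D.get)
assert all(R[p] <= R[q] for p, q in zip(order, order[1:]))

# 2. Scale indeterminacy: exactly the same ratio responses arise from a
#    genuine ratio model on the transformed scale s'(x) = exp(s(x)).
s_prime = {x: math.exp(v) for x, v in s.items()}
assert all(abs(R[p] - s_prime[p[0]] / s_prime[p[1]]) < 1e-9 for p in pairs)
```

If two genuinely distinct operations were in play, the monotonicity assertion would fail, as in the 3 - 2 < 13 - 10 versus 3/2 > 13/10 example above.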
It turns out, however, that in an appropriate four-stimulus task the operation can be identified. For example, Hagerty and Birnbaum (1978) asked subjects to judge (i) "ratios of ratios," (ii) "ratios
of differences," (iii) "differences of ratios," and (iv) "differences of differences." They found that the observed judgments for conditions (i), (iii), and (iv) could be explained in terms of a
model involving a single scale s, with all comparisons being based on differences of the form [s(a) - s(b)] - [s(c) - s(d)]. On the other hand, condition (ii) was accounted for by a model based on subjective ratios of differences of scale values:

[s(a) - s(b)] / [s(c) - s(d)].

The conclusion is that the scale s is consistent with the subtraction model of Eq. (77) applied to ratio and difference judgments of stimulus pairs. Thus, although pairs of
stimuli seem to be compared by computing differences, subjects can and do compute ratios, particularly when those ratios involve differences of scale values. This latter observation is consistent
with the fact, mentioned earlier, that people are well aware of the distinction between ratios of lengths and differences of those same lengths.

VIII. CONCLUDING REMARKS

Our general knowledge about the conditions under which numerical representations can arise from qualitative data--representational measurement--has grown appreciably during the past 40 years. Such measurement theory has so far found its most elaborate applications in the areas of psychophysics and individual decision making. This chapter attempted both to convey some of our new theoretical understanding and to provide, albeit sketchily, examples of how it has been applied. Of course, much of the detail that is actually needed to work out such applications has been omitted, but it is
available in the references we have provided. The chapter first exposited the very successful probability models for simple binary experiments in which subjects exhibit their ability to detect
and to discriminate signals that are barely detectable or discriminable. These models and experiments focus on what seem the simplest possible questions, and yet complexity arises because of two
subject-controlled tradeoffs: that between errors of commission and errors of omission and that between overall error rate and response times. We know a lot about psychometric functions, ROC curves,
and speed-accuracy trade-offs, although we continue to be plagued by trial-by-trial sequential effects that make estimating probabilities and distributions very problematic. Generalizing the
probability models to more complex situations--for example, general choice, categorization, and absolute identification--has been a major preoccupation beginning in the 1980s, and certainly the advent
of ample computer power has made possible rather elaborate calculations. Still, we are always battling the tendency for the number of free parameters to outstrip the complexity of the data. The
second major approach, which focused more on structure than simple order, involved algebraic models that draw in various ways on Hölder's theorem. It shows when an order and operation have an
additive representation, and it was used in several ways to construct numerical representations. The line of development began historically with empirical operations that are associative and
monotonic, moved on to additive conjoint structures in which an operation induced on one component captures the tradeoff between components, and most recently has been extended to the work on
homogeneous, finitely unique structures. The latter, which lead to a wide variety of nonadditive representations, are studied by showing that the translations (automorphisms with no fixed points) meet the conditions of Hölder's theorem. In this representation the translations appear as multiplication by positive constants. Further generalizations to conjoint structures with the empirical (not the
induced) operations on components and to structures with singular points make possible the treatment of fairly complex problems in individual decision making. The most extensive applications of these
results so far have been to generalized theories of subjective expected utility. These new results have not yet been applied in psychophysics except for relations among groups of translations to
study various matching experiments. Such models lead to homogeneous equations of degree different from 1. The apparent similarity of the probability and algebraic models in which both kinds of
representations are invariant under either ratio or interval scale transformations is misleading. For the probability models in the ordinal situation this restriction does not reflect in any way the
automorphism group of the underlying structure, which after all is ordinal, but rather certain arbitrary conventions about the representation of distributions. In particular, the data are, in
principle, transformed so that the error distributions are Gaussian, in which case only affine transformations retain that
parametric form. This last comment is not meant to denigrate what can be done with the probability models, which as we have seen is considerable, especially in binary situations (see section II). As
we have stressed, the field to date has failed to achieve a true melding of randomness with structure. This failure makes empirical testing difficult because we usually are interested in moderately
structured situations and invariably our data are somewhat noisy. Exaggerating slightly, we can handle randomness in the ordinal situation--witness sections II and III--and we know a lot about structure in the ratio and interval scale cases provided we ignore the fact that the data are always noisy--witness sections IV through VII, but we cannot treat both together very well. One result of
this bifurcation is notable differences in how we test the two kinds of models. Those formulating randomness explicitly are ideally suited to the response inconsistencies that we observe. But because
of their lack of focus on internal structure, they can be evaluated only globally in terms of overall goodness of fit. The algebraic models suffer from having no built-in means of accommodating
randomness, but they have the advantage that various individual structural properties--monotonicity, transitivity, event commutativity, and so on--can be studied in some isolation. This allows us
to focus rather clearly on the failings of a model, leading to modified theories. One goal of future work must be to meld the two approaches.
Acknowledgments

This work was supported in part by National Science Foundation grants SBR-9308959 and SBR-9540107 to the University of California at Irvine. We thank Michael Birnbaum for his helpful
comments and criticisms.
References

Aczél, J. (1966). Lectures on functional equations and their applications. New York: Academic Press. Aczél, J. (1987). A short course on functional equations. Dordrecht: D. Reidel. Allais, M. (1953). Le comportement de l'homme rationnel devant le risque: Critique des postulats et axiomes de l'école américaine. Econometrica, 21, 503-546. Allais, M., & Hagen, O. (Eds.). (1979). Expected utility hypothesis and the Allais' paradox. Dordrecht: Reidel. Alper, T. M. (1987). A classification of all order-preserving homeomorphism groups of the reals that satisfy finite uniqueness. Journal
of Mathematical Psychology, 31, 135-154. Anderson, N. H. (1982). Methods of information integration theory. New York: Academic Press. Anderson, N. H. (Ed.) (1991a, b, c). Contributions to information
integration theory (Vol. 1: Cognition; Vol. 2: Social; Vol. 3: Developmental). Hillsdale, NJ: Erlbaum. Ashby, F. G. (1992a). Multidimensional models of categorization. In F. G. Ashby (Ed.),
Multidimensional models of perception and cognition (pp. 449-483). Hillsdale, NJ: Erlbaum. Ashby, F. G. (1992b). Multivariate probability distributions. In F. G. Ashby (Ed.). Multidimensional Models
of Perception and Cognition (pp. 2-34). Hillsdale, NJ: Erlbaum.
Ashby, F. G., & Perrin, N. A. (1988). Toward a unified theory of similarity and recognition. Psychological Review, 95, 124-150. Ashby, F. G., & Townsend, J. T. (1986). Varieties of perceptual
independence. Psychological Review, 93, 154-179. Berg, B. G. (1989). Analysis of weights in multiple observation tasks. Journal of the Acoustical Society of America, 86, 1743-1746. Berg, B. G.
(1990). Observer efficiency and weights in a multiple observation task. Journal of the Acoustical Society of America, 88, 149-158. Berg, B. G., & Green, D. M. (1990). Spectral weights in profile
listening. Journal of the Acoustical Society of America, 88, 758-766. Berliner, J. E., Durlach, N. I., & Braida, L. D. (1977). Intensity perception. VII. Further data on roving-level discrimination
and the resolution and bias edge effects. Journal of the Acoustical Society of America, 61, 1577-1585. Birnbaum, M. H. (1974). The nonadditivity of personality impressions [monograph]. Journal of
Experimental Psychology, 102, 543-561. Birnbaum, M. H. (1992). Violations of monotonicity and contextual effects in choice-based certainty equivalents. Psychological Science, 3, 310-314. Birnbaum, M.
H., Coffey, G., Mellers, B. A., & Weiss, R. (1992). Utility measurement: Configural-weight theory and the judge's point of view. Journal of Experimental Psychology: Human Perception and Performance,
18, 331-346. Birnbaum, M. H., & Elmasian, R. (1977). Loudness ratios and differences involve the same psychophysical operation. Perception & Psychophysics, 22, 383-391. Birnbaum, M. H., Parducci, A.,
& Gifford, R. K. (1971). Contextual effects in information integration. Journal of Experimental Psychology, 88, 158-170. Birnbaum, M. H., & Stegner, S. E. (1979). Source credibility: Bias, expertise,
and the judge's point of view. Journal of Personality and Social Psychology, 37, 48-74. Block, H. D., & Marschak, J. (1960). Random orderings and stochastic theories of responses. In I. Olkin, S.
Ghurye, W. Hoeffding, W. Madow, & H. Mann (Eds.), Contributions to probability and statistics (pp. 97-132). Stanford, CA: Stanford University Press. Böckenholt, U. (1992). Multivariate models of preference and choice. In F. G. Ashby (Ed.), Multidimensional models of perception and cognition (pp. 89-114). Hillsdale, NJ: Erlbaum. Bostic, R., Herrnstein, R. J., & Luce, R. D. (1990). The effect
on the preference-reversal phenomenon of using choice indifferences. Journal of Economic Behavior and Organization, 13, 193-212. Braida, L. D., & Durlach, N. I. (1972). Intensity perception. II. Resolution in one-interval paradigms. Journal of the Acoustical Society of America, 51, 483-502. Busemeyer, J. R., & Townsend, J. T. (1993). Decision field theory: A dynamic-cognitive approach to decision making in an uncertain environment. Psychological Review, 100, 432-459. Carroll, J. D., & DeSoete, G. (1990). Fitting a quasi-Poisson case of the GSTUN (General Stochastic Tree UNfolding)
model and some extensions. In M. Schader & W. Gaul (Eds.), Knowledge, data and computer-assisted decisions (pp. 93-102). Berlin: Springer-Verlag. Chew, S. H., & Epstein, L. G. (1989). Axiomatic
rank-dependent means. Annals of Operations Research, 19, 299-309. Chew, S. H., Epstein, L. G., & Segal, U. (1991). Mixture symmetry and quadratic utility. Econometrica, 59, 139-163. Cho, Y., Luce, R.
D., & von Winterfeldt, D. (1994). Tests of assumptions about the joint receipt of gambles in rank- and sign-dependent utility theory. Journal of Experimental Psychology: Human Perception and
Performance, 20, 931-943.
Geoffrey Iverson and R. Duncan Luce
Cohen, M., & Narens, L. (1979). Fundamental unit structures: A theory of ratio scalability. Journal of Mathematical Psychology, 20, 193-232. Coombs, C. H. (1964). A theory of data. New York: John
Wiley & Sons. Coombs, C. H. (1975). Portfolio theory and the measurement of risk. In M. F. Kaplan & S. Schwartz (Eds.), Human judgment and decision processes (pp. 63-86). New York: Academic Press.
Davison, M., & McCarthy, D. (1988). The matching law: A research review. Hillsdale, NJ: Erlbaum. Decoene, S., Onghena, P., & Janssen, R. (1995). Representationalism under attack (book review). Journal of Mathematical Psychology, 39, 234-241. DeSoete, G., & Carroll, J. D. (1986). Probabilistic multidimensional choice models for representing paired comparisons data. In E. Diday, Y. Escoufier, L. Lebart, J. Pages, Y. Schektman, & R. Tommasone (Eds.), Data analysis and informatics (Vol. 4, pp. 485-497). Amsterdam: North-Holland. DeSoete, G., & Carroll, J. D. (1992). Probabilistic multidimensional models of pairwise choice data. In F. G. Ashby (Ed.), Multidimensional models of perception and cognition (pp. 61-88). Hillsdale, NJ: Erlbaum. Durlach, N. I., & Braida, L. D. (1969).
Intensity perception. I. Preliminary theory of intensity resolution. Journal of the Acoustical Society of America, 46, 372-383. Dzhafarov, E. N. (1995). Empirical meaningfulness, measurement-dependent constants, and dimensional analysis. In R. D. Luce, M. D'Zmura, D. D. Hoffman, G. Iverson, and K. Romney (Eds.), Geometric representations of perceptual phenomena: Papers in honor of Tarow Indow on his 70th birthday (pp. 113-134). Hillsdale, NJ: Erlbaum. Ellis, B. (1966). Basic concepts of measurement. London: Cambridge University Press. Ellsberg, D. (1961). Risk,
ambiguity, and the Savage axioms. Quarterly Journal of Economics, 75, 643-669. Falmagne, J.-C. (1976). Random conjoint measurement and loudness summation. Psychological Review, 83, 65-79. Falmagne,
J.-C. (1978). A representation theorem for finite random scale systems. Journal of Mathematical Psychology, 18, 52-72. Falmagne, J.-C. (1980). A probabilistic theory of extensive measurement.
Philosophy of Science, 47, 277-296. Falmagne, J.-C. (1985). Elements of psychophysical theory. New York: Oxford University Press. Falmagne, J.-C., & Iverson, G. (1979). Conjoint Weber laws and additivity. Journal of Mathematical Psychology, 86, 25-43. Falmagne, J.-C., Iverson, G., & Marcovici, S. (1979). Binaural "loudness" summation: Probabilistic theory and data. Psychological Review, 86, 25-43. Falmagne, J.-C., & Narens, L. (1983). Scales and meaningfulness of qualitative laws. Synthese, 55, 287-325. Fechner, G. T. (1860/1966). Elemente der Psychophysik. Leipzig: Breitkopf und Härtel. Translation of Vol. 1 by H. E. Adler. E. G. Boring & D. H. Howes (Eds.), Elements of psychophysics (Vol. 1). New York: Holt, Rinehart & Winston. Fishburn, P. C. (1970). Utility theory for
decision making. New York: John Wiley & Sons. Fishburn, P. C. (1982). The foundations of expected utility. Dordrecht: Reidel. Fishburn, P. C. (1985). Interval orders and interval graphs: A study of partially ordered sets. New York: John Wiley & Sons. Fishburn, P. C. (1988). Nonlinear preference and utility theory. Baltimore, MD: Johns Hopkins Press. Galambos, J. (1978/1987). The asymptotic theory of extreme order statistics. New York: John Wiley & Sons; 2nd ed., Malabar, FL: Robert E. Krieger. Garner, W. R. (1974). The processing of information and structure. New York: John Wiley & Sons.
Gescheider, G. A. (1976). Psychophysics: Method and theory. Hillsdale, NJ: Erlbaum. Gescheider, G. A., Wright, J. H., & Polak, J. W. (1971). Detection of vibrotactile signals differing in probability
of occurrence. The Journal of Psychology, 78, 253-260. Gigerenzer, G., & Strube, G. (1983). Are there limits to binaural additivity of loudness? Journal of Experimental Psychology: Human Perception and Performance, 9, 126-136. Gravetter, F., & Lockhead, G. R. (1973). Criterial range as a frame of reference for stimulus judgments. Psychological Review, 80, 203-216. Green, D. M., & Luce, R. D.
(1973). Speed-accuracy trade off in auditory detection. In S. Kornblum (Ed.), Attention and performance (Vol. IV, pp. 547-569). New York: Academic Press. Green, D. M., & Swets, J. A. (1966/1974/
1988). Signal detection theory and psychophysics. New York: John Wiley & Sons. Reprinted, Huntington, NY: Robert E. Krieger. Reprinted, Palo Alto, CA: Peninsula Press. Hagerty, M., & Birnbaum, M. H.
(1978). Nonparametric tests of ratio vs. subtractive theories of stimulus comparison. Perception & Psychophysics, 24, 121-129. Hardin, C., & Birnbaum, M. H. (1990). Malleability of "ratio" judgments of occupational prestige. American Journal of Psychology, 103, 1-20. Hölder, O. (1901). Die Axiome der Quantität und die Lehre vom Mass. Berichte über die Verhandlungen der Königlich Sächsischen Gesellschaft der Wissenschaften zu Leipzig, Mathematisch-Physische Klasse, 53, 1-64. Translation of Part I by J. Michell & C. Ernst (1996). The axioms of quantity and the theory of measurement. Journal of Mathematical Psychology, 40, 235-252.
Holland, M. K., & Lockhead, G. R. (1968). Sequential effects in absolute judgments of loudness. Perception & Psychophysics, 3, 409-414. Iverson, G. J., & Bamber, D. (1997). The generalized area theorem in signal detection theory. In A. A. J. Marley (Ed.), Choice, decision and measurement: Papers in honor of R. Duncan Luce's 70th birthday (pp. 301-318). Mahwah, NJ: Erlbaum. Iverson, G. J., & Falmagne, J.-C. (1985). Statistical issues in measurement. Mathematical Social Sciences, 14, 131-153. Iverson, G. J., & Sheu, C.-F. (1992). Characterizing random variables in the context of signal detection theory. Mathematical Social Sciences, 23, 151-174. Jesteadt, W., Wier, C. C., & Green, D. M. (1977). Intensity discrimination as a function of frequency and sensation level. Journal of the Acoustical Society of America, 61, 169-177. Kadlec, H., & Townsend, J. T. (1992). Signal detection analyses of dimensional interactions. In Ashby, F. G. (Ed.), Multidimensional models of perception and cognition
(pp. 181-227). Hillsdale, NJ: Erlbaum. Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47, 263-291. Keren, G., & Baggen, S. (1981). Recognition
models of alphanumeric characters. Perception & Psychophysics, 29, 234-246. Klein, F. (1872/1893). A comparative review of recent researches in geometry. Bulletin of the New York Mathematical
Society, 2, 215-249. (The 1872 Erlangen address was transcribed and translated into English and published in 1893.) Krantz, D. H. (1972). A theory of magnitude estimation and cross-modality matching.
Journal of Mathematical Psychology, 9, 168-199. Krantz, D. H. (1975a, b). Color measurement and color theory. I. Representation theorem for Grassman structures. II. Opponent-colors theory. Journal of Mathematical Psychology, 12, 283-303, 304-327. Krantz, D. H., Luce, R. D., Suppes, P., & Tversky, A. (1971). Foundations of measurement (Vol. 1). New York: Academic Press. Krumhansl, C. L. (1978).
Concerning the applicability of geometric models to similarity data:
The interrelationship between similarity and spatial density. Psychological Review, 85, 445-463. Kuczma, M. (1968). Functional equations in a single variable. Monografie Mat. 46. Warsaw: Polish Scientific. Lacouture, Y., & Marley, A. A. J. (1995). A mapping model of bow effects in absolute identification. Journal of Mathematical Psychology, 39, 383-395. Levelt, W. J. M., Riemersma, J. B., & Bunt, A. A. (1972). Binaural additivity in loudness. British Journal of Mathematical and Statistical Psychology, 25, 1-68. Levine, M. V. (1971). Transformations that render curves parallel. Journal of Mathematical Psychology, 7, 410-441. Levine, M. V. (1972). Transforming curves into curves with the same shape. Journal of Mathematical Psychology, 9, 1-16.
Lichtenstein, S., & Slovic, P. (1971). Reversals of preference between bids and choices in gambling decisions, Journal of Experimental Psychology, 89, 46-55. Link, S. W. (1992). The wave theory of
difference and similarity. Hillsdale, NJ: Erlbaum. Lockhead, G. R. (1966). Effects of dimensional redundancy on visual discrimination. Journal of Experimental Psychology, 72, 94-104. Lowen, S. B., &
Teich, M. C. (1992). Auditory-nerve action potentials form a nonrenewal point process over short as well as long time scales. Journal of the Acoustical Society of America, 92, 803-806. Loewenstein,
G., & Elster, J. (1992). Choice over time. New York: Russell Sage Foundation. Luce, R. D. (1959a). Individual choice behavior: A theoretical analysis. New York: John Wiley & Sons. Luce, R. D.
(1959b). On the possible psychophysical laws. Psychological Review, 66, 81-95. Luce, R. D. (1963). Detection and recognition. In R. D. Luce, R. R. Bush, & E. Galanter (Eds.), Handbook of mathematical
psychology (Vol. 1, pp. 103-189). New York: John Wiley & Sons. Luce, R. D. (1977). A note on sums of power functions. Journal of Mathematical Psychology, 16, 91-93. Luce, R. D. (1986). Response
times. Their role in inferring elementary mental organization. New York: Oxford University Press. Luce, R. D. (1987). Measurement structures with Archimedean ordered translation groups. Order, 4,
391-415. Luce, R. D. (1990a). "On the possible psychophysical laws" revisited: Remarks on crossmodal matching. Psychological Review, 97, 66-77. Luce, R. D. (1990b). Rational versus plausible
accounting equivalences in preference judgments. Psychological Science, 1, 225-234. Reprinted (with minor modifications) in W. Edwards (Ed.), (1992), Utility theories: Measurements and applications
(pp. 187-206). Boston: Kluwer Academic. Luce, R. D. (1992a). Singular points in generalized concatenation structures that otherwise are homogeneous. Mathematical Social Sciences, 24, 79-103. Luce, R.
D. (1992b). Where does subjective-expected utility fail descriptively? Journal of Risk and Uncertainty, 5, 5-27. Luce, R. D. (1993). Sound and hearing. A conceptual introduction. Hillsdale, NJ:
Erlbaum. Luce, R. D., & Alper, T. M. (in preparation). Conditions equivalent to unit representations of ordered relational structures. Manuscript. Luce, R. D., & Edwards, W. (1958). The derivation of
subjective scales from just-noticeable differences. Psychological Review, 65, 227-237. Luce, R. D., & Fishburn, P. C. (1991). Rank- and sign-dependent linear utility models for finite first-order
gambles. Journal of Risk and Uncertainty, 4, 29-59.
Luce, R. D., & Fishburn, P. C. (1995). A note on deriving rank-dependent utility using additive joint receipts. Journal of Risk and Uncertainty, 11, 5-16. Luce, R. D., & Galanter, E. (1963).
Discrimination. In R. D. Luce, R. R. Bush, & E. Galanter (Eds.), Handbook of mathematical psychology (Vol. 1, pp. 191-243). New York: John Wiley & Sons. Luce, R. D., Green, D. M., & Weber, D. L.
(1976). Attention bands in absolute identification. Perception & Psychophysics, 20, 49-54. Luce, R. D., Krantz, D. H., Suppes, P., & Tversky, A. (1990). Foundations of Measurement (Vol. 3). San
Diego: Academic Press. Luce, R. D., Mellers, B. A., & Chang, S.-J. (1993). Is choice the correct primitive? On using certainty equivalents and reference levels to predict choices among gambles.
Journal of Risk and Uncertainty, 6, 115-143. Luce, R. D., & Narens, L. (1985). Classification of concatenation measurement structures according to scale type. Journal of Mathematical Psychology, 29,
1-72. Luce, R. D., & Nosofsky, R. M. (1984). Sensitivity and criterion effects in absolute identification. In S. Kornblum & J. Requin (Eds.), Preparatory states and processes (pp. 3-35). Hillsdale,
NJ: Erlbaum. Luce, R. D., Nosofsky, R. M., Green, D. M., & Smith, A. F. (1982). The bow and sequential effects in absolute identification. Perception & Psychophysics, 32, 397-408. Luce, R. D., &
Raiffa, H. (1957/1989). Games and Decisions. Introduction and Critical Survey. New York: John Wiley & Sons. Reprinted New York: Dover Publications. Luce, R. D., & Suppes, P. (1965). Preference,
utility, and subjective probability. In R. D. Luce, R. R. Bush, & E. Galanter (Eds.), Handbook of Mathematical Psychology (Vol. III, pp. 249-410). New York: Wiley. Luce, R. D., & von Winterfeldt, D.
(1994). What common ground exists for descriptive, prescriptive, and normative utility theories? Management Science, 40, 263-279. MacCrimmon, K. R., Stanbury, W. T., & Wehrung, D. A. (1980). Real
money lotteries: A study of ideal risk, context effects, and simple processes. In T. S. Wallsten (Ed.), Cognitive process in choice and decision behavior (pp. 155-177). Hillsdale, NJ: Erlbaum.
Macmillan, N. A., & Creelman, C. D. (1991). Detection theory: A user's guide. New York: Cambridge University Press. Maddox, W. T. (1992). Perceptual and Decisional Separability. In Ashby, F. G.
(Ed.), Multidimensional models of perception and cognition (pp. 147-180). Hillsdale, NJ: Erlbaum. Marley, A. A. J. (1990). A historical and contemporary perspective on random scale representations of
choice probabilities and reaction times in the context of Cohen and Falmagne's (1990, Journal of Mathematical Psychology, 34) results. Journal of Mathematical Psychology, 34, 81-87. Marley, A. A. J.,
& Cook, V. T. (1984). A fixed rehearsal capacity interpretation of limits on absolute identification performance. British Journal of Mathematical and Statistical Psychology, 37, 136-151. Marley, A.
A. J., & Cook, V. T. (1986). A limited capacity rehearsal model for psychophysical judgments applied to magnitude estimation. Journal of Mathematical Psychology, 30, 339-390. Marschak, J. (1960).
Binary-choice constraints and random utility indicators. In K. J. Arrow, S. Karlin, & P. Suppes (Eds.), Proceedings of the first Stanford symposium on mathematical methods in the social sciences,
1959 (pp. 312-329). Stanford, CA: Stanford University Press. McGill, W. J., & Goldberg, J. P. (1968). A study of the near-miss involving Weber's law and pure-tone intensity discrimination. Perception
& Psychophysics, 4, 105-109.
Geoffrey Iverson and R. Duncan Luce
Medin, D. L., & Schaffer, M. M. (1978). Context theory of classification learning. Psychological Review, 85, 207-238. Mellers, B. A., Chang, S., Birnbaum, M. H., & Ordóñez, L. D. (1992).
Preferences, prices, and ratings in risky decision making. Journal of Experimental Psychology: Human Perception and Performance, 18, 347-361. Mellers, B. A., Weiss, R., & Birnbaum, M. H. (1992).
Violations of dominance in pricing judgments. Journal of Risk and Uncertainty, 5, 73-90. Michell, J. (1986). Measurement scales and statistics: A clash of paradigms. Psychological Bulletin, 100,
398-407. Michell, J. (1990). An introduction to the logic of psychological measurement. Hillsdale, NJ: Erlbaum. Michell, J. (1995). Further thoughts on realism, representationalism, and the
foundations of measurement theory [A book review reply]. Journal of Mathematical Psychology, 39, 243-247. Miller, G. A. (1956). The magical number seven plus or minus two: Some limits on our capacity
for processing information. Psychological Review, 63, 81-97. Narens, L. (1981a). A general theory of ratio scalability with remarks about the measurement-theoretic concept of meaningfulness. Theory
and Decision, 13, 1-70. Narens, L. (1981b). On the scales of measurement. Journal of Mathematical Psychology, 24, 249-275. Narens, L. (1985). Abstract measurement theory. Cambridge, MA: MIT Press.
Narens, L. (1988). Meaningfulness and the Erlanger program of Felix Klein. Mathématiques, Informatique et Sciences Humaines, 101, 61-72. Narens, L. (1994). The measurement theory of dense threshold
structures. Journal of Mathematical Psychology, 38, 301-321. Narens, L. (1996). A theory of ratio magnitude estimation. Journal of Mathematical Psychology, 40, 109-129. Narens, L. (in preparation).
Theories of meaningfulness. Manuscript. Narens, L., & Mausfeld, R. (1992). On the relationship of the psychological and the physical in psychophysics. Psychological Review, 99, 467-479. Niederée, R.
(1992). What do numbers measure? A new approach to fundamental measurement. Mathematical Social Sciences, 24, 237-276. Niederée, R. (1994). There is more to measurement than just measurement:
Measurement theory, symmetry, and substantive theorizing. A discussion of basic issues in the theory of measurement. A review with special focus on Foundations of Measurement. Vol. 3: Representation,
Axiomatization, and Invariance by R. Duncan Luce, David H. Krantz, Patrick Suppes, and Amos Tversky. Journal of Mathematical Psychology, 38, 527-594. Noreen, D. L. (1981). Optimal decision rules
for some common psychophysical paradigms. SIAM-AMS Proceedings, 13, 237-279. Nosofsky, R. M. (1983). Information integration and the identification of stimulus noise and critical noise in absolute
judgment. Journal of Experimental Psychology: Human Perception and Performance, 9, 299-309. Nosofsky, R. M. (1984). Choice, similarity, and the context theory of classification. Journal of
Experimental Psychology: Learning, Memory and Cognition, 10, 104-114. Nosofsky, R. M. (1986). Attention, similarity, and the identification-categorization relationship. Journal of Experimental
Psychology: General, 115, 39-57. Nosofsky, R. M. (1987). Attention and learning processes in the identification and categorization of integral stimuli. Journal of Experimental Psychology: Learning,
Memory and Cognition, 13, 87-109. Nosofsky, R. M. (1989). Further test of an exemplar-similarity approach to relating identification and categorization. Perception & Psychophysics, 45, 279-290.
Nosofsky, R. M. (1991). Tests of an exemplar model for relating perceptual classification and
recognition memory. Journal of Experimental Psychology: Human Perception and Performance, 9, 299-309. Palacios, J. (1964). Dimensional Analysis. London: Macmillan & Co. Pelli, D. G. (1985).
Uncertainty explains many aspects of visual contrast detection and discrimination. Journal of the Optical Society of America A, 2, 1508-1531. Perrin, N. A. (1986). The GRT of preference: A new theory
of choice. Unpublished doctoral dissertation, Ohio State University. Perrin, N. A. (1992). Uniting identification, similarity and preference: General recognition theory. In F. G. Ashby (Ed.),
Multidimensional models of perception and cognition (pp. 123-146). Hillsdale, NJ: Erlbaum. Pfanzagl, J. (1971). Theory of measurement. Würzburg: Physica-Verlag. Plateau, J. A. F. (1872). Sur la
mesure des sensations physiques, et sur la loi qui lie l'intensité de ces sensations à l'intensité de la cause excitante. Bulletin de l'Académie Royale de Belgique, 33, 376-388. Pruzansky, S., Tversky,
A., & Carroll, J. D. (1982). Spatial versus tree representations of proximity data. Psychometrika, 47, 3-24. Quiggin, J. (1993). Generalized expected utility theory: The rank-dependent model.
Boston: Kluwer Academic Publishers. Regenwetter, M. (1996). Random utility representation of finite n-ary relations. Journal of Mathematical Psychology, 40, 219-234. Roberts, F. S. (1979).
Measurement theory. Reading, MA: Addison-Wesley. Savage, C. W., & Ehrlich, P. (Eds.). (1992). Philosophical and foundational issues in measurement theory. Hillsdale, NJ: Erlbaum. Savage, L. J. (1954).
The foundations of statistics. New York: John Wiley & Sons. Schepartz, B. (1980). Dimensional analysis in the biomedical sciences. Springfield, IL: Charles C. Thomas. Schoemaker, P. J. H. (1982). The
expected utility model: Its variants, purposes, evidence and limitations. Journal of Economic Literature, 20, 529-563. Schoemaker, P. J. H. (1990). Are risk-attitudes related across domain and
response modes? Management Science, 36, 1451-1463. Scott Blair, G. W. (1969). Elementary rheology. New York: Academic Press. Sedov, L. I. (1959). Similarity and dimensional methods in mechanics. New
York: Academic Press. (Translation from the Russian by M. Holt and M. Friedman) Shepard, R. N. (1957). Stimulus and response generalization: A stochastic model relating generalization to distance in
psychological space. Psychometrika, 22, 325-345. Shepard, R. N. (1978). On the status of 'direct' psychophysical measurement. In C. W. Savage (Ed.), Minnesota studies in the philosophy of science
(Vol. IX, pp. 441-490). Minneapolis: University of Minnesota Press. Shepard, R. N. (1981). Psychological relations and psychophysical scales: On the status of "direct" psychophysical measurement.
Journal of Mathematical Psychology, 24, 21-57. Shepard, R. N. (1987). Toward a universal law of generalization for psychological science. Science, 237, 1317-1323. Sjöberg, L. (1980). Similarity and
correlation. In E. D. Lanterman & H. Feger (Eds.), Similarity and choice (pp. 70-87). Bern: Huber. Slater, P. (1960). The analysis of personal preferences. British Journal of Statistical Psychology,
13, 119-135. Slovic, P., & Lichtenstein, S. (1968). Importance of variance preferences in gambling decisions. Journal of Experimental Psychology, 78, 646-654. Slovic, P., & Lichtenstein, S. (1983).
Preference reversals: A broader perspective. American Economic Review, 73, 596-605. Stevens, S. S. (1946). On the theory of scales of measurement. Science, 103, 677-680.
Stevens, S. S. (1951). Mathematics, measurement and psychophysics. In S. S. Stevens (Ed.), Handbook of experimental psychology (pp. 1-49). New York: John Wiley & Sons. Stevens, S. S. (1975).
Psychophysics. New York: John Wiley & Sons. Suppes, P., Krantz, D. H., Luce, R. D., & Tversky, A. (1989). Foundations of measurement (Vol. 2). San Diego, CA: Academic Press. Thompson, W. J., & Singh,
J. (1967). The use of limit theorems in paired comparison model building. Psychometrika, 32, 255-264. Thurstone, L. L. (1927a). A law of comparative judgment. Psychological Review, 34, 273-286.
Thurstone, L. L. (1927b). Psychophysical analysis. American Journal of Psychology, 38, 68-89. Thurstone, L. L. (1927c). Three psychophysical laws. Psychological Review, 34, 424-432. Torgerson, W. S.
(1961). Distances and ratios in psychological scaling. Acta Psychologica, 19, 201-205. Townsend, J. T., & Ashby, F. G. (1984). Measurement scales and statistics: The misconception misconceived.
Psychological Bulletin, 96, 394-401. Treisman, M. (1985). The magical number seven and some other features of category scaling: Properties of a model for absolute judgment. Journal of Mathematical
Psychology, 29, 175-230. Treisman, M., & Williams, T. C. (1984). A theory of criterion setting with an application to sequential dependencies. Psychological Review, 91, 68-111. Tucker, L. R. (1960).
Intra-individual and inter-individual multidimensionality. In H. Gulliksen & S. Messick (Eds.), Psychological scaling: Theory and applications. New York: Wiley. Tversky, A. (1972a). Choice by
elimination. Journal of Mathematical Psychology, 9, 341-367. Tversky, A. (1972b). Elimination by aspects: A theory of choice. Psychological Review, 79, 281-299. Tversky, A. (1977). Features of
similarity. Psychological Review, 84, 327-352. Tversky, A., & Kahneman, D. (1986). Rational Choice and the Framing of Decisions. Journal of Business, 59, S251-S278. Reprinted in R. M. Hogarth and M.
W. Reder (Eds.), (1986), Rational choice: The contrast between economics and psychology (pp. 67-94). Chicago: University of Chicago Press. Tversky, A., & Kahneman, D. (1992). Advances in prospect
theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty, 5, 297-323. Tversky, A., & Sattath, S. (1979). Preference trees. Psychological Review, 86, 542-573. Tversky, A.,
Slovic, P., & Kahneman, D. (1990). The causes of preference reversal. The American Economic Review, 80, 204-217. Ulehla, Z. J., Halpern, J., & Cerf, A. (1968). Integration of information in a visual
discrimination task. Perception & Psychophysics, 4, 1-4. van Acker, P. (1990). Transitivity revisited. Annals of Operations Research, 23, 1-25. von Winterfeldt, D., Chung, N.-K., Luce, R. D., & Cho,
Y. (1997). Tests of consequence monotonicity in decision making under uncertainty. Journal of Experimental Psychology: Learning, Memory, and Cognition, in press. Wakker, P. P. (1989). Additive
representations of preferences: A new foundation of decision analysis. Dordrecht: Kluwer Academic Publishers. Wakker, P. P., & Tversky, A. (1993). An axiomatization of cumulative prospect theory.
Journal of Risk and Uncertainty, 7, 147-176. Wandell, B., & Luce, R. D. (1978). Pooling peripheral information: Averages versus extreme values. Journal of Mathematical Psychology, 17, 220-235. Ward,
L. M., & Lockhead, G. R. (1970). Sequential effects and memory in category judgments. Journal of Experimental Psychology, 84, 27-34. Ward, L. M., & Lockhead, G. R. (1971). Response system processes
in absolute judgment. Perception & Psychophysics, 9, 73-78.
Weber, D. L., Green, D. M., & Luce, R. D. (1977). Effects of practice and distribution of auditory signals on absolute identification. Perception & Psychophysics, 22, 223-231. Wickelgren, W. A.
(1968). Unidimensional strength theory and component analysis of noise in absolute and comparative judgments. Journal of Mathematical Psychology, 5, 102-122. Yellott, J. I. (1977). The relationship
between Luce's choice axiom, Thurstone's theory of comparative judgment, and the double exponential distribution. Journal of Mathematical Psychology, 15, 109-144.
CHAPTER 2

Psychophysical Scaling

Lawrence E. Marks
Daniel Algom
I. INTRODUCTION

Consider the following three biblical scenes. In the aftermath of the fight with Goliath, a victorious David is respected more than Saul the king, and the celebrating women
sing: "Saul hath slain his thousands, and David his ten thousands" (I Samuel 18:7). Or take Amos 5:2, where the threat facing the people is phrased: "The city that went out by a thousand shall leave
an hundred, and that which went forth by an hundred shall have ten." Finally, examine the advice given to Moses to mitigate the burden of judging the people: that he should appoint "rulers of
thousands, and rulers of hundreds, and rulers of fifties, and rulers of tens," so that "every great matter they shall bring unto thee, but every small matter they shall judge" (Exodus 18:21-22).
Three points are noteworthy. First, all three verses deal with matters "psycho-physical": Feelings of reverence, magnitude of threat, and gravity of offenses are all projected onto objective continua
that can be described numerically. Second, the use of numbers to depict sensations appears in the Bible, as well as in other works of literature, both classical and modern. Most remarkable, however,
is the way that the respective sensations are mapped onto numbers. In each case, changes or increments in sensation are associated with a geometric series of numeric, physical values. If we take
Measurement, Judgment, and Decision Making. Copyright © 1998 by Academic Press. All rights of reproduction in any form reserved.
Lawrence E. Marks and Daniel Algom
these sensations to increase in equal steps, and exegesis makes such an assumption plausible, we have an arithmetic series of psychological values covarying with the respective geometric series of
physical values. This relation, so familiar now to students of psychophysics, defines a logarithmic function. One should not construe this foray into biblical psychophysics as a mere exercise in
the history of metaphoric allusion. In fact, the aforementioned texts may have played a direct role in establishing the science of psychophysics. For it was our last example that captured the
attention of Daniel Bernoulli, who gave the preceding interpretation when he derived his famous logarithmic function for utility some quarter millennium ago. And, according to his own testimony, the
founder of psychophysics, Gustav Fechner, was influenced in turn by Bernoulli's discourse in developing his own logarithmic law, the first explicit, quantitative, psychophysical statement relating
sensations to stimuli. Psychophysics is the branch of science that studies the relation between sensory responses and the antecedent physical stimuli. Born in the 19th century, psychophysics can
claim parentage in two important scientific traditions: the analysis of sensory-perceptual processes, brilliantly realized at the time in the work of H. L. F. von Helmholtz, and the mathematical
account of mental phenomena, associated then with the work of J. F. Herbart. A main theme is the quantification of sensory responses, or, more generally, the measurement of sensations. Although it
falls properly in the general domain of experimental and theoretical psychology, the very name psychophysics points to a corresponding set of issues in philosophy concerning the relation of the
mental and the physical. From its inception in the work of Fechner, attempts to measure sensation have been controversial, as psychophysicists have merged empirical operations with theoretical
frameworks in order to mold a discipline that is at once philosophically sound and scientifically satisfying. We start by reviewing issues and criticisms that marked the first program to measure
sensations. Because many of these topics remain pertinent today, this section provides perspectives to evaluate the virtues and vulnerabilities of the scaling methods and theories, both classical and
modern. The main body of the chapter reviews a wide array of relevant data and theories. The two notions of sensory distance and sensory magnitude serve as our main categories of classification. In
places, we allude to a distinction between what have been called "old psychophysics" and "new psychophysics," a distinction that in some ways remains useful. Classical, or old, psychophysics rests on
the conviction that scales of sensation comprise objective, scientific constructs. By this view, magnitude of sensation is a derived concept, based on certain theoretical assumptions augmented by
mathematical analysis, and hence is largely independent of any subjective feeling of magnitude. Empirically, one of the hallmarks of classical psychophysics is the explicit comparison of two stimuli. The resulting responses then serve to define a
unit for the unseen mental events, measurement of which entails marking off distances or differences along the psychological continuum. Consequently, the notion of sensory distance or difference has
been widely used as the basis for scaling sensation. By way of contrast, the new psychophysics tries to define sensation magnitude by quantifying a person's verbal responses. The approach is,
therefore, largely operational, and it frequently claims to assess sensations "directly." A popular contemporary technique within this tradition is magnitude estimation, which asks people to estimate
numerically the strength of the sensation aroused by a given stimulus (usually in relation to the strength of other stimuli). Magnitude estimation, category rating, and related methods have greatly
diversified the stock of modern psychophysics, helping to create a large database for students of decision processes and judgments as well as students of sensory processes. To be sure, the terms old
and new psychophysics are misnomers, because many new approaches rest on basic tenets of the old. So are the terms indirect and direct measurement, often used to characterize the two classes of
scaling. Regardless of the particular theoretical stance, the measurement of sensation always entails assumptions or definitions. As a result, it is not wholly appropriate to characterize methods by
their directness: If we take sensation magnitude to be an intervening variable, a common view (cf. Gescheider, 1988), then clearly sensation can be measured only indirectly. The last point is
notable, for it indicates the need for adequate theoretical frameworks to underpin scales derived from magnitude estimation and from various numeric or graphic rating procedures, as well as from
discrimination tasks. Consequently, after comparing the approaches through "distance" and "magnitude," we proceed by examining recent multidimensional models of scaling and the theoretical frameworks
that underlie the models. By relating measurement to the underlying psychological theory, we seek not only to identify various factors that affect scaling but to illustrate the processes. Foremost
among those are the effects of context, which are discussed in a separate section. We conclude by noting the need to integrate scaling data with all other data and theories that are relevant to the
psychological representation of magnitude.

II. PSYCHOPHYSICAL SCALING AND PSYCHOPHYSICAL THEORY

Allow us one final bit of biblical exegesis. In Genesis (37:3), we learn that the patriarch Jacob
"loved Joseph more than all of his children." This terse statement marks the unfolding of a momentous chain of events known as
the story of Joseph and his brothers. Jacob's partiality, intimates the biblical narrative, had far-reaching ramifications. It set the stage for the slavery of the Hebrews in Egypt, their exodus,
and, eventually, the founding of the nation of Israel in the Sinai. These fateful consequences notwithstanding, at its base, claimed the philosopher Isaiah Leibowitz (1987), the verse contains only
loose, qualitative information. Despite using the same adjective, more, the biblical statement differs in a fundamental sense from the formally similar sentence, "The red urn contains more marbles
than the blue one." Whereas the latter relation is truly quantitative, measurable, and hence describable numerically, the former is not. There is no answer to the question, "How much more did Jacob
love Joseph than, say, Reuben?" simply because sensations are inherently qualitative. Quantitative definition of sensation is meaningless, averred Leibowitz, and thus sensation--indeed, psychology as
a whole--is immune to scientific inquiry. Leibowitz's (1987) objection is no stranger to psychophysics. At least a century old, it restates what has been called the "quantity objection" (e.g.,
Boring, 1921)--that sensations do not have magnitudes. William James (1892), a major proponent of this view, wrote, "Surely, our feeling of scarlet is not a feeling of pink with a lot more pink
added; it is something quite other than pink. Similarly with our sensations of an electric arc-light: it does not contain that many smoky tallow candles in itself" (pp. 23-24). And, in Boring's
(1921) rendition of Külpe (1895), "This sensation of gray is not two or three of that other sensation of gray" (p. 45). By this view, what the verse in Genesis really conveys is the sense that
Jacob's love for Joseph differed from the love he felt toward his other sons, a difference vividly revealed in the exclusive gift of the "coat of many colors." Quite apart from the quantity
objection, Leibowitz implied that mere rank ordering of sensations, even if possible, would not amount to measurement either. This contention too has a long history. Nominal, ordinal, and, in
some instances, interval scales were not considered measurement at all, according to a prevailing view held until the late 1940s (Heidelberger, 1993). Another objection challenges the very existence
of a dimension of"sensation magnitude" or "sensation strength." In the context of scaling, the view that we judge stimuli, not sensations, was espoused by Fullerton and Cattell (1892) and by
Ebbinghaus (1902). In Cattell's (1893) words, "The question here is whether we do in fact judge differences in the intensity of sensations, or whether we merely judge differences in the stimuli
determined by association with their known objective relations. I am inclined to think that the latter is the case . . . . I believe that my adjustment is always determined by association with the
known quantitative relations of the physical world. With lights and sounds, association might lead us to consider relative differences as equal differences, and the data would be obtained
from which the logarithmic relation between stimulus and sensation has been deduced" (p. 293). Cattell's position mirrors, almost verbatim, a more recent approach called the "physical correlate
theory" (Warren, 1958, 1969, 1981). Perhaps reflecting the Zeitgeist in psychophysics, Warren's rendition replaces Fechner's logarithmic function with a power function. For attributes such as
loudness or brightness, the theory assumes that the physical correlate to which the subject responds is distance. Accordingly, to judge the loudness of one sound as half that of another is to say
that the former appears to come from a source twice as far away. Because sound energy varies as the inverse square of the distance from its source, the physical correlate theory predicts a
square-root relation between loudness and sound energy. The same notion informs the later work of J. J. Gibson (1966, 1979), whose "direct perception" or "ecological approach" is akin to Warren's.
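The arithmetic behind Warren's prediction is easy to verify numerically. The following sketch (with illustrative constants and function names of our own choosing, not values from Warren) checks that an inverse-distance loudness judgment, combined with inverse-square spreading of sound energy, yields judged loudness proportional to the square root of energy:

```python
# Physical correlate theory (Warren): if listeners judge loudness by the
# apparent distance of the source, and sound energy falls off as 1/d^2,
# then judged loudness should grow as the square root of energy.
# All constants below are illustrative, chosen for the demonstration.

def energy_at(distance, source_power=1.0):
    """Sound energy reaching the listener (inverse-square law)."""
    return source_power / distance ** 2

def judged_loudness(distance, k=1.0):
    """Warren's assumption: 'half as loud' means 'appears twice as far away'."""
    return k / distance

# Doubling the apparent distance halves judged loudness ...
assert judged_loudness(2.0) == 0.5 * judged_loudness(1.0)

# ... while energy drops by a factor of 4, so loudness tracks energy ** 0.5:
for d in (1.0, 2.0, 5.0, 10.0):
    assert abs(judged_loudness(d) - energy_at(d) ** 0.5) < 1e-12
```

The square-root psychophysical function thus follows directly from the inverse-square law once the distance assumption is granted.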
Gibson's theory can be construed as an attempt "to state a functional correlate for all aspects of perception" (Baird, 1981, p. 190). Apart from eschewing the notions of subjective magnitude and
representation, Gibson and Warren share a pragmatic approach grounded in environmental affordances that help the organism to survive. The similarity is apparent even at the level of the specific
environmental cues utilized: "Both say that the perception of size and distance is veridical, and both ignore data to the contrary" (Baird, 1981, p. 191). In our view, the hypothesis that
psychophysical scales reflect an amalgamation of sensory and cognitive processes yields a rich network of predictable phenomena. Warren's attack offers no alternative mechanism for generating those
data, including, incidentally, the way people judge the physical correlates themselves.

A. What Is Measured?
The quest to stand sensory measurement on a firm basis has sustained several efforts at general formulation. For example, Zwislocki (1991) proposed defining measurement as "matching common attributes
of things or events" (p. 20). Matching takes place on an ordinal scale, and it needs neither instruments nor numbers. It can be accomplished internally (i.e., subjectively) and, in fact, is
ubiquitous in nature (whence the term natural measurement). Only with the introduction of physical variables can the results be expressed quantitatively, but this technicality does not alter the
mental origin or nature of measurement. In Zwislocki's scheme, formal physical measurement is only a late derivative of psychophysical or natural measurement. Another attempt to tie psychophysical
and physical measurement in a common conceptual package comes from the physicist and philosopher
Herbert Dingle (e.g., 1960). Dingle challenged us to consider the first sentence of an influential volume on measurement (Churchman & Ratoosh, 1959): "Measurement presupposes something to be
measured, and unless we know what that something is, no measurement can have any significance." Self-evident truth? Tautology? No, claimed Dingle: just plainly wrong! Actually, he claimed, in physics
as well as in psychology, "far from starting with a something and then [trying] to measure it, we start with a measure and then try to find something to which we can attach it" (Dingle, 1960, p.
189). Dingle made a compelling case that physical measurement does not uncover something "out there," but rather is a self-contained process that implies nothing beyond the meaning that we choose to
confer on the result. Measurement, he stated, contains a manual and a mental part. It is what Stevens (1975, p. 47) called a "schemapiric enterprise." But a system of measurements is a theoretical
construct impregnated with meaning only by its designer. In psychology, for instance, "the importance of Intelligence Quotient is not that it measures 'intelligence,' whatever that may be, but that
it stands in simple relations to other measurements" (Dingle, 1960, p. 192). Like Stevens, Dingle dispensed with the distinction between "fundamental" and "derived" measurement; in his scheme,
fundamental measures are also derived. Dingle's approach is strictly operational, as Stevens's sometimes is. But unlike Stevens, Dingle stressed the indispensability of substantive theory. Sensations
are difficult to measure not because they are mental, subjective, or inaccessible, but simply for want of good psychophysical theory.
B. Infrastructures and Suprastructures

Dingle would ground psychophysical scales in what we term here theoretical suprastructures, that is, in those frameworks that connect the particular property
being measured with other properties or processes within the psychological domain. In physics, one example of a suprastructure is found in the principles underlying the ideal gas law. As Norwich
(1993) has pointed out, it is possible to derive the law from principles of conservation of mass and energy, though the law is also consistent with properties of the forces between gas molecules. The
law itself can be written
P · V = M · N_a · k · T,    (1)
where P is the gas's pressure, V its volume, T its absolute temperature, M the number of moles, N_a is Avogadro's constant, and k is Boltzmann's constant. Note that the multiplicative form of the gas law constrains the scales
of pressure, volume, and temperature with respect to one another. If M is taken to be an absolute measure, with no permissible transformation, then the constraint on scales
of P, V, and T is considerable. If M is not absolute, the constraint is smaller. For example, redefining all of the terms (N_a as well as P, V, T, and M) by power transformations, for instance,
taking square roots, would produce a new set of scales equally consistent with Eq. (1). On the other hand, additional constraint is provided by other gas laws, for example, the van der Waals equation
[P + c , / ( M . V ) 2 ] . [ M
9V -
c21--N a
where c1 and c2 are constants. In general, power transformations of all of the variables in Eq. (2) do not leave its form invariant. Unfortunately, there have been relatively few suprastructural
theories in psychophysics (cf. Luce, 1972). Two important ones are the theories of Link (1992) and Norwich (1993), described later; here, we simply note that Norwich used the example of the ideal gas
law in discussing his approach to psychophysical theory. Suprastructural theories are substantive: They incorporate sensory scales within frameworks that seek to account for a set of empirical
phenomena, often with no reference to any particular empirical operations for measurement. Many, probably most, psychophysical theories are infrastructural. That is, these theories seek to provide
frameworks that relate a set of empirical operations to a particular psychological scale or property. Often, they are formulated in the language of measurement theory. The most comprehensive and
rigorous treatment of measurement theory appears in the foundational approach (e.g., Krantz, Luce, Suppes, & Tversky, 1971; Luce, Krantz, Suppes, & Tversky, 1990; Luce & Narens, 1987; Suppes &
Zinnes, 1963). This treatment is axiomatic, resulting in representation theorems that state the existence of functions mapping certain empirical procedures or objects into sets of numbers. Uniqueness
theorems then list the ways in which the numbers can be changed without altering the empirical information represented. The foundational approach is informed by the branch of mathematics that deals
with ordered algebraic systems (cf. Luce & Krumhansl, 1988). In conjoint measurement (Luce & Tukey, 1964), for example, several stimulus factors are varied jointly in a factorial design. Possible
behavioral laws referring to the form of their combination (including independence) are then axiomatically tested. Conjoint measurement is at once simple and powerful. Given only an ordering of pairs
of stimuli, certain combinatorial patterns possess enough structure to constrain the possible representations, which transcend those of ordinal scales. Although the foundational approach has been
criticized as promising more than it has delivered (Cliff, 1992), Narens and Luce (1992) indicated several important contributions to psychology; notable is the role of axiomatic theory in the
development of modern decision theory. Largely infrastructural but potentially suprastructural is the framework
Lawrence E. Marks and Daniel Algom
of Anderson's (e.g., 1970, 1992) integration psychophysics, or, more generally, the approach called functional measurement. Like conjoint measurement, functional measurement treats multivariate
phenomena. Primary interest lies in specifying the quantitative rules stating how separate sensations combine in a unitary perception; hence, the conceptual basis of integrational psychophysics is
cognitive, lying within the mental realm. Often, these rules comprise algebraic laws of addition, multiplication, or averaging (cognitive algebra): If the data conform to the rule, Anderson claimed,
then they support not only the rule but the validity of the response scale (e.g., ratings, magnitude estimates) as well (but see Birnbaum, 1982; Gescheider, 1997). In this way, "measurement exists
only within the framework of substantive empirical laws" (Anderson, 1974, p. 288). In essence, functional-measurement theory specifies various tests for algebraic models and response scales, and to
this extent the theory provides an infrastructural framework to psychophysical judgments. Insofar as one can derive the algebraic models from broader theoretical considerations of perception and cognition, functional measurement--and conjoint measurement--may also provide suprastructure to psychophysical scales.

III. SCALING BY DISTANCE
A. Fechner's Conception: Scaling and the Psychophysical Law Gustav Fechner's greatness lay in his unprecedented prowess at applying scientific methods to the measurement of sensations. If measurement
means more than giving verbal labels to stimuli--if it entails using appropriate units in a process of comparison--then how do we create units for attributes such as loudness, heaviness, or pain? We do
not have fundamental units for sensation readily available, a "centimeter" of loudness or a "gram" of pain. And supposing we had such units, how would one perform the necessary comparison? Even
granting sensations to have magnitudes, these are bound to remain private and inaccessible, as are all contents of mind. "No instruments," wrote Leahey (1997, p. 181), "can be applied to conscious . . ."

What did Fechner do that eluded his predecessors? Fechner realized that reports by subjects of the magnitude of an experimental stimulus do not measure the resulting sensations any more than do
metaphoric descriptions found in literary works, but he was not ready to forswear the notion of sensation strength, an idea with compelling appeal. Consequently, he suggested that sensations could be
isolated, then measured, by manipulating the stimuli to which the subject is exposed. Though private, conscious experiences can be controlled by the judicious variation of the appropriate physical
stimuli. By varying systematically the values of stimuli, one can elicit a
unique conscious experience: success or failure at distinguishing a pair of stimuli. The physical relation between such pairs of stimuli, augmented by a postulate about the concomitant subjective
experiences, then can serve to quantify sensation. Measurement presupposes a process of abstraction, isolating that aspect of the object that one wishes to measure. The sensory error in failing to
distinguish a pair of stimuli, or misjudging the weaker of two stimuli to be stronger, served Fechner to isolate a subjective experience in an otherwise inaccessible private realm. He thereby
anchored mental measurement in units of the appropriate physical stimuli. To Link (1992),

Measuring the size of this invisible sensory error in units of the visible physical stimuli was one of the outstanding scientific achievements in the 19th century. . . . In this way Fechner, and his Psychophysics, exposed to scientific scrutiny differences between sensations that were previously hidden from public view and observed only by personal awareness. (p. 7)

Note that Link referred to differences between sensations. Fechner believed that measures of discrimination most directly scale "difference sensations" (Murray, 1993)--though scales of difference sensations may entail sensory magnitudes as well. Ernst Weber (1834) preceded Fechner in measuring difference thresholds or just noticeable differences (JNDs). And Fechner used Weber's relativity principle, namely that JNDs are proportional to stimulus intensity, in deriving (actually, in justifying) his psychophysical law. Weber's law implies that successive JNDs form a geometric series. If, with Fechner, we assume that JNDs mark off equal units in subjective magnitude along the continuum of intensity, thereby distinguishing sensation JNDs from stimulus JNDs, then it follows that subjective magnitudes map onto physical ones by a logarithmic function--Fechner's law. Weber did not refer to a quantitative
concept of sensation strength or difference. In fact, the sole psychological component in Weber's law is the subject's indication when two stimuli are discriminably different. To be discriminable,
Weber learned, the intensities of two stimuli must differ by an amount that is proportional to their absolute level. Yet Weber's law is silent on a crucial question: What sensation is felt at a given
JND? Fechner complemented Weber's empirical relation by submitting that JNDs form a subjectively equal series of units, thereby conceiving a truly "psycho-physical" relation. Fechner assumed the
validity of Weber's law, namely, that ΔI, the stimulus JND, is proportional to stimulus intensity, I, or

ΔI / I = c_a,    (3)

where c_a is a positive constant, the Weber fraction. Weber's law characterizes thresholds of discrimination along the physical continuum, I. Along the
parallel subjective continuum, S, Fechner's model implies, every JND has the same magnitude, so for every ΔI

ΔS = c_b.    (4)
Thus the subjective JND, ΔS, can serve as a unit of sensation. In Fechner's conjecture, the ratios I_2/I_1, I_3/I_2, . . . , I_j/I_{j−1} of just noticeably different stimuli--equal according to Weber's
law--correspond to equal increments in sensation. As a result, a geometrically spaced series of values on the physical continuum gives rise to an arithmetically spaced series of values on the
psychological continuum. This relation defines the logarithmic function, Fechner's Massformel, or measurement formula,
S = c · ln(I / I_0),    (5)

where c = c_b/c_a is the constant of proportionality, and I_0 is the absolute threshold. The stimulus is measured as multiples of the absolute threshold, at which value sensation is zero. Actual
scaling, Fechner realized, requires determining ΔI and I_0 in the laboratory, and he devised various methods for estimating them. The hallmark of these methods is that the subject merely orders
stimuli on a continuum by responses such as "greater," "smaller," or "equal." Such responses avoid many of the pitfalls associated with numerical responses (e.g., Attneave, 1962). First, because the
measurement situation is familiar and well defined, the subject presumably knows what she or he means by saying "longer" or "more intense." Therefore, the validity of the responses is warranted, a
presupposition of psychophysics in general. Second, ordinal relations are invariant across all positive monotonic transformations. Therefore, assuming only that the psychophysical function is
monotonic, the order of the subjective values reproduces the physical order of stimuli. Fechner derived the logarithmic law using his mathematical auxiliary principle, which asserts that the
properties characterizing differences as small as JNDs also characterize smaller differences. Thus, dividing Eq. (4) by Eq. (3), rearranging the terms, and rewriting the result as a differential
equation gives =
aS -- c E)I/I,
or the fundamental formula. Fechner then integrated Eq. (6) between I_0 and I to arrive at the standard logarithmic solution given in Eq. (5) and depicted in Figure 1. Figure 1 shows how sensation
magnitude should increase as stimulus intensity increases, given Fechner's hypothesis. The panel on the top plots both sensation and stimulus in linear coordinates, making clear the rule of
diminishing returns. Augmenting stimulus intensity from 1 stimulus unit (by definition, the absolute threshold) to 1001 stimulus units increases
the perceptual experience from zero to about 3 sensation units, but augmenting the stimulus by another 1000 units, to 2001, increases the sensation by only a fraction of a unit. The panel on the bottom plots the same psychophysical relation, but now the stimulus levels are spaced logarithmically, turning the concave-downward curve on the top into a straight line, consistent with the form of Eq. (5).

FIGURE 1 Characterization of Fechner's law. Sensation magnitude increases as a negatively accelerated (marginally decreasing) function of stimulus intensity, as shown in the top panel. When stimulus intensity is plotted logarithmically, as in the bottom panel, the function appears as a straight line.
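The diminishing-returns arithmetic described above is easy to check numerically. The sketch below is illustrative only: it assumes, for concreteness, a proportionality constant c = 1/ln 10 (so that S = log10(I/I0), which reproduces the "about 3 sensation units" figure quoted in the text) and an arbitrary Weber fraction of 0.05; neither value is prescribed by the chapter.

```python
import math

def fechner(I, c=1 / math.log(10), I0=1.0):
    # Fechner's law, Eq. (5): S = c * ln(I / I0); sensation is zero at the
    # absolute threshold I0. The constant c is illustrative, not empirical.
    return c * math.log(I / I0)

# Diminishing returns: the first 1000-unit increment above threshold adds
# about 3 sensation units; the next 1000-unit increment adds only ~0.3.
S_1001 = fechner(1001)
S_2001 = fechner(2001)

# Counting JNDs: under Weber's law, just-noticeably-different stimuli form
# a geometric series I_{k+1} = I_k * (1 + w), where w is the Weber fraction.
# The number of JNDs needed to climb from threshold to a given intensity
# therefore grows logarithmically with that intensity.
w = 0.05                      # arbitrary Weber fraction, for illustration
I, n_jnds = 1.0, 0
while I < 1001:
    I *= 1 + w
    n_jnds += 1               # each step marks off one (equal) sensation unit
```

Doubling the assumed Weber fraction roughly halves the JND count but leaves the logarithmic shape of the summed scale unchanged.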
1. Fechner's Conjectures and Weber Functions

Both of the premises underlying the derivation of Fechner's law face difficulties, and plausible alternatives have been offered for both. Following a suggestion made by Brentano (1874), for example, a rule like Weber's law might also hold for sensation, challenging the assumption of the constancy of subjective JNDs. If so, then Fechnerian
integration gives a psychophysical power law, where the exponent plays a role analogous to the slope constant c in Fechner's law, Eq. (5). On the stimulus side, the validity of Weber's law itself has
long been debated. Often, the Weber ratio ΔI/I is not constant, particularly at low levels of stimulus intensity, though such deviations frequently can be corrected by a "linear generalization" (Fechner, 1860; Miller, 1947; see also Gescheider, 1997; Marks, 1974b), where ΔI is proportional to I plus a constant rather than proportional to I. Figure 2 shows some characteristic Weber functions for visual brightness, odor intensity, and vibratory touch intensity. Plotted in each case is the difference threshold (ΔI) against stimulus intensity (I), both variables expressed in logarithmic coordinates. Two points are noteworthy. First, these three modalities tend to be characterized by different Weber ratios (ratios of ΔI/I), with intensity discrimination best (Weber ratio smallest) in vision and poorest in touch. Second, the relations in all three cases deviate, at low intensity levels, from the simple form of Weber's law but can be corrected by the inclusion of an additive constant within the linear equation relating ΔI to I.
FIGURE 2 Idealized examples of Weber functions, relating the just-detectable change in stimulus intensity, ΔI, plotted on a logarithmic axis as a function of intensity, I, also plotted on a logarithmic axis, for three perceptual modalities: visual brightness, odor, and vibratory touch.
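The shape of these idealized Weber functions can be mimicked with the "linear generalization" mentioned above, taking ΔI proportional to I plus a constant. In the sketch below, the symbol I_r for the additive constant and all parameter values are hypothetical, chosen only to show the characteristic low-intensity rise of the Weber fraction:

```python
def weber_fraction(I, w=0.05, I_r=2.0):
    # Linear generalization of Weber's law: delta_I = w * (I + I_r).
    # The additive constant I_r inflates the Weber fraction delta_I / I at
    # low intensities; at high intensities the fraction settles onto w.
    delta_I = w * (I + I_r)
    return delta_I / I

low = weber_fraction(1.0)      # 0.15: three times the asymptotic ratio
high = weber_fraction(1000.0)  # 0.0501: essentially the simple Weber ratio
```

In log-log coordinates, as in Figure 2, the corresponding plot of ΔI against I has slope near zero at low intensities (ΔI nearly constant) and approaches slope 1 (a constant Weber ratio) as I grows.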
On the other hand, the relation between ΔI and I may be nonlinear. Guilford (1932) proposed that ΔI is proportional to I raised to the power n,

ΔI = c · Iⁿ,    (7)

of which both Weber's law and Fullerton and Cattell's (1892) square-root law are special cases. Being rather general, Eq. (7) fits many sets of data, notably discriminations of sound intensity (e.g., Harris, 1963; Jesteadt, Wier, & Green, 1977; Luce & Green, 1974; McGill & Goldberg, 1968), where n is approximately 0.8-0.9. Were Weber's law to hold exactly, n would equal 1.0; the small discrepancy has been dubbed the "near miss to Weber's law" (McGill & Goldberg, 1968). As indicated, Guilford's law reduces to Weber's law when n = 1 and to Fullerton and Cattell's law when n = 0.5. One may
construe a stimulus to represent the sum of many smaller components, whose variances contribute to the observed variability (ΔI, or the stimulus JND) of the global stimulus. This conjecture was explored by Woodworth (1914; see also Cattell, 1893; Solomons, 1900), who analyzed the correlations between the errors of the components. If these correlations are zero, the Fullerton-Cattell law follows; if they are +1, Weber's law follows. The notion that perceived magnitude is the end result of such elemental processes has proven powerful in current models of psychophysics. Given the
violations of Fechner's assumptions and the available alternatives, a fruitful approach would retain only the general routine for determining psychophysical functions, allowing alternative Weber
functions and Fechner functions. By this generalization, any relation between ΔI and I is a Weber function, and any relation between cumulated sensation JNDs and cumulated stimulus JNDs is a Fechner
function (e.g., Baird & Noma, 1978). Such an approach is acceptable when S is determined by summing the finite values of JNDs (e.g., by graphical addition) but may be erroneous when each difference,
ΔI, is transformed to an infinitesimal and then integrated mathematically. Integration leads to mathematically acceptable results with only a limited number of Weber functions (including Weber's law
and its linearizations, but not other values of n in Guilford's law: Luce & Edwards, 1958; see also Baird & Noma, 1978; Falmagne, 1971, 1974, 1985; but see Krantz, 1971). Though infrequently applied
to psychophysical data, solutions derived through functional equations (Aczél, 1966) can be used instead to derive legitimate Fechner functions (Luce & Edwards, 1958). The lesson taught by Luce and
Edwards (1958) is that Fechner's "mathematical auxiliary principle" is not universally appropriate. Graphically summated JND scales are appropriate to all forms of the Weber function. Nevertheless,
graphically summated JND scales may fail other tests of consistency. For example, because JNDs are defined statistically, the exact values depend on the criterion (e.g., 75% correct identification of the stronger stimulus, 85%, or whatever). For the scales to be invariant in form over changes in criterion, the psychometric functions relating percent correct to stimulus intensity I must be homogeneous,
for instance, of the same shape when plotted against log I. Furthermore, graphically summated JND scales often fail tests of consistency across multidimensional variation in stimulation (e.g.,
Nachmias & Steinman, 1965; Newman, 1933; Piéron, 1934; Zwislocki & Jordan, 1986), though they sometimes pass them (Heinemann, 1961). One test of consistency takes two stimuli (for instance, lights
differing in color), first matches them for subjective intensity, then asks whether the stimuli continue to match after both are augmented by the same number of JNDs. A strong version of Fechner's
conjecture says that the two stimuli should still match in subjective intensity, but results do not always support the prediction. For example, Durup and Piéron (1933) equated blue and red lights for brightness at various levels but found that different numbers of JND steps separated the matching intensity levels of the two colors. For overviews, see Piéron (1952), Marks (1974b), and Krueger
(1989). An alternative test of consistency measures JNDs at a given intensity under two stimulus conditions in which sensation magnitude S is the same, but the rate of growth with intensity differs;
in this case, according to Fechner's conjecture, the JNDs should be inversely proportional to rate of growth: the greater the rate of growth, the smaller the stimulus JND. Hellman, Scharf,
Teghtsoonian, and Teghtsoonian (1987) measured Weber fractions for discriminating the intensity of 1000-Hz tones embedded within narrowband and wideband noises, conditions that produce markedly
different loudness functions as determined by direct matching. Despite the difference in the slopes of the loudness functions, the Weber fractions at equal loudness were nearly identical, implying
that, at equal values of S, JNDs may correspond to markedly different changes in subjective magnitude (ΔS) (and hence also to markedly different values of the ratio ΔS/S; see also Stillman,
Zwislocki, Zhang, & Cefaratti, 1993). Finally, an alternate formulation, first suggested by Riesz (1933) and later elaborated by Lim, Rabinowitz, Braida, and Durlach (1977), relaxes the strong
version of the Fechnerian assumption. Assuming that stimuli A and B have identical dynamic ranges (for instance, from threshold to some maximal or quasi-maximal value), it follows from the strong
assumption that the number of JNDs from threshold of stimulus A to maximum is equal to the number of JNDs from the threshold of B to maximum. Both Riesz and Lim et al. hypothesized that the number of
JNDs from threshold to maximum need not be constant, but that a constant proportion of JNDs will mark off constant increments in subjective magnitude at both A and B. If, for example, maximum
perceived magnitude lay 50 JNDs above the threshold at A, but 100 JNDs above the threshold at B, then a stimulus 5
JNDs above A's threshold would match a stimulus 10 JNDs above B's threshold, a stimulus 10 JNDs above A's threshold would match one 20 JNDs above B's threshold, and so forth. Although some evidence
supports the proportional-JND hypothesis (e.g., Rankovic, Viemeister, Fantini, Cheesman, & Uchiyama, 1988; Schlauch, 1994), not all does (e.g., Schlauch, 1994). Methodologically, Fechner made
extensive use of statistical tools, employing the Gaussian model for the treatment of error and variance in order to derive the distance or difference between two sensations (see Link, 1992). The
"difference sensations" then could be used to estimate sensation magnitudes. It is noteworthy that sensory distance itself is inferred by statistical reasoning, not by asking the subject to estimate
distance. The methods of bisection (Delboeuf, 1873; Plateau, 1872) and equal-appearing intervals (e.g., Guilford, 1954), later espoused by Fechner, do call for subjects to mark off a psychological
interval defined by two stimuli into two or more equal-appearing distances. So do other partition methods, such as rating or ranking by categories. Easily overlooked, however, is the fact that none
of these methods asks subjects to estimate quantitatively the size of the differences or distances; subjects merely match them, an operation that is at once natural and simple (Zwislocki, 1991) and
that obviates the use of numbers. Fechnerian scaling eschews complex numerical estimates of individual stimuli; instead, the scaling derives sensory differences (or magnitudes) from a theoretically
guided, mathematical analysis of data that is largely independent of the subject's judgment or report about magnitude. Three principles largely define Fechnerian scaling. First, the subjects' tasks
are simple, requiring ordinal comparison or matching. Second, the mental operation is treated as one of differencing or subtracting. Third, the lack of numerical responses notwithstanding,
interval-level and even ratio-level scales of sensation may be constructed through formal derivations that rely on basic theoretical assumptions. Fechnerian methods, and the underlying conceptions,
continue to offer much to contemporary psychophysics. Eisler (1963; Montgomery & Eisler, 1974) has argued that Fechnerian scales are compatible with "pure" interval scales, obtained from "unbiased"
rating procedures. Moreover, measuring sense distances through rating procedures has received major impetus from multicomponential models, particularly Anderson's (1981, 1982) functional measurement.
Subtractive processing is supported by the work of Birnbaum (e.g., 1978, 1980, 1982, 1990; see also, Birnbaum & Elmasian, 1977) and, with qualification, by that of Poulton (1989). The logarithmic law
itself is compatible with fundamental theories of psychophysics (cf. Ward, 1992). Finally, the notion that sensation magnitude is the end result of hidden processes--that sensation is cognitively impenetrable--enjoys growing acceptance in current psychophysical theories.
2. Critiques of Fechner's Psychophysics

Fechner's enterprise met several major objections. These criticisms carry more than historical significance, as they foreshadow important developments in
psychophysical scaling. We already mentioned the "quantity objection," namely, the claim that sensations do not have magnitudes. This position seems to preclude any possibility of mental measurement,
simply because there are no magnitudes "out there" to gauge. However plausible at first look, the conclusion is unwarranted. Thurstone (1927), whose "law of comparative judgment" relies on Fechnerian
logic, provided one clue:

I shall not assume that sensations . . . are magnitudes. It is not even necessary . . . to assume that sensations have intensity. They may be as qualitative as you like,
without intensity or magnitude, but I shall assume that sensations differ. In other words, the identifying process for red is assumed to be different from that by which we identify or discriminate
blue. (p. 368)

So one might agree that, phenomenally, sensations do not possess magnitude but nevertheless erect a scale of sensation. All that is necessary is to assume, first, that sensations
differ and, second, that they are subject to error. These minimal assumptions underlie both Fechner's and Thurstone's endeavors. If we apply the assumptions to quantal processes operating at the
level of the receptor, then they also underlie such recent formulations as Norwich's (1993) entropy theory of perception and Link's (1992) wave theory of discrimination and similarity. Interpreting
Fechner's enterprise (indeed, psychophysical scaling in general) as a process of indirect measurement, based on primitive theoretical assumptions, immunizes it against many of the criticisms--for
example, against a variant of the quantity objection that denies that sensations are composed of (smaller) units. As Titchener (1905) put it, "We can say by ear that the roar of a cannon is louder,
very much louder, than the crack of a pistol. But the cannon, as heard, is not a multiple of the pistol crack, does not contain so and so many pistol cracks within it" (p. xxiv); and James (1890)
quoted Stumpf: "One sensation cannot be a multiple of another. If it could, we ought to be able to subtract the one from the other, and feel the remainder by itself. Every sensation presents itself
as an indivisible unit" (p. 547). As Thurstone recognized, introspection may be a poor guide for measurement. The weightiest, and perhaps most tenacious, criticism of Fechner's endeavor says that
sensations cannot be measured in the way that physical length, time, or mass can. Von Kries (1882) argued that sensations lack objective, agreed-upon units, which in turn precludes defining
invariances like equality or commutativity--and, in general, precludes operations commensurate with the "axioms of additivity." By this view, such operations are fundamental to establishing any
measurement device or scale. One cannot ostensibly lay unit sensations alongside a given percept the way one can lay meter-long sticks along a rod. Therefore, claimed von Kries, the numbers assigned to sensations, whether directly or
indirectly, are not quantities. The numbers are labels on an ordinal scale (substitutes for terms like dazzling or noisy) and consequently are not really scalable. In fact, the use of numbers may be
misleading in that it provides a sense of quantitative measurement and precision where there is none (cf. Hornstein, 1993). As Murray (1993) noted, Fechner recognized that von Kries's argument is "an
attack on the very heart of psychophysics" (p. 126). For von Kries's objections apply not only to Fechner's conception but to any future psychophysics that might come forth. In particular, the very
same criticism applies to Stevens's (1975) "new psychophysics." For this reason, we defer considering further the implications of von Kries's arguments to the section on scaling by magnitude.
B. Two Fundamental Psychophysical Theories

Following Ward (1992), we denote as fundamental psychophysics those attempts to find "a core of concepts and relations from which all the rest of
psychophysics can be derived" (p. 190). Fundamental psychophysical theory may focus on conservational or mechanistic concepts (Ward, 1992). In physics, thermodynamics and quantum electrodynamics,
respectively, are examples, themselves linked by the more basic concepts of motion and energy. In psychophysics, Ward observed, conservational theories may treat laws of information exchange, whereas
mechanistic ones seek to characterize basic sensory processes. We describe two recent attempts at formulating fundamental theories, one primarily conservational (Norwich, 1993), the other primarily
mechanistic (Link, 1992).

1. Norwich's Entropy Theory of Perception

The basic premise of Norwich's (1993) theory is that perception entails the reduction of entropy with respect to stimulus
intensity, captured by the organism as information. In general, to state the theory in words, more intense stimuli have greater information content (greater entropy) than do weaker ones, and
sensation provides a measure of (is proportional to) this entropy. Norwich's approach replaces the account offered by traditional psychophysics in terms of energy with one that uses terms of
information (see also Baird, 1970a, 1970b, 1984). In mathematical terms,

S = k · H,    (8)
where k is a positive constant, H is the stimulus information available for sensory transmission and processing, and S is a perceptual variable taken here
as sensation magnitude. Following Shannon (1948; see Garner, 1962, and Garner & Hake, 1951, for psychophysical applications), H is calculated by

H = −Σ p_i ln p_i,    (9)

where p_i is the probability of occurrence of each steady-state stimulus i that is given for categorical judgment, classification, or identification. The association between the logarithmic character
of Fechner's law and the logarithmic character of Shannon's formula for information has been noted in the past (e.g., Moles, 1958/1966). Baird (1970a, 1970b) sought to derive Fechner's law explicitly
from concepts in information theory (but see MacRae, 1970, 1972, 1982). An immediate problem, of course, is that the fixed stimuli used in psychophysics (usually applied suddenly, in the form of a
step function) entail no uncertainty with respect to their macroscopic magnitude. Norwich's theory, however, treats the quantal structure of sensory signals impinging at the level of receptors.
Because receptors operate at a microscopic level, they experience moment-by-moment fluctuations in, say, the density of molecules of a solute (with a macroscopically constant taste stimulus) or
density of photons (with a macroscopically constant light). Thus sensory receptors may be regarded as sampling molecules or photons at discrete instants of time. Receptors operate to reduce
uncertainty about the mean intensity of the steady stimulus after m individual samples of the stimulus. To calculate sensory information from receptor uncertainty, one must manage probability
densities, not discrete probabilities. To pass from the discrete to the continuous case, however, one cannot simply proceed from summation, as in Eq. (9), to integration. Instead, one must calculate
the difference between two differential entropies. Norwich showed that, regardless of the probability density function that characterizes a given sensory signal (e.g., photon density), as a
consequence of the central limit theorem, the mean sensory signal density will approximate the normal distribution. If the variance of the original signal is σ², the variance of the means of the samples of size m will be σ²/m. If m samplings of the stimulus are made in time t, then the absolute entropy H is given by

H = (1/2) · ln[1 + (β′ · σ²)/t],    (10)

where β′ is a constant of proportionality. H is measured here in natural logarithmic units (H/ln 2 gives uncertainty in bits). Equation (10) still expresses its argument in terms of variance, not in
terms of magnitude or intensity. Norwich suggested taking the variance as proportional to the mean intensity of the stimulus raised to a power, namely,

σ² = K · Iⁿ,    (11)
where K and n are constants. The exponent n is characteristic of the species of particle and should, in principle, be determined by physical and physiological considerations. Basically, Eq. (11) expresses the fact that larger quantities are associated with greater errors of measurement or fluctuations. If we take σ² to reflect ΔI, the stimulus JND, then Eq. (11) is mathematically identical to Guilford's law, and contains the Fullerton-Cattell law as a special case where n = 1. Having evaluated H, we can substitute Eqs. (10) and (11) in Eq. (8) to obtain Norwich's
psychophysical law,

S = k · H = (k/2) · ln[1 + γ · Iⁿ],    (12)

where the proportionality constant γ = β′ · K/t for constant duration t. Equation (12) says that perceived magnitude S is proportional to the amount of uncertainty experienced by the subject about
the mean intensity or macroscopic magnitude of the stimulus I. Significantly, Norwich's function still retains the logarithmic nature of Fechner's law. Given constant stimulus duration, with
high-intensity stimuli, or more precisely when γIⁿ >> 1, Eq. (12) approximates Fechner's law. At lower levels, or more generally where γIⁿ << 1, Eq. (12), or the first-order term of its Taylor expansion, approximates a power function with exponent n, that is, Stevens's law. Equation (12) enables Norwich to derive a rich network of psychophysical relations, including measures of sensory
adaptation over time, information per stimulus and maximum information transfer (channel capacity), response time (RT), and relations among stimulus range, power-function exponent, Weber fractions,
and the total number of JNDs spanned by the plateau of the Weber function. Full mathematical derivations are given in a book and in several articles (e.g., Norwich, 1984, 1987, 1991, 1993). Here, we
conclude with a note on Weber's law. Norwich derived Weber's fraction by differentiating the psychophysical entropy function with respect to I. Rearranging terms gives

ΔI/I = (2ΔH/n) · (1 + [1/(γI^n)]). (13)

For large values of I, the Weber fraction tends to fall to a plateau, because 1/(γI^n) approaches zero. In this region, given Fechner's assumption that ΔS (corresponding to ΔH) is constant, we can write

ΔI/I = (2/n) · ΔS = constant, (14)

which, when integrated, yields Fechner's logarithmic law. Incidentally, differentiating the simple power function does not yield an empirically acceptable Weber function, because it would predict ΔI/I to approach zero when I is large.
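The two limiting regimes of Norwich's entropy function can be checked numerically. The sketch below is illustrative only: the parameter values k, γ, and n are hypothetical, not empirical fits.

```python
import math

def norwich_sensation(I, k=1.0, gamma=0.05, n=0.3):
    """Norwich's entropy function, Eq. (12): S = (k/2) ln(1 + gamma * I**n).
    The parameter values are illustrative, not fitted to data."""
    return (k / 2.0) * math.log(1.0 + gamma * I ** n)

# Low intensities (gamma*I**n << 1): ln(1 + x) ~ x, so S ~ (k*gamma/2) * I**n,
# a power function with exponent n (Stevens's law).
stevens_approx = 0.5 * 0.05 * (1e-6) ** 0.3

# High intensities (gamma*I**n >> 1): S ~ (k/2) * ln(gamma * I**n),
# i.e., linear in ln(I) (Fechner's law).
fechner_approx = 0.5 * math.log(0.05 * (1e12) ** 0.3)
```

At these extreme intensities both approximations agree with the exact function to well within one percent.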
Lawrence E. Marks and Daniel Algom
2. Link's Wave Theory of Sensation

Link's (1992) theory postulates a process by which a given stimulus is sampled continuously over time in a given trial; the result of this sampling is the envelope of a time-amplitude waveform. A value sampled from the stimulus wave is compared, by subtraction, to a value sampled from a referent wave in order to create a comparative wave. The differences created through a trial cumulate over time until their sum exceeds a subject-controlled threshold, A. The latter measures the subject's resistance to respond; its reciprocal, 1/A, gives the subject's responsiveness. Another parameter, θ*, characterizes discriminability or response probability. As the difference between two stimuli (or between a stimulus and an internal reference) increases, so does θ*, and hence the probability of reaching the upper response threshold. The perception of an external stimulus originates at the body surface by the quantized action of sensory receptors or transceivers. The situation is modeled by a Poisson process,

Pr(k) = e^(-α) α^k / k!, (15)

where Pr(k) is the probability of k elements responding out of a large population, and α is the mean, indicating stimulus intensity. Signals are then transmitted as a sequence of electrical events
onto new locations for subsequent (central) processing. Significantly, the Poisson model depicts those temporal processes--the number of electrical pulses emitted during a unit of time--as easily as
it does the spatial processes at the sensory surface. Thus, the model "offers a unique method of digitally recording external events and digitally transmitting their characteristics through a chain
of Poisson processes that maintain the integrity of the original event" (Link, 1992, p. 189). Equally important, the combination of several Poisson processes is a new Poisson process that preserves
values of the original parameters that sum to produce the global output. Thus, Link showed that the overall Poisson intensity is a similarity transformation of the number of transceivers. The latter,
it is well known, mirrors stimulus intensity because the greater the stimulus intensity, the larger the number of areas (and receptors) affected. Therefore, the output of the Poisson process is a
similarity transformation of the intensity of the stimulus. Fechner's law is a consequence of Poisson comparisons. A standard stimulus with intensity Ia and a just noticeably different stimulus with
intensity Ib produce two Poisson waves. Because Poisson means are similarity transformations of the physical stimuli, the waves have amplitudes Lα (activating L independent Poisson variables at mean intensity α) and Mβ (activating M variables at mean intensity β), respectively. Their comparison generates a wave difference, θ*, where

θ* = ln[(Lα)/(Mβ)]. (16)

Because multiplying stimulus intensities by a constant value leaves θ* unchanged, Eq. (16) characterizes Weber's law. Moreover, rewriting Eq. (16) in terms of Ia and Ib, and taking Ib to be I0, the absolute threshold, gives

θ* = ln(Ia/I0). (17)
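The invariance that makes Eq. (16) a statement of Weber's law is easy to verify numerically; the amplitude values below are hypothetical.

```python
import math

def theta_star(stimulus_amplitude, referent_amplitude):
    """Wave difference of Eq. (16): theta* = ln(L*alpha / (M*beta))."""
    return math.log(stimulus_amplitude / referent_amplitude)

# Multiplying both Poisson wave amplitudes by the same constant c
# leaves theta* unchanged -- the signature of Weber's law.
c = 3.7
base = theta_star(120.0, 100.0)
scaled = theta_star(c * 120.0, c * 100.0)
```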
In wave theory, the subject senses the stimulus when the value of A is first reached. Sensation thus depends on both θ* and A; the former derives from the physical features of the stimuli and the sensory apparatus, whereas the latter is subjectively controlled. To satisfy Fechner's requirement that sensation be zero at the absolute threshold, Link suggested that sensation is equal to discriminability multiplied by the resistance to respond, A,

S = θ* · A. (18)

Combining Eqs. (17) and (18) yields Fechner's law,

S = A · ln(Ia/I0). (19)
Thus, discriminability depends on the amplitude parameters of the threshold and comparison stimulus waves. But sensation magnitude depends on both the discriminability and the subject's resistance to
respond. Like other logarithmic formulations, Link's theory predicts a linear relation between the logarithms of the stimuli equated for sensation in crossmodality matching (CMM)--a result often
obtained. However, in Link's theory, the slopes of the CMM functions represent the ratio of the respective response thresholds, A, and power-function exponents are interpreted as resistances to
respond, which can be gauged by the size of the Weber fraction. Cross-modality matches, therefore, are byproducts of the Poisson processes that generate Weber's law and Fechner's law, not of power
functions for sensation. Thus wave theory provides a theoretical underpinning to the mathematical argument, made decades earlier (e.g., MacKay, 1963; Treisman, 1964; cf. Ekman, 1964), that CMMs are
equally compatible with logarithmic and with power functions relating sensation to stimulus. Note, however, that slopes of CMM functions can often be predicted from the exponents of power functions
derived from methods such as magnitude estimations; for logarithmic functions to provide comparable predictive power, they must provide constant ratios of their slope parameter, comparable to the
parameter A in Eq. (19).

3. Fundamental Theories and Fechnerian Scaling

As different as they are, the theories offered by Norwich and Link share several features, including the ability to account for Fechner's law. First, both theories take as their point of departure quantal responses at the sensory
surface. Second, both theories treat sensation magnitude as a derived concept, constructed from hidden, nonconscious elemental processes. Third, both models also rely on measures of variance but do
not treat variance as error; instead, variance inheres in the stimulus. Fourth, both theories consider the stimulus at the microscopic level, eschewing definition of stimulus magnitude in terms of
steady-state intensity but considering global stimulus magnitude itself to be a derived concept. Fifth, both theories take comparisons as the basic mental act: Differencing or subtraction is an
explicit premise of wave theory; subtraction is implicit in entropy theory, in that absolute entropy, Eq. (10), reflects the difference between two differential entropies. At a more basic level, both
theories support the relativity of sensation and judgment. Judgments always are made relative to a referent (wave theory) or to the alternative stimuli expected by the perceiver (entropy theory). As
Garner (1974) observed, "Information is a function not of what the stimulus is, but rather of what it might have been, of its alternatives" (p. 194). Sixth, as already noted, both theories support
Fechner's law, deriving it from basic principles. Yet both formulations also leave room for Stevens's power function (Norwich applied it to "weak" stimuli, where γI^n << 1; Link applied it to the
realm called "feeling"). Finally, both theories treat Weber's law as a consequence of psychophysical processing--not as a starting point for deriving the psychophysical law.
C. Thurstonian Scaling

That variability in sensory responding might form the basis for uncovering sensation difference was first pursued in detail by Solomons (1900). Consider, for example, his rendition of Weber's law (cf. Gigerenzer & Murray, 1987). Relative variability may be expressed as a constant proportion p of stimulus intensity I (i.e., p is independent of I). Therefore, variability I·p increases linearly with I, which is Weber's law. Fechner himself speculated that relative variability might be constant, and he devised methods to gauge it. And both Helmholtz (1856
/1962) and Delboeuf (1873) considered how variability might affect measures of sense distance. But it was Thurstone (1927, 1959) who capitalized on the Fechnerian ideas of differencing and
variability to develop a general model of scaling (for recent assessments, see Dawes, 1994; Luce, 1994). Thurstone postulated an internal continuum onto which the representations of stimuli are
projected. A stimulus is identified by a "discriminal process," marking a value along the psychological continuum. Given momentary fluctuations in brain activity, repeated exposures to a given
stimulus result in a distribution of such values, the standard deviation of which is called discriminal dispersion. Because many factors contribute to the internal noise, the psychological
representations are assumed to be random variables
with normal distributions. The psychological scale value associated with a stimulus is given by the mean of the distribution of its internal values. The standard deviation of this distribution
provides the unit of measurement used to quantify distances along the hypothetical subjective continuum. When two stimuli are presented for comparative judgment, the separation between the means
expressed in terms of the respective discriminal dispersions measures their psychological distance. To calculate scale values for the stimuli, one therefore needs to know the means, the standard
deviations, and the correlations between discriminal processes. Unfortunately, under most conditions, we cannot directly determine these parameters. They must be recovered instead from the matrix of
observed choice probabilities. Scale values are determined by the following assumptions and procedures. Given two stimuli, j and k, presented for comparative judgment, each generates a discriminal
process, the difference between them being a "discriminal difference." Repeated presentation produces a distribution of these discriminal differences. Because the discriminal processes are assumed to
be Gaussian distributed, so are the discriminal differences. The mean of the distribution of the differences is equal to the difference between the means of the two discriminal processes; the
standard deviation of the difference, sj-k, is given by

sj-k = (sj^2 + sk^2 - 2 rjk sj sk)^(1/2), (20)
where sj and sk stand for the discriminal dispersions and rjk is the correlation between the momentary values of the discriminal processes. With each presentation of a given pair of stimuli, the
subject selects the stronger stimulus when the momentary discriminal difference is positive; otherwise she or he (erroneously) selects the weaker one. The respective proportions of choices can be
represented as complementary areas under a normal curve that describes the entire distribution of the discriminal differences. These areas can be converted into standard scores, z, which mark off
distances on the psychological continuum in standard deviation units. Therefore, Thurstone's law of comparative judgment is

uj - uk = zjk (sj^2 + sk^2 - 2 rjk sj sk)^(1/2), (21)
where uj and uk correspond to the means of the discriminal processes. Because of the difficulties in estimating the parameters, Thurstone's complete law has rarely been used. Thurstone outlined five
cases containing simplifying assumptions. The most useful is his Case V, where the discriminal dispersions are assumed to be equal and uncorrelated. The common discriminal dispersion then serves as
the unit of measurement, and the law simplifies to

uj - uk = √2 (s · zjk), (22)

in which the means (the scale values) are derived by averaging over the z-scores arising from the matrix of probabilities p(j,k) (cf. Baird & Noma, 1978). Using the empirical probabilities, Case V can be written as

Pjk = Φ(uj - uk), (23)
where Pjk is the probability of choosing stimulus j over stimulus k, Φ is the standard normal distribution function, and uj and uk are the scale values. Measuring physical stimuli like tones or weights by Thurstone's Case V generates results consistent with Fechner's law as long as Weber's law holds. Actually, the assumption of constant variability in Case V amounts to the Fechnerian assumption that the sensation JND, ΔS, is constant. It is, of course, an assumption, or a postulate, that equally often noticed differences are equal psychologically. Should one conceive of a Case VI, with discriminal dispersions increasing in proportion to sensation (Stevens, 1959d, 1975), one might arrive at a power function relating sensation to intensity. Unlike Fechnerian scaling, which relies on well-defined and quantified physical stimuli, Thurstonian scaling needs no physical measure. Stimuli may be tones, lights, or weights--but they may just as well be preferences for nationalities or flavors of ice cream. As Thurstone said, "psychophysical experimentation is no longer limited to those stimuli whose physical magnitudes can be objectively measured" (1959, p. 228). Even without measurable stimuli, scales that are unique up to affine transformations can result from the elaboration of a few elemental postulates. Given a minimal number of simple assumptions, equal-interval scales of sensation are routinely derived. And by adding some strong (yet reasonable) assumptions (cf. Thurstone, 1959), one may arrive at ratio-level scales as well.

1. Related Developments

Given relatively modest assumptions, Thurstone's method readily generalizes to all kinds of probabilistic data in contexts far removed from those typical of psychophysics. In particular,
Thurstone's model has been used to scale dominance probabilities in the realm of decision and choice. The general problem then becomes one of explaining the process of choice between objects that, as
a rule, lack a physical metric. In decision making, the usual terminology is one borrowed from economics, with utility--the subjective value of a commodity or money--replacing sensation magnitude as
the psychological variable of interest. Thus, when applying Thurstone's Case V to choice data, the scale values uj and uk in Eq. (22) are relabeled as utilities. Because it assumes a random
distribution of internal values for each stimulus, Thurstone's model is sometimes called a random utility model. However, alternative views of stimulus representation and the subsequent comparison
and decision processes are possible and have been pursued.
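One way to sketch the Case V computation described above, averaging the z-scores of the choice-probability matrix to recover scale values. The 3 × 3 probability matrix is hypothetical, and including the diagonal entries (each contributing z(0.5) = 0) is one common averaging convention.

```python
from statistics import NormalDist

def case_v_scale(P):
    """Thurstone Case V: each z(P[j][k]) estimates uj - uk in units of the
    common discriminal dispersion; averaging row j over all k gives uj up to
    an additive constant (the diagonal entries P[j][j] = 0.5 contribute 0)."""
    z = NormalDist().inv_cdf
    n = len(P)
    return [sum(z(p) for p in row) / n for row in P]

# Hypothetical choice proportions: P[j][k] = Pr(stimulus j judged > stimulus k).
P = [[0.50, 0.69, 0.84],
     [0.31, 0.50, 0.69],
     [0.16, 0.31, 0.50]]
u = case_v_scale(P)  # scale values on an interval scale
```

For this matrix the recovered values come out equally spaced, as the symmetric choice proportions require.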
Statistical analysis of paired-comparison data suggested by Bradley and Terry (1952) was later extended and given an axiomatic basis by Luce (1959). In this constant utility model, each alternative has a single internal representation or strength, and the variability inheres in the process of choice itself. For binary choices, the Bradley-Terry-Luce (BTL) model is usually stated as

Pjk = vj/(vj + vk), (24)

where Pjk is the probability of preference as in Eq. (23), and vj and vk are the internal values or utilities for alternatives j and k. If we define uj = log(vj), Eq. (24) can be written

Pjk = Λ(uj - uk), (25)

where Λ stands for the standard logistic distribution. Given that the logistic distribution differs only subtly from the normal, it is clear from comparing Eqs. (23) and (25) that Thurstone's model
and the BTL model are very similar. Different conceptions of the decision process notwithstanding, common computational routines are used to determine the scale values. Both the Thurstone and the BTL
models may be construed as special cases of an inclusive class called general Fechnerian scales, or GFS (cf. Baird & Noma, 1978; see Luce, 1977a, 1977b; Luce & Krumhansl, 1988; Yellott, 1971, 1977).
GFS models share Fechner's idea of treating variability as an authentic part of the scaling enterprise, and they use Fechner's assumptions of random variability and subtractive comparison in order to
derive scale values on an internal continuum. Because they require only nominal definition of stimuli, GFS models have wide applicability. GFS models handle well many choice situations but fail
conspicuously in others. The reason for the shortcomings is easily pinpointed, and, in our opinion, it may well signal the boundary condition dissociating the scaling of sensation from processes of
decision and choice between complex entities. In the BTL model and in Thurstone's Case V, the only variable affecting stimulus comparison is the difference in utility between the stimuli. This
assumption is realistic enough in the realm of psychophysical measurement, in which we scale the subjective magnitudes of physical stimuli. It also serves well to describe the pattern of choice
behavior when the choice set is fairly heterogeneous. However, data obtained in realistic choice situations often violate the predictions of these strong utility models (e.g., De Soete & Carroll,
1992; Krantz, 1967; Rumelhart & Greeno, 1971; Tversky, 1969). People tend to be influenced by the similarity structure among the choice alternatives, not merely by the set of the respective
utilities. This tendency is especially pronounced when the objects are similar, or when a heterogeneous choice set contains highly homogeneous subsets. Many real-life situations entail just such sets,
resulting in discrepancies between the
observed preferences and those predicted on the basis of strong utility models such as GFS. To handle the choices made in realistic settings, one should model the effect of similarity as well as that
of utility. The explicit inclusion of the similarity structure in the theory also means relaxing the condition of strong stochastic transitivity. Following Halff (1976) and Sjöberg (1980), a family of moderate utility models that captures the empirical influence of similarity can be written

Pjk = G([uj - uk]/Djk), (26)
where Djk quantifies the dissimilarity between the objects (cf. De Soete & Carroll, 1992). Many popular models of choice, including the set-theoretic models developed by Restle (1961; Restle &
Greeno, 1970; Rumelhart & Greeno, 1971) and Tversky (1972; Tversky & Sattath, 1979), and the multidimensional probabilistic models developed by De Soete and Carroll (1992; see references therein) can
be shown to reduce to the basic form of Eq. (26), namely, to a differencing process scaled in terms of a dissimilarity parameter. Actually, the latter measure derives directly from Thurstone's
discriminability parameter, sj-k, the standard deviation of the distribution of the discriminal differences. In fact, sj-k itself satisfies the metric axioms of an index of distance (Halff, 1976) and
may validly be interpreted as one (Sjöberg, 1980; cf. Böckenholt, 1992). This maneuver also relaxes the often unrealistic condition of strong transitivity. Empirical data are consistent with the more
permissive requirements of the moderate utility model. Many issues await resolution. The multidimensional models of De Soete and Carroll exemplify a class called generalized Thurstonian models, which
allow for multidimensional scaling of the choice objects (see also Ramsay, 1969). They contrast with the discrete (e.g., Tversky & Sattath's 1979, treelike) representations also aimed at elucidating
human choice behavior. Do we have grounds to prefer one kind of representation over the other? Pruzansky, Tversky, and Carroll (1982) suggested that spatial models may apply to "perceptual" stimuli,
whereas discrete models may describe better the similarity structure of "conceptual" stimuli. In several of the models mentioned (e.g., those of Restle and Tversky), a stimulus is associated with
more than one number. This feature is easily handled by certain multivariate models, but it poses a problem for the axiomatic approach to psychophysical measurement (cf. Luce & Krumhansl, 1988).
Finally, and most important for the present concerns, many of the models apply primarily in the broader realm of decision and choice. It is not entirely clear how the same models apply in measuring
sensation (although, as Melara, 1992, showed, these multidimensional methods are directly traceable to Fechnerian psychophysics). What features are primarily "cognitive" as opposed to "sensory"? Earlier, we hinted at a possible distinction, but a more sustained attack is needed to disentangle the principles that apply to scaling sensation and those that apply to choice and decision. Link's (1992) analysis of sensation and feeling provides one avenue to distinguish the measurement of stimuli with and without a physical metric.

2. Violations of Transitivity

Thurstone's model
requires that dominance data show transitivity: If, on most occasions, A is preferred to B, and if B is preferred to C, then A is preferred to C. But, as mentioned earlier, many studies show
systematic failures of transitivity. A theory developed by Coombs (1950, 1964) explains failures of transitivity without having to resort to multidimensional representations, either spatial or
discrete (note, incidentally, that Coombs's theory has been extended to the multidimensional case; see Bennett & Hays, 1960; Carroll, 1980). Following Thurstone, Coombs's model assumes that the
objects vary along a single psychological dimension. Failures of transitivity occur because the subject prefers a certain value along the psychological continuum--an ideal point--whose location need
not coincide with the scale extremes (i.e., the ideal point need not inhere in the representation of the strongest or weakest stimulus). In paired comparisons, the subject chooses the stimulus that
is closer to her or his ideal value. The scale is folded, so to speak, around each subject's ideal point. Subjects are represented by their unique ideal points, which lie on a common continuum with
the stimuli. Representations of the stimuli are said to be invariant across subjects. Given this assumption and the joint distribution of subjects and stimuli, values on the psychological dimension
are recovered by using the preference data of the various subjects to unfold it. Coombs's unfolding model has informed psychophysical theory and scaling, although applications have been relatively
sparse.

3. Theory of Signal Detectability

Although the theory of signal detectability (TSD), imported by Tanner and Swets (1954) from engineering and statistical-decision theory to psychology, treats
issues of detection and discrimination, it bears a close affinity to Thurstonian scaling. According to TSD, a subject faced with a task of detection (or discrimination) must decide whether a noise
background contains a signal (or whether two noisy signals differ). The decision is informed by the rules of statistical hypothesis testing to the extent that "the mind makes the decision like a
Neyman and Pearson statistician" (Gigerenzer & Murray, 1987, p. 45). However, neither this interpretation nor the accepted nomenclature is consequential. Formally, TSD is equivalent to Thurstone's
random variable model. Like Thurstone's model, TSD assumes that, over
repetitions, both the signal + noise and the noise (or the two noisy signals) generate overlapping Gaussian distributions. The measure known as d' represents the distance between the means of the
noise and the signal + noise distributions in units of standard deviation. TSD also introduces a decisional variable that determines the subject's response criterion; the theory allows the subject to
move the criterion in response to various nonsensory features of the experiment. Indeed, the ability to set apart sensory and cognitive factors is a signature of TSD. TSD then is Thurstone's random
variable model with added emphasis on the process of decision. Actually, Thurstone too had a decision rule: "Respond that stimulus A is greater than B if the discriminal difference is positive,
otherwise respond that B is greater than A." Why Thurstone did not develop TSD is a fascinating question discussed by Luce (1977b) and more recently by Gigerenzer and Murray (1987). Whatever the
reason, d' can be construed as a Thurstonian unit of sensory distance. Several authors (e.g., Braida & Durlach, 1972; Durlach & Braida, 1969; Luce, Green, & Weber, 1976; see also Macmillan &
Creelman, 1991) have used a total d' or a summed d' (between adjacent pairs of stimuli) to measure supraliminal sensitivity. These measures usually derive from identification or classification data,
showing the robustness of d'. Braida and Durlach have applied Thurstone's model by estimating the parameters of the discriminal processes from such data. For a fixed range of stimuli, assuming normal distributions for the discriminal processes, results are consistent with Fechner's law. Importantly, from a theoretical vantage, scaling by TSD, like scaling by other random-variable models,
"reflects the belief that differences between sensations can be detected, but that their absolute magnitudes are less well apprehended" (Luce & Krumhansl, 1988, p. 39). According to Durlach and
Braida (1969), classification of stimuli depends on both sensory and context resolution. Both factors contribute to the variance of the internal distributions. Context variance is a function of
stimulus range: the greater the range, the greater the variance due to the added memory load. Memory variance thus dominates at larger ranges, where the discrepancy between classification and
discrimination is notable (see also Gravetter & Lockhead, 1973; Marley & Cook, 1984). Luce and Green (1974; see also Luce, Green, & Weber, 1976) offered a different explanation. In their theory, the
representation of the stimulus is unaffected by the range of stimuli. Performance is hampered instead by limitations on central information processing. The authors postulate an "attention band," 10
to 20 decibels (dB) wide for sound intensity, that roves over the stimulus continuum. Only stimuli that happen to fall within the band are fully processed. Stimuli falling outside the band are
inadequately sampled, increasing the variance of their neural samplings. The greater the stimulus range, the
smaller the probability that a given stimulus will fall within the roving band and consequently the greater neural variance and poorer identification. Any adequate theory must account for effects of
stimulus range, as well as robust effects of stimulus sequence and stimulus position, which also characterize identification and estimation (for which Luce & Green, 1978, and Treisman, 1984, provided
alternative models; and Lacouture & Marley, 1991, sought to develop a model for choice and response times within the framework of a connectionist architecture). For a review of the attention-band
model, see Luce, Baird, Green, and Smith (1980).
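The d' measure, and the summed d' over adjacent stimulus pairs used as a measure of supraliminal sensitivity, can be sketched as follows; the hit and false-alarm rates below are hypothetical.

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """d' = z(H) - z(F): the distance between the means of the signal+noise
    and the noise distributions, in standard-deviation units."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Summed d' between adjacent stimulus pairs (hypothetical discrimination
# data for three adjacent pairs along a stimulus continuum).
pairs = [(0.69, 0.31), (0.84, 0.50), (0.93, 0.69)]
total_d_prime = sum(d_prime(h, f) for h, f in pairs)
```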
D. Discrimination Scales, Partition Scales, and Rating Scales

1. Discrimination Scales

Discrimination methods seek to erect scales of sensation from a subject's discriminative or comparative
responses (cf. Gescheider, 1997). So these methods espouse Fechner's dictum of scaling by differencing. A straightforward approach within this tradition is to obtain stimulus JNDs experimentally (not
calculate them by Weber's law), then cumulate them as a function of stimulus intensity, treating each JND as a constant sensory unit. The dol scale for pain (Hardy, Wolff, & Goodell, 1947) provides one well-known example, Troland's (1930) scale of "brilliance" or visual brightness another. Thurstonian scales and summated-d' scales are variations on this approach. The latter scales make use of
more information available from the comparative judgments than do summated-JND scales based on a preselected, arbitrary, and presumably constant cutoff level of discrimination (Marks, 1974b).
Nevertheless, the famous dictum, "equally often noticed differences are equal," that epitomizes Thurstone's Case V parallels the Fechnerian assumption that JNDs have equal subjective magnitudes. The
validity of summated-JND scales rests on the psychological constancy of JNDs. One approach is to make the subjective equality of JNDs a postulate rather than a hypothesis (Luce & Galanter, 1963;
see Falmagne, 1971, 1974, 1985). Attempts to test the consistency of summated-JND scales by cross-modal and intramodal matches have met with mixed success (see Marks, 1974b, and Krueger, 1989, for
summaries and references). However, those tests assume the commensurability of qualitatively different sensory events. Often, integrated-JND scales are logarithmically related to stimulus intensity
and approximately logarithmically related to corresponding scales generated by procedures such as magnitude estimation. This nonlinear relation has often been taken to reflect negatively on the
validity of JND scales (e.g., Stevens, 1960, 1961), but it can be just as well interpreted to challenge the validity of Stevens's magnitude scales (e.g., Garner, 1958).
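The cumulation of JNDs described above can be sketched as follows. The Weber fraction of 0.1 is illustrative; with Weber's law assumed exact, the JND count grows logarithmically with stimulus intensity, as Fechner's law requires.

```python
def jnd_count(intensity, threshold, weber_fraction):
    """Number of JND steps from the absolute threshold up to the given
    intensity, treating each JND as one constant sensory unit. With Weber's
    law exact, each step multiplies the stimulus level by 1 + weber_fraction."""
    steps = 0
    level = threshold
    while level * (1.0 + weber_fraction) <= intensity:
        level *= 1.0 + weber_fraction
        steps += 1
    return steps

# Equal multiplicative increases in intensity yield equal increments in the
# JND count: the summated-JND scale is logarithmic in stimulus intensity.
decade_counts = [jnd_count(10.0 ** d, 1.0, 0.1) for d in (1, 2, 3)]
```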
2. Partition Scales

In partition scaling, subjects are asked to divide the psychological continuum into equal intervals. In one of the earliest examples, Plateau (1872) asked eight artists to paint a
gray that appeared midway between a white and a black. For all its seeming simplicity, however, the method of bisection is affected by many contextual factors, for example, the tendency, termed
hysteresis (Stevens, 1957), for bisection points to fall higher when stimuli are presented in ascending order rather than descending order. Although the method of bisection has also served to test
the psychophysical laws developed by Fechner and Stevens, these efforts have produced no conclusive evidence in favor of either (see Marks, 1974b). Later, Anderson and his collaborators (e.g.,
Anderson, 1976, 1977; Carterette & Anderson, 1979; Weiss, 1975) tested bisection by functional measurement. The bisection model was supported in the continua of grayness and loudness, but not in
length. In a modification of the method, called equisection, subjects are asked to set values of several stimuli so as to mark off equidistant sensations on the judged continuum. Two procedures may
be used to extract those equalappearing intervals (see Gescheider, 1997). In the simultaneous version of equisection, the subject is presented with the two end points and asked to set stimuli to
create a series of equal sensory intervals. In the progressive version, the subject repeatedly bisects sense distances until the desired number of intervals is reached. Commutativity and
transitivity--the need for which is particularly apparent in the sequential procedure--are not always satisfied by such bisections (e.g., Gage, 1934a, 1934b; Pfanzagl, 1959). Stevens and Volkmann's (1940) mel scale for pitch (see also Torgerson, 1958) and Garner's (1954) lambda scale for loudness are examples of equisection scales of considerable importance in psychophysics. Garner's approach is notable for using converging operations: Subjects were asked to set both equal intervals (equisection) and ratios (fractionation); assuming that the equisections constitute psychologically equal differences and that fractionations define equal but unknown ratios, the two sets of data were combined into a single scale, which Garner called lambda. The lambda scale is nearly a logarithmic function of the stimulus and can be characterized as about a 0.3 power of sound pressure (smaller than the values usually obtained by magnitude estimation, described later).

3. Thurstonian Category Scales

In category scaling or rating, the subject's task is to assign categories (often integer numbers or adjectives) to stimuli so that succeeding categories mark off constant steps in sensation.
The number of categories is usually smaller than the number of stimuli, often between 3 and 20. The variability of the
categorical assignments for a given stimulus can be treated in a manner analogous to the way Thurstone treated comparative judgments of pairs of stimuli. In this approach, both category widths and
scale values are estimated from the ratings and their variability, the analysis augmented by certain simplifying assumptions (e.g., that the distribution of judgments for each stimulus is normal or
that the category boundaries remain constant from moment to moment). Several authors have described variants of this method (e.g., Attneave, 1949; Garner & Hake, 1951; Saffir, 1937; see also Adams &
Messick, 1958). Thurstone himself, in an unpublished procedure, called it the "method of successive intervals." Guilford (1954) described the computational procedures under the rubric of the "method
of successive categories." The most general application, called the law of categorical judgment (Torgerson, 1954), betrays its close affinity to Thurstonian analysis. Much of the more recent work by
Durlach and Braida (e.g., 1969; see also Braida, Lim, Berliner, Durlach, Rabinowitz, & Purks, 1984) is informed by similar ideas, and may be construed as extending the analyses entailed by the method
of successive categories. Successive-interval scales, like summated-JND scales, can be logarithmic functions of stimulus intensity and nonlinearly related to "direct" magnitude scales (Galanter &
Messick, 1961; see also Indow, 1966). Garner (1952) derived an equal-discriminability scale for loudness from categorical judgments that was an approximately logarithmic function of sound pressure
and that was linearly related to a scale of summated JNDs obtained by Riesz (1928). 4. Mean Rating Scales A popular form of category scaling simply takes as scale values the averages (means or
medians) of the ratings. Implicit in this procedure is the view that each consecutive category reflects a constant unit change along the psychological continuum. The method of successive categories,
it should be recalled, allows unequal category widths to be estimated from the respective variabilities. In typical category scaling, a psychophysical function is produced by plotting the mean (or
median) rating against stimulus intensity. Category ratings, C, are often nearly linear functions of log stimulus intensity, as Fechner's law dictates, though more often, curvilinearly related
(positively accelerated), in which case power functions may provide better descriptions:

C = cI^α + c'    (27)

In Eq. (27), α is the exponent of the category scale, and c and c' are constants. Figure 3 gives some examples of rating scales obtained when subjects made categorical judgments of brightness under
various experimental
FIGURE 3 Examples of mean category ratings of brightness, each plotted as a function of log luminance, for three different stimulus ranges and three different numbers of categories (4, 20, and 100). Based on data of Marks (1968).
conditions, the experimenter systematically varying both the range of luminance levels and the number of categories on the rating scale (Marks, 1968). When the luminance range is small, the ratings
are nearly linear functions of log luminance, consistent with Fechner's law; but with larger stimulus ranges, the functions are clearly upwardly concave. Given that magnitude estimates often follow a
power function (as described later), category scales relate to corresponding magnitude-estimation scales either as approximately logarithmic functions (e.g., Baird, 1970a, 1970b; Eisler, 1963;
Torgerson, 1961) or as power functions. When a power function is fitted to the relation between mean ratings and magnitude estimates, the exponent may be smaller than unity (e.g., Marks, 1968;
Stevens, 1971; Stevens & Galanter, 1957; see also Ward, 1972), or it may be equal to unity if c' in Eq. (27) is taken as zero (Foley, Cross, Foley, & Reeder, 1983; Gibson & Tomko, 1972). A good test
of the relation between category ratings and magnitude estimates comes from a straightforward linear, graphic plot of one versus the other. It is difficult to make firm generalizations about these
relations, however, because both mean ratings and magnitude estimates are sensitive to stimulus context (e.g., Foley et al., 1983; Marks, 1968; Parducci, 1965), a matter considered later. Stevens
(e.g., 1971, 1975) argued that category ratings do not generally provide adequate scales of sensation because, at best, they measure relative stimulus discriminability or variability, not sensation
magnitude, and because, as just noted, rating scales are particularly susceptible to the influence of such contextual variables as range and distribution of stimuli, so they may not even measure
resolution well. These arguments are not compelling. First, it may be possible to use iterative methods to minimize contextual distortions of scales (e.g., Pollack, 1965a, 1965b). Second, Anderson
(e.g., 1981, 1982) has claimed that appropriately determined rating scales do provide valid measures, using as a criterion the equivalence of scales obtained in different tasks. As these authors and
others (see Curtis, 1970) also noted, average judgment functions inferred from category scaling (see section V.A) are often nearly linear, more so than those inferred from magnitude estimates. In
general, when tasks require subjects to integrate multidimensional stimulus information, results suggest that ratings more closely reflect the underlying sensations than do magnitude estimates. In an
alternative formulation, suggested by Marks (1979b), both magnitude scales and rating scales are power functions of the stimulus but are governed by different exponents. In this theory, each scale
measures its own psychological property, the former measuring sensation magnitude, the latter sensation difference or dissimilarity (see also Algom & Marks, 1984; Parker & Schneider, 1980; Popper,
Parker, & Galanter, 1986; Schneider, 1980, 1988). If the number of categories equals the number of stimuli, then the subject's task reduces to one of stimulus coding, or identification. The ability
Lawrence E. Marks and Daniel Algom
of a subject to classify stimuli often is expressed by the statistical measure of transmitted information, developed within the framework of information theory (Shannon, 1948). The sensitivity measure of
TSD, d', provides another index of performance. Equation (9) defines the information or entropy of a unidimensional array of stimuli. Transmitted information is defined as the difference between the
stimulus entropy and stimulus equivocation, where equivocation is a function of errors in identification (for details of the techniques for calculating the information transmitted and other measures
of information, see Attneave, 1959, or Garner, 1962). Stimulus entropy is governed by the number of stimuli (usually equal to the number of categories). When the number of stimuli is small, say 4 or
6, the subject's task becomes one of stimulus coding, and the subject may identify the stimuli infallibly. In the language of information theory, the equivocation would then be zero, transmitted
information would equal stimulus entropy, and we can talk of perfect communication. As the number of stimuli (and categories) increases, however, errors of identification mount considerably, and
transmitted information falls short of stimulus entropy. Miller's (1956) celebrated paper puts the number of perfectly identifiable stimuli, varying in just one dimension, at 7 ± 2, a limit
called channel capacity. Although this limit seems extremely low, recognize that it applies to stimuli varying along a single dimension. With multidimensional stimuli like those experienced in the
everyday environment, channel capacity can be substantially greater (see Garner, 1962, for review and references). Miller suggests that channel capacity is roughly constant for unidimensionally
varying stimuli regardless of modality. Baird (e.g., Baird & Noma, 1978) has challenged Miller's conclusion and argues for a general negative relationship between channel capacity and the Weber
fraction: the greater the channel capacity, the smaller the Weber fraction. In Baird's scheme, both measures are indices of relative sensitivity or resolution.
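The information-theoretic quantities just described can be computed directly from a stimulus-by-response confusion matrix. Below is a minimal sketch using only the definitions given in the text: transmitted information equals stimulus entropy minus equivocation, computed here in the equivalent form H(S) + H(R) − H(S,R). The matrices are illustrative, not data from any cited study.

```python
import math

def entropy(probs):
    """Shannon entropy, in bits, of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def transmitted_information(confusion):
    """Transmitted information (bits) from a stimulus-by-response
    count matrix, via T = H(S) + H(R) - H(S, R)."""
    total = sum(sum(row) for row in confusion)
    joint = [c / total for row in confusion for c in row]
    p_s = [sum(row) / total for row in confusion]          # stimulus marginals
    p_r = [sum(col) / total for col in zip(*confusion)]    # response marginals
    return entropy(p_s) + entropy(p_r) - entropy(joint)

# Perfect identification of 4 equiprobable stimuli: equivocation is zero,
# so T equals the stimulus entropy, H(S) = log2(4) = 2 bits.
perfect = [[25, 0, 0, 0], [0, 25, 0, 0], [0, 0, 25, 0], [0, 0, 0, 25]]
print(transmitted_information(perfect))  # 2.0

# With identification errors, T falls short of the 2-bit stimulus entropy.
noisy = [[20, 5, 0, 0], [5, 15, 5, 0], [0, 5, 15, 5], [0, 0, 5, 20]]
print(transmitted_information(noisy))
```

As the text notes, increasing the number of rows (stimuli) in such a matrix raises the attainable ceiling H(S), but beyond roughly 7 ± 2 unidimensional stimuli the added errors keep the computed T near the channel capacity.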
E. Estimation of Sensory Differences Spacing stimuli in unit psychological differences or intervals may constitute the most natural form of measurement (Zwislocki, 1991). The procedure need not
involve numbers: Subjects may adjust the distance between stimuli to match a standard. Or subjects may give numerical estimates of sensory intervals or differences. The use of such direct estimates
violates, of course, the spirit of Fechnerian psychophysics. The violation seems less serious, however, when the judgments are construed to give only rank-order information; thus greater or smaller
numerical responses can be taken as mere indicants of larger or smaller sensory intervals. Nonmetric data often contain sufficient information to constrain metric (interval-scale) properties (cf.
Kruskal, 1964; Shepard, 1962a, 1962b, 1966). If there is an interval-scale
representation of stimuli, ordered I1, I2, I3, . . . , In, then differences among pairs must show weak transitive ordering, that is, if (I1,I2) ≥ (I2,I3) and (I2,I3) ≥ (I3,I4), then (I1,I2) ≥ (I3,I4),
and monotonicity: if (I1,I2) ≥ (I4,I5) and (I2,I3) ≥ (I5,I6), then (I1,I3) ≥ (I4,I6)
(see Krantz et al., 1971). Given a sufficiently dense array of stimuli (~10), a complete rank ordering of differences suffices to retrieve the representations, unique to affine (interval-scale)
transformations (Shepard, 1966), that is, transformations that permit addition of a constant or multiplication by a positive constant. The same principle underlies methods to retrieve interval-scale
representations in more than one dimension, but this extension falls within the domain of multidimensional scaling (see Chapter 3, "Multidimensional Scaling," by Carroll & Arabie). Importantly,
numerical estimates can be taken to provide a kind of matching: All stimuli (here, pairs of them) assigned the same number are treated as equal on the judged attribute (sensory interval, difference,
or dissimilarity). Thus Beck and Shaw (1967) reported that subjects, when asked to estimate loudness intervals, gave equivalent responses to various pairs of tones defined by constant differences on
Garner's (1954) lambda scale. Basing measurement on differencing or dissimilarity is, of course, compatible with the Fechnerian tradition. Since Beck and Shaw (1967), several papers have reported
loudness scales derived from estimates of loudness intervals (e.g., Algom & Marks, 1984; Dawson, 1971; Parker & Schneider, 1974; Popper et al., 1986; Schneider, Parker, Valenti, Farrell, & Kanow,
1978). In many of these studies, only the rank-order of the differences is used to uncover the scale, as described earlier. Because only the ordinal properties of the differences are of concern,
other studies omit numerical estimates entirely, for instance, by obtaining direct comparisons of loudness differences (Schneider, 1980; Schneider, Parker, & Stein, 1974). Results using all of these
paradigms agree: The scale of loudness difference is a power function with a relatively small exponent, roughly 0.3 when calculated as a function of sound pressure. Consistent with Fechner's
conjecture, this scale of loudness difference appears to correspond well with a scale of JNDs for sound intensity, despite (better, because of) the near miss to Weber's law (Parker & Schneider, 1980;
Schneider & Parker, 1987). Moreover, there is evidence of substantial variation among the functions obtained from individuals (Schneider, 1980; see also Schneider, 1988). Nonmetric analysis also
produced relatively small exponents (less than unity) in functions derived from quantitative judgments of differences in line length (Parker, Schneider, & Kanow, 1975). In general, results obtained
by direct comparison or judgments of
differences conflict with results obtained by such methods as magnitude estimation, as described later. We return to this issue after considering the scaling of magnitudes.
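The weak-transitivity and monotonicity conditions for interval-scale representations can be checked mechanically against any ordering of judged sense distances. The sketch below is illustrative (the helper name is ours, not from the literature). Note that differences computed from an actual real-valued scale satisfy monotonicity automatically, because such differences are additive; the axiom has empirical bite only for raw orderings of judged differences.

```python
from itertools import combinations

def monotonicity_violations(order):
    """order maps each stimulus pair (i, j), i < j, to the judged size
    (e.g., a rank) of the sense distance between stimuli i and j.
    Returns the pairs of triples violating weak monotonicity:
    (I1,I2) >= (I4,I5) and (I2,I3) >= (I5,I6) imply (I1,I3) >= (I4,I6)."""
    idx = sorted({i for pair in order for i in pair})
    triples = list(combinations(idx, 3))
    violations = []
    for (a, b, c) in triples:
        for (d, e, f) in triples:
            if (order[(a, b)] >= order[(d, e)]
                    and order[(b, c)] >= order[(e, f)]
                    and order[(a, c)] < order[(d, f)]):
                violations.append(((a, b, c), (d, e, f)))
    return violations

# Differences induced by a candidate interval-scale representation (here,
# illustratively, a 0.3 power of sound pressure) never violate the axiom,
# since then d(1,3) = d(1,2) + d(2,3).
scale = {i: (10 ** i) ** 0.3 for i in range(5)}
order = {(i, j): scale[j] - scale[i] for i, j in combinations(range(5), 2)}
print(monotonicity_violations(order))  # []
```

An empirically obtained rank order that fails this check admits no interval-scale representation, which is what the nonmetric analyses cited above test.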
F. Response Times for Comparative Judgment In a typical experiment, subjects are shown a pair of stimuli, such as two lines or two circles; then the subjects must decide, while timed, which stimulus
is larger or smaller. The three central findings in this paradigm are termed the distance effect, the semantic congruity effect, and the serial position effect. The distance effect refers to the
functional dependence of response time (RT) on stimulus difference: the larger the difference between the two items being compared, the shorter the RT. The contingency was documented by Cattell as
early as 1902. The reciprocal relation between RT and stimulus difference (for even perfectly discriminable stimuli) remains a cornerstone of research and theory into response times (e.g., Curtis,
Paulos, & Rule, 1973; Link, 1975; Link & Heath, 1975; Welford, 1960). The semantic congruity effect refers to an interaction between the direction of the comparison, dictated by the instructions, and
the size of the compared items on the relevant continuum. Two relatively large items are compared faster in terms of which is larger, but two relatively small items are compared faster in terms of
which is smaller (e.g., Banks, Clark, & Lucy, 1975). Finally, subjects respond faster to stimuli located near the ends of the stimulus series than to stimuli from the middle of the range--the
serial position effect or end effect. Link's (1992) wave theory, like his earlier relative judgment theory, seeks a coherent account of the various effects. Response times are affected by
discriminability (hence the distance effect), but are also highly sensitive to subjective control. Bias, due to instructions and other features of the experimental context, and resistance to respond
are two variables under the direct control of the subject. They explain the congruity effect and end effects. Applying the principle of scale convergence, Birnbaum and Jou (1990) recently proposed a
theory, consistent with a random walk model, that assumes that the same scale values underlie comparative RTs and direct estimation of intervals. However, it is not clear how the theories of Link and
of Birnbaum and Jou treat differences between comparative judgments for perceptual and symbolic stimuli (e.g., Algom & Pansky, 1993; Banks, 1977; Marschark & Paivio, 1981; see also, Petrusic, 1992).
IV. SCALING BY MAGNITUDE Although the scaling of magnitudes had fitful starts in the late 19th century--for example, in Merkel's (1888) Methode der doppelten Reize and
Münsterberg's (1890) early attempt at cross-modality matching--it gained impetus in the 1930s, stimulated by acoustic engineers who were concerned with the measurement of loudness. Given Fechner's
logarithmic law, the loudness of an auditory signal should be proportional to the number of decibels it lies above its absolute threshold; yet there is some sense in which the decibel scale--in
common use by the early decades of the 20th century--fails to capture adequately the phenomenal experience of loudness. Consequently, the 1930s saw a spate of studies on scaling loudness (e.g.,
Geiger & Firestone, 1933; Ham & Parkinson, 1932; Rschevkin & Rabinovich, 1936). The beginning of this endeavor was a seminal paper by Richardson and Ross (1930), who may be considered the originators
of the method of magnitude estimation. In this study, subjects were presented a standard tone, to be represented by the numeric response "1," and were asked to give other numbers, in proportion, to
various test tones differing from the standard in intensity and frequency. Richardson and Ross fitted a power function to the results: The judgments of loudness were proportional to the voltage
across the headphones (essentially, to the resulting sound pressure) raised to the 0.44 power or, equivalently, proportional to acoustic energy flow raised to the 0.22 power. Other studies cited
previously, using methods of ratio setting, gave results that more or less agreed. At about the same time, Fletcher and Munson (1933) approached the quantification of loudness magnitudes from a
different tack. They began with the assumption that signals processed in separate auditory channels (that is, in channels with distinct sets of receptors and peripheral neural pathways, like those in
the two ears) should combine in a simple, linearly additive manner. If so, Fletcher and Munson reasoned, then equally loud signals presented through two such channels (e.g., binaural stimuli) should
produce a sensation twice as great as that produced through either channel alone (monaural stimuli). From this conjecture, plus measurements of loudness matches between monaural and binaural signals
and between single and multicomponent tones, Fletcher and Munson constructed a loudness scale for pure tones that can be very closely approximated by a 0.6 power of sound pressure (0.3 power of
energy) at relative high signal levels (>40 dB sound pressure level, or SPL) and by the square of pressure (1.0 power of energy) at low signal levels. The exponent of 0.6 follows directly from the
empirical finding that combining loudness over two channels (e.g., across the two ears), and thus doubling loudness, is equivalent to increasing the intensity in either one channel alone by 10
dB--because a 0.6-power function means that loudness doubles with every 10 dB increase in stimulation. Figure 4 shows Fletcher and Munson's loudness scale, plotted on a logarithmic axis against SPL
in dB. At high SPLs, the function is nearly linear, consistent with a power function.
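The arithmetic linking the 0.6 exponent to loudness doubling per 10 dB is easy to verify: a 10-dB step multiplies sound pressure by 10^(10/20), and raising that factor to the 0.6 power gives almost exactly 2. A minimal sketch (the `sones` helper anticipates the sone scale described below in the text, and applies only above about 40 dB):

```python
# A 10-dB step multiplies sound pressure by 10**(10/20).
pressure_ratio = 10 ** (10 / 20)          # ~3.162
loudness_ratio = pressure_ratio ** 0.6    # 10**0.3 ~ 2: loudness doubles
print(loudness_ratio)

def sones(db_spl):
    """Loudness in sones on Stevens's sone scale (valid above ~40 dB):
    1 sone at 40 dB, doubling with every 10-dB increase."""
    return 2 ** ((db_spl - 40) / 10)

print(sones(40), sones(50), sones(60))  # 1.0 2.0 4.0
```

The same calculation run in reverse explains why Fletcher and Munson's binaural-summation finding (doubling loudness is worth 10 dB) implies the 0.6 exponent for sound pressure.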
FIGURE 4 The loudness scale derived by Fletcher and Munson (1933), plotted against decibels sound pressure level.
A. Magnitude Estimation In the 1950s and 1960s, S. S. Stevens (e.g., 1955, 1957, 1960) and others (e.g., Ekman, Eisler, & Künnapas, 1960; Engen & McBurney, 1964; Hellman & Zwislocki, 1961; J. C.
Stevens, 1957) amassed an enormous array of empirical evidence showing how people behave when asked to assign numbers to the magnitudes of their sensory experiences--the method known as magnitude
estimation (and, to some extent, by the inverse procedure of magnitude production). Following the work of the 1930s, much of this later research went into formulating a prototypical scale of
loudness, which, Stevens (1955, 1956) proposed, grows as a power function of sound pressure. In particular, Stevens offered a modern version of the "sone scale": A 1000-Hz tone heard binaurally at 40
dB has unit loudness (1 sone); above 40 dB, loudness doubles with every 10-dB increase in the acoustic signal. In fact, the International Organization for Standardization has taken the sone scale to
be the measure of loudness. With many perceptual dimensions, and under many stimulus conditions, the relation between these numeric responses R and various measures of stimulus intensity I could be
reasonably well fitted by power functions, with exponent β and constant k:

R = kI^β,
leading Stevens (1957, 1975) to propose such a function as the psychophysical law, a law to replace Fechner's logarithmic formulation. This proposal
was buttressed in various publications by tables and graphs displaying the power functions obtained in various modalities (vision, hearing, touch, taste, and smell) and for various dimensions
(brightness, length, and area in vision; vibration intensity, warmth, and cold in somesthesis). According to Stevens, exponents can take on values that vary from much smaller than unity (e.g., 0.33
for the brightness of a 1-s flash of light delivered to a dark-adapted eye) through near unity (e.g., 1.0 for perceived length) to much greater than unity (e.g., 3.5 for the perceived intensity of
alternating electric current delivered to the fingers). Rosenblith (1959) and Teghtsoonian (1971) suggested that intermodal variations in exponent may reflect the ways that different sensorineural
systems map different stimulus ranges into a constant range of perceptual magnitudes: "The various receptor systems can be regarded as performing the necessary expansions or compressions required to
map the widely varying dynamic ranges into this constant range of subjective magnitudes" (Teghtsoonian, 1971, p. 74). Teghtsoonian (1971) also sought to relate the parameters of these power functions
to other psychophysical measures. Thus, in modifying Fechner's conjecture, and reappropriating Brentano's in its stead, Teghtsoonian inferred that JNDs correspond not to constant units in sensation
magnitude, ΔS, but to constant relative changes in sensation, ΔS/S. Moreover, claimed Teghtsoonian, ΔS/S is more or less uniform across sensory modalities (roughly 3%), making the exponents of power
functions directly related to the size of the Weber ratio, ΔI/I. Like Fechner's conjecture, Teghtsoonian's too is vulnerable to evidence, reviewed earlier, indicating that equivalent changes in
sensation magnitude (which may be defined as either ΔS or ΔS/S) do not always mark off equal numbers of JNDs. By implication, the experimental findings of Stevens and his colleagues, and occasionally
Stevens's own words, have led to an erroneous, and indeed misleading, view that every modality, or at least every sensory or perceptual dimension in a given modality, has its own characteristic
exponent. Even in his last, posthumously published work, Stevens would still write, for example, that "the value of the exponent . . . serves as a kind of signature that may differ from one
sensory continuum to another. As a matter of fact, one of the important features of a sensory continuum lies in the value of its exponent" (1975, p. 13). But as Stevens well knew, the notion that
each continuum has a single "value of the exponent" is a myth of scientific rhetoric. If anything is clear from the plethora of psychophysical studies reported over nearly 40 years, it is that
magnitude-estimation functions--and especially the exponents of these functions--depend both systematically and sometimes unsystematically on a variety of factors, including the exact methodology used
(for instance, the presence or absence of a standard stimulus), the conditions of stimulation (for instance, the duration of flashes of light), the subjects (individual differences abound), and the
choice of stimulus
context (for instance, the range of stimulus levels) (see Baird, 1997; Marks, 1974a, 1974b; Lockhead, 1992). A brief review of these factors (save the last, which we treat separately at the end of
the chapter) is illuminating, for it tells us a good deal about human behavior in the framework of scaling tasks: not just about sensory and perceptual processes, but also about mechanisms of
decision and judgment.
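Power functions of the form R = kI^β are conventionally fitted by linear regression in log-log coordinates, since log R = log k + β log I. A minimal sketch of such a fit (pure-Python least squares; the data are synthetic, not from any cited study):

```python
import math

def fit_power_function(intensities, responses):
    """Least-squares fit of R = k * I**beta via linear regression on
    log R = log k + beta * log I.  Returns (k, beta)."""
    xs = [math.log(i) for i in intensities]
    ys = [math.log(r) for r in responses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    k = math.exp(my - beta * mx)
    return k, beta

# Noise-free data generated from R = 2 * I**0.6 recover k and the exponent.
I = [1, 2, 5, 10, 20, 50, 100]
R = [2 * i ** 0.6 for i in I]
k, beta = fit_power_function(I, R)
print(round(k, 3), round(beta, 3))  # 2.0 0.6
```

With real magnitude estimates the fitted β varies with method, subject, and context, as the surrounding discussion emphasizes; the regression itself only summarizes the log-log slope.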
B. Methods of Magnitude Scaling 1. Magnitude Estimation and Magnitude Production First, there are methodological matters--the particular task set before the subject. Many of the studies of loudness in
the 1930s used variants of the method of fractionation, where the subjects were presented a standard stimulus of fixed intensity and asked to adjust the intensity of a test stimulus to make the
sensation some fraction (typically, one-half) that of the standard. Or one could ask the subjects to make the test stimulus appear to be a multiple of (say, double) the standard. As Stevens (1955)
noted, halving and doubling give slightly different outcomes (analogous, perhaps, to the order effects seen in bisection). In the 1950s and 1960s, studies using magnitude estimation were often
complemented by the inverse procedure, magnitude production, where subjects were presented numbers and asked to set stimuli to the appropriate levels (e.g., Meiselman, Bose, & Nykvist, 1972; Reynolds
& Stevens, 1960; Stevens & Greenbaum, 1966). Again, the outcomes systematically differ. In general, psychophysical functions obtained with magnitude estimation have shallower slopes (when plotted in
log-log coordinates) than functions obtained with magnitude production (Figure 5 gives examples of scaling functions for loudness of noise, measured in two individual subjects by Stevens & Greenbaum,
1966). When fitted by power functions, exponents are usually smaller in estimation (unless the stimulus range is small; see Teghtsoonian & Teghtsoonian, 1978). Stevens and Greenbaum (1966) noted that
this "regression effect," as they called it, exemplifies a general principle: that subjects tend to constrict the range of whatever response variable is under their control. Thus, relatively
speaking, subjects constrict the range of numbers in magnitude estimation and constrict the range of stimuli in magnitude production (but see also Kowal, 1993). Although Stevens and Greenbaum
suggested that the "best" estimate of an exponent falls between the values obtained with the two procedures, there is no a priori reason why this should be so. 2. Role of Procedure Results can vary
even when a single procedure, such as magnitude estimation or magnitude production, is used. Although some of these variations could be treated under the heading of "contextual effects," discussed later,
FIGURE 5 Examples of the "regression effect" in the psychophysical judgment of loudness. The slope (power-function exponent) is smaller when subjects give numbers to match stimulus levels (magnitude estimation) than when subjects adjust stimulus levels to match numbers (magnitude production). Results of two individual subjects from Figure 10 of "Regression Effects in Psychophysical Judgment," by S. S. Stevens and H. B. Greenbaum, 1966, Perception & Psychophysics, 1, pp. 439-446. Reprinted with permission of The Psychonomic Society and the author.
they are appropriately considered here. In one version of magnitude estimation, subjects are given a standard stimulus (which may appear at the start of a test session only, prior to each test
stimulus, or whenever the subject requests it) to which a fixed number is assigned, the modulus. In another version, no standard or modulus is designated, so every subject is free to choose whatever
sizes of numbers seem appropriate. The virtue of using a fixed standard and modulus is clear: Their use reduces the variability associated with idiosyncratic choice of numbers. But the choices of
standard and modulus can exert consistent effects on the resulting judgments. For example, when a standard stimulus is chosen from the top or bottom of the stimulus range, subjects typically give
smaller response ranges (lower exponents) than they do when the standard comes from the middle of the range (e.g., Engen & Lindström, 1963; Engen & Ross, 1966; Hellman & Zwislocki, 1961; J. C.
Stevens & Tulving, 1957)--though the results probably depend jointly on the choice of standard stimulus and the numerical modulus assigned to it (Hellman & Zwislocki, 1961). 3. Role of Instructions
It is common, when a standard and modulus are given, to instruct subjects to assign their numbers such that the ratio of numerical responses
corresponds to the ratio of the perceptual magnitudes. The method of ratio magnitude estimation (RME) emphasizes these relations by instructing subjects to make the ratios of successive numbers equal
to the ratio of the sensations (e.g., Luce & Green, 1974). Careful analysis shows, however, that numerical responses depart regularly from a model that holds that subjects respond in this fashion on
a trial-to-trial basis. Evidence that subjects often fail to give consistent judgments of perceptual ratios led to the development of absolute magnitude estimation (e.g., Zwislocki, 1983; Zwislocki &
Goodman, 1980), or AME, which instructs subjects to assign a number to each stimulus so that the number's subjective magnitude "matches" the magnitude of the sensation. This approach rests on the
view that perceptual experiences may be represented as magnitudes per se and not necessarily through ratios (see Levine, 1974). Indeed, a tenet of AME is the careful instruction to subjects that
avoids reference to ratio relations, emphasizing instead a direct matching of perceptual magnitudes. Proponents of AME suggest that children at an early age develop notions of magnitude per se;
Collins and Gescheider (1989) pointed to evidence that young children may learn cardinal properties of number before learning ordinal properties. Although AME may reduce contextual effects compared
to RME (e.g., Gescheider & Hughson, 1991; Zwislocki, 1983), AME probably does not wholly eliminate them (e.g., Gescheider & Hughson, 1991; Ward, 1987). Borg (e.g., 1972, 1982) has made a related
argument with respect to the use of particular kinds of verbal labels or categories. Many of his studies are concerned with the exertion that people perceive during physical work. In this framework,
Borg has hypothesized that, when working at their own physical maxima, which can vary greatly from person to person, people experience more or less the same level of exertion and that each person's
maximal experience is linked physiologically to her or his maximal level of heart rate. Assuming that a single power function relates perceived exertion to the physical stimulus when this is defined
as a proportion of maximum, and assuming further that people can relate other verbal categories to the category denoting "maximal," Borg was able to develop various scales to measure exertion that
proved notably successful in making possible direct comparisons among individuals and groups of individuals. In one interesting version, termed a category-ratio scale, the subjects' responses provide
both categorical labels, such as "very weak," "medium," and "maximal" and numerical values whose properties approximate those of magnitude estimates (Borg, 1982; see also Green, Shaffer, & Gilmore,
1993; Marks, Borg, & Ljunggren, 1983; yet another approach to scaling within the framework of "natural" categories can be found in Heller, 1985). Regardless of which variant of the method is used, it
is clear that magnitude estimates are sensitive to precise instructions. In fact, when instructions refer to large rather than small numeric ratios, subjects tend to give
larger rather than smaller response ranges and hence greater-sized exponents (Robinson, 1976). The upshot is clear:
Magnitude estimates reflect more than the ways that sensory-perceptual systems transduce stimulus energies (i.e., more than the internal representations of perceptual magnitudes); they also reflect
the results of not-yet-well-specified decisional and judgmental processes. One consequence, discussed later, has been a series of efforts aimed at disentangling sensory-perceptual transformations
from judgment functions. Although it seems undoubtedly true that any given magnitude-estimation function might in principle be decomposed into two (or more) concatenated functions, such a
deterministic approach may provide only limited theoretical insight into the ways that people go about making judgments. It may be more fruitful instead to try to identify the decision processes
themselves, models of which could potentially account for the mapping of perceptual experiences to numeric responses. Models of this sort will almost certainly be probabilistic rather than
deterministic--exemplified by Luce and Green's (1978) theory of neural attention, Treisman's (1984) theory of criterion setting, and Baird's (1997) complementarity theory, all of which seek to account
for sequential contingencies in response. 4. Interindividual Variation Individual differences abound in magnitude scaling. As a rule of thumb, if power functions are fitted to a set of magnitude
estimates obtained from a dozen or so subjects, one may expect the range of largest to smallest exponents to be about 2:1 or even 3:1 (e.g., Algom & Marks, 1984; Hellman, 1981; Ramsay, 1979; J. C.
Stevens & Guirao, 1964; cf. Logue, 1976). Put another way, the standard deviation of the exponents obtained from a group of subjects is typically on the order of 30% of the value of the mean
(Künnapas, Hallsten, & Söderberg, 1973; Logue, 1976; J. C. Stevens & Guirao, 1964), or even greater (Collins & Gescheider, 1989; Teghtsoonian & Teghtsoonian, 1983). Sometimes, if many stimulus levels are presented, and each stimulus is given many times per subject, the mean judgments can depart systematically or idiosyncratically from power functions (e.g., Luce & Mo, 1965). It is
difficult to determine whether such departures characterize the psychophysical relation between the stimulus and the underlying sensation, the mapping of sensations onto numeric responses, or both.
It is likely that decisional processes (response mapping) play an important role (e.g., Poulton, 1989), even if they do not account for all individual variation. Baird (1975) and colleagues (Baird &
Noma, 1975; Noma & Baird, 1975; Weissmann, Hollingsworth, & Baird, 1975) have sought to develop models for subjects' numeric response preferences (e.g., for whole numbers, for multiples of "5" and
"10," and so forth).
Lawrence E. Marks and Daniel Algom
It has long been known that subjects who give large or small ranges of numeric responses in one scaling task tend to give large or small ranges in other tasks, as evidenced by substantial
correlations between exponents measured on different stimulus dimensions or modalities (e.g., Foley et al., 1983; Jones & Woskow, 1962; Teghtsoonian, 1973; Teghtsoonian & Teghtsoonian, 1971).
Although such correlations could conceivably represent individual differences in sensory responsiveness that transcend modalities (Ekman, Hosman, Lindman, Ljungberg, & Åkesson, 1968), they more
likely represent individual differences in the ways people map sensations into numbers, for instance, in what Stevens (1960, 1961) called the "conception of a subjective ratio." Several studies
report that individual differences are repeatable over time (e.g., Barbenza, Bryan, & Tempest, 1972; Engeland & Dawson, 1974; Logue, 1976). But other findings suggest that this consistency is
transient (Teghtsoonian & Teghtsoonian, 1971) unless the experimental conditions are similar enough (for instance, by using the same numerical modulus in magnitude estimation) to allow the subjects
to rely on memory (Teghtsoonian & Teghtsoonian, 1983). Similar conclusions were reached by Rule and Markley (1971) and Marks (1991). Finally, none of this should be taken to deny the presence of real
sensory differences in psychophysical functions, such as those observed in various pathological conditions (for example, sensorineural hearing loss) (Hellman, 1981).
C. Magnitude Scaling and Sensory-Perceptual Processing

Beyond its many implications for the quantification of perceptual experiences, magnitude scaling has proven particularly versatile and valuable
to the study of sensory processes when applied to research aimed at understanding the mechanisms by which people see, hear, taste, smell, and feel. Indeed, some disciplines, such as chemosensation,
have been invigorated in recent decades by the widespread application of scaling methods. Methods such as magnitude estimation are especially well suited to assess the ways that perceptual responses
depend on multidimensional variations in stimuli. For instance, a researcher may be interested in determining how brightness depends jointly on stimulus intensity and the state of adaptation of the
eye at the time of stimulation, or how loudness depends jointly on stimulus intensity and duration. Holding constant such factors as the instructions (and of course the subjects themselves), subjects
typically are called on to judge a set of stimuli that vary multidimensionally within a given test session. Regardless of whether a subject's numerical responses follow a particular psychophysical
relation (maybe the loudness judgments double with every 10-dB (10:1) increase in signal intensity, maybe not), the finding that loudness judgments increase to the same extent with a 10-dB increase in
intensity and with a 10:1 increase in duration specifies an underlying rule of time-intensity reciprocity: Loudness depends on the product of intensity and time (see Algom & Babkoff, 1984, for review). Roughly speaking, we may consider procedures of magnitude scaling as analogous to
procedures in which "absolute thresholds" are measured over some stimulus dimension (such as sound frequency). As long as the subjects maintain a constant criterion, measures of threshold should
provide accurate (inverse) measures of relative sensitivity. By analogy, regardless of the particular decision rules used by subjects when, say, they give magnitude estimates, as long as the subjects
apply the same rules to all stimuli presented in the series, the judgments generally can be taken to represent relative suprathreshold responsiveness (for a thorough, though somewhat dated, review of
magnitude estimation's role as a kind of "null method," see Marks, 1974b). In particular, it is commonly assumed that if two different stimuli (say, a 70-dB tone presented for 5 ms and a 60-dB tone
presented for 50 ms) have the same loudness, then on average they will elicit the same numerical judgment. If so, then magnitude estimation provides for a kind of indirect matching, a set of
magnitude estimates containing much the same information as that found in a set of direct intensity matches. In a classic study, J. C. Stevens and S. S. Stevens (1963) found, with the two eyes
differentially light-adapted, that interocular matches and magnitude estimates provide equivalent information about the way adaptation affects relative brightness of flashes of light ranging from
threshold levels, where the effects are substantial, to high luminances, where the effects are much smaller. Their results are shown in Figure 6, which replots their data showing brightness against
luminance level on log-log axes. Clearly, the greater the level of light adaptation, the higher the absolute threshold (indicated by the way brightness steeply approaches zero at the low end of each
function), the smaller the brightness produced by a given, suprathreshold luminance level (indicated by the displacement of the functions), and the greater the exponent of the brightness function
(indicated by the slope of the function's linear portion). To account for these findings, Stevens and Stevens used one of several possible modifications of the simple power Eq. (29) (see the next section) in order to account for a commonly observed departure at low stimulus levels:

B = k(I - Io)^β,

where the parameter Io relates to absolute threshold (one of several formulations proposed). It is important to note that all three parameters of this equation--the multiplicative constant k and the threshold parameter Io, as well as the exponent β--vary systematically with level of adaptation. Hood and Finkelstein (1979) used a model in which responses "saturate" (approach an asymptotic
maximum) at high intensities, as is often observed
FIGURE 6 How the level of prior light adaptation affects the relation between the brightness of short flashes of light and the luminance of the test flash (luminance in cd/m^2). Both variables are plotted on logarithmic axes. Each function represents a different luminance of the adapting stimulus. Modified from Figure 4 of "Brightness Function: Effect of Adaptation," by J. C. Stevens and S. S. Stevens, 1963, Journal of the Optical Society of America, 53, pp. 375-385. Reprinted with permission.
neurally, in order to relate magnitude estimates to increment thresholds measured under different states of light adaptation. Through its application to multivariate cases, magnitude scaling has
provided numerous opportunities to develop theoretical accounts of sensory processing. At the same time, the development of these theories has frequently revealed deep indeterminacies about the
sensory representations that the scales imply. We provide two further examples, one from hearing and the other from vision. 1. Partial Masking of Auditory Loudness As is well known, a background band
of noise of fixed SPL can both raise the threshold of weak acoustic signals (for example, the threshold of a tone whose frequency falls within the noise band) and reduce the loudness of more intense
signals. As in the case of brightness after light adaptation, the degree of masking depends on the relative levels of the masker and signal. At least four different models have been proposed to
account for the results. One is Stevens's (1966; Stevens & Guirao, 1967) power-transformation model. According to this model, mainly when the SPL of a test signal falls below the SPL of the masking
noise is the signal's loudness reduced. Consequently, masked-loudness functions are bisegmented: The upper segment follows the same loudness function obtained in the quiet; noise leaves loud
signals unaffected. But the lower segment follows a steeper-than-quiet function (greater exponent), intersecting the upper segments at the point where the SPLs of the tone and masker meet; the more
the intensity of the noise exceeds that of the signal, the greater the degree of masking, that is, the larger the exponent. The power transformation of loudness, L, can be written

L = k'Ps^β*,

with β* increasing with Pm when Pm > Ps; otherwise β* = β. Here, Ps and Pm are the sound pressures of signal and noise, respectively, and β is the exponent governing the loudness function in quiet. The top panel of Figure 7 shows that the model is able to
fit matching data reported by Stevens and Guirao, who had subjects equate the loudness of a tone heard in the quiet and the loudness of a tone heard in wideband noise. Alternatively, instead of
inducing a power transformation of unmasked loudness, noise may subtract loudness. In the simplest version (Lochner & Burger, 1961), a masker of fixed intensity causes a constant number of loudness
units to be lost from the signal-in-quiet, the amount of masking being proportional to the masker's loudness. Hence the subtractive model can be expressed

L = k(Ps^β - wm Pm^β),

where the value of wm represents the fractional masking. (Note also that Eq. (32) provides one of the formulations that try to account for near-threshold departures from a simple power equation in
the quiet.) Like the model of power transformation, the subtractive model predicts that a fixed-level masker produces a proportionally greater decrement in loudness when the signal's SPL is low
rather than high. But this happens, says the model, by subtracting a constant number of loudness units from different starting (unmasked) levels. The middle panel of Figure 7 shows how this model can
be used to fit the loudness-matching data of Stevens and Guirao (1967). Two variants of subtractive models have also sought to account for partial masking. Garner (1959) too assumes that a masker
subtracts a constant number, c, of loudness units, from the signal, but in his model the units are defined by the lambda scale. Thus loudness is given in terms of a scale that, as indicated earlier,
is determined through measures of sensory difference rather than magnitude. Garner's model can be written
Λ = kλ(Ps^βλ - c Pm^βλ).

Because the exponent governing the lambda function, βλ, is roughly half that of the β governing the sone function (see Marks, 1974a, 1979b), it follows that if L is given in sones, Garner's model can be rewritten

L = k(Ps^(β/2) - c Pm^(β/2))^2.
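The contrast between the power-transformation and subtractive accounts can be made concrete numerically. In the sketch below, all constants (the quiet exponent BETA, the masker level, the steeper low-segment exponent beta_star, and the masking fraction wm) are illustrative assumptions rather than fitted values, and the power-transformation segment is implemented in one hypothetical form, anchored at the point where signal and masker SPLs meet. Both models start from the same loudness function in quiet, and both predict proportionally greater loss for weak signals:

```python
def pressure(spl_db):
    """Sound pressure (arbitrary units) corresponding to a level in dB SPL."""
    return 10 ** (spl_db / 20)

BETA = 0.6   # exponent of the loudness function in quiet (illustrative)
K = 1.0      # multiplicative constant (illustrative)

def loudness_quiet(spl):
    """Unmasked loudness, L = K * Ps**BETA."""
    return K * pressure(spl) ** BETA

def loudness_power_transform(spl, masker_spl, beta_star=1.2):
    """Power-transformation model: at or above the masker's SPL the quiet
    function holds; below it, loudness follows a steeper power function
    (hypothetical exponent beta_star) anchored at the intersection point."""
    if spl >= masker_spl:
        return loudness_quiet(spl)
    ratio = pressure(spl) / pressure(masker_spl)
    return loudness_quiet(masker_spl) * ratio ** beta_star

def loudness_subtractive(spl, masker_spl, wm=0.25):
    """Subtractive (Lochner-Burger-style) model: the masker removes a fixed
    fraction wm of its own loudness from the signal's loudness in quiet."""
    return max(K * (pressure(spl) ** BETA - wm * pressure(masker_spl) ** BETA), 0.0)

MASKER = 60  # dB SPL of the masking noise (illustrative)
for spl in (50, 60, 70, 80):
    print(f"{spl} dB tone: quiet {loudness_quiet(spl):7.1f}  "
          f"power-transform {loudness_power_transform(spl, MASKER):7.1f}  "
          f"subtractive {loudness_subtractive(spl, MASKER):7.1f}")
```

Under both models the 80-dB tone keeps most of its quiet loudness while the 50-dB tone loses a large share of its own, which is the shared qualitative prediction the text describes.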
(Figure 7: three panels of loudness matches; abscissa, decibels SPL of tone in noise; parameter, SPL of noise.)
FIGURE 7 Three models to account for the way that masking noise affects the loudness of a tone; shown is the fit of each model to loudness matches between tones heard in the quiet and tones embedded
in noise (data of Stevens & Guirao, 1967). The model of power transformation (Stevens, 1966; Stevens & Guirao, 1967), shown in the upper panel, proposes that a masker primarily affects test tones
with lower SPLs, increasing the exponent of the power function. The model of subtraction (Lochner & Burger, 1961), shown in the middle panel, proposes that the masker subtracts a constant amount of
loudness from the tone, proportional
Zwislocki (1965; see also Humes & Jesteadt, 1989) provided a more elaborate subtractive model (a third formula that aims to account for near-threshold departures from a power function). Zwislocki's
model assumes, first, that the physical intensities of the signal and masker summate (sum of squares of sound pressure) within critical bands; second, that the summed signal-plus-noise has an overall
loudness; and third, that the loudness of the signal consists of the difference between the overall loudness of the signal-plus-noise and the loudness of the masking noise:

L = k[(Ps^2 + Pm^2)^(β/2) - Pm^β].
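Zwislocki's three assumptions translate directly into a few lines of arithmetic. The sketch below is a hedged reading of the model (constants K and BETA are purely illustrative): signal and masker energies sum as squared sound pressures, and the signal's loudness is the overall loudness minus the masker's loudness.

```python
BETA = 0.6   # loudness exponent in quiet (illustrative)
K = 1.0      # multiplicative constant (illustrative)

def pressure(spl_db):
    """Sound pressure (arbitrary units) from a level in dB SPL."""
    return 10 ** (spl_db / 20)

def zwislocki_loudness(ps, pm):
    """Masked loudness under Zwislocki's assumptions:
    L = K * ((ps**2 + pm**2)**(BETA/2) - pm**BETA)."""
    return K * ((ps ** 2 + pm ** 2) ** (BETA / 2) - pm ** BETA)

# With no masker the formula reduces to the quiet loudness function, and a
# fixed masker removes proportionally more loudness from weak signals.
pm = pressure(60)
for spl in (50, 70, 90):
    ps = pressure(spl)
    frac = zwislocki_loudness(ps, pm) / zwislocki_loudness(ps, 0.0)
    print(f"{spl}-dB signal in 60-dB noise keeps {frac:.2f} of its quiet loudness")
```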
The final model was developed by Pavel and Iverson (1981; see also Iverson & Pavel, 1981). They postulated that loudness functions measured in the presence of masking noise display "shift
invariance." That is, loudness functions in noise and in quiet can all be represented as segments of a single psychophysical function. Although the subtractive model of Lochner and Burger (1961)
displays shift invariance, Pavel and Iverson showed that masked loudness is better described through a more complex model, one that assumes that maskers affect a "gain control" in the auditory
system, as represented by the loudness-matching equation

P' = k[Ps^βa / (Ps^βb + c Pm^βb/θ)]^(1/(βa - βb)),
where P' is the level of a tone heard in quiet whose loudness matches Ps, and θ is a constant governing the degree of masking-induced shift. Note that this model does not treat psychophysical functions per se, but loudness matches, although the model makes it possible to compute exponents βa and βb governing the transformations of acoustic signals. By way of contrast, Stevens's and
Lochner and Burger's models do rely on the loudness functions themselves; these models assume that masking can be calculated directly on the magnitude functions for loudness, either through
subtraction or power transformation. As the bottom panel of Figure 7 shows, Pavel and Iverson's model also does an excellent job of fitting Stevens and Guirao's (1967) data.

2. Time-Intensity Relations in Visual Brightness

The brightness of a flash of light depends on both its duration and luminance, so to be adequate any model of brightness vision must account for
to the masker's own loudness. The model of shift invariance (Pavel & Iverson, 1981), shown in the bottom panel, proposes that all matches between tone in quiet and tone in noise of fixed SPL can be represented as displaced segments of a single, complex loudness-matching function. The upper panel was adapted from Figure 8 of "Loudness Functions under Inhibitions," by S. S. Stevens and M. Guirao, 1967, Perception & Psychophysics, 2, pp. 459-465. Reprinted with permission of The Psychonomic Society and the author. The bottom panel was adapted from Figure 6 of "Invariant Characteristics of Partial Masking: Implications for Mathematical Models," by M. Pavel and G. J. Iverson, 1981, Journal of the Acoustical Society of America, 69, pp. 1126-1131. Reprinted with permission.
three main generalizations. First, just as brightness increases with luminance given a constant flash duration, so too does brightness increase with duration (up to a point) given a constant
luminance; this follows from the generalized Bloch's law (originally applied to threshold), which states that over short durations, up to a critical duration τ, brightness B depends on the product of luminance I and time t, B = F(I · t). Second, the value of τ is not constant, but decreases as I increases. Third, at any given value of I, there is a duration τp at which brightness is greatest (the Broca-Sulzer maximum); brightness is smaller when flash duration is either shorter or longer than τp. A wide range of magnitude scaling data and direct brightness matches affirm these principles
(Aiba & Stevens, 1964; Broca & Sulzer, 1902a, 1902b; Nachmias & Steinman, 1965; Raab, 1962; J. C. Stevens & Hall, 1966). One way to account for this array of phenomena, and in particular for the
power functions often found to describe magnitude judgments, is to postulate nonlinear feedback in the visual system that influences both the system's gain and its time constant (Mansfield, 1970;
Marks, 1972). In a system containing n stages, the output of each stage, i, is Bi, governed by a differential equation of the form

dBi/dt = ai·Bi-1 - bi·Bi·(1 + ci·Bn),
where ai, bi, and ci are weighting constants. Such a model, first proposed in order to account for nonlinearities in neural responses of the horseshoe crab Limulus (Fuortes & Hodgkin, 1964), can also
help to account for human flicker discrimination (Sperling & Sondhi, 1968), and, in expanded form, for spatial summation and distribution of responsiveness over the retinal surface (Marks, 1972). It
is especially noteworthy that, in order to account quantitatively for the temporal properties of brightness vision, the feedback filter used in Mansfield's (1970) and Marks's (1972) models needs to
contain exactly two stages (n = 2). For it turns out that a two-stage filter has another important property: Its input-output characteristic, in the steady state, approximates a power function with
exponent of 1/3, precisely the value that Stevens (e.g., 1975) claimed to govern the magnitude scale for brightness of long flash durations under conditions of dark adaptation. Thus a theory formulated to account for temporal processing in vision has the additional virtue that it can predict the form of the magnitude-scaling function for brightness, an example where substantive theory goes hand in
hand with psychophysical scaling.
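The steady-state claim for the two-stage filter can be checked numerically. Setting the time derivatives to zero and, as an illustrative simplification, taking all weighting constants equal to 1, the two stage equations reduce to B2(1 + B2)^2 = I, so the final-stage output grows as I^(1/3) at high intensities. The sketch below solves that steady-state equation by bisection and estimates the log-log slope:

```python
import math

def steady_state_output(intensity, tol=1e-12):
    """Solve B * (1 + B)**2 == intensity for B >= 0 by bisection.
    This is the steady state of the two-stage feedback filter with all
    weighting constants set to 1 (an illustrative assumption)."""
    lo, hi = 0.0, intensity ** (1 / 3) + 1.0   # root is at most intensity**(1/3)
    while hi - lo > tol * max(hi, 1.0):
        mid = (lo + hi) / 2
        if mid * (1 + mid) ** 2 < intensity:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Log-log slope (the effective exponent) over a high-intensity range.
i_lo, i_hi = 1e4, 1e6
slope = (math.log(steady_state_output(i_hi)) - math.log(steady_state_output(i_lo))) / (
    math.log(i_hi) - math.log(i_lo))
print(f"high-intensity steady-state exponent ~ {slope:.3f}")  # close to 1/3
```

At low intensities the same equation is nearly linear (B ≈ I), so the cube-root behavior is specifically a high-intensity, compressive property of the feedback.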
D. Cross-Modality Matching

In the method of cross-modality matching (CMM), subjects attempt to set stimulus levels so that the perceived magnitude of a signal in one modality
equals the perceived magnitude in another. Although CMM studies appeared intermittently from the 19th century on, most of these focused on judgments in which the same stimulus can be perceived through different senses (for example, when subjects compare the lengths of objects perceived haptically and visually: Jastrow, 1886; Mme. Piéron [sic], 1922). Often, however, the stimuli presented to different senses have no clear environmental communality, so their equality may be considered more abstract or even metaphorical. An example is the matching of brightness and loudness. Modern CMM appears to have evolved from J. C. Stevens's (1957) attempts to have subjects equate the ratios of perceived magnitudes in different modalities, a method of cross-modal ratio matching. Whether
subjects always perform cross-modality matching by operating on ratios (e.g., Krantz, 1972) or by directly equating magnitudes themselves (Collins & Gescheider, 1989; Levine, 1974; Zwislocki, 1983)
remains controversial. Results garnered by CMM were taken by S. S. Stevens (e.g., 1959a; J. C. Stevens, Mack, & Stevens, 1960) as support for the power law. Consider two modalities, a and b, that are
governed by power functions with exponents βa and βb, so that Ra = ka·Ia^βa and Rb = kb·Ib^βb. If the responses Ra and Rb represent sensations, and subjects equate sensations on the two modalities, so Ra = Rb, then a plot of values of one stimulus against the matching values of the other should conform to a power function whose exponent equals the ratio of the values of βa and βb,

Ib = (ka/kb)^(1/βb) · Ia^(βa/βb).
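The prediction is mechanical to verify: setting ka·Ia^βa = kb·Ib^βb and solving for Ib gives a power function of Ia with exponent βa/βb. A short sketch (exponents and constants purely illustrative, chosen only to resemble typical loudness- and brightness-sized values) confirms that simulated matches recover exactly that ratio:

```python
import math

beta_a, beta_b = 0.6, 0.33      # illustrative exponents for two modalities
k_a, k_b = 2.0, 1.0             # illustrative multiplicative constants

def match(i_a):
    """Stimulus level on modality b whose sensation equals that of i_a on a:
    solve k_a * i_a**beta_a == k_b * i_b**beta_b for i_b."""
    return (k_a / k_b) ** (1 / beta_b) * i_a ** (beta_a / beta_b)

levels = [10 ** e for e in range(1, 7)]
matches = [match(i) for i in levels]

# Slope of the matching function on log-log axes.
slope = (math.log(matches[-1]) - math.log(matches[0])) / (
    math.log(levels[-1]) - math.log(levels[0]))
print(f"matching exponent {slope:.3f}, predicted ratio {beta_a / beta_b:.3f}")
```

Note that the recovered slope is the exponent ratio regardless of k_a and k_b; the constants shift the matching function vertically on log-log axes but do not tilt it.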
To the extent that this result obtains (S. S. Stevens, 1959a; J. C. Stevens, Mack, & Stevens, 1960; J. C. Stevens & Marks, 1965), CMMs and magnitude judgments form a coherent, transitive system
(although some have argued that the system is not transitive, e.g., Mashhour & Hosman, 1968). Unfortunately, it is possible to transform magnitude scales (that is, transform all scales of R) by an
infinitude of continuous increasing transformations and leave the predicted CMMs unchanged; thus CMMs can be as consistent with Fechner's logarithmic function as with Stevens's power function (e.g.,
Ekman, 1964; MacKay, 1963; Treisman, 1964; Shepard, 1978; Zinnes, 1969; see also Link, 1992), at least as long as the multiplicative scale parameters of the several logarithmic functions, such as c
in Eq. (5), are quantitatively related to the same extent as are the corresponding exponents of power functions. As we already indicated, results obtained by CMM may reflect underlying comparisons of
ratios or comparisons of magnitudes. The emphasis on predicting exponents has had the unfortunate consequence of drawing attention away from the issue of magnitude proper. When we ask subjects to
match, say, loudness to brightness, do the results tell us that a particular luminance's brightness equals a particular SPL's loudness? Or that, in relation
to some other light's luminance and some other sound's SPL, the brightness ratio equals the loudness ratio? Equation (38) would seem to make explicit predictions about stimulus levels and not just
their ratios (see also Collins & Gescheider, 1989). J. C. Stevens and Marks (1980) proposed that subjects can make absolute comparisons of magnitude across different modalities, and they developed a
method, magnitude matching, that makes it possible to derive cross-modal matches from magnitude estimates given to sets of stimuli presented in different modalities. The goal in doing this was
primarily practical rather than theoretical: to develop a method by which responses given to stimuli on one modality could be used to "calibrate" different individuals or different groups, and in this magnitude matching has seen several successes (e.g., Gent & Bartoshuk, 1983; Murphy & Gilmore, 1989; J. C. Stevens & Cain, 1985). Berglund's (1991) method of "master scaling" relies on a
similar principle, but uses as a means for calibration a standardized set of stimuli presented to the same modality. For example, one may start by having subjects estimate the loudness of a series of
pink noises (emphasis on low-frequency energy); then, the subjects' judgments of various other stimuli (e.g., environmental noises) can be standardized through the loudness function obtained for pink noise.

E. Critiques of Stevens's Psychophysics

In advocating the methods and results of magnitude scaling, Stevens (e.g., 1961) explicitly attacked Fechnerian scaling. This attack deserves special attention
because it also had its constructive side. In criticizing Fechner and proposing to "repeal his law," Stevens offered novel ways to measure sensation, although his own methods are vulnerable on several grounds. Before examining those vulnerabilities, however, let us examine Stevens's arguments over Fechner's conception of psychophysical measurement. Stevens challenged Fechner's choice of the JND as a unit of sensation strength, objecting to the use of an index of variability, error, or "resolving power" as a unit of measurement. Moreover, Stevens argued that the subjective size of the JND
does not remain constant. On a pragmatic level, Stevens objected to "indirect" methods of constructing scales of measurement and suggested alternatives. To the contention that psychophysical
measurement is not like physical measurement (the former not being measurement at all), he retorted by proposing a broader conception of measurement for the physical sciences as well (Stevens, 1946,
1951). A dominant approach to measurement, anticipated by von Kries (1882), grew out of the work of scholars like N. R. Campbell (1920) and crystallized in a "classical view of measurement" (Stevens,
1959b). Essentially, in this view, "fundamental" measurement requires manipulations that are isomorphic with the "axioms of additivity." Fundamental measurement is possible with properties like length or mass. Measurement of other physical properties, such as density or potential energy, is
called "derived" and is defined by relations based on fundamental magnitudes. The classical view, followed to its logical conclusion, precludes the possibility of mental measurement. Obviously,
"sensations cannot be separated into component parts, or laid end to end like measurement sticks" (Stevens, 1959c, p. 608), so no fundamental measurement of sensation is possible. By implication,
however, sensations cannot be gauged by "derived" measurement either. Thus they cannot be measured at all. This pessimistic conclusion was shared by many members of a British committee of scientists
appointed in 1932 to consider the possibility of "quantitative estimates of sensory events." To bypass these difficulties, particularly in creating a unit to measure sensation, Stevens (1946, 1975)
sought to broaden the definition of measurement. In his theory, measurement is simply the assignment of numbers to objects according to a rule. Different rules or procedures yield numbers that
represent different empirical relations. Stevens then sought to isolate those transformations, or mathematical group structures, that leave the original empirical information invariant. His famous
system of scales of measurement anticipated the derivation of the so-called uniqueness theorems within the framework of axiomatic measurement (e.g., Krantz et al., 1971; Luce et al., 1990). Stevens
thus first called attention to the importance of determining how unique an array of numerical representations is, then showed how the permissible mappings between equivalent representations can serve
to classify scales of measurement. Psychophysical scaling is concerned, however, with the possibility of mapping stimuli onto numbers, not with the mapping of one representation onto another. Scaling
deals with the foundational operations of ordering and matching, whereby it precedes the question of uniqueness chronologically as well as logically. Luce and Krumhansl (1988), for example, discussed
Stevens's classification of scales under the rubric of axiomatic measurement (where it properly belongs), not under the rubric of scaling. If the "scales of measurement" do not deal with scaling, how
did Stevens address the issue of scaling? As we have seen, he did this by suggesting that one can measure the strength of sensation "directly," by matching numbers to stimuli. Numerical
"introspections," so to speak, are said to measure sensation in the same way that a thermometer measures temperature. Subjects may be asked to give numbers to stimuli such that the numbers are
proportional to the subjective magnitudes aroused. These responses are treated quantitatively, often as if they have properties of "ratio scales." Nevertheless, deep theoretical questions plague
Stevens's approach. In his description of the psychophysical law, Stevens used the term Ψ, not the more neutral R, to stand for the subjective variable (cf. McKenna, 1985).
The practice has caused a great deal of confusion. If Ψ is hypothesized to stand for sensation, then how do we ascertain that the overt numerical responses faithfully reflect the underlying sensations? The very designation Ψ may have impeded recognition of the crucial role played by decisional processes, and hence of the consequent judgment function relating overt responses to the unobservable sensations. Once this role is appreciated, it becomes apparent that, without additional information, the psychophysical function is indeterminate. Alternatively, one may consider Ψ a measure of sensation simply by operational definition. This may well have been Stevens's position; however, he did not suggest converging operations to support the premise (McKenna, 1985). Several
investigators (e.g., Garner, 1954; Graham & Ratoosh, 1962; McGill, 1974; Shepard, 1978) have questioned "the procedure of treating numerical estimates as if they were numerical data . . . the quantified outcome of a measuring operation" (Graham, 1958, p. 68). Asserted McGill (1974), "Whatever else they are, the responses are not numbers" (p. 295). By this view, one may average the stimuli
for a given response, but not, as Stevens did, average the responses for a given stimulus. The method of cross-modality matching (Stevens, 1975) eschews the use of numbers. But the results yielded by
that method are consistent with a whole array of possible psychophysical functions, including the logarithmic function of Fechner as well as the power function proposed by Stevens. Even granting that
overt responses are numbers, there remains an inherent arbitrariness about the choice of numbers to measure sensations. It is the same problem that has plagued physicists over the temperature scale,
where the choice of thermometric substance is arbitrary. There are two ways to escape the difficulty (actually, the first is a special case of the second). One way "is to ground scales in (metric)
properties of natural kinds . . . . In this sense, Fechner's intuition of searching for natural units of sensation is better than that of Stevens" (Van Brakel, 1993, p. 164). More fundamentally,
however, the usefulness of any scale depends on the available theoretical structure. For the physical property of temperature, "the interval or ratio properties of the thermodynamic scale of
temperature can be fully justified only by reference to physical theory" (Shepard, 1978, p. 444). The same applies to psychophysical scaling by the so-called direct methods, for, "Without a theory . . . how can we assume that the numbers proffered by a subject--any more than the numbers indicated on the arbitrary scale of the thermoscope--are proportional to any underlying quantity?"
(Shepard, 1978, p. 453). The critical ingredient, then, is a comprehensive theoretical structure that includes the particular scale as its natural component. Fechner's approach, with JNDs as "natural
units," offers one avenue to develop such a structure, although, as yet, it lacks the rich interconnections characterizing physical
measurement (Luce, 1972; Shepard, 1978). Efforts to develop a theory for magnitude estimation (Krantz, 1972; Shepard, 1978, 1981) suggest that Stevens's conception that subjects judge sensations of
individual stimuli ("mapping theory") may be untenable, though this remains controversial. Levine (1974) offered a geometric model in which subjects do map numbers onto individual sensations. And
Zwislocki (1983; Zwislocki & Goodman, 1980) has argued that people develop individual scales of magnitude, which can be tapped through the method he calls absolute magnitude estimation. In
Zwislocki's view, subjects can be instructed to assign numbers to stimuli so that the psychological magnitudes of the numbers match, in a direct fashion, the psychological magnitudes of the stimuli.
On the other hand, it may be that in many, most, or even all circumstances, subjects judge relations between stimuli ("relation theory"; see Krantz, 1972). But without strong assumptions, relation
theory shows psychophysical functions derived from magnitude scaling to be underdetermined; in particular, there is no empirical or theoretical basis to decide between equally consistent sets of
logarithmic and power functions. Only by integrating scaling within some kind of substantive psychophysical theory can we attain valid and useful measurement. To do so, we "have to move outside the
circumscribed system of relationships provided by these 'direct' psychophysical operations" (Shepard, 1978, p. 484). Most attacks on Stevens's psychophysics have been aimed at the implications that
he drew from his data, not so much at the reliability of those data themselves (but see McKenna, 1985, and references therein). So-called direct procedures resulted in a psychophysical law
incompatible with the one developed by Fechner. Stevens found that magnitude estimates, taken as numerical quantities, are a power function of physical magnitude. But any attempt to form a general
psychophysical theory must reckon with the fact that different procedures can lead to different scales (Birnbaum, 1990). V. MULTISTAGE MODELS: MAGNITUDES, RATIOS, DIFFERENCES
A. Judgments of Magnitude and Scales of Magnitude Shepard (1978, 1981), Anderson (1970, 1981, 1982), and Birnbaum (1982, 1990) have shown the inadequacy of Stevens's (1956, 1957, 1975) contention
that the numbers given by subjects are necessarily proportional to sensation magnitudes. Earlier, Garner (1954), Graham (1958), Attneave (1962), Mackay (1963), and Treisman (1964) made much the same
point. The weakness of merely assuming that "direct" scales measure sensation is easily demonstrated (e.g., Gescheider, 1988; Gescheider & Bolanowski, 1991). Denote by S the unobservable sensation,
by R the numerical response, and
Lawrence E. Marks and Daniel Algom
by I the stimulus intensity. Then the psychophysical transformation, F1, is given by

S = F1(I). (39)

The judgment function, F2, relating sensation magnitude S to the observable response R, can be written

R = F2(S). (40)

Combining Eqs. (39) and (40) gives

R = F2(F1(I)), (41)

which is the experimentally observed relation between R and I, usually expressed directly by

R = F(I), (42)

which conflates the component functions F1 and F2. The underlying psychophysical function F1 must be inferred from the empirically observed function F. But unless one knows the judgment function F2, it is impossible to determine F1. Stevens's assertion that the psychophysical law is a power function depends on the strong assumption that R is proportional to S (or to S raised to a power).
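This indeterminacy is easy to demonstrate numerically. In the sketch below (our illustration, not drawn from the cited literature), a power-law F1 with a linear F2 and a logarithmic F1 with an exponential F2 produce exactly the same observable function F, so the observed data alone cannot separate the two decompositions.

```python
import math

# Two decompositions of the same observable relation R = F(I) = I ** 0.6.

# Decomposition A: power-law sensory transform, linear judgment function.
F1_a = lambda i: i ** 0.6           # S = I^0.6
F2_a = lambda s: s                  # R = S

# Decomposition B: logarithmic (Fechnerian) sensory transform,
# exponential judgment function.
F1_b = lambda i: math.log(i)        # S = ln I
F2_b = lambda s: math.exp(0.6 * s)  # R = e^(0.6 S) = I^0.6

# The observable responses are identical, so F alone cannot tell us
# whether the underlying psychophysical function is a power law or a log.
for intensity in [1.0, 2.0, 10.0, 100.0]:
    assert abs(F2_a(F1_a(intensity)) - F2_b(F1_b(intensity))) < 1e-9
```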
Stevens provided no justification for this assumption. Attempts at decomposing F1 and F2 within two-stage models suggest, in some instances, that F2 is nonlinear, a matter considered in the next
section. The previous discussion assumes that subjects are able to assign numbers to stimuli in a consistent manner, such that R can be treated as numerical. This assumption too is controversial. As
we mentioned earlier, it is not clear whether a verbal response really possesses quantitative magnitude (e.g., Graham, 1958; McGill, 1974; Shepard, 1978). If not, then it is illegitimate to treat the responses as numbers, let alone to calculate from them statistical quantities such as means or variances. Such concerns do not apply to Fechnerian methods. 1. Curtis and Rule's Two-Stage Model Several
investigators have sought to disentangle psychophysical functions (relating stimulus to sensation) and judgment functions (relating sensation to overt response). In particular, judgment functions may
be nonlinear--if, for example, the psychological magnitudes of numbers are not proportional to numbers themselves, but a power function with exponent different from unity, as suggested by Attneave
(1962). This suggestion was elaborated into a model by Curtis, Rule, and their associates (e.g., Curtis, 1970; Curtis, Attneave, & Harrington, 1968; Curtis & Rule, 1972; Rule & Curtis, 1977, 1978). To decompose an overt magnitude scaling function F into its components, a psychophysical function F1
and a judgment function F2, or into what these researchers called "input" and "output" functions, subjects were asked to judge pairs of stimuli, typically with regard to their perceived difference. The model assumes that the function F1 applies separately to each stimulus in the pair; after F1 is applied, a difference is computed between the sensations; and finally this sensory difference is subjected to response transformation F2. Thus the model can be written

R = k(Ii^β1 − Ij^β1)^β2, (43)

where Ii and Ij denote the stimulus intensities of stimuli i and j, β1 and β2 are the respective exponents of the power functions governing F1 (the sensory process) and F2 (the nonlinear use of numbers), and k absorbs multiplicative constants from both F1 and F2. An analogous equation applies to conditions in which the sensory effects of the two components sum, simply by changing the sign of the operation from subtraction to addition. In this two-stage model, the size of the observed exponent β, routinely derived from magnitude estimation, should equal the product of β1 and β2. We
do not describe the empirical results in detail, but instead list four conclusions important to the present discussion. First, when judgments are obtained by magnitude estimation, the values of β2 are usually greater than 1.0, implying a nonlinear judgment function in magnitude estimation. Second, various attempts to scale the psychological magnitude of numbers suggest that numeric representations may sometimes be nonlinearly related to numbers (e.g., Banks & Coleman, 1981; Rule & Curtis, 1973; Shepard, Kilpatric, & Cunningham, 1975), though sometimes the relation appears linear (Banks & Coleman, 1981). Third, the value of β1 usually varies much less over individuals than does β2, consistent with the notion that there is less interindividual variation in sensory processing than in response mapping. Fourth, when judgments are obtained on rating scales rather than magnitude-estimation scales, the values of β2 are generally closer to 1.0, consistent with the
view that rating scales can provide linear judgment functions (see also Anderson, 1981). Results obtained by applying the two-stage model to judgments of pairs of stimuli suggest that in one respect,
at least, Stevens was correct: The function F1 governing the transformation from stimulus to sensation is often well described with a power function. But the exponents of F1 are generally smaller
than those obtained with magnitude estimation. Earlier, Garner (1954) came to a similar conclusion, based on the equisections of loudness that he used to construct the lambda scale. In general,
judgments of loudness intervals agree better with the lambda scale than with the sone scale (Beck & Shaw, 1967; Dawson, 1971; Marks, 1974a, 1979b). So do scales derived from nonmetric analysis, that
is, from the rank order of loudness differences determined over all possible pairs of stimuli (Parker &
Schneider, 1974; Popper et al., 1986; Schneider, 1980; Schneider et al., 1974, 1978). Loudness in lambda units is roughly proportional to the square root of loudness in sones. In sum, research
conducted under the rubric of two-stage models, especially in the judgment of sensory differences, has produced a vast array of findings, largely consistent with the view that magnitude estimates are
"biased" by strongly nonlinear judgment functions (though a plausible alternative interpretation is given in the next section). 2. Conjoint Measurement The conclusion of the last section, that
magnitude estimates are strongly "biased" by a nonlinear judgment function, rests on the assumption that the same processes underlie judgments of magnitude (where subjects assign numbers to represent
the apparent strength of individual stimuli) and judgments of difference (where subjects assign numbers to represent the size of a difference). That this assumption itself deserves careful scrutiny
is suggested by studies seeking to disentangle sensory and judgment functions within experimental paradigms in which subjects judge magnitudes rather than differences. These studies are rooted in the
classic paper by Fletcher and Munson (1933), who erected a scale of loudness by relying on a hypothesis of linear sensory summation: When pure tones are presented through independent channels (to the
two ears, or to the same ear but at very different signal frequencies), the loudnesses of the components add. If this is so, then it follows, for instance, that a tone heard by two ears is exactly
twice as loud as the same tone heard by one ear (assuming the ears are equally sensitive). Consequently, to construct a loudness scale, it is sufficient to obtain matches between monaurally presented
and binaurally presented signals. If a signal intensity of I1 presented binaurally has a loudness, Lb(I1), equal to that of intensity I2 presented monaurally, so Lb(I1) = Lm(I2), and if loudnesses sum across the two ears at any intensity I, so, for example, Lb(I1) = 2·Lm(I1), then it follows that Lm(I2) = 2·Lm(I1).
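The matching argument can be sketched numerically. The power-function form of monaural loudness below is a hypothetical assumption for illustration; only the two-ear summation rule comes from the text.

```python
# Sketch of the Fletcher-Munson matching logic. The power-function form of
# monaural loudness is an assumed, illustrative choice.

def loudness_monaural(i):
    return i ** 0.3                      # assumed form, illustration only

def loudness_binaural(i):
    return 2 * loudness_monaural(i)      # linear summation across two ears

i1 = 100.0
# Monaural intensity i2 matching the binaural loudness of i1:
# i2^0.3 = 2 * i1^0.3  =>  i2 = 2**(1/0.3) * i1
i2 = 2 ** (1 / 0.3) * i1
assert abs(loudness_monaural(i2) - loudness_binaural(i1)) < 1e-9

# Hence, presented monaurally, i2 is exactly twice as loud as i1:
assert abs(loudness_monaural(i2) / loudness_monaural(i1) - 2) < 1e-9
```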
That is, given a model of linear loudness summation, and given that a signal of intensity I2 presented monaurally matches I1 presented binaurally, then under monaural (or binaural) presentations, I2 will be twice as loud as I1. Unfortunately, Fletcher and Munson had no procedure to test the adequacy of their model of linear summation. Such tests awaited the development of conjoint measurement theory (Luce & Tukey, 1964), which provided an axiomatic grounding for additive measurement of intensive quantities. This approach returns us to the long-revered notion of basing fundamental measurement, be it physical or psychological, in the addition of quantities. But Luce and Tukey showed that additivity is possible even when we cannot mimic empirically the additive
operations of extensive physical measurement. Thus conjoint measurement can apply to those very situations, like the measurement of sensation, to which variants of the "quantity objection" have been
raised--such as a cannon shot, which "does not contain so and so many pistol cracks within it." In Luce and Tukey's theory, values of two stimulus components can be shown to add even when we have
only ordinal information about the relative effects of the conjoined values. Assume a set of stimuli comprising two components, with component A taking on values ai, and B taking on values bj.
Additivity requires that two main principles hold (e.g., Krantz et al., 1971). One is transitivity: If (a1, b1) ≥ (a2, b2) and (a2, b2) ≥ (a3, b3), then (a1, b1) ≥ (a3, b3). The second is cancellation: If (a2, b3) ≥ (a3, b2) and (a1, b2) ≥ (a2, b1), then (a1, b3) ≥ (a3, b1).
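The cancellation condition can be checked mechanically for a toy additive structure (component values invented for illustration): any structure representable as ai + bj satisfies it for every assignment of indices, because adding the two premises and cancelling the common terms yields the conclusion.

```python
from itertools import permutations

# Toy additive conjoint structure; the component values are invented for
# illustration. Conjoint measurement itself uses only the ordering >=.
A = {1: 0.5, 2: 1.3, 3: 2.0}   # values of component A
B = {1: 0.2, 2: 0.9, 3: 1.7}   # values of component B

def ge(p, q):
    """Ordinal comparison of conjoint stimuli (a_i, b_j)."""
    (i1, j1), (i2, j2) = p, q
    return A[i1] + B[j1] >= A[i2] + B[j2]

# Cancellation: whenever (a2,b3) >= (a3,b2) and (a1,b2) >= (a2,b1),
# it must follow that (a1,b3) >= (a3,b1). Summing the two premises and
# cancelling a2 and b2 gives the conclusion, so every additive structure
# passes; verify all index assignments mechanically.
for a1, a2, a3 in permutations(A, 3):
    for b1, b2, b3 in permutations(B, 3):
        if ge((a2, b3), (a3, b2)) and ge((a1, b2), (a2, b1)):
            assert ge((a1, b3), (a3, b1))
```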
Cancellation is the critical axiom: It entails the commensurability of a given change in one of the variables with a given change in another. If additivity and cancellation hold, then there exist
interval-scale representations for dimensions A and B whose values are linearly additive. In demonstrating that fundamental (additive) measurement is not limited to the extensive quantities of
physical sciences, conjoint measurement theory provides a deep analysis of additive structures (cf. Luce & Krumhansl, 1988). Levelt, Riemersma, and Bunt (1972) obtained judgments of relative loudness
of tones presented at unequal intensities to the two ears and found the results were consistent with a model of additivity. The loudness scales derived by Levelt et al. could be described by power
functions of sound pressure, with exponent averaging about 0.5, closer to the prototypical value 0.6 of Stevens's sone scale than to the 0.3 of Garner's lambda scale. Similar results were reported
using conjoint scaling (Marks, 1978c) and a functional-measurement approach, the latter applied both to binaural loudness additivity and to the additivity of loudnesses of tones far separated in
sound frequency (Marks, 1978a, 1978b, 1979b). In all cases, the loudness scales had exponents near 0.6 in terms of sound pressure. In touch, the perceived intensity of vibrotactile stimuli comprising
multiple sinusoids was also linearly additive on the magnitude-estimation scale (Marks, 1979a). Schneider (1988) performed a thorough and elegant study of additivity of 2000-Hz and 5000-Hz tones
(separated by several critical bandwidths). Applying a conjoint-scaling method to direct comparisons of loudness, Schneider reported that the data of individual subjects satisfied the constraints of
additivity, permitting him to derive loudness scales that at each frequency were consistent with power functions; exponents ranged over individuals from 0.5 to 0.7 (re sound pressure). In a related
vein, Zwislocki (1983) used
a conjoint-scaling approach to confirm the loudness scales that he derived using absolute magnitude estimation. We note, however, some evidence for deviations from additivity in both the case of
binaural combinations (Gigerenzer & Strube, 1983) and multifrequency combinations (Hübner & Ellermeier, 1993), tested through a probabilistic generalization called random conjoint measurement
(Falmagne, 1976). Last, in this regard, Parker, Schneider, Stein, Popper, Darte, and Needel (1981) and Parker and Schneider (1988) used conjoint scaling to measure the utility of money and other
commodities. The results are consistent with the hypothesis that the utility of a bundle of commodities equals the sum of the utilities of the components. The utility of money is a negatively
accelerated function of money itself, consistent with power functions with exponents smaller than unity, and roughly similar to those derived from numerical scaling (e.g., Galanter, 1962; Galanter &
Pliner, 1974). As an aside, we note that, much like Thurstonian scaling, conjoint scaling, and indeed magnitude scaling in general, makes it possible to quantify psychological responses even to
stimuli that lack a clear physical metric. So, for example, a series of studies by Ekman and Künnapas (1962, 1963a, 1963b) used a variant of magnitude scaling, ratio estimation, to quantify such
domains as the esthetic value of samples of handwriting, the political importance of Swedish monarchs, and the degree of social-political conservatism of verbal statements. When they are consistent
with linear additivity, results obtained by applying conjoint-measurement theory to numerical judgments, as with other findings discussed earlier, can be characterized in terms of (or shown to be
consistent with) a two-stage model. If again we characterize F 1 and F2 as power functions, the model becomes R = k(Ilf3, + I213,)~32
But now, with magnitude estimation, the average judgment function is typically linear (β2 = 1), and the values of β1 agree much better with magnitude estimates than with difference judgments (e.g.,
Marks, 1978c, 1979b). These findings suggest that it may be premature to reject magnitude estimation out of hand in favor of measures derived from sensory differences. It is possible that the sensory
representations underlying judgments of magnitude simply differ from those underlying judgments of difference but are no less valid (Marks, 1979b). 3. Judgments of Magnitude, Differences, and Ratios
Torgerson (1961) raised the possibility that subjects perceive a single relation between stimuli regardless of whether they are instructed to judge "ratios" or "differences." When ratio and
difference judgments are compared directly, the two sets typically have the same rank order (e.g., Birn-
baum, 1978, 1982; Mellers, Davis, & Birnbaum, 1984). Were the subjects to follow the respective instructions appropriately, the two sets of judgments would not be identically ordered. So Torgerson's
conjecture seems plausible. But because the responses may be affected by nonlinear judgment functions, it is more difficult to ascertain whether it is differences or ratios that subjects report.
Asking the subjects to compare stimulus relations (e.g., "ratios of differences," or "differences of differences") may help decide. Convergence of scale values--assuming that equivalent values operate in the two tasks--provides another constraint. Combining results makes it possible to test Torgerson's hypothesis and uncover the nature of the underlying operation--an avenue pursued by Birnbaum. In
Birnbaum's theory, subjects use subtraction regardless of whether they are instructed to judge ratios or differences. Subjectively, the stimuli form an interval scale like positions of cities on a
map. On such a scale, differences are meaningful but ratios are not. Because a difference has a well-defined origin or zero, judgments of ratios of differences should obey a ratio model, whereas
judgments of differences of differences should obey a difference model; evidence supports these predictions (e.g., Birnbaum, 1978, 1982; Birnbaum, C. Anderson, & Hynan, 1989). The subtractive theory
provides a coherent account of several phenomena, including context effects, judgments of inverse attributes, and relations between ratings and magnitude estimates (see Birnbaum, 1990, for summary).
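Birnbaum's subtractive account can be illustrated with a toy computation (scale values invented for illustration): a single subtractive comparison, passed through an exponential judgment function for the "ratio" task and a linear one for the "difference" task, necessarily yields the same rank order in both tasks.

```python
import math
from itertools import combinations

# Subtractive theory in miniature: one operation, s_j - s_i, underlies both
# tasks; only the judgment function differs. Scale values are arbitrary
# interval-scale numbers, invented for illustration.
s = [0.5, 1.2, 2.0, 3.1, 4.4]
pairs = list(combinations(range(len(s)), 2))

diff_judgments = [s[j] - s[i] for i, j in pairs]             # "difference" task
ratio_judgments = [math.exp(s[j] - s[i]) for i, j in pairs]  # "ratio" task

# Because exp is strictly increasing, the two tasks yield identical rank
# orders -- the signature of a single underlying subtractive operation.
rank = lambda xs: sorted(range(len(xs)), key=lambda k: xs[k])
assert rank(diff_judgments) == rank(ratio_judgments)
```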
Whatever the subjects judge (differences or ratios) and however they make their judgments (by category rating or magnitude estimation), according to Birnbaum's theory the sensations, S, remain the
same. Only the judgment function, F 2, differs. Judgment functions are readily affected by such factors as distribution of stimulus levels, choice of standard, and wording of the instructions. The
relation between judgments of ratios and judgments of differences, or between magnitude estimates and fixed-scale ratings, can be systematically manipulated. This granted, in many cases F 2 obtained
with judgments of ratios is exponentially related to F2 obtained with judgments of differences. Consequently, judgments of ratios may appear at first glance to fit a ratio model, and judgments of
differences may appear to fit a subtractive, or differencing, model. But in fact both sets of judgments rely on a single, subtractive process operating at the level of the underlying sensory scale
values; it is the judgment functions that differ. Furthermore, if subjects try to judge ratios when they give magnitude estimates but try to judge differences when they give category ratings, then
the magnitude estimates should likewise be an exponential function of corresponding category ratings--consistent with the common finding that magnitude estimates and category ratings are nonlinearly
related. This said, it is necessary to qualify some of these conclusions. When asked to judge "ratios" and "differences," subjects sometimes do produce
different rank orders, as if they were in fact using different mental operations, not just different judgment functions. Although the use of different rank orders is undoubtedly the exception in
judgments of continua like loudness, it seems to be the rule in the particular case of length of lines. Judgments of ratios of line lengths are consistent with a ratio model in which subjective
values are a power function of physical length (exponent not determinate), whereas judgments of differences in line length are consistent with a subtractive model in which subjective values are
proportional to the square root of physical length (Parker et al., 1975). Perhaps subjects can use ratio models when stimuli have well-learned physical units (cf. Poulton, 1989) or when they have
extensive rather than intensive properties, and thus, conceivably, better-defined zero points (cf. Hagerty & Birnbaum, 1978). Another set of problems appears when we try to extend the findings on
judgments of ratios and differences to explicate magnitude estimation and category rating. Let us assume, first, that judgments of ratios and differences rely on the single operation of differencing
and, second, that an exponential judgment function underlies ratios but a linear judgment function underlies differences. If magnitude estimates and category ratings reflect judgments of ratios and
differences, respectively, then it follows that category ratings should be logarithmically related to corresponding magnitude estimates. Further, if magnitude estimates follow a power function of
stimulus intensity, the category ratings (or other scales of difference) should follow a logarithmic function of intensity. But category ratings are only infrequently logarithmically related to the
corresponding magnitude estimates, or to stimulus intensity. Often, the nonlinearity is less extreme, consistent with a power function with an exponent somewhat smaller than unity (Marks, 1968,
1974a; Stevens, 1971; Ward, 1972). Indeed, sometimes category ratings and magnitude estimates are similar functions of stimulus intensity (Foley et al., 1983; Gibson & Tomko, 1972). Perhaps one or
more of the assumptions is incorrect. It is possible, for example, that subjects do not use implicit ratios in magnitude estimation. But given the sensitivity of judgment functions to stimulus
context (for example, the form of rating scales depends systematically on stimulus range and number of available categories: e.g., Marks, 1968; Parducci, 1982; Parducci & Wedell, 1986; Stevens &
Galanter, 1957), this evidence is not wholly conclusive. In this matter, see Birnbaum (1980). More significant in this regard are the various scales derived from judgments of intervals or differences
in which nonlinearities in judgment functions have been eliminated or circumvented--either by applying explicit two-stage power-function models or by subjecting the judgments to nonmetric scaling
analyses. In virtually all of these cases, the scales of sensory difference turn out to be power functions, not logarithmic functions, of
stimulus intensity (e.g., Curtis et al., 1968; Rule & Curtis, 1976; Parker & Schneider, 1974; Schneider, 1980; Schneider et al., 1974). If magnitude estimates were exponentially related to these
scale values, the resulting relation should deviate from a power function. The judgment functions obtained from, say, magnitude estimates of sensory differences typically are not as extreme as an
exponential rule dictates but can be described by power functions: Eq. (43) with exponents, β2, generally ranging between about 1.5 and 2.0 (Curtis et al., 1968; Rule & Curtis, 1977; Rule, Curtis, &
Markley, 1970). One possible resolution comes by dissociating judgments of magnitudes from judgments of ratios. This is the position taken by Zwislocki (e.g., 1983), and it traces back to
observations made more than 30 years ago (e.g., Hellman & Zwislocki, 1961). By this token, judgments of magnitude rely on, or can rely on, assessments of individual sensory experiences and need not
be based on putative ratio relations (cf. Marks, 1979b). 4. Functional Measurement Norman Anderson's functional measurement originated in work on person perception some three decades ago. That work
showed the value of studying the algebraic structures that underlie processes of information integration, with tasks involving adjectival ascriptions of personality (Asch, 1946). Subsequently, the
conceptual framework extended to virtually every arena of experimental psychology. These developments and the experimental results are summarized in two volumes (Anderson, 1981, 1982). Several works
recount psychophysical data and theory (e.g., Anderson, 1970, 1974, 1992). Figure 8 depicts a hypothetical chain of transformations from observable stimuli, Ii, to observable response, R. These
stimuli are first transformed into sensations, Si, by appropriate psychophysical functions, the input functions defined by the psychophysical law. The values of Sj then combine to produce an overall
sensation, S, the combination governed by a corresponding integration rule, or psychological law. Finally, a judgment or output function, or psychomotor law, maps the outcome of the integration
process, S, onto observable response R. Note that, for a single physical continuum, the functions implied by Figure 8 translate to the two-stage model of magnitude estimation discussed earlier and
characterized by Eqs. (43) and (47), where the psychophysical transformation from stimulus to sensation is followed by a judgment transformation mapping the sensation to response. With
unidimensionally varying stimuli, however, the model itself does not provide a means to separate the two transformations and uncover the psychophysical function. The key to functional measurement
lies in the use of several physical
FIGURE 8 The model of functional measurement offered by Anderson (e.g., 1977), in which stimulus values, Ii, are transformed to psychological values, Si, by a psychophysical law, Si = F1(Ii); the psychological values Si are integrated by an appropriate psychological law, S = G(S1, S2, . . . , Sn); and the resulting percepts, S, are mapped to response, R, by a psychomotor law, R = F2(S). From Figure 1.1 of Foundations of Information Integration Theory, by N. H. Anderson, New York: Academic Press, 1977. Reprinted with permission of the author and publisher.
continua, whose values vary according to an appropriate factorial design. The motivation for prescribing a multidimensional design is not just procedural. Rather it provides a framework for
theoretical analysis, particularly if it is possible to derive the rule of information integration from a suprastructural, theoretical framework. Primary emphasis is placed, therefore, on stimulus
integration. Often, integration is found to obey simple algebraic rules such as addition or averaging. Collectively, such rules have been called cognitive algebra. In Anderson's scheme, the process
of information integration involves unobserved sensations, S i. If, however, the overt response, R, is linearly related to the integrated response, R', then the data can be taken as joint evidence
for both the integration rule and the validity of the response scale; analysis of variance and visual examination of the factorial plots provide ways to assess the adequacy of the model. Finally, one
can relate S (through R) to the stimulus in order to derive psychophysical functions. A study of the integration of pain (Algom, Raphaeli, & Cohen-Raz, 1986) illustrates the approach. The authors covaried the values of two separate noxious variables in a factorial design, combining 6 levels of electric current applied to the wrist with 6 levels of uncomfortably loud tones, making 36
tone-shock compounds in all. Subjects gave magnitude estimates
of the painfulness of these concurrently presented stimuli. Figure 9 reproduces the main results, showing how the judgments of pain increased with increasing shock current. Each curve represents the results for a fixed intensity of the tone; the greater the tone intensity, the greater the pain.

FIGURE 9 An example of functional measurement applied to the perception and judgment of pain induced by systematically combining levels of noxious acoustic and electrical stimulation. Plotted are judgments of pain as a function of shock intensity (in mA), each curve representing a different level of constant SPL delivered to the two ears. After Algom, Raphaeli, and Cohen-Raz (1986). Copyright 1986 by the American Psychological Association. Reprinted with permission of the author.

The most salient
characteristic of the factorial plot is the roughly equal spacing of the family of curves in the vertical dimension, although a slight trend toward divergence in the upper right is evident (more on
this later). Parallel spacing implies linear additivity of the numerical responses. By implication, the aversiveness of an electric shock and a strong tone presented simultaneously at various
intensities approximates the linear sum of the individual painful components. Note, too, that the shock-only trials (bottom curve) have the same slope as do the compounds of shock plus tone. This
feature supports an additive composition rule over an averaging rule, for in such cases an averaging rule predicts a crossover of the adjacent functions. Observed parallelism is consistent with three
features of the functional-measurement model: First, pain shows additivity; in this particular test, the data corroborate Algom's (1992a) functional theory of pain. Second, the judgment function
appears linear. So, third, by implication, the psychophysical functions are valid. Accordingly, Algom et al. derived psychophysical functions separately for shock-induced pain and for acoustically
induced pain. Both relations could be approximated by power functions,
Lawrence E. Marks and Daniel Algom
with exponents of about 1.1 for shock and 0.9 for sound. Note that the function for auditory pain differs appreciably from the functions for loudness that are routinely derived for acoustic sounds.
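The parallelism diagnostic described above can be sketched in a few lines; the sensation values below are hypothetical, not the data of Algom et al. (1986). Under an additive integration rule and a linear judgment function, every pair of curves in the factorial plot is separated by a constant vertical gap.

```python
# Parallelism diagnostic in functional measurement, with hypothetical
# sensation magnitudes for 6 shock levels and 6 tone levels.

shock_pain = [1.0, 2.2, 3.1, 4.5, 5.2, 6.0]   # hypothetical shock values
tone_pain = [0.0, 0.8, 1.5, 2.4, 3.0, 3.9]    # hypothetical tone values

# Factorial table of responses, one row (curve) per tone level.
table = [[sp + tp for sp in shock_pain] for tp in tone_pain]

# Parallelism: the vertical spacing between any two curves is the same
# at every shock level.
for row_a in table:
    for row_b in table:
        gaps = [a - b for a, b in zip(row_a, row_b)]
        assert max(gaps) - min(gaps) < 1e-9
```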
The use of magnitude estimates for a response measure is an exception within the school of functional measurement. Anderson (1970, 1974) has found interval or category scales superior because, in
most cases, ratings but not magnitude estimates reveal directly the anticipated rules of integration. Indeed, Algom et al. (1986) had to apply a nonlinear transformation to their data in order to
exhibit full parallelism. That maneuver--monotonically transforming the data to fit the selected model--is routinely followed when the original data fail to support the tested rule. In general,
no need for such rescaling arises when ratings are used. This result is interpreted to support the validity of rating scales. Moreover, the criterion of scale convergence across different tasks is
satisfied when ratings are used, seldom when magnitude estimates are. Neither argument, however, is conclusive. The data may fail to produce the selected model simply because it is false, not because
magnitude estimates are biased. Further, the criterion of scale convergence may not always be appropriate. Though not parsimonious, it is possible that the sensations aroused by the same stimuli
actually change over different tasks (e.g., Eisler, 1963; Luce & Edwards, 1958; Marks, 1974a, 1979b; cf. Wasserman, 1991). The assertion that an observed pattern of responses jointly supports the
model and the linearity of the judgment scale is controversial (e.g., Birnbaum, 1982). As Gescheider (1997) and Gigerenzer and Murray (1987) have shown, any number of models can account for a given pattern of data, given an appropriately chosen judgment function. And, within functional measurement, there may be no reason to prefer one model over another unless there is an appropriate
theory. In this sense, functional measurement provides an infrastructure rather than a theoretically more powerful suprastructure. The trade-off between integration rule and judgment function poses a
serious theoretical quandary. It is possible, of course, simply to assume that a particular model is appropriate or that a particular judgment function is linear, but then the approach provides
little leverage beyond that of unidimensional designs. Anderson has devised several methods (e.g., two-operation designs) to enable one to reject some alternatives; but thus far no general solution has emerged.
B. Response Times and Sensory Magnitudes More than 80 years ago, Piéron (1914) suggested that simple response times--the time from the onset of a stimulus to the initiation of a response indicating that the stimulus was perceived--might provide a measure of sensation magnitude. He proceeded to measure RT-versus-intensity functions in several modalities, the outcome of this endeavor being an equation that came to be known as Piéron's law:

RT = kI^(−β) + T0. (48)

Equation (48) is an inverse power function, with RT declining linearly with stimulus intensity I raised to the −β power. The asymptotic limit, T0, represents an irreducible minimum in RT. Modern
studies, using more sophisticated equipment and experimental designs, have produced comparable findings, for example, in vision (Mansfield, 1973; Vaughn, Costa, & Gilden, 1966), hearing (Chocholle,
1940; Kohfeld, Santee, & Wallace, 1981a, 1981b; Luce & Green, 1972; McGill, 1961), and warmth (Banks, 1973). Unlike measures of comparative response, where subjects must choose between two possible
stimuli or must respond to some relation between stimuli, measures of simple RT merely require the subject to respond as quickly as possible to the detection or perception of a stimulus. A seemingly
straightforward model might postulate that the subject initiates a response only when the amount of information surpasses some criterion (see, e.g., Grice, 1968). Perhaps this is most easily
conceived in neurophysiological terms, where the criterion might correspond to a fixed number of neural impulses (though it could instead correspond to a measure of interpulse arrival times; see Luce
& Green, 1972). Given that neural firing rates increase with stimulus intensity, the time to reach criterion will be inversely proportional to the firing rate (assuming the criterion stays constant
over time; for review, see Luce, 1986). Moreover, if the criterial level of information (or neural response) corresponded to sensory magnitude, then we might expect the exponent α of Eq. (48)
to correspond in some direct or regular manner to exponents of psychophysical functions derived from scales of either differences or magnitudes. Unfortunately, there does not seem to be any simple or
uniform connection between RT and scales of magnitude. For example, studies of simple visual RT (e.g., Mansfield, 1973; Vaughn et al., 1966) show an exponent of about 0.33, like the value of
β obtained for magnitude scales of brightness under comparable conditions (J. C. Stevens & Stevens, 1963), whereas most comparable studies of auditory RT (e.g., Chocholle, 1940; McGill, 1961) are
consistent with an exponent α, computed in terms of sound pressure, of about 0.3 (corresponding to Garner's lambda scale of loudness difference) instead of 0.6 (Stevens's sone scale of loudness).
Again, it is instructive to examine the extension of response-time measures to stimuli that vary multidimensionally, for such measures provide one way to assess the consistency of any putative
relation between RT and sensory magnitude. Thus, we may ask whether different stimuli that match in perceived magnitude produce equivalent RTs. Chocholle (1940) examined such relations in some
detail, noting that equal-RT curves, obtained
Lawrence E. Marks and Daniel Algom
from responses to tones varying in both sound frequency and SPL, closely resemble equal-loudness curves; similarly, embedding a tone within a masking noise reduces the tone's loudness and correspondingly
increases the RT. But while close, the correspondence between loudness and RT is not perfect, either across sound frequency (Kohfeld et al., 1981a, 1981b), or when tones are partially masked by noise
(Chocholle & Greenbaum, 1966). That is, equally loud tones do not always give exactly equal RT. In fact, under certain circumstances, RT and sensation magnitudes can diverge markedly and in important
ways: Thus, while both the brightness of flashes of light and RT depend approximately on the total luminous energy (Bloch's law), the critical duration τ for energy integration varies inversely with
luminance in measures of brightness (as discussed earlier) but is independent of luminance in measures of RT. And only brightness, not RT, shows a Broca-Sulzer maximum (Raab & Fehrer, 1962; Mansfield, 1973).

VI. CONTEXTUAL EFFECTS

Perception, like so many other psychological processes, is quintessentially contextual: The response given to a nominally constant stimulus can depend on a
whole host of factors, including the other stimuli recently presented or available in the ensemble (stimulus-set context), the responses made to recent stimuli relative to the available set of
responses (response and response-set context), the presence of other stimuli in the environment, the existing frame of reference, and so forth. Moreover, to advocate "stage theories" of
psychophysical processing, like those of Anderson (1981, 1982), Curtis et al. (1968), Marks (1979b), and many others, is to acknowledge the possibility, at least, that context can affect processes
occurring at every stage: in early sensory transduction, in later perceptual encoding, in possible cognitive recoding, and in decision/response (see also Baird, 1997). Thus there is perhaps a sense
in which a full and viable theory of context would be a "theory of everything" (Algom & Marks, 1990).

A. Effects of Stimulus Context
Psychophysical judgments are especially sensitive to stimulus context, in particular to the levels, spacing, and probability of occurrence of the possible stimuli within the set (cf. Garner, 1962).
Judgments are also sensitive to other stimuli in the experimental environment, for example, to stimuli given as "anchors" to identify the end points of rating scales, or to stimuli used as "standards"
in magnitude estimation. Such effects are widespread, found using all kinds of stimuli regardless of the perceptual modality, and consequently seem to represent the outcome of very general mechanisms
that subserve perceptual encoding and judgment.
1. Helson's Adaptation-Level Theory

Perhaps the most extensive and wide-reaching account of contextual effects appears in Helson's (e.g., 1948, 1959, 1964) theory of adaptation level, or AL. According to Helson, every stimulus is perceived and judged homeostatically, within a frame of reference, relative to the momentary equilibrium value, the AL. The AL itself depends on the recent stimulus history, the presence of any perceptual "anchors," stimuli in the ambient environment, and so forth, and it determines the level of stimulation that is perceived as "medium." To the extent that stimulation exceeds or falls below the AL, the stimulus is perceived as relatively strong or weak. Helson's model can be written, in one version, as

R_i = F_i(I_i / AL_i),

where R_i is the response given with respect to perceptual dimension i, F_i is the psychophysical function operating on the stimulus I_i, and AL_i is the adaptation level of dimension i. Clearly, the
model is geared to account readily for contrast effects: Given three stimuli, ordered in magnitude so that A < B < C, contrast is said to describe the tendency to judge B to be larger when it appears
in the context of A than when it appears in the context of C. For example, subjects may rate an object as "large" when it falls near the top of the stimulus range (of tiny things) but "small" when it
falls near the bottom of the range (of bigger ones). Having learned that a grape is a small fruit and a watermelon a large one (that is, learning to "anchor the scale" by these examples), we may
judge a grapefruit to be the prototype of "mid-sized." And similarly learning that a mouse is a small mammal and an elephant a large one, our prototype for a mammal judged mid-sized may be a human or a
horse. But making such judgments does not mean that we infer the size of either Don Quixote or Rosinante to equal that of citrus fruit. In one popular version, the AL is defined as the weighted
geometric average of the prevailing stimuli. This account follows from two principles: first, that the AL represents an average of the psychological values and, second, that these psychological
values are logarithmically related to stimulus levels, as dictated by Fechner's law (see, e.g., Helson, 1959; Michels & Helson, 1949). In a general way, then, Helson's AL theory accounts for the
pattern of responses obtained when people use rating scales to judge different sets of objects. Helson's model as given does not speak directly to the mechanism underlying changes in AL (but see
Wedell, 1995). Even if the model were to provide an adequate quantitative account of psychophysical judgments, we may ask whether it serves a useful purpose to group what are surely different
processes that act to modify judgments. Stevens (1958), for instance,
criticized Helson for conflating changes in AL, which may represent shifts in semantic labeling (see also Anderson, 1975), with changes induced by sensory adaptation, the latter conceived to be akin
to physiological fatigue, perhaps in peripheral receptor processes. This may, of course, oversimplify the account of adaptation, which itself can represent effects of central as well as peripheral
neural processes, and which can entail shifts in response criteria as well as changes in perceptual sensitivity or representation.

2. Parducci's Range-Frequency Theory

Parducci (1965, 1974) proposed a model that relates responses made on category-rating scales directly to the characteristics of the stimulus set: Range-frequency (RF) theory says that subjects distribute their responses so as to compromise between two rules: (1) subjects divide the stimulus range into uniform intervals over which the response categories are distributed; and (2) subjects use the categories equally often, thereby distributing their responses in proportion to the relative frequency with which the various stimuli are presented. A simple version of the RF principle can be written

C_i = wE_i + (1 - w)P_i,

where C_i is the average category rating of stimulus i, E_i is its range value (the value that C_i would have in the absence of effects of presentation frequency), P_i is its frequency value (the value that C_i would have if the subject simply made ordinal responses, using response categories equally often), and w (0 ≤ w ≤ 1) is the weighting coefficient of the range component. When stimuli
are presented equally often, responses to stimulus i depend on E i. But if the distribution of stimulus presentations is skewed, so that either weak or strong stimuli are presented more often, values
of C i change, with a corresponding change in the relation of C i to stimulus magnitude. Consider the example in Figure 10, showing results reported by Parducci, Knobel, and Thomas (1976), who had
subjects rate the size of squares in three conditions that varied the relative frequency with which stimuli were presented. All three conditions used the same set of stimuli, but in one condition the
stimuli were presented equally often (the rectangular skewing, indicated by the crosses), whereas in the other two conditions either the weak ones were presented more often than the strong ones
(positive skewing, indicated by the open circles) or the strong ones were presented more often than the weak ones (negative skewing, indicated by the filled circles). Given the rectangular
distribution, the subjects tend to use the response categories equally often, leading these categories to be distributed uniformly over the set of stimuli and thus to a psychophysical function that
depends on the spacing of the stimuli (Parducci & Perrett, 1971; J. C. Stevens, 1958) and to some extent on the number of available response categories (Foley et al.,
FIGURE 10 Effects of the distribution of stimulus values on category ratings of size of squares. The distributions of stimulus presentations were skewed positively (open circles), rectangular (crosses), and skewed negatively (filled circles). Note that each successive stimulus value corresponds to a proportional increase in the width of the square by a factor of 1.16; thus the spacing of the stimulus magnitudes is logarithmic. Adapted from Figure 5 of "Independent Contexts for Category Ratings: A Range-Frequency Analysis," by A. Parducci, S. Knobel, and C. Thomas, 1976, Perception & Psychophysics, 1, pp. 439-446. Reprinted with permission of The Psychonomic Society and the author.
1983; Marks, 1968; Parducci & Wedell, 1986). With positive or negative skewing, the same tendency to use categories equally often causes the subjects to assign relatively larger ratings, on average,
to the frequently presented weak stimuli (positive skew) or to the strong stimuli (negative skew). It may be possible, of course, to explain results like these by principles other than range and
frequency. For example, Haubensak (1992) has suggested that distribution effects arise because, first, subjects tend to form their response scale in the first few trials, maintaining the scale
throughout a test session, and, second, the most frequently presented stimuli are most likely to be presented early in an experimental session. AL theory can make similar predictions about the
effects of stimulus distribution, though RF theory appears to be superior in predicting the effects quantitatively. Moreover, analyses performed in terms of functional measurement support the general
features of RF (Anderson, 1975, 1992).
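The range-frequency compromise lends itself to a small numerical sketch. Here the range value is taken as a stimulus's relative position within the stimulus range and the frequency value as its percentile in the presentation distribution; the weighting, the stimulus values, and the skewed distributions below are all illustrative assumptions, not Parducci's data:

```python
def rf_rating(stim, stimuli, presented, w=0.5, n_categories=9):
    """Sketch of a range-frequency prediction for one stimulus."""
    lo, hi = min(stimuli), max(stimuli)
    range_value = (stim - lo) / (hi - lo)          # relative position in range
    below = sum(1 for p in presented if p < stim)  # percentile among presentations
    freq_value = below / (len(presented) - 1)
    compromise = w * range_value + (1 - w) * freq_value
    return 1 + compromise * (n_categories - 1)     # map onto categories 1..9

stimuli = [1, 2, 3, 4, 5]
pos_skew = [1, 1, 1, 2, 2, 3, 4, 5]   # weak stimuli presented more often
neg_skew = [1, 2, 3, 4, 4, 5, 5, 5]   # strong stimuli presented more often

# The same middle stimulus draws a higher rating under positive skew,
# matching the direction of the effect shown in Figure 10.
assert rf_rating(3, stimuli, pos_skew) > rf_rating(3, stimuli, neg_skew)
```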
B. Effects of Stimulus Sequence Within a given test session, as in life outside the laboratory, the immediate, local, or short-term stimulus context is always shifting, for our world
changes with the appearance of every new stimulus: a "blooming and buzzing" world if not one marked by "confusion." Context is never absent, and the only environmental setting that remains
contextually constant is one that never changes. A typical experimental session faces the subject with a Heraclitean context, ever ebbing and flowing: a set of signals presented in some order,
usually random, perhaps in several replicates. That the same stimulus typically receives different responses over the course of a session is well known (one of the best-known and longest-studied
phenomena in psychophysics is the so-called time-order error, where the second of two consecutively presented, identical stimuli is judged to differ from the first and, typically, to appear greater
in magnitude; see, e.g., Hellström, 1985, for review). Careful analysis of the trial-by-trial responding shows that much of this variation cannot be attributed either to "psychological noise" (in
Thurstone's sense) or to intrinsic stimulus variation (in Link's sense). Some of the variation consists of systematic change due to the fluctuating microenvironment, to the stimuli and responses given on
recent trials (e.g., Cross, 1973; Jesteadt, Luce, & Green, 1977; Luce & Green, 1974; Ward, 1973; Ward & Lockhead, 1970). Although usually modest in size, trial-by-trial or sequential effects
sometimes can be substantial, with magnitude estimates varying by a factor of 2:1 depending on the previous few stimuli (Lockhead & King, 1983; Marks, 1993). Understanding the source of these
sequential effects may provide a key to unlock the mechanisms by which stimuli are encoded and decisions made (see Baird, 1997). Sequential effects are evident in virtually all psychophysical tasks,
including magnitude estimation, category rating, and absolute stimulus identification. Here we summarize a few main findings. First and foremost, sequential effects tend to be largely assimilative.
That is, the response to a given stimulus on the current trial tends to be greater when the stimulus (or response) on the previous trial was greater than the current one, smaller when the stimulus
(response) on the previous trial was smaller. This is true both when no feedback is given (e.g., Cross, 1973; Jesteadt, Luce, & Green, 1977), as is typical in magnitude estimation, and when feedback
is given (e.g., Ward & Lockhead, 1971), as is typical in absolute identification (for possible models, see Wagner & Baird, 1981). Note that the pervasiveness of assimilation runs counter to
expectations of contrast based on AL theory. It is still controversial whether, or when, short-term sequential effects may contain a component of contrast. Staddon, King, and Lockhead (1980) inferred
the presence of contrast that depended on stimuli two or more trials earlier, but they used an absolute-identification task, that is, a task that provided the subjects with feedback. King and Lockhead
(1981) came to a similar conclusion with respect to magnitude estimations obtained with feedback. Ward (1979, 1985, 1990) has applied a linear-regression model to both magnitude estimation and
category rating; his model contains an assimilative component, which depends on the magnitude of the responses given on several previous trials (independent of the stimulus, even when the stimuli are
presented to different modalities), and a contrastive component, which depends on the intensity of the stimuli on previous trials (but only if the stimuli are qualitatively similar). On the other
hand, DeCarlo (1992), using an autocorrelation model that relates responses to the current and previous stimuli, and not to previous responses, inferred that just assimilation takes place, not
contrast. As DeCarlo argued, whether one infers the presence of a particular phenomenon, in this case contrast, can depend on the particular quantitative model used to evaluate the data (see
also DeCarlo, 1994). Jesteadt, Luce, and Green (1977) noted that a simple regression analysis relating responses on successive trials can obscure one interesting aspect of the results: The
correlation between successive responses depends on the size of the stimulus difference. Successive judgments of loudness are strongly correlated when the signal levels are identical or nearly so;
the correlation drops to near zero when the decibel difference between successive signals reaches about 10 dB (see also Green, Luce, & Duncan, 1977; Ward, 1979). This means that when successive
stimuli are near in intensity, the ratio of the responses tends to be constant, consistent with a response-ratio model of magnitude estimation (Luce & Green, 1974). But the response-ratio hypothesis
fails when stimuli lie farther apart. Perhaps related is the finding that the coefficient of variation of the ratio of successive magnitude estimates follows a similar pattern, being small when the
ratio of successive stimuli is large (Baird, Green, & Luce, 1980; Green & Luce, 1974; Green et al., 1977). That both the correlation and the variability of successive responses depend on the
proximity of the stimulus levels led to the hypothesis, mentioned earlier, of a band of neural attention, about 10 dB to 20 dB wide, governing judgments (Green & Luce, 1974; Luce & Green, 1978; see
Luce et al., 1980). The frequency and intensity on the previous trial become the center of attention of an adjustable band; if the current signal falls within the band, the signal's representation is
presumably more precise. Treisman (1984) has offered an alternative model, in the spirit of Thurstonian theory. In place of attention bands and response ratios, Treisman hypothesized a set of
adjustable response criteria, into which the logarithmically transformed stimuli map. Treisman's (1984) model, like one offered by Braida and Durlach (1972), treats magnitude estimates much like a
set of response categories, with boundaries corresponding to locations on a decision axis. As Braida and Durlach pointed out, their model "is a special case of Thurstone's 'law of Categorical
Judgement'" (p. 484). These models are probabilistic, unlike the deterministic ones, such as the functional-measurement and two-state models that are often applied to magnitude judgments. The stochastic nature of magnitude judgments may provide a valuable entrée to understanding the processes of sensory transformation and psychophysical judgment. If, for example, the neural system represents sensory
stimuli in terms of a set of independent Poisson processes (cf. Link's, 1992, wave theory), then the distribution of sensory responses, and thus to some extent the distribution of magnitude
estimates, should depend on the underlying mechanism: following approximately a Gaussian distribution if the sensory system counts impulses, but approximately a gamma distribution if the system
measures the inverse of the time between pulses (Luce & Green, 1972). Perhaps subjects can use both mechanisms (Green & Luce, 1973). As already noted, of course, choosing a counting mechanism or a
timing mechanism for a model is by itself not enough to explain the various results, as the distributions of underlying count-based or time-based responses (mental or neural) are modified by whatever
mechanisms produce sequential dependencies.
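The counting-versus-timing contrast in the preceding paragraph can be sketched with a toy Poisson pulse train: counting pulses in a fixed window yields an approximately Gaussian count, whereas waiting for a fixed number of pulses yields a gamma-distributed time. The pulse rate, window, and sample sizes below are arbitrary illustrative choices:

```python
import random

random.seed(1)
RATE = 50.0  # pulses per second; assumed to grow with stimulus intensity

def count_in_window(duration=1.0):
    """Counting mechanism: Poisson pulse count in a fixed window."""
    t, n = 0.0, 0
    while True:
        t += random.expovariate(RATE)
        if t > duration:
            return n
        n += 1

def time_for_k_pulses(k=10):
    """Timing mechanism: waiting time for k pulses (gamma distributed)."""
    return sum(random.expovariate(RATE) for _ in range(k))

counts = [count_in_window() for _ in range(2000)]
waits = [time_for_k_pulses() for _ in range(2000)]

mean_count = sum(counts) / len(counts)   # close to RATE * duration = 50
mean_wait = sum(waits) / len(waits)      # close to k / RATE = 0.2 s
```

Either read-out grows lawfully with the pulse rate, which is one way to see why a counting or timing assumption alone cannot explain the sequential dependencies just described.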
C. Effects of Stimulus Range and Stimulus Level

Within the domain of stimulus-set context are effects of stimulus range and stimulus level. Stimulus range (SR) is generally taken to refer to the ratio of the largest to smallest stimulus in the series; in the case of logarithmically transformed stimulus levels, SR is exponentially related to the (log) difference between largest and smallest levels. A series of tones varying from 0.002 N/m² to 0.02 N/m² (equivalently, from 40 dB to 60 dB SPL) has a range of sound pressures of 10:1 (20 dB), whereas a series varying from 0.0002 to 0.2 N/m² (20 dB to 80 dB) has a range of 1000:1 (60 dB). Stimulus levels refer to the absolute values of the stimuli, often characterized by the mean level, independent of range. For example, a
series of tones varying from 40 dB to 60 dB and another series varying from 70 dB to 90 dB have the same 20-dB range but different absolute and mean levels. Considerable attention has been directed
toward understanding effects of range, less to effects of level (but see Kowal, 1993). The difference probably reflects the partiality of most psychophysicists to the view that judgments depend
largely on relations between stimuli, not on absolute levels; this bias can probably stand correction.

1. Stimulus Range

Range effects appear in virtually every kind of psychophysical judgment: not
only in magnitude estimates, where they have been studied extensively, but in category ratings and even in measures of discrimination. When responses are made on rating scales, the end points of the
response scale are generally fixed. Consequently, the range of responses is effectively forced to remain constant. Nevertheless, SR appears to exert a systematic effect on the degree of curvature of
the function relating mean category rating to log
stimulus intensity. When such relations are linear, they conform directly to Fechner's logarithmic law. Often, however, the relations are slightly accelerated, in which case they often conform well
to a power function, whose exponent tends to increase in size as SR increases (Marks, 1968; but see also Foley et al., 1983, for a different model and interpretation). Finally, the range of stimuli
can affect discriminability. This occurs even when one measures discriminability between a fixed pair of stimuli, embedded within different stimulus sets or ranges (Berliner & Durlach, 1973; Lockhead
& Hinson, 1986; see also Ward, Armstrong, & Golestani, 1996). Such variations may reflect a limited capacity to attend to signals that vary over a wide range, consistent with notions like that of an
attention band for intensity (Luce & Green, 1978). Stimulus range has long been known to exert systematic effects on magnitude estimation (ME), as if subjects tend to use a constant range of numbers
regardless of the range of stimuli. If a power function fits the relation between ME and stimulus intensity, and if the range of ME is constant, independent of SR, the observed exponent β would be
inversely proportional to log SR. Typically, MEs are not affected by stimulus range to so great an extent as this hypothesis suggests; instead, exponents change less than predicted by the model of
constant number-range (Teghtsoonian, 1971). Moreover, the change in exponent with change in SR is most evident when the SR is small: in loudness, exponents vary when range is smaller than about 30
dB. Increasing stimulus range beyond this point has little effect. Range effects operate in an analogous fashion when the method is magnitude production (MP), except that the roles of stimuli and
numbers reverse. In MP, were the subject to set a constant SR in the face of different ratio ranges of numbers presented, the exponent (calculated from the dependency of log number on log stimulus
intensity) would be directly proportional to the number range. Again, though, actual behavior is not so extreme as this, although observed exponents do increase slightly as number range increases
(Teghtsoonian & Teghtsoonian, 1983). Tempting as it may be to attribute effects of stimulus range in ME and MP solely to decisional processes, that is, to processes that encourage subjects to emit a
constant set of responses, evidence gleaned from a couple of experimental paradigms suggests that changes in SR may actually induce changes in the representations of sensory magnitude. Algom and
Marks (1990) examined range effects within the framework of various intensity-summation paradigms, asking their subjects to judge the loudness of both one-component and two-component tones (e.g., monaural and binaural signals, respectively, in a binaural-summation paradigm). The data were inconsistent with the hypothesis that a given acoustic signal always produces the same internal
representation of loudness, and that changes in exponent represent nothing but different mappings of internal representation into
numbers. Instead, results obtained in three paradigms (binaural summation of loudness, summation of loudness of multiple frequency components, and temporal integration of loudness) implied that
changing SR affects the sensory representations themselves, perhaps by modifying the relative size of the power-function exponents. Further, the changes in the rules of intensity summation, which
Algom and Marks took as evidence of changes in the underlying scale, were absent when subjects simply learned different mappings of stimuli to numbers, suggesting that range per se is crucial (Marks,
Galanter, & Baird, 1995). Schneider and Parker (1990) and Parker and Schneider (1994) arrived at a similar conclusion. They had their subjects make paired-comparison judgments of differences in
loudness (e.g., whether pair i, j differed less or more in loudness than pair k, l), then derived scales for loudness by means of nonmetric analysis of the rank orders of differences. For example, in
different experimental sessions, subjects heard different ranges of SPL of the 500-Hz and 2500-Hz tones, and this resulted in different loudness scales (power functions with different exponents),
implying changes in the internal representations. It was notable that the changes in exponent (steepening with small SR) took place only when the absolute levels of the signals were relatively weak,
implying that level as well as range of stimuli matters.

2. Stimulus Level

Variation in stimulus level is fundamental, of course, to many of the contrast effects that adaptation-level and
range-frequency theories seek to address. Given identical stimulus ranges, but different mean levels, subjects may adapt their categorical ratings to the stimulus set, thereby giving substantially
different mean ratings to the same stimulus presented in different contexts. This is hardly surprising, given the constraints intrinsic to the task. In this regard, it is perhaps more interesting to
ask about effects of stimulus level within magnitude estimation, where the response scale has fewer constraints. This question takes on special interest in conditions where the stimulus set shifts
within a session. Although contrast-type effects have sometimes been reported in ME (Melamed & Thurlow, 1971; Ross & DiLollo, 1971), changes in mean signal level often lead to the opposite behavior,
namely, a change in response that suggests assimilation. That is, if all of the stimuli in a set shift upward in constant proportion, the average response to a signal common to different sets is
actually greater when the signal appears in the high set and smaller when the signal appears in the low set (Marks, 1993). This pattern may reflect the outcome of whatever process is responsible for short-term sequential assimilation, described earlier. In this regard, see Baird (1997).
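The constant number-range hypothesis described earlier in this section has a simple arithmetic consequence: if subjects spread a fixed ratio range of numbers over whatever stimulus range is presented, the fitted power-function exponent equals log(number range)/log(SR) and therefore shrinks as SR grows. The 100:1 number range below is an illustrative assumption, not a fitted value:

```python
import math

def implied_exponent(number_range, stimulus_range):
    """Exponent implied by a fixed response range spread over a stimulus range."""
    return math.log(number_range) / math.log(stimulus_range)

# A fixed 100:1 range of numbers spread over growing stimulus ranges:
for sr in (10, 100, 1000):
    print(f"SR {sr}:1 -> implied exponent {implied_exponent(100, sr):.2f}")
# The implied exponent falls from 2.0 to 1.0 to 0.67 as SR grows.
```

Observed exponents change less steeply than this, which is Teghtsoonian's (1971) point that the constant number-range model overstates the effect.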
3. Differential Effects of Context

Changes in the average signal levels can have surprisingly strong and unexpected effects on judgments of intensity when signals vary multidimensionally. When subjects judge the loudness of low-frequency and high-frequency signals of various SPL, the rank-order properties of the resulting magnitude estimates depend systematically and differentially on the
mean SPLs at the two frequencies (Marks, 1988, 1992b, 1993; Marks & Warner, 1991; Schneider & Parker, 1990). Figure 11 gives typical results, in which data from one contextual condition are plotted
as circles (SPLs corresponding to the left-hand abscissa) and data from another contextual condition are plotted as squares (SPLs corresponding to the right-hand abscissa). In this example, taken
from Marks (1988), the circles refer to data obtained when the ensemble of stimuli contained mostly soft 500-Hz tones and mostly loud 2500-Hz tones and the squares to data obtained when the
distribution was reversed, so the ensemble contained mostly loud 500-Hz tones and mostly soft 2500-Hz tones. The change in distribution had a great effect on the judgments: In the first condition, the subjects judged the loudness of a 500-Hz tone of 70 dB to be as loud as a 2500-Hz tone of 73 dB (that is, they gave the same magnitude estimate to both), but in the second condition the subjects judged the same 500-Hz tone at 70 dB to be as loud as a 2500-Hz tone at 58 dB. Such changes in relative loudness, observed with multidimensionally varying stimuli, are widespread, evident not only in
FIGURE 11 Differential effects of context in the perception and judgment of loudness of tones varying in sound frequency and intensity (abscissa: decibels SPL). The circles, at the left, show results obtained when the SPLs presented at 500 Hz were relatively low and those at 2500 Hz were high; the squares, on the right, show results obtained when the SPLs presented at 500 Hz were relatively high and those at 2500 Hz were low. Data of Marks (1988).
loudness perception but also in the perception of taste intensity (Rankin & Marks, 1991), visual length (Armstrong & Marks, 1997; Potts, 1991), and haptic length (Marks & Armstrong, 1996). Differential context effects appear to be widespread characteristics of perception and judgment of magnitude. They entail change in the rank-order properties of a series of judgments: in one context stimulus i being judged greater than stimulus j, but in another stimulus j being judged greater than stimulus i. So differential effects cannot be explained in terms of changes in, say, a single judgment function. It is conceivable that they might reflect changes in two such functions, one for each kind of stimulus, or in some other decisional process that depends on the multidimensional properties
of the stimuli. Several pieces of evidence speak against this interpretation. First of all, differential context effects are not universal: Although differential effects are evident in judgments of
length of lines oriented vertically and horizontally (Armstrong & Marks, 1997; Potts, 1991), they are absent from judgments of length of lines presented in different colors (Marks, 1992b);
differential effects are also absent from magnitude estimates of pitch of tones contextually varying in intensity and from magnitude estimates of duration of tones varying in frequency (Marks,
1992b). So the effects are probably not the result of some general process of criterial adjustment, like that proposed by Treisman (1984), extended to multidimensionally varying stimuli (for an
analysis of decision rules in categorizing multidimensional stimuli, see, e.g., Ashby & Gott, 1988). Second, similar changes appear in results obtained in paradigms that omit the use of
numerical judgments (Armstrong & Marks, 1997; Marks, 1992a, 1994; Marks & Armstrong, 1996; Schneider & Parker, 1990; Parker & Schneider, 1994), including paradigms in which subjects merely listen to
a series of "adapting signals" prior to testing (Marks, 1993). Thus differential contextual effects may well have a sensory-perceptual rather than, or as well as, a decisional basis; in particular,
they may reflect the outcome of a stimulus-specific process that resembles adaptation. VII. PRAGMATICS AND EPISTEMICS OF PSYCHOPHYSICAL SCALING Psychophysical scaling, indeed psychophysics in
general, straddles several borders--between the mental and physical, and between the applied or pragmatic and the more theoretical or epistemic. Psychophysical scaling serves both as a means to
various ends, providing a set of procedures that nourish the elaboration and evaluation of sensory and perceptual mechanisms, and as an end in and of itself, providing a quantitative account of the
mind's activity.
2 Psychophysical Scaling
A. Scaling Is Pragmatic When all is said and done, regardless of psychophysical scaling's ever-changing theoretical status, it is important neither to forget nor to deprecate the utility of various
scaling methods--especially magnitude estimation and category scaling. So many experimental studies of sensory processes bear witness to the value of these methods that it is not possible to
summarize the evidence or even to provide an overview. A few examples have popped up in the chapter: for instance, J. C. Stevens and Stevens's (1963) study of how light adaptation affects visual
brightness and Algom et al.'s (1986) study, using functional measurement, of multimodal integration of pain. Much of the relevant literature from the 1950s to the early 1970s is reviewed elsewhere
(Marks, 1974b). To a great extent, the success, utility, and indeed the continued widespread use of scaling methods such as magnitude estimation and category rating come from their capacity to
provide relatively convenient and rapid means to assess how several different stimulus characteristics--intensity, duration, areal extent, state of adaptation, maskers--jointly influence perceptual
experience. Many of the inferences drawn from these studies do not rely in any substantial way on the presumed metric properties of the responses, but rest instead on the much more modest assumption
that the response scale (or judgment function) preserves weak ordering among the underlying perceptual experiences. One consequence is that these scaling methods thereby serve as "null procedures,"
or indirect methods to obtain stimulus matches: Signals that produce the same average response (numerical estimate or rating) are treated as perceptually equal. Such measures make it possible to
determine stimulus-stimulus relations (what Marks, 1974b, called sensory-physical laws), such as the parameters of Bloch's law of temporal summation or the durations at which a flash of light reaches
its (Broca-Sulzer) maximum. On the other side of this coin, measures of multidimensionally determined responses, as realized through intensity matches, oftentimes constrain the relative properties of
psychophysical functions themselves (cf. Marks, 1977). Thus the change in critical duration, determined from brightness matches, implies that brightness grows more quickly with luminance when the
duration of a test flash is very brief (~10 ms) than when duration is relatively long (~1 s)--even if we cannot decide with certainty whether the difference in rate of growth resides in the value of
the exponent of a power function or in the slope constant of a logarithmic function. Finally, in this regard, it is of course the goal of sensory-perceptual science not only to describe but also to
explain. Here, in our view, the data obtained with methods of psychophysical scaling will continue to prove
invaluable, for research directed toward evaluating the combined effects of multiple stimulus properties is central to the development of adequate theories both of scaling behavior and of the
psychological and biological processes underlying sensation and perception.
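The relations just discussed can be put schematically. In our own notation, offered as an illustrative sketch rather than as the chapter's formulas: Bloch's law states that, below a critical duration, luminance and duration trade reciprocally; and the two candidate descriptions of brightness growth locate a faster rate of growth either in the exponent of a power function or in the slope constant of a logarithmic function.

```latex
% Bloch's law of temporal summation: for flash durations t below the
% critical duration t_c, luminance L and duration t trade reciprocally.
L \cdot t = \text{const}, \qquad t < t_c
% Two candidate forms for the growth of brightness \psi with luminance L,
% between which intensity matches alone cannot decide: a faster rate of
% growth means a larger exponent \beta in the first, or a larger slope
% constant a in the second.
\psi = k\,L^{\beta} \qquad \text{or} \qquad \psi = a \log L + b
```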
B. Scaling Is Epistemic Beyond the practical, psychophysics also speaks to a goal that is more theoretical--to that aspiration, pervasive in the history of Western thought, to understand and
especially to quantify mental life, to give numeric expression to those processes and properties of thought and behavior that lend themselves to enumeration and rationalization. Contemporary
applications of psychophysical theory in the domains of utility and psychological value find their roots, for example, in the quantification and implicit metric equations discussed more than two
centuries ago by the founder of utilitarianism, Jeremy Bentham (1789), who suggested that certain decisions could be made rationally by totaling up the positive and negative psychological utilities;
thus Bentham couched his analysis of economic benefits and costs in terms that may readily be translated into those of functional-measurement or conjoint-measurement theory. A century earlier, Blaise
Pascal (1670) proposed his famous "wager," whose bottom line comprised a payoff matrix together with a rule by which a rational person could calculate an optimal decision: whether to "bet on"
(believe in) the existence of God. In a nutshell, Pascal noted that there are two possible psychological stances, belief or disbelief in God's existence, and two existential states, existence or
nonexistence (albeit with unknown a priori probabilities). Pascal argued that, given even a small nonzero probability that God exists, the gain associated with averring God's existence and the cost
associated with denying it together make belief in God the rational choice: "If you gain, you gain all; if you lose, you lose nothing. Wager, then, without hesitation that He is" (Pensées, No. 233).
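The wager's payoff matrix and decision rule can be rendered as a minimal expected-utility computation. The numerical payoffs below are purely illustrative assumptions (Pascal assigns an unbounded gain to belief, for which a large finite G stands in); only the structure of two stances crossed with two states comes from the argument itself:

```python
# Minimal expected-utility sketch of Pascal's wager. All payoff values are
# illustrative assumptions: G stands in for the unbounded gain of believing
# when God exists, c for the finite cost of belief, g for the finite gain
# of disbelief, and p for a small nonzero prior that God exists.

def expected_utility(p, payoff_if_exists, payoff_if_not):
    """Expected utility of a stance, given probability p that God exists."""
    return p * payoff_if_exists + (1 - p) * payoff_if_not

p = 0.001                    # "even a small nonzero probability"
G, c, g = 1e9, 1.0, 1.0      # assumed finite stand-ins for the payoffs

eu_believe = expected_utility(p, G, -c)     # "if you lose, you lose nothing"
eu_disbelieve = expected_utility(p, -G, g)  # forgoes the unbounded gain

# For any p > 0, a sufficiently large G makes belief the rational choice.
assert eu_believe > eu_disbelieve
```

On these assumed values the inequality holds for any nonzero p once G is large enough; the normative structure, not the particular numbers, is what the wager supplies.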
Basically, Pascal implied a formal, normative model for decision making--in which a person quantitatively evaluates a set of expected utilities--which he applied to a particular decision that is rarely
considered in psychophysical research. Plato's attempt, in the Republic, to compute the happiness of a just king versus that of a tyrant (the former 729 times the latter), Pascal's wager, Bentham's
utilitarian metric, Fechner's scales of sensation difference, and Stevens's scales of sensation magnitude--all of these speak to a common quest to quantify. And the quest to quantify is part and
parcel of the scientific enterprise. If, as Dingle (1960) suggested, to measure is to define, if measurements are construed as theories, and if theories are construed as models or metaphors, then
this quest to quantify may well represent the
expression of a deep and abiding scientific impulse--one component of what has been called a metaphorical imperative (Marks, 1978d). Such a metaphorical imperative, satisfied by the pragmatics of
scaling, bears directly on its epistemics, for it could lead to a more mature evaluation of the very edifice of psychophysics. All too often, psychophysics has been viewed, and even practiced, in a
rather parochial fashion, exploring the route from sensation to cognition unidirectionally, working from the periphery to the center (cf. Anderson, 1992), and thus depicting an organism whose
behavior seems essentially reactive (cf. Galanter, 1992). Studies on information integration and contextual effects, as reviewed in the last few sections of this chapter, document instead the
possible roles of potent "top-down" processes in perception and consequently in psychophysical scaling. In a similar vein, many contributors to a recent volume (Algom, 1992b) explore the merger of
sensation and cognition in psychophysical scaling, as captured in the notion of a "cognitive psychophysics" (cf. Baird, 1970a, 1970b; Marks, 1992c). Finally, in this regard, in a book that was just
published and can be only briefly mentioned, Baird (1997) has proposed an elaborate "complementarity theory," which rests on the assumption that sensorineural and cognitive explanations provide
mutually compatible and even jointly necessary components to a full psychophysical theory; central to Baird's position is the notion that psychophysical responses are variable, and that variability
may be associated with either sensory processes, which Baird treats within his Sensory Aggregate Model, or cognitive-judgmental processes, which he treats within his Judgment Option Model. If the
metaphorical imperative includes a deep human quest for quantification, then full recognition of its role in scaling may accordingly help shift the conceptual base of psychophysical measurement to
one that is driven by psychological theory and that emanates from the person.
Acknowledgments Preparation of this article was supported in part by grants DC00271 and DC00818 from the U.S. National Institutes of Health to Lawrence E. Marks and by grant 89-447 from the U.S.-Israel Binational Science Foundation to Daniel Algom. We gratefully thank John C. Baird, Kenneth H. Norwich, and Joseph C. Stevens for their valuable and thoughtful comments.
References Aczél, J. (1966). Lectures on functional equations and their applications. New York: Academic Press. Adams, E., & Messick, S. (1958). An axiomatic formulation and generalization of
successive intervals scaling. Psychometrika, 23, 355-368. Aiba, T. S., & Stevens, S. S. (1964). Relation of brightness to duration and luminance under light- and dark-adaptation. Vision Research, 4,
Algom, D. (1992a). Psychophysical analysis of pain: A functional perspective. In H.-G. Geissler, S. W. Link, & J. T. Townsend (Eds.), Cognition, information processing, and psychophysics (pp.
267-291). Hillsdale, NJ: Erlbaum. Algom, D. (Ed.). (1992b). Psychophysical approaches to cognition. Amsterdam: North-Holland Elsevier. Algom, D., & Babkoff, H. (1984). Auditory temporal integration
at threshold: Theories and some implications of current research. In W. D. Neff (Ed.), Contributions to sensory physiology (Vol. 8, pp. 131-159). New York: Academic Press. Algom, D., & Marks, L. E.
(1984). Individual differences in loudness processing and loudness scales. Journal of Experimental Psychology: General, 113, 571-593. Algom, D., & Marks, L. E. (1990). Range and regression, loudness
processing and loudness scales: Toward a context-bound psychophysics. Journal of Experimental Psychology: Human Perception and Performance, 16, 706-727. Algom, D., & Pansky, A. (1993). Perceptual and
memory-based comparisons of area. In A. Garriga-Trillo, P. R. Minon, C. Garcia-Gallego, C. Lubin, J. M. Merino, & A. Villarino (Eds.), Fechner Day 93. Proceedings of the Ninth Annual Meeting of the
International Society for Psychophysics (pp. 7-12). Palma de Mallorca: International Society for Psychophysics. Algom, D., Raphaeli, N., & Cohen-Raz, L. (1986). Integration of noxious stimulation
across separate somatosensory communications systems: A functional theory of pain. Journal of Experimental Psychology: Human Perception and Performance, 12, 92-102. Anderson, N. H. (1970). Functional
measurement and psychophysical judgment. Psychological Review, 77, 153-170. Anderson, N. H. (1974). Algebraic models in perception. In E. C. Carterette & M. P. Friedman (Eds.), Handbook of perception: Vol. 2. Psychophysical judgment and measurement (pp. 215-298). New York: Academic Press. Anderson, N. H. (1975). On the role of context effects in psychophysical judgment. Psychological Review, 82,
462-482. Anderson, N. H. (1976). Integration theory, functional measurement, and the psychophysical law. In H.-G. Geissler & Yu. Zabrodin (Eds.), Advances in psychophysics (pp. 93-129). Berlin: VEB
Deutscher Verlag der Wissenschaften. Anderson, N. H. (1977). Failure of additivity in bisection of length. Perception & Psychophysics, 22, 213-222. Anderson, N. H. (1981). Foundations of information integration theory. New York: Academic Press. Anderson, N. H. (1982). Methods of information integration theory. New York: Academic Press. Anderson, N. H. (1992). Integration psychophysics and
cognition. In D. Algom (Ed.), Psychophysical approaches to cognition (pp. 13-113). Amsterdam: North-Holland Elsevier. Armstrong, L., & Marks, L. E. (1997). Stimulus context, perceived length, and the
vertical-horizontal illusion. Perception & Psychophysics, in press. Asch, S. (1946). Forming impressions of personality. Journal of Abnormal and Social Psychology, 41, 258-290. Ashby, F. G., & Gott,
R. E. (1988). Decision rules in the perception and categorization of multidimensional stimuli. Journal of Experimental Psychology: Learning, Memory, and Cognition, 14, 33-53. Attneave, F. (1949). A
method of graded dichotomies for the scaling of judgments. Psychological Review, 56, 334-340. Attneave, F. (1959). Applications of information theory to psychology. New York: Holt, Rinehart, &
Winston. Attneave, F. (1962). Perception and related areas. In S. Koch (Ed.), Psychology: A study of a science (Vol. 4, pp. 619-659). New York: McGraw-Hill.
Baird, J. C. (1970a). A cognitive theory of psychophysics. I. Scandinavian Journal of Psychology, 11, 35-46. Baird, J. C. (1970b). A cognitive theory of psychophysics. II. Scandinavian Journal of
Psychology, 11, 89-103. Baird, J. C. (1975). Psychophysical study of numbers: Generalized preferred state theory. Psychological Research, 38, 175-187. Baird, J. C. (1981). Psychophysical theory: On
the avoidance of contradiction. The Behavioral and Brain Sciences, 4, 190. Baird, J. C. (1984). Information theory and information processing. Information Processing & Management, 20, 373-381. Baird, J. C. (1997). Sensation and judgment: Complementarity theory of psychophysics. Mahwah, NJ: Erlbaum. Baird, J. C., Green, D. M., & Luce, R. D. (1980). Variability and sequential effects in
crossmodality matching of area and loudness. Journal of Experimental Psychology: Human Perception and Performance, 6, 277-289. Baird, J. C., & Noma, E. (1975). Psychological studies of numbers. I.
Generation of numerical responses. Psychological Research, 37, 291-297. Baird, J. C., & Noma, E. (1978). Fundamentals of scaling and psychophysics. New York: Wiley. Banks, W. P. (1973). Reaction time
as a measure of summation of warmth. Perception & Psychophysics, 13, 321-327. Banks, W. P. (1977). Encoding and processing of symbolic information in comparative judgments. In G. H. Bower (Ed.), The psychology of learning and motivation (Vol. 11, pp. 101-159). New York: Academic Press. Banks, W. P., Clark, H. H., & Lucy, P. (1975). The locus of semantic congruity effect in comparative judgments.
Journal of Experimental Psychology: Human Perception and Performance, 104, 35-47. Banks, W. P., & Coleman, M. J. (1981). Two subjective scales of number. Perception & Psychophysics, 29, 95-105.
Barbenza, C. M. de, Bryan, M. E., & Tempest, W. (1972). Individual loudness functions.
Journal of Sound and Vibration, 11, 399-410.
Beck, J., & Shaw, W. A. (1967). Ratio-estimations of loudness-intervals. American Journal of Psychology, 80, 59-65. Bennet, J. H., & Hays, W. L. (1960). Multidimensional unfolding: Determining the
dissimilarity of ranked preference data. Psychometrika, 25, 27-43. Bentham, J. (1948). An introduction to the principles of morals and legislation. New York: Hafner (originally published, 1789).
Berglund, B. (1991). Quality assurance in environmental psychophysics. In S. J. Bolanowski, Jr., & G. A. Gescheider (Eds.), Ratio scaling of psychological magnitude (pp. 140-162). Hillsdale, NJ:
Erlbaum. Berliner, J. E., & Durlach, N. I. (1973). Intensity perception: IV. Resolution in roving-level discrimination. Journal of the Acoustical Society of America, 53, 1270-1287. Birnbaum, M. H.
(1978). Differences and ratios in psychological measurement. In N. J. Castellan & F. Restle (Eds.), Cognitive theory (Vol. 3, pp. 33-74). Hillsdale, NJ: Erlbaum. Birnbaum, M. H. (1980). A comparison
of two theories of "ratio" and "difference" judgments. Journal of Experimental Psychology: General, 3, 304-319. Birnbaum, M. H. (1982). Controversies in psychological measurement. In B. Wegener (Ed.), Social attitudes and psychophysical measurement (pp. 401-485). Hillsdale, NJ: Erlbaum. Birnbaum, M. H. (1990). Scale convergence and psychophysical laws. In H.-G. Geissler (Ed.), Psychophysical
exploration of mental structures (pp. 49-57). Toronto: Hogrefe and Huber. Birnbaum, M. H., Anderson, C. J., & Hynan, L. G. (1989). Two operations for "ratios" and "differences" of distances on the mental map. Journal of Experimental Psychology: Human Perception and Performance, 15, 785-796.
Birnbaum, M. H., & Elmasian, R. (1977). Loudness "ratios" and "differences" involve the same psychophysical operation. Perception & Psychophysics, 22, 383-391. Birnbaum, M. H., & Jou, W., Jr. (1990).
A theory of comparative response times and "difference" judgments. Cognitive Psychology, 22, 184-210. Böckenholt, U. (1992). Thurstonian representation for partial ranking data. British Journal of
Mathematical and Statistical Psychology, 45, 31-49. Borg, G. (1972). A ratio scaling method for interindividual comparisons. Reports from the Institute of Applied Psychology, The University of
Stockholm, No. 27. Borg, G. (1982). A category scale with ratio properties for intermodal and interindividual comparisons. In H.-G. Geissler & P. Petzold (Eds.), Psychophysical judgment and the process of perception (pp. 25-34). Berlin: Deutscher Verlag der Wissenschaften. Boring, E. G. (1921). The stimulus-error. American Journal of Psychology, 32, 449-471. Bradley, R. A., & Terry, M. E.
(1952). The rank analysis of incomplete block designs. I. The method of paired comparisons. Biometrika, 39, 324-345. Braida, L. D., & Durlach, N. I. (1972). Intensity perception. II. Resolution in
one-interval paradigms. Journal of the Acoustical Society of America, 51, 483-502. Braida, L. D., Lim, J. S., Berliner, J. E., Durlach, N. I., Rabinowitz, W. M., & Purks, S. R. (1984). Intensity perception. XIII. Perceptual anchor model of context-coding. Journal of the Acoustical Society of America, 76, 722-731. Brentano, F. (1874). Psychologie vom empirischen Standpunkte. Leipzig: Duncker und Humblot. Broca, A., & Sulzer, D. (1902a). La sensation lumineuse en fonction du temps. Comptes Rendus de l'Académie des Sciences (Paris), 134, 831-834. Broca, A., & Sulzer, D. (1902b). La sensation lumineuse en fonction du temps. Comptes Rendus de l'Académie des Sciences (Paris), 137, 944-946, 977-979, 1046-1049. Campbell, N. R. (1920). Physics: The elements. Cambridge, England:
Cambridge University Press. Carroll, J. D. (1980). Models and methods for multidimensional analysis of preferential choice (or other dominance) data. In E. D. Lantermann & H. Feger (Eds.), Similarity
and choice (pp. 234-289). Bern: Huber. Carterette, E. C., & Anderson, N. H. (1979). Bisection of loudness. Perception & Psychophysics, 26, 265-280. Cattell, J. McK. (1893). On errors of observations.
American Journal of Psychology, 5, 285-293. Cattell, J. McK. (1902). The time of perception as a measure of differences in intensity. Philosophische Studien, 19, 63-68. Chocholle, R. (1940). Variation des temps de réactions auditifs en fonction de l'intensité à diverses fréquences. L'Année Psychologique, 41, 65-124. Chocholle, R., & Greenbaum, H. B. (1966). La sonie de sons purs partiellement masqués. Étude comparative par une méthode d'égalisation et par la méthode des temps de réaction. Journal de Psychologie Normale et Pathologique, 63, 387-414. Churchman, C. W., & Ratoosh, P. (Eds.) (1959). Measurement: Definitions and theories. New York: Wiley. Cliff, N. (1992). Abstract measurement theory and the revolution that never happened. Psychological Science, 3, 186-190. Collins, A. A.,
& Gescheider, G. A. (1989). The measurement of loudness in individual children and adults by absolute magnitude estimation and cross-modality matching. Journal of the Acoustical Society of America,
85, 2012-2021. Coombs, C. H. (1950). Psychological scaling without a unit of measurement. Psychological Review, 57, 145-158.
Coombs, C. H. (1964). A theory of data. New York: Wiley. Cross, D. V. (1973). Sequential dependencies and regression in psychophysical judgments. Perception & Psychophysics, 14, 547-552. Curtis, D.
W. (1970). Magnitude estimations and category judgments of brightness and brightness intervals: A two-stage interpretation. Journal of Experimental Psychology, 83, 201-208. Curtis, D. W., Attneave,
F., & Harrington, T. L. (1968). A test of a two-stage model for magnitude estimation. Perception & Psychophysics, 3, 25-31. Curtis, D. W., Paulos, M. A., & Rule, S.J. (1973). Relation between
disjunctive reaction time and stimulus difference. Journal of Experimental Psychology, 99, 167-173. Curtis, D. W., & Rule, S. J. (1972). Magnitude judgments of brightness and brightness difference as
a function of background reflectance. Journal of Experimental Psychology, 95, 215-222. Dawes, R. M. (1994). Psychological measurement. Psychological Review, 101, 278-281. Dawson, W. E. (1971).
Magnitude estimation of apparent sums and differences. Perception & Psychophysics, 9, 368-374. DeCarlo, L. T. (1992). Intertrial interval and sequential effects in magnitude scaling. Journal of
Experimental Psychology: Human Perception and Performance, 18, 1080-1088. DeCarlo, L. T. (1994). A dynamic theory of proportional judgment: Context and judgment of length, heaviness, and roughness.
Journal of Experimental Psychology: Human Perception and Performance, 20, 372-381. De Soete, G., & Carroll, J. D. (1992). Probabilistic multidimensional models of pairwise choice data. In F. G. Ashby
(Ed.), Multidimensional models of perception and cognition (pp. 61-88). Hillsdale, NJ: Erlbaum. Delboeuf, J. R. L. (1873). Étude psychophysique: Recherches théoriques et expérimentales sur la mesure des sensations, et spécialement des sensations de lumière et de fatigue. Mémoires de l'Académie Royale de Belgique, 23 (5). Dingle, H. (1960). A symposium on the basic problem of measurement. Scientific American, 202 (6), 189-192. Durlach, N. I., & Braida, L. D. (1969). Intensity perception: I. A theory of intensity resolution. Journal of the Acoustical Society of America, 46, 372-383. Durup, G., & Piéron, H. (1933). Recherches au sujet de l'interprétation du phénomène de Purkinje par des différences dans les courbes de sensation des récepteurs chromatiques. L'Année Psychologique, 33, 57-83. Ebbinghaus, H. (1902). Grundzüge der Psychologie. Leipzig: Verlag von Veit. Eisler, H. (1963). Magnitude scales, category scales, and Fechnerian integration. Psychological Review, 70,
243-253. Ekman, G. (1964). Is the power law a special case of Fechner's law? Perceptual and Motor Skills, 19, 730. Ekman, G., Eisler, H., & Künnapas, T. (1960). Brightness of monochromatic light as measured by the method of magnitude production. Acta Psychologica, 17, 392-397. Ekman, G., Hosman, J., Lindman, R., Ljungberg, L., & Åkesson, C. A. (1968). Interindividual differences in scaling performance. Perceptual and Motor Skills, 26, 815-823. Ekman, G., & Künnapas, T. (1962). Measurement of aesthetic value by "direct" and "indirect" methods. Scandinavian Journal of Psychology, 3, 33-39. Ekman, G., & Künnapas, T. (1963a). A further study of direct and indirect scaling methods. Scandinavian Journal of Psychology, 4, 77-80. Ekman, G., & Künnapas, T. (1963b). Scales of
conservatism. Perceptual and Motor Skills, 16, 329-334. Engeland, W., & Dawson, W. E. (1974). Individual differences in power functions for a 1-week intersession interval. Perception & Psychophysics,
15, 349-352.
Engen, T., & Lindstr6m, C.-O. (1963). Psychophysical scales of the odor intensity of amyl acetate. Scandinavian Journal of Psychology, 4, 23-28. Engen, T., & McBurney, D. H. (1964). Magnitude and
category scales of the pleasantness of odors. Journal of Experimental Psychology, 68, 435-440. Engen, T., & Ross, B. M. (1966). Effect of reference number on magnitude estimation. Perception &
Psychophysics, 1, 74-76. Falmagne, J.-C. (1971). The generalized Fechner problem and discrimination. Journal of Mathematical Psychology, 8, 22-43. Falmagne, J.-C. (1974). Foundations of Fechnerian
psychophysics. In D. H. Krantz, R. C. Atkinson, R. D. Luce, & P. Suppes (Eds.), Contemporary developments in mathematical psychology. Vol. 2. Measurement, psychophysics, and neural information processing (pp. 129-159). San Francisco: Freeman. Falmagne, J.-C. (1976). Random conjoint measurement and loudness summation. Psychological Review, 83, 65-79. Falmagne, J.-C. (1985). Elements of psychophysical theory. Oxford: Oxford University Press. Fechner, G. T. (1860). Elemente der Psychophysik. Leipzig: Breitkopf und Härtel. Fletcher, H., & Munson, W. A. (1933). Loudness, its
measurement and calculation. Journal of the Acoustical Society of America, 5, 82-108. Foley, H. J., Cross, D. V., Foley, M. A., & Reeder, R. (1983). Stimulus range, number of categories, and the
'virtual' exponent. Perception & Psychophysics, 34, 505-512. Fullerton, G. S., & Cattell, J. McK. (1892). On the perception of small differences. Philadelphia: University of Pennsylvania Press.
Fuortes, M. G. F., & Hodgkin, A. L. (1964). Changes in time scale and sensitivity in the ommatidia of Limulus. Journal of Physiology, 172, 239-263. Gage, F. H. (1934a). An experimental investigation
of the measurability of auditory sensation. Proceedings of the Royal Society (London), 116B, 103-122. Gage, F. H. (1934b). An experimental investigation of the measurability of visual sensation.
Proceedings of the Royal Society (London), 116B, 123-138. Galanter, E. H. (1962). Contemporary psychophysics. In R. Brown, E. H. Galanter, E. H. Hess, & G. Mandler (Eds.), New directions in
psychology (pp. 89-156). New York: Holt, Rinehart and Winston. Galanter, E. H. (1992). Intentionalism--An expressive theory. In D. Algom (Ed.), Psychophysical approaches to cognition (pp. 251-302).
Amsterdam: North-Holland Elsevier. Galanter, E., & Messick, S. (1961). The relation between category and magnitude scales of loudness. Psychological Review, 68, 363-372. Galanter, E., & Pliner, P.
(1974). Cross-modality matching of money against other continua. In H. R. Moskowitz, B. Scharf, & J. C. Stevens (Eds.), Sensation and measurement: Papers in honor of S. S. Stevens (pp. 65-76). Dordrecht, Holland: Reidel. Garner, W. R. (1952). An equal discriminability scale for loudness judgments. Journal of Experimental Psychology, 43, 232-238. Garner, W. R. (1954). Context effects and
the validity of loudness scales. Journal of Experimental Psychology, 48, 218-224. Garner, W. R. (1958). Advantages of the discriminability criterion for a loudness scale. Journal of the Acoustical
Society of America, 30, 1005-1012. Garner, W. R. (1959). On the lambda loudness function, masking, and the loudness of multicomponent tones. Journal of the Acoustical Society of America, 31,602-607.
Garner, W. R. (1962). Uncertainty and structure as psychological concepts. New York: Wiley. Garner, W. R. (1974). The processing of information and structure. Potomac, MD" Erlbaum. Garner, W. R., &
Hake, H. W. (1951). The amount of information in absolute judgments. Psychological Review, 58, 446-459. Geiger, P. H., & Firestone, F. A. (1933). The estimation of fractional loudness. Journal of the
Acoustical Society of America, 5, 25-30.
Gent, J. F., & Bartoshuk, L. M. (1983). Sweetness of sucrose, neohesperidin dihydrochalcone, and saccharin is related to genetic ability to taste the bitter substance 6-n-propylthiouracil. Chemical
Senses, 7, 265-272. Gescheider, G. A. (1988). Psychophysical scaling. Annual Review of Psychology, 39, 169-200. Gescheider, G. A. (1997). Psychophysics: The fundamentals (3rd ed.). Mahwah, NJ:
Erlbaum. Gescheider, G. A., & Bolanowski, S. J., Jr. (1991). Final comments on ratio scaling of psychological magnitudes. In S. J. Bolanowski, Jr., & G. A. Gescheider (Eds.), Ratio scaling of
psychological magnitude (pp. 295-311). Hillsdale, NJ: Erlbaum. Gescheider, G. A., & Hughson, B. A. (1991). Stimulus context and absolute magnitude estimation: A study of individual differences.
Perception & Psychophysics, 50, 45-57. Gibson, J. J. (1966). The senses considered as perceptual systems. Boston: Houghton Mifflin. Gibson, J. J. (1979). The ecological approach to visual perception.
Boston: Houghton Mifflin. Gibson, R. H., & Tomko, D. L. (1972). The relation between category and magnitude estimates of tactile intensity. Perception & Psychophysics, 12, 135-138. Gigerenzer, G., & Murray, D. J. (1987). Cognition as intuitive statistics. Hillsdale, NJ: Erlbaum. Gigerenzer, G., & Strube, G. (1983). Are there limits to binaural additivity of loudness? Journal of Experimental
Psychology: Human Perception and Performance, 9, 126-136. Graham, C. H. (1958). Sensation and perception in an objective psychology. Psychological Review, 65, 65-76. Graham, C. H., & Ratoosh, P.
(1962). Notes on some interrelations of sensory psychology, perception, and behavior. In S. Koch (Ed.), Psychology: A study of a science (Vol. 4, pp. 483-514). New York: McGraw-Hill. Gravetter, F., &
Lockhead, G. R. (1973). Criterial range as a frame of reference for stimulus judgment. Psychological Review, 80, 203-216. Green, B. G., Shaffer, G. S., & Gilmore, M. M. (1993). Derivation and
evaluation of a semantic scale of oral sensation magnitude with apparent ratio properties. Chemical Senses, 18, 683-702. Green, D. M., & Luce, R. D. (1973). Speed-accuracy tradeoff in auditory
detection. In S. Kornblum (Ed.), Attention and performance (Vol. 4, pp. 547-569). New York: Academic Press. Green, D. M., & Luce, R. D. (1974). Variability of magnitude estimates: A timing theory
analysis. Perception & Psychophysics, 15, 291-300. Green, D. M., Luce, R. D., & Duncan, J. E. (1977). Variability and sequential effects in magnitude production and estimation of auditory intensity.
Perception & Psychophysics, 22, 450-456. Grice, G. R. (1968). Stimulus intensity and response evocation. Psychological Review, 75, 359-373. Guilford, J. P. (1932). A generalized psychological law.
Psychological Review, 39, 73-85. Guilford, J. P. (1954). Psychometric methods (2nd ed.). New York: McGraw-Hill. Hagerty, M., & Birnbaum, M. H. (1978). Nonmetric tests of ratio vs. subtractive
theories of stimulus comparisons. Perception & Psychophysics, 24, 121-129. Halff, H. M. (1976). Choice theories for differentially comparable alternatives. Journal of Mathematical Psychology, 14,
244-246. Ham, L. B., & Parkinson, J. S. (1932). Loudness and intensity relations. Journal of the Acoustical Society of America, 3, 511-534. Hardy, J. D., Wolff, H. G., & Goodell, H. (1947). Studies on pain: Discrimination of differences in pain as a basis of a scale of pain intensity. Journal of Clinical Investigation, 19, 1152-1158. Harris, J. D. (1963). Loudness and discrimination. Journal of
Speech and Hearing Disorders (Monograph Supplement 11). Haubensak, G. (1992). The consistency model: A process model for absolute judgments. Journal of Experimental Psychology: Human Perception and
Performance, 18, 303-309. Heidelberger, M. (1993). Fechner's impact for measurement theory. The Behavioral and Brain Sciences, 16, 146-148.
Heinemann, E. G. (1961). The relation of apparent brightness to the threshold for differences in luminance. Journal of Experimental Psychology, 61, 389-399. Heller, O. (1985). Hörfeldaudiometrie mit dem Verfahren der Kategorienunterteilung (KU). Psychologische Beiträge, 27, 478-493. Hellman, R. P. (1981). Stability of individual loudness functions obtained by magnitude estimation and production.
Perception & Psychophysics, 29, 63-70. Hellman, R., Scharf, B., Teghtsoonian, M., & Teghtsoonian, R. (1987). On the relation between the growth of loudness and the discrimination of intensity for
pure tones. Journal of the Acoustical Society of America, 82, 448-453. Hellman, R. P., & Zwislocki, J. J. (1961). Some factors affecting the estimation of loudness. Journal of the Acoustical Society
of America, 33, 687-694. Hellström, Å. (1985). The time-order error and its relatives: Mirrors of cognitive processes in comparing. Psychological Bulletin, 97, 35-61. Helmholtz, H. L. F. von (1962).
Treatise on physiological optics. New York: Dover (originally published, 1856). Helson, H. (1948). Adaptation level as a basis for a quantitative theory of frames of reference. Psychological Review,
55, 297-313. Helson, H. (1959). Adaptation level theory. In S. Koch (Ed.), Psychology: A study of a science (Vol. 1, pp. 565-621). New York: McGraw-Hill. Helson, H. (1964). Adaptation level theory:
An experimental and systematic approach to behavior. New York: Harper and Row. Hood, D. C., & Finkelstein, M. A. (1979). A comparison of changes in sensitivity and sensation: Implications for the
response-intensity function of the human photopic system. Journal of Experimental Psychology: Human Perception and Performance, 5, 391-405. Hornstein, G. A. (1993). The chimera of psychophysical
measurement. The Behavioral and Brain Sciences, 16, 148-149. Hübner, R., & Ellermeier, W. (1993). Additivity of loudness across critical bands: A critical test. Perception & Psychophysics, 54,
185-189. Humes, L. E., & Jesteadt, W. (1989). Models of additivity of masking. Journal of the Acoustical Society of America, 85, 1285-1294. Indow, T. (1966). A general equi-distance scale of the four
qualities of taste. Japanese Psychological Research, 8, 136-150. Iverson, G. J., & Pavel, M. (1981). On the functional form of partial masking functions in psychoacoustics. Journal of Mathematical
Psychology, 24, 1-20. James, W. (1890). The principles of psychology. New York: Henry Holt. James, W. (1892). Psychology: Briefer course. New York: Henry Holt. Jastrow, J. (1886). The perception of
space by disparate senses. Mind, 11, 539-544. Jesteadt, W., Luce, R. D., & Green, D. M. (1977). Sequential effects in judgments of loudness. Journal of Experimental Psychology: Human Perception and
Performance, 3, 92-104. Jesteadt, W., Wier, C. C., & Green, D. M. (1977). Intensity discrimination as a function of frequency and sensation level. Journal of the Acoustical Society of America, 61,
169-177. Jones, F. N., & Woskow, M. H. (1962). On the relationship between estimates of loudness and pitch. American Journal of Psychology, 75, 669-671. King, M. C., & Lockhead, G. R. (1981). Response
scales and sequential effects in judgment. Perception & Psychophysics, 30, 599-603. Kohfeld, D. L., Santee, J. L., & Wallace, N. D. (1981a). Loudness and reaction time: I. Perception & Psychophysics,
29, 535-549. Kohfeld, D. L., Santee, J. L., & Wallace, N. D. (1981b). Loudness and reaction time: II. Identification of detection components at different intensities and frequencies. Perception &
Psychophysics, 29, 550-562. Kowal, K. H. (1993). The range effect as a function of stimulus set, presence of a standard, and modulus. Perception & Psychophysics, 54, 555-561.
Psychophysical Scaling
Krantz, D. H. (1967). Rational distance functions for multidimensional scaling. Journal of Mathematical Psychology, 4, 226-245. Krantz, D. H. (1971). Integration of just-noticeable differences.
Journal of Mathematical Psychology, 8, 591-599. Krantz, D. H. (1972). A theory of magnitude estimation and cross-modality matching. Journal of Mathematical Psychology, 9, 168-199. Krantz, D. H.,
Luce, R. D., Suppes, P., & Tversky, A. (1971). Foundations of measurement. Vol. 1: Additive and polynomial representations. New York: Academic Press. Krueger, L. E. (1989). Reconciling Fechner and
Stevens: Toward a unified psychophysical law. The Behavioral and Brain Sciences, 12, 251-320. Kruskal, J. B. (1964). Nonmetric multidimensional scaling: A numerical method. Psychometrika, 29,
115-129. Külpe, O. (1895). Outlines of psychology: Based upon the results of experimental investigations. New York: Macmillan. Künnapas, T., Hallsten, L., & Söderberg, G. (1973). Interindividual
differences in homomodal and heteromodal scaling. Acta Psychologica, 37, 31-42. Lacouture, Y., & Marley, A. A.J. (1991). A connectionist model of choice and reaction time in absolute identification.
Connection Science, 3, 401-433. Leahey, T. H. (1997). A history of psychology: Main currents in psychological thought (4th ed.). Englewood Cliffs, NJ: Prentice-Hall. Leibowitz, I. (1987). Bein Mada
Uphilosophia [Between science and philosophy]. Academon: Jerusalem (Hebrew). Levelt, W. D. M., Riemersma, J. B., & Bunt, A. A. (1972). Binaural additivity of loudness. British Journal of Mathematical
and Statistical Psychology, 25, 51-68. Levine, M. V. (1974). Geometric interpretations of some psychophysical results. In D. H. Krantz, R. C. Atkinson, R. D. Luce, & P. Suppes (Eds.), Contemporary
developments in mathematical psychology (Vol. 2, pp. 200-235). San Francisco: Freeman. Lim, L. S., Rabinowitz, W. M., Braida, L. D., & Durlach, N. I. (1977). Intensity perception: VIII. Loudness
comparisons between different types of stimuli. Journal of the Acoustical Society of America, 62, 1256-1267. Link, S. W. (1975). The relative judgment theory of two choice response time. Journal of
Mathematical Psychology, 12, 114-135. Link, S. W. (1992). The wave theory of difference and similarity. Hillsdale, NJ: Erlbaum. Link, S. W., & Heath, R. A. (1975). A sequential theory of
psychological discrimination. Psychometrika, 40, 77-105. Lochner, J. P. A., & Burger, J. F. (1961). Form of the loudness function in the presence of masking noise. Journal of the Acoustical Society
of America, 33, 1705-1707. Lockhead, G. R. (1992). Psychophysical scaling: Judgments of attributes or objects? The Behavioral and Brain Sciences, 15, 543-601. Lockhead, G. R., & Hinson, J. (1986).
Range and sequence effects in judgment. Perception & Psychophysics, 40, 53-61. Lockhead, G. R., & King, M. C. (1983). A memory model for sequential scaling tasks. Journal of Experimental Psychology:
Human Perception and Performance, 9, 461-473. Logue, A. W. (1976). Individual differences in magnitude estimation of loudness. Perception & Psychophysics, 19, 279-280. Luce, R. D. (1959). On the
possible psychophysical laws. Psychological Review, 66, 81-95. Luce, R. D. (1972). What sort of measurement is psychophysical measurement? American Psychologist, 27, 96-106. Luce, R. D. (1977a). The
choice axiom after twenty years. Journal of Mathematical Psychology, 15, 215-233. Luce, R. D. (1977b). Thurstone discriminal processes fifty years later. Psychometrika, 42, 461-498.
Luce, R. D. (1986). Response times: Their role in inferring elementary mental organization. New York: Oxford University Press. Luce, R. D. (1994). Thurstone and sensory scaling: Then and now.
Psychological Review, 101, 271-277. Luce, R. D., Baird, J. C., Green, D. M., & Smith, A. F. (1980). Two classes of models for magnitude estimation. Journal of Mathematical Psychology, 22, 121-148.
Luce, R. D., & Edwards, W. (1958). The derivation of subjective scales from just noticeable differences. Psychological Review, 65, 222-237. Luce, R. D., & Galanter, E. (1963). Discrimination. In R.
D. Luce, R. R. Bush, & E. Galanter (Eds.), Handbook of mathematical psychology (Vol. 1, pp. 191-243). New York: Wiley. Luce, R. D., & Green, D. M. (1972). A neural timing theory for response times
and the psychophysics of intensity. Psychological Review, 79, 14-57. Luce, R. D., & Green, D. M. (1974). The response ratio hypothesis for magnitude estimation.
Journal of Mathematical Psychology, 11, 1-14.
Luce, R. D., & Green, D. M. (1978). Two tests of a neural attention hypothesis for auditory psychophysics. Perception & Psychophysics, 23, 363-371. Luce, R. D., Green, D. M., & Weber, D. L. (1976).
Attention bands in absolute identification. Perception & Psychophysics, 20, 49-54. Luce, R. D., Krantz, D. H., Suppes, P., & Tversky, A. (1990). Foundations of measurement. Vol. 3: Representation,
axiomatization, and invariance. San Diego: Academic Press. Luce, R. D., & Krumhansl, C. L. (1988). Measurement, scaling, and psychophysics. In R. C. Atkinson, R. J. Herrnstein, G. Lindzey, & R. D.
Luce (Eds.), Stevens' handbook of experimental psychology (2nd ed.) (Vol. 1, pp. 3-74). New York: Wiley. Luce, R. D., & Mo, S. S. (1965). Magnitude estimation of heaviness and loudness by individual
subjects. A test of a probabilistic response theory. British Journal of Mathematical and Statistical Psychology, 18, 159-174. Luce, R. D., & Narens, L. (1987). Measurement scales on the continuum.
Science, 236, 1527-1532. Luce, R. D., & Tukey, J. (1964). Simultaneous conjoint measurement: A new type of fundamental measurement. Journal of Mathematical Psychology, 1, 1-27. MacKay, D. M. (1963).
Psychophysics of perceived intensity: A theoretical basis for Fechner's and Stevens's laws. Science, 139, 1213-1216. Macmillan, N. A., & Creelman, C. D. (1991). Detection theory: A user's guide.
Cambridge, England: Cambridge University Press. MacRae, A. W. (1970). Channel capacity in absolute judgment tasks: An artifact of information bias? Psychological Bulletin, 73, 112-121. MacRae, A. W.
(1972). Information transmission, partitioning and Weber's law: Some comments on Baird's cognitive theory of psychophysics. Scandinavian Journal of Psychology, 13, 73-80. MacRae, A. W. (1982). The
magical number fourteen: Making a very great deal of non-sense.
Perception & Psychophysics, 31, 591-593.
Mansfield, R. J. W. (1970). Intensity relations in vision: Analysis and synthesis in a non-linear sensory system. Doctoral dissertation, Harvard University. Mansfield, R. J. W. (1973). Latency functions
in human vision. Vision Research, 13, 2219-2234. Marks, L. E. (1968). Stimulus-range, number of categories, and form of the category-scale.
American Journal of Psychology, 81, 467-479.
Marks, L. E. (1972). Visual brightness: Some applications of a model. Vision Research, 12, 1409-1423. Marks, L. E. (1974a). On scales of sensation: Prolegomena to any future psychophysics that will
be able to come forth as science. Perception & Psychophysics, 16, 358-376.
Marks, L. E. (1974b). Sensory processes: The new psychophysics. New York: Academic Press. Marks, L. E. (1977). Relative sensitivity and possible psychophysical functions. Sensory Processes, 1,
301-315. Marks, L. E. (1978a). Binaural summation of the loudness of pure tones. Journal of the Acoustical Society of America, 64, 107-113. Marks, L. E. (1978b). Mental measurement and the
psychophysics of sensory processes. Annals of the New York Academy of Sciences, 309, 3-17. Marks, L. E. (1978c). Phonion: Translation and annotations concerning loudness scales and the processing of
auditory intensity. In N. J. Castellan & F. Restle (Eds.), Cognitive theory (Vol. 3, pp. 7-31). Hillsdale, NJ: Erlbaum. Marks, L. E. (1978d). The unity of the senses: Interrelations among the
modalities. New York: Academic Press. Marks, L. E. (1979a). Summation of vibrotactile intensity: An analogue to auditory critical bands? Sensory Processes, 3, 188-203. Marks, L. E. (1979b). A theory
of loudness and loudness judgments. Psychological Review, 86, 256-285. Marks, L. E. (1988). Magnitude estimation and sensory matching. Perception & Psychophysics, 43, 511-525. Marks, L. E. (1991).
Reliability of magnitude matching. Perception & Psychophysics, 49, 31-37. Marks, L. E. (1992a). The contingency of perceptual processing: Context modifies equal-loudness relations. Psychological
Science, 3, 187-198. Marks, L. E. (1992b). The slippery context effect in psychophysics: Intensive, extensive, and qualitative continua. Perception & Psychophysics, 51, 187-198. Marks, L. E. (1992c).
"What thin partitions sense from thought divide": Toward a new cognitive psychophysics. In D. Algom (Ed.), Psychophysical approaches to cognition (pp. 115-186). Amsterdam: North-Holland Elsevier.
Marks, L. E. (1993). Contextual processing of multidimensional and unidimensional auditory stimuli. Journal of Experimental Psychology: Human Perception and Performance, 19, 227-249. Marks, L. E.
(1994). "Recalibrating" the auditory system: The perception of loudness. Journal of Experimental Psychology: Human Perception and Performance, 20, 382-396. Marks, L. E., & Armstrong, L. (1996).
Haptic and visual representations of space. In T. Inui & J. L. McClelland (Eds.), Attention and Performance XVI (pp. 263-287). Cambridge, MA: MIT Press. Marks, L. E., Borg, G., & Ljunggren, G.
(1983). Individual differences in perceived exertion assessed by two new methods. Perception & Psychophysics, 34, 280-288. Marks, L. E., Galanter, E., & Baird, J. C. (1995). Binaural summation after
learning psychophysical functions for loudness. Perception & Psychophysics, 57, 1209-1216. Marks, L. E., & Warner, E. (1991). Slippery context effect and critical bands. Journal of Experimental
Psychology: Human Perception and Performance, 17, 986-996. Marley, A. A., & Cook, V. T. (1984). A fixed rehearsal capacity interpretation of limits on absolute identification performance. British
Journal of Mathematical and Statistical Psychology, 37, 136-151. Marschark, M., & Paivio, A. (1981). Congruity and the perceptual comparison task. Journal of Experimental Psychology: Human Perception
and Performance, 7, 290-308. Mashhour, M., & Hosman, J. (1968). On the new "psychophysical law": A validation study. Perception & Psychophysics, 3, 367-375. McGill, W.J. (1961). Loudness and reaction
time: A guided tour of the listener's private world. Acta Psychologica, 19, 193-199. McGill, W. J. (1974). The slope of the loudness function: A puzzle. In H. R. Moskowitz, B. Scharf, & J. C. Stevens
(Eds.), Sensation and measurement: Papers in honor of S. S. Stevens (pp. 295-314). Dordrecht, Holland: Reidel.
McGill, W. J., & Goldberg, J. P. (1968). Pure-tone intensity discrimination and energy detection. Journal of the Acoustical Society of America, 44, 576-581. McKenna, F. P. (1985). Another look at
the "new psychophysics." British Journal of Psychology, 76, 97-109. Meiselman, H. L., Bose, H. E., & Nykvist, W. F. (1972). Magnitude production and magnitude estimation of taste intensity. Perception
& Psychophysics, 12, 249-252. Melamed, L. E., & Thurlow, W. R. (1971). Analysis of contrast effects in loudness judgments. Journal of Experimental Psychology, 90, 268-274. Melara, R. D. (1992). The
concept of perceptual similarity: From psychophysics to cognitive psychology. In D. Algom (Ed.), Psychophysical approaches to cognition (pp. 303-388). Amsterdam: North-Holland Elsevier. Mellers, B.
A., Davis, D. M., & Birnbaum, M. H. (1984). Weight of evidence supports one operation for "ratios" and "differences" of heaviness. Journal of Experimental Psychology: Human Perception and Performance,
10, 216-230. Merkel, J. (1888). Die Abhängigkeit zwischen Reiz und Empfindung. Philosophische Studien, 4, 541-594. Michels, W. C., & Helson, H. (1949). A reformulation of the Fechner law in terms of
adaptation-level applied to rating-scale data. American Journal of Psychology, 62, 355-368. Miller, G. A. (1947). Sensitivity to changes in the intensity of white noise and its relation to loudness and
masking. Journal of the Acoustical Society of America, 19, 609-619. Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information.
Psychological Review, 63, 81-97. Moles, A. (1966). Information theory and esthetic perception. Urbana, IL: University of Illinois Press (originally published, 1958). Montgomery, H., & Eisler, H.
(1974). Is an equal interval scale an equal discriminability scale? Perception & Psychophysics, 15, 441-448. Münsterberg, H. (1890). Beiträge zur experimentellen Psychologie, Band III. Freiburg:
Mohr. Murphy, C., & Gilmore, M. M. (1989). Quality-specific effects of aging on the human taste system. Perception & Psychophysics, 45, 121-128. Murray, D.J. (1993). A perspective for viewing the
history of psychology. The Behavioral and Brain Sciences, 16, 115-186. Nachmias, J., & Steinman, R. M. (1965). Brightness and discriminability of light flashes. Vision Research, 5, 545-557. Narens,
L., & Luce, R. D. (1992). Further comments on the "nonrevolution" arising from axiomatic measurement theory. Psychological Science, 4, 127-130. Newman, E. B. (1933). The validity of the just
noticeable difference as a unit of psychological magnitude. Transactions of the Kansas Academy of Science, 36, 172-175. Noma, E., & Baird, J. C. (1975). Psychophysical study of numbers: II.
Theoretical models of number generation. Psychological Research, 38, 81-95. Norwich, K. H. (1984). The psychophysics of taste from the entropy of the stimulus. Perception & Psychophysics, 35,
269-278. Norwich, K. H. (1987). On the theory of Weber fractions. Perception & Psychophysics, 42, 286-298. Norwich, K. H. (1991). Toward the unification of the laws of sensation: Some food for
thought. In H. Lawless & B. Klein (Eds.), Sensory science: Theory and applications in food (pp. 151-184). New York: Dekker. Norwich, K. H. (1993). Information, sensation, and perception. Orlando, FL:
Academic Press. Parducci, A. (1965). Category judgment: A range-frequency model. Psychological Review, 75, 407-418. Parducci, A. (1974). Contextual effects: A range-frequency analysis. In E. C.
Carterette & M. P. Friedman (Eds.), Handbook of perception. Vol. 2. Psychophysical judgment and measurement (pp. 127-141). New York: Academic Press.
Parducci, A. (1982). Category ratings: Still more contextual effects! In B. Wegener (Ed.), Social attitudes and psychophysical measurement (pp. 89-105). Hillsdale, NJ: Erlbaum. Parducci, A., Knobel,
S., & Thomas, C. (1976). Independent contexts for category ratings: A range-frequency analysis. Perception & Psychophysics, 20, 360-366. Parducci, A., & Perrett, L. F. (1971). Category rating scales:
Effects of relative spacing and frequency of stimulus values. Journal of Experimental Psychology Monographs, 89, 427-452. Parducci, A., & Wedell, D. H. (1986). The category effect with rating scales:
Number of categories, number of stimuli, and method of presentation. Journal of Experimental Psychology: Human Perception and Performance, 12, 496-516. Parker, S., & Schneider, B. (1974). Non-metric
scaling of loudness and pitch using similarity and difference estimates. Perception & Psychophysics, 15, 238-242. Parker, S., & Schneider, B. (1980). Loudness and loudness discrimination. Perception
& Psychophysics, 28, 398-406. Parker, S., & Schneider, B. (1988). Conjoint scaling of the utility of money using paired comparisons. Social Science Research, 17, 277-286. Parker, S., & Schneider, B.
(1994). The stimulus range effect: Evidence for top-down control of sensory intensity in audition. Perception & Psychophysics, 56, 1-11. Parker, S., Schneider, B., & Kanow, G. (1975). Ratio scale
measurement of the perceived length of lines. Journal of Experimental Psychology: Human Perception and Performance, 104, 195-204. Parker, S., Schneider, B., Stein, D., Popper, R., Darte, E., &
Needel, S. (1981). Utility function for money determined using conjoint measurement. American Journal of Psychology, 94, 563-573. Pascal, B. (1958). Pensées. New York: Dutton (originally published,
1670). Pavel, M., & Iverson, G.J. (1981). Invariant characteristics of partial masking: Implications for mathematical models. Journal of the Acoustical Society of America, 69, 1126-1131. Petrusic, W.
M. (1992). Semantic congruity effects and theories of the comparison process. Journal of Experimental Psychology: Human Perception and Performance, 18, 962-986. Pfanzagl, J. (1959). A general theory
of measurement: Applications to utility. Naval Research Logistics Quarterly, 6, 283-294. Piéron, H. (1914). Recherches sur les lois de variation des temps de latence sensorielle en fonction des
intensités excitatrices. L'Année Psychologique, 20, 2-96. Piéron, H. (1934). Le problème du mécanisme physiologique impliqué par l'échelon différentiel de sensation. L'Année Psychologique, 34,
217-236. Piéron, H. (1952). The sensations: Their functions, processes and mechanisms. New Haven, CT: Yale University Press. Piéron, Mme. H. (1922). Contribution expérimentale à l'étude des
phénomènes de transfert sensoriel: La vision et la kinésthésie dans la perception des longueurs. L'Année Psychologique, 23, 76-124. Plateau, J. A. F. (1872). Sur la mesure des sensations physiques, et
sur la loi qui lie l'intensité de ces sensations à l'intensité de la cause excitante. Bulletins de l'Académie Royale des Sciences, des Lettres, et des Beaux-Arts de Belgique, 33, 376-388. Pollack, I.
(1965a). Iterative techniques for unbiased rating scales. Quarterly Journal of Experimental Psychology, 17, 139-148. Pollack, I. (1965b). Neutralization of stimulus bias in the rating of grays.
Journal of Experimental Psychology, 69, 564-578. Popper, R. D., Parker, S., & Galanter, E. (1986). Dual loudness scales in individual subjects. Journal of Experimental Psychology: Human Perception
and Performance, 12, 61-69. Potts, B. C. (1991). The horizontal-vertical illusion: A confluence of configural, contextual, and framing factors. Doctoral dissertation, Yale University. Poulton, E. C.
(1989). Bias in quantifying judgments. Hove, England: Erlbaum.
Pruzansky, S., Tversky, A., & Carroll, J. D. (1982). Spatial versus tree representations of proximity data. Psychometrika, 47, 3-24. Raab, D. H. (1962). Magnitude estimation of the brightness of
brief foveal stimuli. Science, 135, 42-44. Raab, D. H., & Fehrer, E. (1962). Supplementary report: The effect of stimulus duration and luminance on visual reaction time. Journal of Experimental
Psychology, 64, 326-327. Ramsay, J. O. (1969). Some statistical considerations in multidimensional scaling. Psychometrika, 34, 167-182. Ramsay, J. O. (1979). Intra- and interindividual variation in
the power law exponent for area summation. Perception & Psychophysics, 26, 495-500. Rankin, K. R., & Marks, L. E. (1991). Differential context effects in taste perception. Chemical Senses, 16,
617-629. Rankovic, C. M., Viemeister, N. F., Fantini, D. A., Cheesman, M. F., & Uchiyama, C. L. (1988). The relation between loudness and intensity difference limens for tones in quiet and noise
backgrounds. Journal of the Acoustical Society of America, 84, 150-155. Restle, F. (1961). Psychology of judgment and choice: A theoretical essay. New York: Wiley. Restle, F., & Greeno, J. G.
(1970). Introduction to mathematical psychology. Reading, MA: Addison-Wesley. Reynolds, G. S., & Stevens, S. S. (1960). Binaural summation of loudness. Journal of the Acoustical Society of America,
32, 1337-1344. Richardson, L. F., & Ross, J. S. (1930). Loudness and telephone current. Journal of General Psychology, 3, 288-306. Riesz, R. R. (1928). Differential intensity sensitivity of the ear
for pure tones. Physical Review, 31, 867-875. Riesz, R. R. (1933). The relationship between loudness and the minimum perceptible increment of intensity. Journal of the Acoustical Society of America,
5, 211-216. Robinson, G. H. (1976). Biasing power law exponents by magnitude estimation instructions. Perception & Psychophysics, 19, 80-84. Rosenblith, W. A. (1959). Some quantifiable aspects of the
electrical activity of the nervous system (with emphasis upon responses to sensory systems). Reviews of Modern Physics, 31, 532-545. Ross, J., & DiLollo, V. (1971). Judgment and response in magnitude
estimation. Psychological Review, 78, 515-527. Rschevkin, S. N., & Rabinovich, A. V. (1936). Sur le problème de l'estimation quantitative de la force d'un son. Revue d'Acoustique, 5, 183-200. Rule,
S.J., & Curtis, D. W. (1973). Conjoint scaling of subjective number and weight. Journal of Experimental Psychology, 97, 305-309. Rule, S. J., & Curtis, D. W. (1976). Converging power functions as a
description of the size-weight illusion. Bulletin of the Psychonomic Society, 8, 16-18. Rule, S. J., & Curtis, D. W. (1977). Subject differences in input and output transformations from magnitude
estimation of differences. Acta Psychologica, 41, 61-65. Rule, S. J., & Curtis, D. W. (1978). Levels for sensory and judgmental processing: Strategies for the evaluation of a model. In B. Wegener
(Ed.), Social attitudes and psychophysical measurement (pp. 107-122). Hillsdale, NJ: Erlbaum. Rule, S.J., Curtis, D. W., & Markley, R. P. (1970). Input and output transformations from magnitude
estimation. Journal of Experimental Psychology, 86, 343-349. Rule, S.J., & Markley, R. P. (1971). Subject differences in cross-modality matching. Perception
& Psychophysics, 9, 115-117.
Rumelhart, D. L., & Greeno, J. G. (1971). Similarity between stimuli: An experimental test of the Luce and Restle choice models. Journal of Mathematical Psychology, 8, 370-381. Saffir, M. A. (1937).
A comparative study of scales constructed by three psychophysical methods. Psychometrika, 2, 179-198.
Schlauch, R. S. (1994). Intensity resolution and loudness in high-pass noise. Journal of the Acoustical Society of America, 95, 2171-2179. Schneider, B. (1980). Individual loudness functions
determined from direct comparisons of loudness intervals. Perception & Psychophysics, 27, 493-503. Schneider, B. (1988). The additivity of loudness across critical bands. Perception & Psychophysics,
43, 211-222. Schneider, B., & Parker, S. (1987). Intensity discrimination and loudness for tones in notched noise. Perception & Psychophysics, 41, 253-261. Schneider, B., & Parker, S. (1990). Does
stimulus context affect loudness or only loudness judgment? Perception & Psychophysics, 48, 409-418. Schneider, B., Parker, S., & Stein, D. (1974). The measurement of loudness using direct
comparisons of sensory intervals. Journal of Mathematical Psychology, 11, 259-273. Schneider, B., Parker, S., Valenti, M., Farrell, G., & Kanow, G. (1978). Response bias in category and magnitude
estimation of difference and similarity for loudness and pitch. Journal of Experimental Psychology: Human Perception and Performance, 4, 483-496. Shannon, C. E. (1948). A mathematical theory of
communication. Bell System Technical Journal, 27, 379-423, 623-656. Shepard, R. N. (1962a). Analysis of proximities: Multidimensional scaling with an unknown distance function. I. Psychometrika, 27,
125-140. Shepard, R. N. (1962b). Analysis of proximities: Multidimensional scaling with an unknown distance function. II. Psychometrika, 27, 219-246. Shepard, R. (1966). Metric structures in ordinal
data. Journal of Mathematical Psychology, 3, 287-315. Shepard, R. N. (1978). On the status of "direct" psychological measurement. In C. W. Savage (Ed.), Minnesota studies in the philosophy of science
(Vol. 9, pp. 441-490). Minneapolis: University of Minnesota Press. Shepard, R. N. (1981). Psychological relations and psychological scales: On the status of "direct" psychophysical measurement.
Journal of Mathematical Psychology, 24, 21-57. Shepard, R. N., Kilpatric, D. W., & Cunningham, J. P. (1975). The internal representation of numbers. Cognitive Psychology, 7, 82-138. Sjöberg, L.
(1980). Similarity and correlation. In E. D. Lantermann & H. Feger (Eds.), Similarity and choice (pp. 70-78). Bern: Huber. Solomons, L. M. (1900). A new explanation of Weber's law. Psychological
Review, 7, 234-240. Sperling, G., & Sondhi, M. M. (1968). Model for visual luminance discrimination and flicker detection. Journal of the Optical Society of America, 58, 1133-1145. Staddon, J. E.,
King, M., & Lockhead, G. R. (1980). On sequential effects in absolute judgment experiments. Journal of Experimental Psychology: Human Perception and Performance, 6, 290-301. Stevens, J. C. (1957). A
comparison of ratio scales for the loudness of white noise and the brightness of white light. Doctoral dissertation, Harvard University. Stevens, J. C. (1958). Stimulus spacing and the judgment of
loudness. Journal of Experimental Psychology, 56, 246-250. Stevens, J. C., & Cain, W. S. (1985). Age-related deficiency in perceived strength of odorants. Chemical Senses, 10, 517-529. Stevens, J.
C., & Guirao, M. (1964). Individual loudness functions. Journal of the Acoustical Society of America, 36, 2210-2213. Stevens, J. C., & Hall, J. W. (1966). Brightness and loudness as functions of
stimulus duration.
Perception & Psychophysics, 1, 319-327.
Stevens, J. C., Mack, J. D., & Stevens, S. S. (1960). Growth of sensation on seven continua as measured by force of handgrip. Journal of Experimental Psychology, 59, 60-67. Stevens, J. C., & Marks,
L. E. (1965). Cross-modality matching of brightness and loudness. Proceedings of the National Academy of Sciences, 54, 407-411.
Stevens, J. C., & Marks, L. E. (1980). Cross-modality matching functions generated by magnitude estimation. Perception & Psychophysics, 27, 379-389. Stevens, J. C., & Stevens, S. S. (1963).
Brightness function: Effects of adaptation. Journal of the Optical Society of America, 53, 375-385. Stevens, J. C., & Tulving, E. (1957). Estimations of loudness by a group of untrained observers.
American Journal of Psychology, 70, 600-605. Stevens, S. S. (1946). On the theory of scales of measurement. Science, 103, 677-680. Stevens, S. S. (1951). Mathematics, measurement, and psychophysics.
In S. S. Stevens (Ed.), Handbook of experimental psychology (pp. 1-49). New York: Wiley. Stevens, S. S. (1955). The measurement of loudness. Journal of the Acoustical Society of America, 27, 815-829.
Stevens, S. S. (1956). The direct estimation of sensory magnitude--loudness. American Journal
of Psychology, 69, 1-15.
Stevens, S. S. (1957). On the psychophysical law. Psychological Review, 64, 153-181. Stevens, S. S. (1958). Adaptation-level vs. the relativity of judgment. American Journal of
Psychology, 71, 633-646.
Stevens, S. S. (1959a). Cross-modality validation of subjective scales for loudness, vibration, and electric shock. Journal of Experimental Psychology, 57, 201-209. Stevens, S. S. (1959b).
Measurement, psychophysics, and utility. In C. W. Churchman & P. Ratoosh (Eds.), Measurement: Definitions and theories (pp. 18-63). New York: Wiley. Stevens, S. S. (1959c). The quantification of
sensation. Daedalus, 88, 606-621. Stevens, S. S. (1959d). Review: L. L. Thurstone's The measurement of values. Contemporary Psychology, 4, 388-389. Stevens, S. S. (1960). Ratio scales, partition
scales and confusion scales. In H. Gulliksen & S. Messick (Eds.), Psychological scaling: Theory and applications (pp. 49-66b). New York: Wiley. Stevens, S. S. (1961). The psychophysics of sensory
function. In W. A. Rosenblith (Ed.), Sensory Communication (pp. 1-33). New York: Wiley. Stevens, S. S. (1966). Power-group transformations under glare, masking, and recruitment. Journal of the
Acoustical Society of America, 39, 725-735. Stevens, S. S. (1971). Issues in psychophysical measurement. Psychological Review, 78, 426-450. Stevens, S. S. (1975). Psychophysics: An introduction to
its perceptual, neural, and social prospects. New York: Wiley. Stevens, S. S., & Galanter, E. (1957). Ratio scales and category scales for a dozen perceptual continua. Journal of Experimental
Psychology, 54, 377-411. Stevens, S. S., & Greenbaum, H. B. (1966). Regression effect in psychophysical judgment.
Perception & Psychophysics, 1, 439-446.
Stevens, S. S., & Guirao, M. (1967). Loudness functions under inhibition. Perception & Psychophysics, 2, 459-465. Stevens, S. S., & Volkmann, J. (1940). The relation of pitch to frequency: A revised
scale. American Journal of Psychology, 53, 329-353. Stillman, J. A., Zwislocki, J. J., Zhang, M., & Cefaratti, L. K. (1993). Intensity just-noticeable differences at equal-loudness levels in normal
and pathological ears. Journal of the Acoustical Society of America, 93, 425-434. Suppes, P., & Zinnes, J. L. (1963). Basic measurement theory. In R. D. Luce, R. R. Bush, & E. Galanter (Eds.),
Handbook of mathematical psychology (Vol. 1, pp. 1-76). New York: Wiley. Tanner, W. P., Jr., & Swets, J. A. (1954). A decision-making theory of visual detection.
Psychological Review, 61,401-409.
Teghtsoonian, M., & Teghtsoonian, R. (1971). How repeatable are Stevens's power law exponents for individual subjects? Perception & Psychophysics, 10, 147-149. Teghtsoonian, M., & Teghtsoonian, R.
(1983). Consistency of individual exponents in crossmodal matching. Perception & Psychophysics, 33, 203-214.
Psychophysical Scaling
Teghtsoonian, R. (1971). On the exponents of Stevens's law and the constant in Ekman's law. Psychological Review, 78, 71-80. Teghtsoonian, R. (1973). Range effects in psychophysical scaling and a
revision of Stevens's law. AmericanJournal Psychology, 86, 3-27. Teghtsoonian, R., & Teghtsoonian, M. (1978). Range and regression effects in magnitude scaling. Perception & Psychophysics, 24,
305-314. Thurstone, L. L. (1927). A law of comparative judgment. Psychological Review, 34, 273286. Thurstone, L. L. (1959). The measurement qfvalues. Chicago: University of Chicago Press. Titchener,
E. B. (1905). Experimental psychology: A manual of laboratory practice. Vol. II. Quantitative. 2. Instructor'sManual. New York: Macmillan. Torgerson, W. S. (1954). A law of categorical judgment. In
L. H. Clark (Ed.), Consumer behavior (pp. 92-93). New York: New York University Press. Torgerson, W. S. (1958). Theory and methods of scaling. New York: Wiley. Torgerson, W. S. (1961). Distances and
ratios in psychological scaling. Acta Psychologica, 19, 201-205. Treisman, M. (1964). Sensory scaling and the psychophysical law. QuarterlyJournal of Experimental Psychology, 16, 11-22. Treisman, M.
(1984). A theory of criterion setting: An alternative to the attention band and response ratio hypotheses in magnitude estimation and cross-modality matching. Journal of Experimental Psychology:
General, 113, 443-463. Troland, L. T. (1930). Principles qfpsychophysiology. Vol. 2. Sensation. Princeton, NJ: Van Nostrand. Tversky, A. (1969). Intransitivity of preferences. Psychological Revieu,,
76, 31-48. Tversky, A. (1972). Elimination by aspects: A theory of choice. Psychological Review, 79, 281-299. Tversky, A., & Sattath, S. (1979). Preference trees, Psychological Review, 86, 542-573.
Van Brakel, J. (1993). The analysis of sensations as the foundation of all sciences. The Behavioral and Brain Sciences, 16, 163-164. Vaughn, H. G., Jr., Costa, L. D., 8," Gilden, L. (1966). The
functional relation of visual evoked response and reaction time to stimulus intensity. Vision Research, 6, 645-656. Von Kries, J. (1882). l[Jber die Messung intensiver Gr6ssen und fiber das
sogenannte psychophysische Gesetz. Vierteljahrsschrififiir Wissenschafiliche Philosophie, 6, 257-294. Wagner, M., & Baird, J. C. (1981). A quantitative analysis of sequential effects with numeric
stimuli. Perception & Psychophysics, 29, 359-364. Ward, L. M. (1972). Category judgments of loudness in the absence of an experimenterinduced identification function: Sequential effects and
power-function fit. Journal ql-Experimental Psychology, 94, 179-184. Ward, L. M. (1973). Repeated magnitude estimation with a variable standard: Sequential effects and other properties. Perception &
Psychophysics, 13, 193-200. Ward, L. M. (1979). Stimulus information and sequential dependencies in magnitude estimation and cross-modality matching. Journal qf Experimental Psychology: Human
Perception and Performance, 5, 444-459. Ward, L. M. (1985). Mixed-modality psychophysical scaling: Inter- and intramodality sequential dependencies as a function of lag. Perception & Psychophysics,
38, 512-522. Ward, L. M. (1987). Remembrance of sounds past: Memory and psychophysical scaling. Journal of Experimental Psychology: Human Perception and Performance, 13, 216-227. Ward, L. M. (1990).
Critical bands and mixed-frequency scaling: Sequential dependencies, equal-loudness contours, and power function exponents. Perception & Psychophysics, 47, 551-562. Ward, L. M. (1992). Mind in
psychophysics. In D. Algom (Ed.), Psychophysical approaches to cognition (pp. 187-249). Amsterdam: North-Holland Elsevier.
Lawrence E. Marks and Daniel A l g o m
Ward, L. M., Armstrong, J., & Golestani, N. (1996). Intensity resolution and subjective magnitude in psychophysical scaling. Perception & Psychophysics, 58, 793-801. Ward, L. M., & Lockhead, G. R.
(1970). Sequential effects and memory in category judgments. Journal of Experimental Psychology, 84, 27-34. Ward, L. M., & Lockhead, G. R. (1971). Response system processes in absolute judgment.
Perception & Psychophysics, 9, 73-78. Warren, R. M. (1958). A basis for judgments of sensory intensity. AmericanJournal of Psycholo-
gy, 71,675-687.
Warren, R. M. (1969). Visual intensity judgments: An empirical rule and a theory. Psychological Review, 76, 16-30. Warren, R. M. (1981). Measurement of sensory intensity. The Behavioral and Brain
Sciences, 4, 175-223. Wasserman, G. S. (1991). Neural and behavioral assessments of sensory quality. The Behavioral and Brain Sciences, 14, 192-193. Weber, E. H. (1834). De pulsu, resorptione, auditu
et tactu: Annotationesanatomicae et physiologicae. Leipzig: K6hler. Wedell, D. H. (1995). Contrast effects in paired comparisons: Evidence for both stimulusbased and response-based processes. Journal
of Experimental Psychology: Human Perception and Performance, 21, 1158-1173. Weiss, D.J. (1975). Quantifying private events: A functional measurement analysis of equisection. Perception &
Psychophysics, 17, 351-357. Weissmann, S. M., Hollingsworth, S. R., & Baird, J. C. (1975). Psychophysical study of numbers: Ill. Methodological applications. Psychological Research, 38, 97-115.
Welford, A. T. (1960). The measurement of sensory-motor performance: Survey and reappraisal of twelve years' progress. Ergonomics, 3, 189-230. Woodworth, R. S. (1914). Professor Cattell's
psychophysical contributions. Archives of Psychology, 30, 60-74. Yellott, J. I. (1971). Correction for fast guessing and the speed-accuracy tradeoff in choice reaction time. Journal of Mathematical
Psychology, 8, 159-199. Yellott, J. I. (1977). The relationship between Luce's Choice Axiom, Thurstone's Theory of Comparative Judgment, and the double exponential distribution. Journal of
Mathematical Psychology, 15, 109-144. Zinnes, J. L. (1969). Scaling. Annual Review of Psychology, 20, 447-478. Zwislocki, J. j. (1965). Analysis of some auditory characteristics. In R. D. Luce, R. R.
Bush, & E. Galanter (Eds.), Handbook of mathematical psycholor (Vol. 3, pp. 3-97). New York: Wiley. Zwislocki, J. J. (1983). Group and individual relations between sensation magnitudes and their
numerical estimates. Perception & Psychophysics, 33, 460-468. Zwislocki, J. J. (1991). Natural measurement. In S. J. Bolanowski, Jr., & G. A. Gescheider (Eds.), Ratio scaling of psychological
magnitude (pp. 18-26). Hillsdale, NJ: Erlbaum. Zwislocki, J. j., & Goodman, D. A. (1980). Absolute scaling of sensory magnitudes: A validation. Perception & Psychophysics, 28, 28-38. Zwislocki, J.
J., & Jordan, H. N. (1986). On the relations of intensity jnd's to loudness and neural noise. Journal of the Acoustical Society of America, 79, 772-780.
Multidimensional Scaling

J. Douglas Carroll and Phipps Arabie

I. INTRODUCTION

This technique comprises a family of geometric models for representation of data in one or, more frequently, two or more dimensions and a corresponding set of methods for
fitting such models to actual data. A much narrower definition would limit the term to spatial distance models for similarities, dissimilarities, or other proximity data. The usage we espouse
includes nonspatial (e.g., such discrete geometric models as tree structures) and nondistance (e.g., scalar product or projection) models that apply to nonproximity (e.g., preference or other
dominance) data as well as to proximities. As this chapter demonstrates, a large class of these nonspatial models can still be characterized as dimensional models, but with discrete rather than
continuously valued dimensions. The successful development of any multivariate technique and its incorporation in widely available statistical software inevitably lead to substantive applications
over an increasingly wide range both within and among disciplines. Multidimensional scaling (MDS) is no exception, and within psychology and closely related areas we could catalog an immense variety
of different applications (not all of them cause for celebration, however); several thousand are given in the annual bibliographic survey SERVICE (Murtagh, 1997) published by the Classification
Society of North America.

Measurement, Judgment, and Decision Making. Copyright © 1998 by Academic Press. All rights of reproduction in any form reserved.
Further evidence of the vitality of developments in MDS can be found in the numbers of recent (1) books and edited volumes and (2) review chapters and articles on the topic. In the former category,
we note Arce (1993); Ashby (1992); Cox and Cox (1994); de Leeuw, Heiser, Meulman, and Critchley (1986); De Soete, Feger, and Klauer (1989); Gower and Hand (1996); Green, Carmone, and Smith (1989);
Okada and Imaizumi (1994); and Van Cutsem (1994). The conference proceedings volumes are too numerous even to cite, and the monograph series of DSWO Press at the University of Leiden has many
noteworthy contributions. Concerning review chapters and articles, the subareas of psychology recently targeted include counseling (Fitzgerald & Hubert, 1987), developmental (Miller, 1987),
educational (Weinberg & Carroll, 1992), experimental (L. E. Jones & Koehly, 1993; Luce & Krumhansl, 1988), and cognitive (Nosofsky, 1992; Shoben & Ross, 1987). Multivariate statistical textbooks also
continue to pay due attention to MDS (e.g., Krzanowski & Marriott, 1994, chap. 5). Iverson and Luce's chapter in this volume focuses on a complementary aspect of measurement in psychology and the
behavioral sciences, measurement (primarily, but not exclusively, unidimensional) based on subjects' orderings of stimuli, whereas we are concerned with measurement (primarily, but not exclusively,
multidimensional, or multiattribute) based on proximity data on pairs of stimuli or other entities. In this chapter we focus almost exclusively on that substantive area where we see the strongest
bonds to MDS and its underpinnings and that seems most likely to spur new methodological developments in MDS, namely that of answering fundamental questions about the psychological representation of
structure underlying perception and judgment, especially in terms of similarities and dissimilarities. From its inception (Shepard, 1962a, 1962b), nonmetric MDS has been used to provide visualizable
depictions of such structure, but current research focuses on much more incisive queries. Question 1 is whether any particular stimulus domain is better fitted by a discrete than by a continuous
(usually) spatial model. The latter possibility gives rise to Question 2, which concerns the nature of the metric of the multidimensional stimulus space (often assumed to be either Euclidean or
city-block, as defined later). Question 1, of course, is at the heart of such controversies in experimental psychology as categorical perception (Tartter, in press, chap. 7) and neural quantum theory
(Stevens, 1972). With the advent of increasingly general models (discussed later) for discrete structure and associated algorithms for fitting them, it has become possible in some cases to run
empirical comparisons of selected discrete versus spatial models for given data sets (cf. Carroll, 1976; De Soete & Carroll, 1996). Pruzansky, Tversky, and Carroll (1982) compared data from several
stimulus domains and concluded that: "In general, colors, sounds and factorial structures were better represented by a plane [i.e., a two-dimensional MDS solution], whereas conceptual stimuli from various semantic fields were better modelled by a[n additive] tree" (p. 17). Within the literature of
experimental psychology, Question 2 effectively begins with Attneave's (1950, p. 521) reflections on "the exceedingly precarious assumption that psychological space is Euclidean." He
instead argued: "The psychological implication is that there is a unique coordinate system in psychological space, upon which 'distances' between stimuli are strictly dependent [as opposed to
rotation invariant]; and thus our choice of axes is to be dictated, not by linguistic expediency, but by psychological fact." Moreover, Attneave (1950, p. 555) began the tradition of distinguishing
between integral and analyzable stimulus domains with his sharp contrast between Euclidean and city-block metrics: "Perhaps the most significant psychological difference between these two hypotheses
is that the former assumes one frame of reference to be as good as any other, whereas the latter implies a unique set of psychological axes." For the development of theoretical positions on this
distinction between integral and analyzable stimuli, see Shepard's (1991) and other chapters in Lockhead and Pomerantz's (1991) Festschrift for W. R. Garner. For a review of theoretical and
algorithmic approaches to city-block spaces, see Arabie (1991). Since the mid-1980s, the most innovative and significant results pertaining to Question 2 have come from Nosofsky (e.g., 1992) and from
Shepard (1987, 1988). In the latter papers, Shepard returned to his earlier interest in stimulus generalization to formulate and derive a universal law of generalization based on the distinctions
between analyzable and integral stimuli and between the Euclidean and city-block metrics. Reviewing recent work on "models for predicting a variety of performances, including generalization,
identification, categorization, recognition, same-different accuracy and reaction time, and similarity judgment," Nosofsky (1992, p. 40) noted that "The MDS-based similarity representation is a
fundamental component of these models." Additionally (Nosofsky, 1992, p. 34), "The role of MDS in developing these theoretical relations is critical [italics added]." The literature on Question 2
has become quite extensive; for example, see chapters in Ashby (1992) and work by Ennis and his collaborators (e.g., Ennis, Palen, & Mullen, 1988). To explain how MDS can be used to address Questions
1 and 2, we must immediately make some distinctions among types of data matrices, and we do so by summarizing a lengthier taxonomy found in Carroll and Arabie (1980, pp. 610-611). Consider two
matrices: one with n rows and the same number of columns (with entries depicting direct judgments of pairwise similarities for all distinct pairs of the n stimuli) and the other matrix with n rows of
stimuli and K columns of attributes of the stimuli. Although both matrices have two ways (namely, rows and columns), the former is said to
have one mode because both its ways correspond to the same set of entities (i.e., the n stimuli). But the matrix of stimuli by their attributes has two disjoint sets (and thus two modes; Tucker,
1964) of entities corresponding to the ways. For a one-mode two-way matrix, an additional consideration is whether conjugate off-diagonal entries are always equal, in which case the matrix is
symmetric; otherwise it is nonsymmetric. Another important distinction concerns whether the data are conditional (i.e., noncomparable) between rows/columns or among matrices. Row conditional data
arise most commonly when a subject is presented with each of n stimuli in turn and asked to rank the remaining n - 1 according to their similarity to the standard. If the ranks are entered as a row/column for each successive standard stimulus in a two-way one-mode matrix, the entries are comparable within but not between rows/columns, and such data are therefore called row/column conditional
(Coombs, 1964). If the data are a collection of I one-mode, two-way matrices, all n × n for the same set of n stimuli, a more general question is whether numerical entries are comparable among the
matrices. If not, such three-way data are said to be matrix conditional (Takane, Young, & de Leeuw, 1977). It is not our intention to dwell on traditional methods of collecting data for
multidimensional scaling, given the excellent summaries already available (e.g., Kruskal & Wish, 1978; Coxon, 1982, chap. 2; Rosenberg, 1982, for the method of sorting; L. E. Jones & Koehly, 1993,
pp. 104-108). An important distinction offered by Shepard (1972) is whether the input data are the result of direct judgments (e.g., from subjects' judging all distinct pairs of stimuli, say, on a
9-point scale of similarity/dissimilarity, or confusions data) or of indirect or profile data, as results when the data at hand are two-mode, but the model to be fitted requires one-mode data. In such
cases, the user typically preprocesses the data by computing an indirect measure of proximity (e.g., squared Euclidean distances) between all pairs of rows or columns to obtain a one-mode matrix of
pairwise similarities/ dissimilarities. Although Shepard's (1962a, 1962b) original development of nonmetric MDS greatly emphasized applications to one-mode two-way direct similarities, applications
of various MDS models to indirect or profile data are quite common. A noteworthy development of recent years is that of models and associated algorithms for the direct analysis of types of data not
previously amenable to MDS without preprocessing: free recall sequences (Shiina, 1986); row conditional rank-order data (Takane & Carroll, 1982); similarity/dissimilarity judgments based on triples
(Daws, 1993, 1996; Joly & Le Calve, 1995; Pan & Harris, 1991) or even n-tuples of stimuli (T. F. Cox, M. A. A. Cox, & Branco, 1991); triadic comparisons (Takane, 1982); and sorting data (Takane,
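The indirect, profile-data route described above can be sketched in a few lines: given a two-mode stimuli-by-attributes matrix, one computes an indirect proximity measure (here, squared Euclidean distances) between all pairs of rows to obtain a one-mode symmetric matrix. This is an illustrative sketch, not code from the chapter; the function name and data are invented.

```python
# Illustrative sketch: preprocessing a two-mode (stimuli x attributes)
# matrix into a one-mode matrix of indirect proximities, via squared
# Euclidean distances between all pairs of rows.
import numpy as np

def indirect_proximities(X):
    """Return the n x n one-mode matrix of squared Euclidean
    distances between the rows of a two-mode n x K matrix X."""
    X = np.asarray(X, dtype=float)
    sq = np.sum(X**2, axis=1)                      # row sums of squares
    D2 = sq[:, None] + sq[None, :] - 2 * X @ X.T   # ||x_j - x_k||^2
    return np.maximum(D2, 0.0)                     # clip tiny negatives

# Four stimuli described on three attributes (invented data)
X = np.array([[1.0, 0.0, 2.0],
              [1.0, 1.0, 2.0],
              [4.0, 0.0, 0.0],
              [4.0, 1.0, 0.0]])
D2 = indirect_proximities(X)
```

The result is symmetric with a zero diagonal, i.e., a one-mode two-way dissimilarity matrix of the kind most MDS procedures expect.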
1981, 1982). Carroll and Arabie (1980) organized their Annual Review chapter on
MDS around the typology of ways and modes for data and for corresponding algorithms. Although these distinctions remain crucial in considering types of data, the typology is now less clear-cut for
algorithms. As we predicted in that chapter (p. 638), there has been intensive development of three-way algorithms, and the two-way special cases are often by-products. Thus, in our present coverage,
the two-way algorithms and models are mentioned only as they are subsumed in the more general three-way approaches.

II. ONE-MODE TWO-WAY DATA

The inventor of the modern approach to nonmetric
MDS (Shepard, 1962a, 1962b) began by considering a single one-mode two-way matrix, typically some form of similarity, dissimilarity, or other proximity data (sometimes also referred to as
"relational" data). Another type of ostensibly dyadic data is so-called paired comparisons data depicting preferences or other forms of dominance relations on members of pairs of stimuli. However,
such data are seldom utilized in multidimensional (as opposed to unidimensional) scaling. We do not cover paired comparisons data in this chapter because we view such data not as dyadic but as
replicated monadic data (having n - 2 missing data values within each replication); see Carroll (1980) for an overview.

III. SPATIAL DISTANCE MODELS (FOR ONE-MODE TWO-WAY DATA)

The most widely
used MDS procedures are based on geometric spatial distance models in which the data are assumed to relate in a simple and well-defined manner to recovered distances in an underlying spatial
representation. If the data are interval scale, the function relating the data to distances is generally assumed to be inhomogeneously linear, that is, linear with an additive constant as well as a
slope coefficient. Data of interval or stronger (ratio, positive ratio, or absolute) scale are called metric, and the corresponding models and analyses are collectively called metric MDS. In the
case of ordinal data, the functional relationship is generally assumed to be monotonic: either monotonic nonincreasing (in the case of similarities) or monotonic nondecreasing (for
dissimilarities). Ordinal data are often called nonmetric data, and the corresponding MDS models and analyses are also referred to as nonmetric MDS. The distinction between metric and nonmetric
approaches is based on the presence or absence of metric properties in the data (not in the solution, which almost always has metric properties; Holman, 1978, is an exception). Following Kruskal's
(1964b, 1965) innovative work in monotone regression (as the basic engine for fitting any of the ordinal models considered in this review), first devised by Ayer, Brunk, Ewing, Reid, and Silverman (1955), there has been much activity in this area of statistics. In addition to Shepard's (1962a, 1962b) early approach and
Guttman's (1968) later approach based on the rank image principle, alternative and related methods have been proposed by R. M. Johnson (1975), Ramsay (1977a), Srinivasan (1975), de Leeuw (1977b), de
Leeuw and Heiser (1977, 1980, in developing their SMACOF algorithm, considered later), and Heiser (1988, 1991). McDonald (1976) provided a provocative comparison between the approaches of Kruskal
(1964b) and Guttman (1968), and the two methods are subsumed as special cases of Young's (1975) general formulation. More recently, Winsberg and Ramsay (1980, 1983), Ramsay (1988), Winsberg and
Carroll (1989a, 1989b), and Carroll and Winsberg (1986, 1995) have introduced the use of monotone splines as an alternative to the totally general monotone functions introduced by Kruskal, while
other authors (e.g., Heiser, 1989b) have proposed using other not completely general monotonic functions, which, like monotone splines, can be constrained to be continuous and to have continuous
derivatives, if desired. (Carroll and Winsberg, 1986, 1995, and Winsberg and Carroll, 1989a, 1989b, have used monotone splines in a somewhat unique mannermpredicting data as monotone function(s) of
distances, rather than vice versa as is typically the case in fully nonmetric approaches. As discussed later, these authors argue that this quasi-nonmetric approach avoids degeneracies that occur
with fully nonmetric approaches.)
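The monotone regression that serves as Kruskal's basic engine can be illustrated with the pool-adjacent-violators algorithm first devised by Ayer and colleagues (1955): adjacent values that violate the required ordering are repeatedly pooled into their mean. The following is an illustrative sketch, not the chapter's own code; the function name is ours.

```python
# Minimal sketch of monotone (isotonic) regression via the
# pool-adjacent-violators algorithm (PAVA). Given values y in a fixed
# order, it returns the closest nondecreasing sequence in least squares.
def monotone_regression(y):
    """Least-squares fit to y constrained to be nondecreasing."""
    blocks = []  # each block is [sum, count]; its fitted value is sum/count
    for v in y:
        blocks.append([float(v), 1])
        # Pool while the last block's mean falls below the previous one's.
        while len(blocks) > 1 and \
                blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fitted = []
    for s, c in blocks:
        fitted.extend([s / c] * c)
    return fitted

# Dissimilarities listed in rank order; the fitted "disparities" pool
# the out-of-order pair (3.0, 2.0) into their mean.
print(monotone_regression([1.0, 3.0, 2.0, 4.0]))  # [1.0, 2.5, 2.5, 4.0]
```

In Kruskal's procedure such fitted disparities replace the raw data in the stress loss function, so that only the ordinal information in the proximities constrains the solution.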
A. Unconstrained Symmetric Distance Models (for One-Mode Two-Way Data)

Although one of the more intensely developed areas in recent years has been the treatment of nonsymmetric data (discussed in
detail later), most of the extant data relevant to MDS are symmetric, owing in part to the previous lack of models allowing for nonsymmetric data and the ongoing absence of readily available software
for fitting such models. Therefore, we first consider recent developments in the scaling of symmetric data, that is, where the proximity of j to k is assumed identical to that obtained when the
stimuli are considered in the reverse order. The most widely assumed metric in MDS is the Euclidean, in which the distance between two points j and k is defined as

\[ d_{jk} = \left[ \sum_{r=1}^{R} (x_{jr} - x_{kr})^2 \right]^{1/2}, \]

where \(x_{jr}\) and \(x_{kr}\) are the rth coordinates of points j and k, respectively, in an R-dimensional spatial representation. Virtually all two-way MDS procedures use either the Euclidean metric or the Minkowski p (or Lp) metric, which defines distances as

\[ d_{jk} = \left[ \sum_{r=1}^{R} |x_{jr} - x_{kr}|^p \right]^{1/p} \qquad (1) \]

and so includes Euclidean distance as a special case in which p = 2. (Because a variable that later will appear extensively in this chapter will be labeled "r," we are using the nontraditional p for Minkowski's exponent.)
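The Minkowski metric is easy to state computationally; the sketch below (ours, not the chapter's) shows that one function yields the city-block metric at p = 1 and the Euclidean metric at p = 2.

```python
# Sketch of the Minkowski p metric: p = 1 gives the city-block metric,
# p = 2 the Euclidean metric. The function name is our own.
def minkowski(x, y, p):
    """Minkowski distance: (sum_r |x_r - y_r|**p) ** (1/p)."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

x, y = (0.0, 0.0), (3.0, 4.0)
print(minkowski(x, y, 1))  # city-block: 7.0
print(minkowski(x, y, 2))  # Euclidean: 5.0
```

For one-dimensional configurations all values of p coincide, since the sum then contains a single absolute difference; this is the mathematical equivalence noted later in the discussion of unidimensional scaling.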
B. Applications and Theoretical Investigations of the Euclidean and Minkowski p Metrics (for One-Mode Two-Way Symmetric Data)

1. Seriation

A psychologist who harbors proximity data suspected of being
unidimensional is caught between a Scylla of substantive tradition and a Charybdis of deficient software. Concretely, the custom in experimental psychology has been to discount unidimensionality and
seek only higher-dimensional solutions. For example, Levelt, van de Geer, and Plomp (1966) developed an elaborate two-dimensional substantive interpretation of data later shown to be unidimensional
by Shepard (1974) and Hubert and Arabie (1989, pp. 308-310). Similarly, Rodieck (1977) undermined a multidimensional theory of color vision proposed by Tansley and Boynton (1976, 1977). But a data
analyst willing to counter the tradition of overfitting immediately encountered a suspicion that gradient-based algorithms for nonmetric MDS could not reliably yield solutions faithful to an
underlying unidimensional structure in a proximities matrix (cf. Shepard, 1974). De Leeuw and Heiser (1977) pointed out that this is in fact a discrete problem of analysis masquerading as a
continuous one. Hubert and Arabie (1986, 1988) demonstrated analytically why gradient methods fail in the unidimensional case and then provided an alternative algorithm based on dynamic programming,
guaranteed to find the globally optimal unidimensional solution. Pliner (1996) has provided a different algorithm that can handle much larger analyses. Also see related work by Hubert and Arabie
(1994, 1995a); Hubert, Arabie, and Meulman (1997); Mirkin (1996); and Mirkin and Muchnik (1996, p. 319).

2. Algorithms

Kruskal's (1964a, 1964b) option to allow the user to specify p ≠ 2.0 in Eq. (1)
ostensibly made it much easier for experimenters to decide which Minkowski metric was most suitable for their data. But evidence (Arabie, 1973)
and hearsay soon accumulated that, at least in the city-block case (where p = 1), the algorithm found suboptimal solutions, and there was a suspicion (e.g., Shepard, 1974) that the same conclusion
was true for unidimensional solutions (no matter what value of p was used, because all are mathematically equivalent in the case of one dimension). As noted earlier, de Leeuw and Heiser (1977) made
the crucial observation that the unidimensional case of gradient-based two-way MDS is in fact a discrete problem, and Hubert and Arabie (1986) provided an appropriately discrete algorithm to solve
it. Hubert and Arabie (1988) then analytically demonstrated that the same discreteness underlies the problem of city-block scaling in two dimensions and conjectured that the result is actually much
more general. Hubert, Arabie, and Hesson-McInnis (1992) provided a combinatorial nonmetric algorithm for city-block scaling in two and three dimensions (for the two-way case) and demonstrated the
highly inferior fits typically obtained when traditional gradient methods were used instead on the same data sets. Nonetheless, such misguided and clearly suboptimal analyses continue to appear in
the experimental psychology literature (e.g., Ashby, Maddox, & Lee, 1994). Using a majorization technique, Heiser (1989a) provided a metric three-way city-block MDS algorithm. Neither the approach of
Hubert and colleagues (1992) nor that of Heiser can guarantee a global optimum, but they generally do much better than their gradient counterparts. In MDS the city-block metric has received more
attention during the past two decades than any other non-Euclidean Minkowski metric (see Arabie, 1991, for a review), but more general algorithmic approaches are also available. For example, Okada
and Imaizumi (1980b) provided a three-way nonmetric generalization of the INDSCAL model, as in Eq. (5) (where a monotone function is fitted to the right side of that equation). Groenen (1993; also
see Groenen, Mathar, & Heiser, 1995) has extended the majorization approach for 1 < p ≤ 2 in Eq. (1). His impressive results have usually been limited to two-way metric MDS but appear to have
considerably greater generality. There have been some attempts at fitting even more general non-Euclidean metrics such as Riemannian metrics (see the review in Carroll & Arabie, 1980, pp. 618-619),
but none have demonstrated any lasting impact on the field. Although Indow (1983, pp. 234-235) demonstrated, with great difficulty, that a Riemannian metric with constant curvature fits certain
visual data slightly better than a Euclidean metric, Indow concluded that the increase in goodness of fit was not sufficient to justify the effort involved and that, in practice, Euclidean
representations accounted exceedingly well for the data he and his colleagues were considering. In later work, however, Indow (1995; see also Suppes, Krantz, Luce, & Tversky, 1989, pp. 131-153, for
discussion) has shown that careful scrutiny of the geometric structure of these visual stimuli within different planes of a three-dimensional representation reveals that the curvature is dependent on the specific plane being considered. This discovery suggests that a more general Riemannian metric with nonconstant curvature may provide an even more appropriate representation of the geometry of visual
space.

3. Algebraic and Geometric Foundations of MDS in Non-Euclidean Spaces

Confronted with the counterintuitive nature of non-Euclidean and/or high-dimensional spaces, psychologists have regularly
culled (and occasionally contributed to) the vast mathematical literature on the topic, seeking results relevant to data analyses in such spaces (see, for example, Carroll & Wish, 1974a; Critchley &
Fichet, 1994; de Leeuw & Heiser, 1982; Suppes et al., 1989, chaps. 12-14). Linkages to that literature are impeded by its typical and unrealistic assumptions of (1) a large or even infinite number of
stimuli, (2) error-free data, and (3) indifference toward substantively insupportable high dimensionalities. The axiomatic literature in psychology does not always treat these problems
satisfactorily, because it postulates systems requiring errorless measurement structures that in turn entail an infinite number of (actual or potential) stimuli. For example, testing of the axiom of
segmental additivity for geometric representations of stimuli would be exceedingly difficult in a practical situation in which only a finite number of stimuli are available, and the proximity data
are subject to measurement or other error of various types, because, in principle, one has to demonstrate that an intermediate stimulus exists precisely between each pair of stimuli so that the
distances sum along the implicit line connecting the three. (As a further complication, these distances may be only monotonically related to the true proximities, whereas observed proximities are, at
best, measured subject to measurement or experimental error.) Given a finite sample of "noisy" stimuli, it is highly unlikely that, even under the best of circumstances (e.g., errorless data
entailing distances measured on a ratio scale), one would find a requisite third stimulus lying precisely, or even approximately, between each pair of stimuli. This instance is but one extreme case
illustrating the general difficulty of testing scientific models, whether geometric or otherwise, with finite samples of data subject to measurement or experimental error. For example, it would be
equally difficult, in principle, to test the hypothesis that noisy proximity data on a finite sample of stimuli are appropriately modeled via a Euclidean (or city-block, or other metric) spatial
model in a specified number of dimensions. In practice, we are often forced to rely on the principle of parsimony (or "Occam's razor"), that is, to choose, among a large set of plausible models for
such a set of data, the most parsimonious model, which appears to account adequately for a major portion of the variance (or other measure of variation) in the empirical proximity data. This approach
hardly qualifies as a rigorous scientific test of such a geometric model; rather it is more appropriately characterized as a practical
statistical rule of thumb for choosing the best among a large family of plausible models. The axiomatic approach, as exemplified by Suppes and colleagues (1989), focuses more on the precise testing
of a very specific scientific model and constitutes an ideal toward which researchers in multidimensional scaling and other measurement models for the analysis of proximity data can, at the moment,
only aspire. We hope that a stronger nexus can be formed between the axiomatic and the empirical camps in future work on such measurement models, effecting a compromise that allows development of
practical measurement models for real-world data analysis in the psychological and other behavioral sciences while, at the same time, approaching more closely the ideal of testing such models with a
sufficiently well-defined rigor.

IV. MODELS AND METHODS FOR PROXIMITY DATA: REPRESENTING INDIVIDUAL DIFFERENCES IN PERCEPTION AND COGNITION

The kind of differential attention to, or
differential salience of, dimensions observed by Shepard (1964) illustrates a very important and pervasive source not only of intraindividual variation but also of interindividual differences in
perception. Although people seem to perceive the world using nearly the same dimensions or perceptual variables, they evidently differ enormously with respect to the relative importance
(perceptually, cognitively, or behaviorally) of these dimensions. These differences in sensitivity or attention presumably result in part from genetic differences (for example, differences between
color-blind and color-normal individuals) and in part from the individual's particular developmental history (witness the well-known but possibly exaggerated example of the Eskimos' presumably
supersensitive perception of varieties, textures, and colors of snow and ice). Although some attentional shifts might result simply from instructional or contextual factors, studies by Cliff,
Pennell, and Young (1966) have indicated that it is not so easy to manipulate saliences of dimensions. If a more behavioral measure of proximity were used, for example, one based on confusions in
identification learning, the differential weighting could result at least in part from purely behavioral (as opposed to sensory or central) processes, such as differential gradients of response
generalization. Nosofsky (1992) and Shepard (1987) have posited mechanisms underlying such individual differences.

A. Differential Attention or Salience of Dimensions: The INDSCAL Model

The INDSCAL (for INdividual Differences SCALing) model (Carroll & Chang, 1970; Carroll, 1972; Carroll & Wish, 1974a, 1974b; Wish & Carroll,
3 Multidimensional Scaling
1974; Arabie, Carroll, & DeSarbo, 1987) explicitly incorporates this notion of individual differences in the weights, or perceptual importances, of dimensions. The central assumption of the model is
the definition of distances for different individuals. As with ordinary, or two-way, scaling, these recovered distances are assumed to relate in some simple way--for example, linearly or monotonically--to the input similarities or other proximities. INDSCAL, however, assumes a different set of distances for each subject. The distance between stimuli j and k for subject i, $d_{jk}^{(i)}$, is related to the R dimensions of a group (or common) stimulus space by the equation

$$d_{jk}^{(i)} = \left[ \sum_{r=1}^{R} w_{ir} (x_{jr} - x_{kr})^2 \right]^{1/2}, \qquad (2)$$

where R is the dimensionality of the stimulus space, $x_{jr}$ is the coordinate of stimulus j on the rth dimension of the group stimulus space, and $w_{ir}$ is the weight (indicating salience or perceptual importance) of the rth dimension for the ith subject. This equation is simply a weighted generalization of the Euclidean distance formula. Another way of expressing the same model is provided by the following equations. We first define coordinates of what might be called a "private perceptual space" for subject i by the equation

$$y_{jr}^{(i)} = (w_{ir})^{1/2} x_{jr}, \qquad (3)$$

and then calculate ordinary Euclidean distances according to these idiosyncratic or private spaces, as defined in

$$d_{jk}^{(i)} = \left[ \sum_{r=1}^{R} \left( y_{jr}^{(i)} - y_{kr}^{(i)} \right)^2 \right]^{1/2} = \left[ \sum_{r=1}^{R} w_{ir} (x_{jr} - x_{kr})^2 \right]^{1/2}. \qquad (4)$$

[The expression on the right was derived by substituting the definition of $y_{jr}^{(i)}$ in Eq. (3) into the middle expression in Eq. (4) defining $d_{jk}^{(i)}$.] Thus the weighted distance formulation is equivalent to one in which each dimension is simply rescaled by the square root of the corresponding weight. This rescaling can be regarded as equivalent to turning the "gain" up or down, thus relatively increasing or decreasing the sensitivity of the total system to changes along the various dimensions.1

1 Tucker and Messick's (1963) "points of view" model, which assumes that subjects
form several subgroups, each of which has its own private space, or point of view, can be incorporated within the scope of INDSCAL. At the extreme, the group stimulus space includes the union of all
dimensions represented in any of the points of view, and an individual would have positive weights for all dimensions corresponding to the point of view with which he or she is identified and zero
weights on all dimensions from each of the other points of view. For an updated treatment of points of view, see Meulman and Verboon (1993).
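The equivalence between the weighted distance of Eq. (2) and ordinary Euclidean distances in the rescaled private space of Eqs. (3)-(4) can be sketched numerically. The coordinates and weights below are invented purely for illustration:

```python
import math

# Hypothetical group stimulus space X: four stimuli in R = 2 dimensions.
X = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

# Hypothetical subject weights w_ir (one row per subject).
W = [(4.0, 1.0),   # subject who finds dimension 1 four times as salient
     (1.0, 1.0)]   # subject whose private space matches the group space

def indscal_distance(i, j, k):
    """Weighted Euclidean distance of Eq. (2)."""
    return math.sqrt(sum(w * (a - b) ** 2
                         for w, a, b in zip(W[i], X[j], X[k])))

def private_space(i):
    """Private perceptual space of Eq. (3): rescale each dimension
    by the square root of the corresponding weight."""
    return [tuple(math.sqrt(w) * a for w, a in zip(W[i], x)) for x in X]

def euclidean(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Eq. (4): the weighted distance equals the ordinary Euclidean distance
# computed in the subject's rescaled (private) space.
Y = private_space(0)
for j in range(4):
    for k in range(4):
        assert abs(indscal_distance(0, j, k) - euclidean(Y[j], Y[k])) < 1e-12
```

Note how the first subject's private space is simply the group configuration stretched by a factor of 2 along dimension 1, the geometric "gain" interpretation described above.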
The input data for INDSCAL, as with other methods of three-way MDS, constitute a matrix of proximity (or antiproximity) data, the general entry of which is $\delta_{jk}^{(i)}$, the dissimilarity (antiproximity) of stimuli j and k for subject i. If there are n stimuli and I subjects, this three-way matrix will be n x n x I. The ith two-way "slice" through the third way of the matrix results in an ordinary two-way n x n matrix of dissimilarities for the ith subject. The output in the case of INDSCAL (although not necessarily for other three-way scaling methods) consists of two matrices. The first is an n x R matrix, $X = (x_{jr})$, of stimulus coordinates, the second an I x R matrix $W = (w_{ir})$ of subject weights. The input and output arrays for INDSCAL are illustrated in Figure 1. The coordinates
described in the two matrices X and W can be plotted to produce two disjoint spaces, both with dimensionality R, and which we have called, respectively, the group stimulus space and the subject
space. These are illustrated in Figure 2 for a purely hypothetical data set, as are two of these subjects' idiosyncratic or private perceptual spaces. Geometrically they are derived by stretching or
shrinking each dimension by applying a rescaling factor to the rth dimension, proportional to $(w_{ir})^{1/2}$. The rth weight, $w_{ir}$, for subject i can be derived from the subject space by simply projecting
subject i's point onto the rth coordinate axis. Quite different patterns of similarity/dissimilarity judgments are predicted in Figure 2 for Subjects 2, 3, and 4. Subject 3 (who weights the
dimensions equally and so would have a private space that looks just like the group stimulus space) presumably judges Stimulus A to be equally similar to B and D, because these two distances are
equal in that subject's private space. In contrast, Subject 2 would judge Stimulus A to be more similar to D than to B (because A is closer to D), and Subject 4 would judge Stimulus A to be more
similar to B than to D. There would, of course, be many other differences in the judgments of these three subjects, even though all three are basing their judgments on exactly the same dimensions.
Subjects 1 and 5, who are both one-dimensional, represent two extreme cases in the sense that each gives nonzero weight to only one of the two dimensions. Geometrically it is as though (if these were
the only dimensions and the model fitted the data perfectly) Subject 1 has simply projected the stimulus points onto the Dimension 1 axis so that Stimuli A, D, and G, for example, project into the
same point and so are seen by this subject as identical. Subject 5 exhibits the opposite pattern and presumably attends only to Dimension 2; this subject would see Stimuli A, B, and C as identical.
Thus, as a special case, some subjects can have private perceptual spaces of lower dimensionality than that of the group stimulus space. Distance from the origin is also meaningful in this subject
space. Subjects who are on the same ray issuing from the origin but at different distances from it would have the same pattern of distances and therefore of predicted similarities/dissimilarities.
They would have the same private space, in fact,
FIGURE 1 A schematic representation of input for (A) and output from (B) INDSCAL. Input consists of I (≥ 2) n x n square symmetric data matrices (or half-matrices), one for each of I subjects (or other data sources); $\delta_{jk}^{(i)}$ is the dissimilarity of stimuli (or other objects) j and k for subject (or other data source) i. This set of I square matrices can be thought of as defining the rectangular solid, or three-way array, of data depicted at top in the figure. (This is the form of the input for other three-way scaling methods also.) The output from INDSCAL consists of two matrices, an n x R matrix of coordinates of the n stimuli (objects) on R coordinate axes (or dimensions) and an I x R matrix of weights of I subjects for the R dimensions. These matrices define coordinates of the group stimulus space and the subject space, respectively. Both of them can be plotted graphically, as in Figure 2, and a private space for each subject can be constructed, as shown there, by applying the square roots of the subject weights to the stimulus dimensions, as in Equation 3. Note: "Objects" need not be "stimuli." "Subjects" may come from other data sources.
FIGURE 2 Illustration of the Carroll-Chang INDSCAL model for individual differences in multidimensional scaling. Weights (plotted in subject space) are applied to group stimulus space to produce
individual perceptual spaces for Subjects 2 and 4, shown at the bottom of the figure. (For purposes of illustration, the dimensions are multiplied by the weights themselves, rather than by their
square roots as is more technically correct.)
except for an overall scale factor. The main difference between such subjects is that this same private space and pattern of predicted judgments account for less of the variance in the (scalar
products computed from the) data for subjects who are closer to the origin. Thus, although Subjects 3 and 7 in Figure 2 would have the same private space (the one corresponding to the group stimulus
space), these two dimensions would account for more variance in the (hypothetical) matrix of Subject 3 than of Subject 7. Subject 9, being precisely at the origin (indicating zero weight on both dimensions), would be completely out of this space; that is, none of that subject's data could be accounted for by these two dimensions. The residual variance may be accounted for by other dimensions not
extracted in the present analysis or simply by unreliability, or error variance, in the particular subject's responses. The square of the distance from the origin is closely analogous to the concept
of communality in factor analysis. In fact, the square of that distance is approximately proportional to variance accounted for. Although only an approximation, it is generally a good one and is
perfect if the coordinate values on dimensions are uncorrelated. The cosine of the angle between subject points (treated as vectors issuing from the origin) approximately equals the correlation
between distances (or, more properly, between scalar products) in their private perceptual spaces. Distances between these points are also meaningful--they approximate profile distances between
reconstructed distances (or, again more properly, scalar products) from the respective private perceptual spaces in which the overall scale is included. We therefore reject arguments made by Takane
et al. (1977), MacCallum (1977), and others that lengths (or distances from the origin) of these subject weight vectors are not meaningful. We believe the lengths (as well as directions) of these
subject vectors are meaningful and interpretable, even when the data are matrix conditional rather than unconditional; in the latter case, Takane et al. (1977), MacCallum (1977), and others have
argued these lengths have no meaning; thus those authors normalize subject weight vectors to unit lengths, contrary to the practice in the INDSCAL/SINDSCAL method of fitting the
INDSCAL model. The lengths, in fact, often contain information that is quite critical in distinguishing among well-defined groups of subjects. Wish and Carroll (1974) presented one very good example,
entailing perception of the rhythm and accent of English words or phrases by various groups of subjects. Most compelling, in this respect, is the fact that native and nonnative speakers of English
were distinguished most clearly by the subject vectors--those for the former group having systematically greater length (terminating farther from the origin) than those for the latter group--implying
that all dimensions characterizing the rhythm and accent (or stress patterns) of English words were much more salient to native than to nonnative speakers of English.2

2 In statistical terms, the small set of "common" dimensions in the group space accounted for more variance in scalar products computed from the data of the native English speakers--the square of the length of the subject vector
approximating the proportion of variance accounted for--whereas the nonnative English speakers apparently were largely accounted for by other variables not emerging from this analysis, such as unique
linguistic dimensions (which might emerge if higher dimensional solutions were sought) more appropriate to their individual native languages or greater systematic or random errors stemming from an
imperfect assimilation of English stress patterns.
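The geometry of the subject space described above, where direction encodes the pattern of weights and length approximates variance accounted for, can be illustrated with a small sketch. The weight vectors below are invented for illustration (loosely echoing Subjects 1, 3, and 7 of Figure 2):

```python
import math

# Hypothetical subject-weight vectors (rows of W), invented for illustration.
W = {
    "subject 3": (0.45, 0.45),  # equal weights, relatively far from origin
    "subject 7": (0.15, 0.15),  # same ray as subject 3, closer to origin
    "subject 1": (0.50, 0.00),  # attends only to dimension 1
}

def length(w):
    return math.sqrt(sum(x * x for x in w))

def cosine(u, v):
    return sum(a * b for a, b in zip(u, v)) / (length(u) * length(v))

# Subjects on the same ray have cosine 1: the same private space up to an
# overall scale factor, hence the same pattern of predicted judgments.
assert abs(cosine(W["subject 3"], W["subject 7"]) - 1.0) < 1e-12

# But squared length -- the rough analogue of communality -- differs,
# reflecting how much variance the common dimensions account for.
assert length(W["subject 3"]) ** 2 > length(W["subject 7"]) ** 2
```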
One of the more important aspects of INDSCAL is the fact that its dimensions are unique, that is, not subject to the rotational indeterminacy characteristic of most two-way MDS procedures involving
the Euclidean metric. INDSCAL recovered dimensions are generally defined uniquely up to what is called an "extended permutation," defined later. In the psychological model, the dimensions are
supposed to correspond to fundamental perceptual or other processes whose strengths, sensitivities, or importances vary among individuals. Mathematically, the family of transformations induced by
allowing differential weighting (which corresponds geometrically to stretching or compressing the space in directions parallel to coordinate axes) will differ for the various orientations of
coordinate axes--that is, the family of admissible transformations is not rotationally invariant, as can be seen graphically by considering what kinds of private spaces might be generated in the case illustrated in Figure 2 if one imagines that the coordinate system of the group stimulus space were rotated, say, 45°. Instead of the square lattice transforming into various rectangular lattices, it would transform into various rhombuses, or diamond-shaped lattices. Rotating the coordinate system by something other than 45° would generate other families of parallelograms, generally a unique
family for each different angle of rotation. These families are genuinely different, because they allow different admissible sets of distances among the objects or stimuli. Statistically speaking, a
rotation (not corresponding to a reflection, permutation, or extended permutation) of the axes generally degrades the solution in the sense that the variance accounted for in fitting the model
decreases after such a rotation, even if optimal weights are recomputed for the rotated coordinate system. This dimensional uniqueness property is important because it obviates the need, in most
cases, to rotate the coordinate system to find an interpretable solution. If one adopts the psychological model underlying INDSCAL, then these statistically unique dimensions should be
psychologically unique as well. Indeed, practical experience has shown that the dimensions obtained directly from INDSCAL are usually interpretable without rotation (even when there is little reason
to believe the underlying model's assumptions). Kruskal (1976) has provided a formal proof of this uniqueness property of INDSCAL (and of a wider class of three-way models of which it is a special
case). Technically, the INDSCAL stimulus space is identified, under very general conditions, up to a permutation and reflection of coordinate axes, followed by a rescaling of all dimensions via a
diagonal scaling matrix (with scale factors that may be either positive or negative). The rescaling transformation is generally resolved via the usual INDSCAL normalization convention, in which
stimulus dimensions are scaled so as to have unit sum of squared (and zero mean) coordinate values; this way only the signs of the scale factors are nonidentified. In practice INDSCAL dimensions are
identified up to a permutation and possible reflection of axes--what we call an extended
permutation. In fact, even the permutation indeterminacy generally is resolved by ordering axes based on a variance accounted for (averaged over all subjects) criterion. Space limitations preclude us
from giving substantive illustrations of fitting the INDSCAL model (or any of the others covered in this chapter). Two protracted analyses are given in Arabie et al. (1987, pp. 12-16, 25-33). Because
of the particular normalization conventions used in the "standard" formulation described earlier, distances in the group stimulus space are not immediately interpretable but must instead be compared
to the interstimulus distances of a hypothetical (or real) subject who weights all dimensions equally. As is so often the case, the (weighted) Euclidean metric in Eq. (4) was chosen for mathematical
tractability, conceptual simplicity, and historical precedence. In many stimulus domains (typically with nonanalyzable or unitary perceptual stimuli, or even with more conceptual analyzable stimuli
when dimensionality becomes large) the Euclidean metric seems to fit quite well (Shepard, 1964). Furthermore, there is considerable evidence that methods based on it are robust, so that even if the
basic metric is non-Euclidean, multidimensional scaling in a Euclidean space may recover the configuration adequately. We regard this particular choice of basic metric, then, as primarily heuristic
and pragmatic, although on many grounds it does seem to be the best single choice we could have made. It is, however, within the spirit of the INDSCAL model to assume a much wider class of weighted
metrics, and Okada and Imaizumi (1980) have provided such a generalization, along with gradient-based software to fit the model. Also, as argued in the discussion of two-way MDS models, among certain
non-Euclidean metrics, the $L_1$ or city-block metric in particular appears to be more appropriate for the more cognitive or conceptual stimulus domains involving analyzable stimuli in which the dimensions are psychologically separable. For this reason we consider an obvious generalization entailing a weighted Minkowski p or power metric of the form

$$d_{jk}^{(i)} = \left[ \sum_{r=1}^{R} w_{ir} \, |x_{jr} - x_{kr}|^p \right]^{1/p}. \qquad (5)$$

According to the rescaling of dimensions, the private space for this generalized $L_p$ model would be defined as

$$y_{jr}^{(i)} = w_{ir}^{1/p} \, x_{jr}. \qquad (6)$$

It is evident that computing the ordinary Minkowski p metric in this rescaled space, now involving the pth root of the weights, is equivalent to the weighted Minkowski p metric in Eq. (5). See Carroll
and Wish (1974b, pp. 412-428) for a technical discussion concerning metrics in MDS.
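The equivalence between the weighted Minkowski metric of Eq. (5) and the ordinary Minkowski metric in the rescaled space of Eq. (6) can be checked numerically; all coordinates and weights below are invented for illustration:

```python
# Weighted Minkowski p metric of Eq. (5) and the p-th-root rescaling of
# Eq. (6); values are invented for illustration only.

def weighted_minkowski(xj, xk, w, p):
    return sum(wr * abs(a - b) ** p for wr, a, b in zip(w, xj, xk)) ** (1.0 / p)

def minkowski(xj, xk, p):
    return sum(abs(a - b) ** p for a, b in zip(xj, xk)) ** (1.0 / p)

xj, xk = (0.0, 2.0), (3.0, 0.0)
w = (4.0, 1.0)

for p in (1.0, 2.0, 3.0):
    # Rescaling each dimension by w_r**(1/p), as in Eq. (6), reproduces
    # the weighted metric of Eq. (5) under the ordinary Minkowski metric.
    yj = tuple(wr ** (1.0 / p) * a for wr, a in zip(w, xj))
    yk = tuple(wr ** (1.0 / p) * a for wr, a in zip(w, xk))
    assert abs(weighted_minkowski(xj, xk, w, p) - minkowski(yj, yk, p)) < 1e-9
```

Note that for p = 2 the rescaling factor is the square root of the weight, recovering the INDSCAL case of Eqs. (3)-(4), while p = 1 gives a weighted city-block metric.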
B. The IDIOSCAL Model and Some Special Cases

The most general in this Euclidean class of models for MDS is what has been called the IDIOSCAL model, standing for Individual Differences in Orientation SCALing (Carroll & Chang, 1970, 1972; Carroll & Wish, 1974a). The intuitive appeal of the IDIOSCAL model is demonstrated by the number of times it, or special cases of it, have been
invented or reinvented (e.g., "PARAFAC-2" by Harshman, 1972a, 1972b; "Three-Mode Scaling," by Tucker, 1972, and other procedures proposed by Bloxom, 1978, and by Ramsay, 1981, incorporating this
general Euclidean metric or some variant of it); indeed, sometimes it has been simultaneously reinvented and renamed (e.g., "the General Euclidean Model" by Young, 1984a). In the IDIOSCAL model, the
recovered distance $d_{jk}^{(i)}$ between objects j and k for the ith source of data is given by

$$d_{jk}^{(i)} = \left[ \sum_{r=1}^{R} \sum_{r'=1}^{R} (x_{jr} - x_{kr}) \, c_{rr'}^{(i)} \, (x_{jr'} - x_{kr'}) \right]^{1/2}, \qquad (7)$$

where r and r' are indices of the R dimensions in the object space and (separately) the source space. This model differs from the INDSCAL model in Eq. (2) by the inclusion of matrix $C^{(i)} \equiv (c_{rr'}^{(i)})$, which is an R x R symmetric positive definite or semidefinite matrix, instead of matrix $W_i$, which is diagonal, with the weights $w_{ir}$ on the diagonals. If each $C_i$ is constrained to be such a diagonal matrix³ $W_i$ with nonnegative entries, then the diagonal entries in the $C_i$ matrices are interpretable as source weights in the INDSCAL formulation of distance, and the INDSCAL model follows as a special case. This result can be seen by noting that if in Eq. (7), $c_{rr'}^{(i)} = w_{ir}$ when r = r', and 0 when r ≠ r', then the terms $(x_{jr} - x_{kr}) c_{rr'}^{(i)} (x_{jr'} - x_{kr'})$ drop out if r ≠ r' and become $w_{ir}(x_{jr} - x_{kr})^2$ for r = r', thus producing the INDSCAL model of Equation (2). In the general IDIOSCAL model, $C_i$ provides a rotation of the object space to a new (or IDIOsyncratic) coordinate system for
source i, followed by differential weighting of the dimensions of this rotated coordinate system. In the Carroll and Chang (1970, pp. 305-310; 1972) approach to interpreting the model, this rotation
will be orthogonal. The alternative approach suggested independently by Tucker (1972) and by Harshman (1972a, 1972b) entails no such rotation but assumes differing correlations (or, more
geometrically, cosines of angles) between the same dimensions of the object space over different sources. (Further details on the two interpretations of the $C_i$ matrices are given by Carroll and Wish, 1974a, and in the source articles; also see de Leeuw and Heiser, 1982.)

3 We note here that the matrix $W_i$ is an R x R diagonal matrix for the ith subject whereas, previously, the symbol W has been used to denote the I x R matrix of weights for the I subjects on the R dimensions (so that the ith row of W contains the diagonal entries of $W_i$).
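A small numeric check of the general quadratic form in Eq. (7), and of its collapse to the INDSCAL model when $C_i$ is diagonal, can be sketched as follows. The coordinates and matrices are invented for illustration:

```python
import math

def idioscal_distance(xj, xk, C):
    """Double-sum quadratic-form distance of Eq. (7)."""
    R = len(xj)
    d2 = sum((xj[r] - xk[r]) * C[r][s] * (xj[s] - xk[s])
             for r in range(R) for s in range(R))
    return math.sqrt(d2)

xj, xk = (1.0, 0.0), (0.0, 1.0)

# With a diagonal nonnegative C_i, the double sum of Eq. (7) collapses to
# the INDSCAL weighted distance of Eq. (2), with weights on the diagonal.
C_diag = [[4.0, 0.0], [0.0, 1.0]]
indscal = math.sqrt(4.0 * (1.0 - 0.0) ** 2 + 1.0 * (0.0 - 1.0) ** 2)
assert abs(idioscal_distance(xj, xk, C_diag) - indscal) < 1e-12

# A symmetric positive definite C_i with off-diagonal entries additionally
# mixes the dimensions, changing the implied distances.
C_full = [[4.0, 1.0], [1.0, 1.0]]
assert idioscal_distance(xj, xk, C_full) != idioscal_distance(xj, xk, C_diag)
```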
In vector and matrix form, this model can be written as

$$d_{jk}^{(i)} = \left[ (\mathbf{x}_j - \mathbf{x}_k) \, C_i \, (\mathbf{x}_j - \mathbf{x}_k)' \right]^{1/2}, \qquad (8)$$

where $C_i \equiv (c_{rr'}^{(i)})$ is an R x R matrix. The matrix $C_i$ is generally assumed to be symmetric and positive definite or semidefinite. This metric is exactly what we would obtain if we defined a private perceptual space for individual i by a general linear transformation defined as

$$y_{jr}^{(i)} = \sum_{s=1}^{R} x_{js} \, q_{sr}^{(i)}, \qquad (9)$$

which in vector-matrix notation is

$$\mathbf{y}_j^{(i)} = \mathbf{x}_j Q_i, \qquad (10)$$

and we then computed ordinary Euclidean distances in these private spaces. Matrix $C_i$ in Eq. (8) will, in this case, simply be

$$C_i = Q_i Q_i', \qquad (11)$$

because

$$\left[ d_{jk}^{(i)} \right]^2 = \left( \mathbf{y}_j^{(i)} - \mathbf{y}_k^{(i)} \right)\left( \mathbf{y}_j^{(i)} - \mathbf{y}_k^{(i)} \right)' = (\mathbf{x}_j - \mathbf{x}_k) \, Q_i Q_i' \, (\mathbf{x}_j - \mathbf{x}_k)', \qquad (12)$$
which is equivalent to Eq. (8) with $C_i$ as defined in Eq. (11). Another closely related interpretation is provided by decomposing the (symmetric, positive definite) matrix $C_i$ into a product of the form

$$C_i = T_i \Delta_i T_i', \qquad (13)$$

with $T_i$ orthogonal and $\Delta_i$ diagonal. (This decomposition, based on the singular value decomposition and closely related to principal components analysis, can always be effected. If the $C_i$'s are positive definite or semidefinite, the diagonal entries of $\Delta_i$ will be nonnegative.) Then we can define

$$\Phi_i = T_i \Delta_i^{1/2}, \qquad (14)$$

and clearly

$$C_i = \Phi_i \Phi_i'. \qquad (15)$$

Actually, $\Phi_i$ provides just one possible definition of the matrix $Q_i$ in Eq. (10). Given any orthogonal matrix F, we may define

$$Q_i = \Phi_i F, \qquad (16)$$

and it will turn out that

$$Q_i Q_i' = \Phi_i F F' \Phi_i' = \Phi_i \Phi_i' = C_i.$$

Any $Q_i$ satisfying Eq. (11) can be shown to be of the form stipulated in Eq. (16), but the decomposition of $C_i$ defined in Eqs. (13)-(14) or (15) (with F as the identity matrix) leads to a particularly convenient geometric interpretation. $T_i$ can be viewed as defining an orthogonal rotation of the reference frame, and thus of the Individual Differences In Orientation (of the reference system) referred to earlier. The diagonal entries of $\Delta_i$ can be interpreted as weights analogous to the $w_{ir}$'s in the INDSCAL model that are now applied to this IDIOsyncratic reference frame. The considerable
intuitive appeal of the IDIOSCAL model notwithstanding, it has empirically yielded disappointing results in general. A major practical drawback of using the IDIOSCAL model is the potential need to
provide a separate figure (or set of them) for the spatial representation of each source. Young's (1984a) approach to fitting what he called the "General Euclidean Model," specifically in the form of
his "Principal Directions Scaling," can be viewed as a special case of IDIOSCAL in which the $C_i$ matrix for each subject is positive semidefinite, with rank $R_i$ less than R (generally $R_i = 2$). Young assumes each subject projects the IDIOSCAL-type stimulus space defined by X into an $R_i$-dimensional subspace, so that in this model $Y_i = X \Phi_i$, where $\Phi_i$ is an R x $R_i$ projection matrix (so $\Phi_i' \Phi_i = I_{R_i}$; that is, $\Phi_i$ is an orthonormal section projecting orthogonally from X into an $R_i$-dimensional subspace, $Y_i$). In this case $C_i = \Phi_i \Phi_i'$ will be positive semidefinite and is of rank $R_i$. The main advantage of this particular special case of IDIOSCAL appears to be that it enables the graphic representation of each subject's private perceptual space in (usually the same) smaller dimensionality, typically two. It is not clear, however, that this model has a convincing rationale beyond this practical graphical advantage (see Easterling, 1987, for a successful analysis). Other
models closely related to IDIOSCAL are discussed at length in Arabie et al. (1987, pp. 44-53), but one final three-way model for proximities that bears mentioning generalizes the IDIOSCAL model by
adding additional parameters associated with the stimuli (or other objects): the PINDIS (Procrustean INdividual Differences Scaling) model and method of Lingoes and Borg (1978). PINDIS adds to the parameters of the IDIOSCAL model a set of weights for stimuli, so the model for an individual, in the scalar product domain, is of the form $B_i \cong A_i X C_i X' A_i$, where $B_i$ is an n x n matrix of scalar products among the n stimuli for subject/source i, whereas $A_i$ is an n x n diagonal matrix of rescaling weights for stimuli. (Although we shall not demonstrate the result here, the IDIOSCAL
model in the scalar product domain is of this form, but with $A_i = I$ for all i, so that, in effect, the pre- and postmultiplication by $A_i$ is omitted.) The interpretation of these additional
parameters is difficult to justify on psychological grounds. Even more parameters defining different translations of the coordinates of each individual or other source of data, i, are allowed in the
general formulation of PINDIS in its scalar product form. Geometrically, the rescaling parameters for stimuli have the effect of moving each stimulus closer to or farther from the centroid in the
stimulus space; they do this by multiplying the coordinates by the weight associated with that object. It is hard to envision a psychological mechanism to account for such nonuniform dilations.
Moreover, Commandeur (1991, pp. 8-9) provides a trenchant and compelling algorithmic critique of PINDIS. Thus, we pursue this model and method no further.
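Before leaving the IDIOSCAL family, the decomposition in Eqs. (13)-(16), and the resulting nonuniqueness of $Q_i$, can be verified numerically. The matrices T, Delta, and F below are invented, and small 2 x 2 helpers keep the sketch self-contained:

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def rotation(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

T = rotation(0.5)                  # orthogonal rotation of the frame
Delta = [[4.0, 0.0], [0.0, 1.0]]   # nonnegative diagonal weights

# Eq. (13): build C_i = T Delta T'.
C = matmul(matmul(T, Delta), transpose(T))

# Eq. (14): Phi = T Delta**(1/2); Eq. (15) then gives Phi Phi' = C.
Delta_half = [[2.0, 0.0], [0.0, 1.0]]
Phi = matmul(T, Delta_half)

# Eq. (16): any Q = Phi F with F orthogonal also satisfies Q Q' = C,
# which is why Q_i itself is not uniquely determined.
Q = matmul(Phi, rotation(1.2))

for M in (matmul(Phi, transpose(Phi)), matmul(Q, transpose(Q))):
    for i in range(2):
        for j in range(2):
            assert abs(M[i][j] - C[i][j]) < 1e-12
```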
C. Available Software for Two- and Three-Way MDS

1. The Two-Way Case

KYST2A (Kruskal, Young, & Seery, 1973) is the dominant software for two-way MDS. The acronym stands for "Kruskal, Young, Shepard, and Torgerson," and the software synthesizes some of the best parts of various approaches to nonmetric (two-way) MDS that these four contributors have proposed. These algorithms are described in
great detail in the previously cited references, so they will not be further described here. The important distinctions are the following: 1. KYST2A minimizes a criterion Kruskal calls STRESS. The
standard version of STRESS, often called STRESSFORM1, is defined as

$$\text{STRESSFORM1} = \left( \frac{\sum_{jk} (d_{jk} - \hat{d}_{jk})^2}{\sum_{jk} d_{jk}^2} \right)^{1/2},$$

where $d_{jk}$ is the Euclidean distance in the recovered configuration (i.e., $d_{jk} = [\sum_{r=1}^{R} (x_{jr} - x_{kr})^2]^{1/2}$, where the recovered configuration has coordinates $x_{jr}$, for j = 1, 2, ..., n, r = 1, 2, ..., R) and $\hat{d}_{jk}$ is, depending on the user's specification, a linear, monotonic, or other function of the input similarity, $s_{jk}$, or dissimilarity, $\delta_{jk}$, of j and k (a decreasing or nonincreasing function in the former case and an increasing or nondecreasing function in the latter). STRESSFORM2 differs only in the normalization factor in the denominator, which is $\sum_{jk} (d_{jk} - \bar{d})^2$, where $\bar{d}$ is the mean of the $d_{jk}$'s. All sums (and the mean if STRESSFORM2 is used) are over only the values of j and k for
which data are given. Generally, the diagonals ($s_{jj}$ or $\delta_{jj}$) or self-similarities/dissimilarities are undefined and therefore are treated as missing data (so that sums and means exclude those diagonal values as well). 2. KYST2A allows both metric and nonmetric fitting (and, in fact, includes options for other than either linear or general monotonic functions transforming data into estimated distances; the most important special case allows polynomial functions up to the fourth degree, but such generalized linear functionals are not necessarily monotonic). KYST2A allows still other options (see Kruskal et al., 1977, for details) for analyzing three-way data, but fitting only two-way or nonindividual differences models to all subjects or other sources, as well as for performing what Coombs (1964) and others call "multidimensional unfolding" (to be discussed later). 3. KYST2A allows fitting of metrics other than Euclidean--specifically the "Minkowski p," or $L_p$, metric of the form given in Eq. (1). In practice, the only two values of p that are used at all frequently are p = 2, the Euclidean case, and, quite inappropriately, p = 1, the city-block or Manhattan metric case (see Arabie, 1991, for a review). As noted earlier, however, Hubert and Arabie (1988; Hubert, Arabie, & Hesson-McInnis, 1992) demonstrated that the problem of fitting an $L_1$ or city-block metric is more appropriately approached via combinatorial optimization.4
more appropriately approached via combinatorial optimization. 4 Another available algorithm for two-way nonmetric MDS is Heiser and de Leeuw's (1979) SMACOF (Scaling by MAjorizing a COmplicated
Function) procedure, based on a majorization algorithm (see de Leeuw and Heiser, 1980, for details), which we will not discuss here except to say that SMACOF optimizes a fit measure essentially
equivalent to Kruskal's STRESS. Majorization is an important algorithmic approach deserving much more coverage than space allows. Important references include de Leeuw (1988), Groenen (1993),
Groenen, Mathar, and Heiser (1995), Heiser (1991, 1995), Kiers (1990), Kiers and ten Berge (1992), and Meulman (1992). Wilkinson's (1994) SYSTAT allows many options and considerable flexibility for
two-way MDS. Two other valuable algorithmic developments in two-way (and three-way) MDS are the ALSCAL (Takane et al., 1977) procedure and Ramsay's (1978) MULTISCALE. ALSCAL (for Alternating Least squares SCALing) differs from previous two-way MDS algorithms in such ways as (1) its loss function, (2) the numerical technique of alternating least squares (ALS) used earlier by Carroll and Chang (1970) and originally devised by Wold (1966; also see de Leeuw 1977a, and de Leeuw & Heiser 1977), and (3) its allowance for nominal scale (or categorical) as well as interval and ordinal scale data.

4 For other combinatorial approaches to MDS, see Hubert and Schultz (1976), Poole (1990), and Waller, Lykken, and Tellegen (1995).

ALSCAL and MULTISCALE are also applicable to two-mode three-way data, and a three-way version of SMACOF is under
development. All three programs will be considered again under spatial distance models for such data. MULTISCALE (MULTidimensional SCAL[E]ing), Ramsay's (1977b, 1978a, 1978b, 1980, 1981, 1982a, 1983)
maximum-likelihood-based procedure, although strictly a metric approach, has statistical properties that make it potentially much more powerful as both an exploratory and (particularly) a
confirmatory data analytic tool. MULTISCALE, as required by the maximum likelihood approach, makes very explicit assumptions regarding distribution of errors and the relationship of the parameters of
this distribution to the parameters defining the underlying spatial representation. One such assumption is that the dissimilarity values δ_jk are lognormally distributed over replications, but
alternative distributional assumptions are also allowed. The major dividend from Ramsay's (1978) strong assumptions is that the approach enables statistical tests of significance that include, for
example, assessment of the correct dimensionality appropriate to the data (via an asymptotically valid chi square test of significance for three-way data treated as replications) while fitting a
two-way model. Another advantage is the resulting confidence regions for gauging the relative precision of stimulus coordinates in the spatial representation. The chief disadvantage is the very
strong assumptions entailed for the asymptotic chi squares or confidence regions to be valid. Not least of these is the frequent assumption of ratio scale dissimilarity judgments. In addition, there
is the assumption of a specific distribution (log normal, normal, or others with specified parameters) and of statistical independence of the dissimilarity judgments.
2. The Three-Way Case
The most
widely used approach to fitting the three-way INDSCAL model is the method implemented in the computer program SINDSCAL (for Symmetric INDSCAL, written and documented by Pruzansky, 1975), which
updated the older INDSCAL program of Chang and Carroll (1969a, 1989). SINDSCAL begins with some simple preprocessing stages, initially derived by Torgerson and his colleagues (Torgerson, 1952, 1958)
for the two-way case (also see Gower, 1966, and Keller, 1962). The first step, based on the assumption that the initial data are defined on at most an interval scale (so that the origin of the scale is
arbitrary, leading to the similarities/dissimilarities being related to distances by an inhomogeneous linear function), involves solving the so-called additive constant problem. Then a further
transformation of the resulting one-mode two-way matrix of estimated
J. Douglas Carroll and Phipps Arabie
distances to one of estimated scalar products is effected. (See Torgerson, 1952, 1958, or Arabie et al., 1987, pp. 71-77, for further details on these preprocessing steps.) In two-way classical metric MDS, as described by Torgerson (1952, 1958) and others, the derived (estimated) scalar product matrix is thus simply subjected to a singular value decomposition (SVD), which is mathematically equivalent to a principal components analysis of a correlation or covariance matrix, to obtain an estimate, X̂, of the n × R matrix X of coordinates of the n stimuli in R dimensions, by minimizing what
has been called the STRAIN criterion:

\[ \mathrm{STRAIN} = \left\| B - \hat{X}\hat{X}' \right\|^{2} = \sum_{j} \sum_{k} \left( b_{jk} - \hat{b}_{jk} \right)^{2}, \quad \text{where } \hat{b}_{jk} = \sum_{r=1}^{R} \hat{x}_{jr} \hat{x}_{kr}. \]
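The classical two-way computation just described (double-centering the squared distances to derive scalar products, then an eigendecomposition) is straightforward to sketch numerically. The following is our own illustration in Python/NumPy, not the code of any program discussed in this chapter; the function names are invented:

```python
import numpy as np

def classical_mds(D, R=2):
    """Classical (Torgerson) metric MDS sketch: convert squared distances to
    derived scalar products by double-centering, then keep the top-R
    eigenvectors (equivalent to an SVD of the symmetric matrix B)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # derived scalar products
    evals, evecs = np.linalg.eigh(B)             # eigenvalues in ascending order
    idx = np.argsort(evals)[::-1][:R]            # keep the R largest
    X = evecs[:, idx] * np.sqrt(np.maximum(evals[idx], 0.0))
    return X, B

def strain(B, X):
    """Raw (unnormalized) STRAIN: ||B - X X'||^2."""
    return np.sum((B - X @ X.T) ** 2)

# Four points forming a unit square are exactly 2-dimensional, so STRAIN ~ 0.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
X, B = classical_mds(D, R=2)
print(round(strain(B, X), 8))  # → 0.0
```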
This approach yields a least-squares measure of fit between the derived scalar products B = (b_jk) and the estimated scalar products B̂ = (b̂_jk). (In some cases, e.g., when fitting nonmetrically, it might be necessary to normalize STRAIN by, say, dividing by the sum of squared entries in the B matrix; but for the current metric case, and with the preprocessing described earlier, we may use this raw unnormalized form without loss of generality.) In the three-way case, preprocessing entails these same steps for each similarity or dissimilarity matrix, S_i or Δ_i, respectively, converting an initial three-way array S (of similarities) or Δ (of dissimilarities) into a three-way array B of derived scalar products, where each two-way slice, B_i, is a symmetric matrix of derived (estimated) scalar products for the ith subject or other source of data. CANDECOMP, as applied in this case, optimizes a three-way generalization of the STRAIN criterion discussed earlier, namely,
\[ \mathrm{STRAIN} = \sum_{i} \sum_{j} \sum_{k} \left( b^{(i)}_{jk} - \hat{b}^{(i)}_{jk} \right)^{2} = \sum_{i} \mathrm{STRAIN}_i, \]

where STRAIN_i is STRAIN defined for the ith subject or source and where, if the usual matrix normalization option is used, the constraint

\[ \sum_{j} \sum_{k} \left( b^{(i)}_{jk} \right)^{2} = 1 \quad \text{for all } i \]

is imposed,

\[ \hat{b}^{(i)}_{jk} = \sum_{r=1}^{R} \hat{w}_{ir} \hat{x}_{jr} \hat{x}_{kr} \]

is a generalized (weighted) scalar product, and the parameters ŵ_ir and x̂_jr are (estimates of) the same parameters (without the "hats") as those entering the
weighted Euclidean distance defined for INDSCAL in Eq. (2), as demonstrated in Carroll and Chang (1970) and elsewhere (e.g., Appendix B, Arabie et al., 1987). The INDSCAL/SINDSCAL approach to metric
three-way MDS then applies a three-way generalization of the SVD, called (three-way) CANDECOMP (for CANonical DECOMPosition of N-way arrays), to the array B, to produce estimates (minimizing the least-squares STRAIN criterion) X and W, respectively, of the group stimulus space and the subject weight space. For details of this CANDECOMP procedure and its application to the estimation of parameters of the INDSCAL model, see Carroll and Chang (1970) or Arabie et al. (1987). Probably the most widely used approach for nonmetric fitting of the INDSCAL model is ALSCAL (Takane et al.,
1977), which fits the model by optimizing a criterion called SSTRESS, analogous to Kruskal's STRESS, except that it is a normalized least-squares criterion of fit between squared distances (in the
fitted configuration) and monotonically transformed data (called "disparities" by Takane et al.). For each subject or data source, SSTRESS is defined analogously to Kruskal's STRESS (FORM1), except
that, again, squared Euclidean distances replace first-power distances. Another difference, irrelevant to the solutions obtained but definitely important vis-a-vis interpretation of values of
SSTRESS, is that the square root of the normalized least-squares loss function defines STRESS, whereas SSTRESS is the untransformed normalized least-squares criterion of fit. Thus, to the extent that
SSTRESS is comparable to STRESS (FORM1) at all, SSTRESS should be compared with squared STRESS. In the three-way case, overall SSTRESS is essentially a (possibly weighted) sum of SSTRESS_i, where SSTRESS_i is the contribution to the SSTRESS measure from subject/source i. As in the case of KYST2A, ALSCAL allows either monotonic or linear transformations of the data, in nonmetric or metric
versions, respectively. See Young and Lewyckyj (1981) for a description of the most recent version of the ALSCAL program. In a recently published Monte Carlo study, Weinberg and Menil (1993) compared
recovery of structure of SINDSCAL to that by ALSCAL, under conditions in which both metric and nonmetric analyses were appropriate. Because SINDSCAL allows only metric analyses, even if only ordinal
scale data are given, one would expect ALSCAL to be superior in recovering configurations under such ordinal scale conditions because ALSCAL allows a more appropriate nonmetric analysis whereas
SINDSCAL necessarily treats the data (inappropriately) as interval scale. It is not clear which of the two should yield better recovery of configurations in the case of interval scale data, because
both can allow (appropriate) metric analyses in this case. Surprisingly, the Weinberg and Menil (1993) Monte Carlo study found that SINDSCAL was superior in recovery both of the stimulus
configuration and of subject weights, in the case both of interval and of ordinal scale
data (with some fairly severely nonlinear monotonic transformations of the data). The Weinberg and Menil findings may confirm some preliminary results reported by Hahn, Widaman, and MacCallum (1978),
at least in the case of mildly nonlinear ordinal data. The explanation of this apparent anomaly appears to rest in the SSTRESS loss function optimized by ALSCAL, probably because SSTRESS measures the
fit of transformed data to squared rather than first power distances; the squaring evidently tends to put too much weight on the large distances. A STRESS-based three-way approach might do better in
this respect, but unfortunately no such methodology exists at present. Willem Heiser (personal communication) has indicated that he and his colleagues expect eventually to have a three-way version of
SMACOF available, which should fill this void. Version 6 of SYSTAT (Wilkinson, 1994) included, for the first time, software for nonmetric fitting of the INDSCAL model. It is too early to evaluate
SYSTAT's performance in this particular domain, but we note that the example given in the documentation (Wilkinson, 1994, p. 140) erroneously suggests that both the subjects and the stimuli are
positioned in the same space, rather than in disjoint spaces having a common dimensionality. Another widely available program for both two- and three-way MDS is Ramsay's (1978, 1982a) MULTISCALE,
briefly discussed earlier, which generally assumes ratio scale data, and fits via a maximum likelihood criterion, assuming either additive normal error or a lognormal error process. Although a power
transformation is allowed, Ramsay's approach generally entails only metric options and in fact makes even stronger metric assumptions than other metric approaches in that it generally requires ratio
scale, not the weaker form of interval scale proximity data generally assumed in metric MDS. The main advantage of Ramsay's approach is that it does utilize a maximum likelihood criterion of fit and
thereby allows many of the inferential statistics associated with that approach, notably the asymptotic chi square tests that can be used to assess the statistical significance of various effects.
(This advantage is undermined somewhat by the fact that the additional parameters associated with subjects or other sources of data in the three-way case can be regarded as nuisance parameters, whose
number increases with the number of subjects/sources, thus violating one of the key assumptions on which the asymptotic behavior of the asymptotic chi square is based. Ramsay, 1980, however, provided
some Monte Carlo results that led to adjustments in the degrees of freedom for the associated statistical tests that correct, at least in part, for this problem.) Ramsay (1982b) and Winsberg and
Ramsay (1984) also introduced a quasi-nonmetric option in MULTISCALE, in which the proximity data are transformed via a monotone spline function or functions in the case of matrix conditional
three-way data, which, incidentally, can include an inhomogeneous linear function as a special case (thus allowing for more general metric fitting). But this option is not available in most versions of MULTISCALE. It is important, however, to note that this quasi-nonmetric option in MULTISCALE is quite different from the one
introduced by Winsberg and Carroll (1989a, 1989b) and Carroll and Winsberg (1986, 1995) in their extended Euclidean two-way MDS. It also differs from the extended INDSCAL (or EXSCAL) approach,
described next. D. The Extended Euclidean Model and Extended I N D S C A L
Winsberg and Carroll (1989a, 1989b) and Carroll and Winsberg (1986, 1995) proposed an extension of the simple Euclidean model for two-way proximities and of the INDSCAL model for three-way
proximities for which the continuous dimensions of common space are supplemented by a set of specific dimensions, also continuous, but relevant only to individual stimuli or other objects. Here we
state the extended model for distances for the three-way, extended INDSCAL case, because the two-way extended model is a special case:

\[ d^{(i)}_{jk} = \left[ \sum_{r=1}^{R} w_{ir} \left( x_{jr} - x_{kr} \right)^{2} + \sigma_{ij} + \sigma_{ik} \right]^{1/2}, \]
where σ_ij, called the "specificity" of stimulus j for subject i, is the sum of squares of coordinates of the specific dimensions for subject i on stimulus j. Note that we cannot tell in this model how many specific dimensions pertain to a given subject-stimulus combination, only their total effect on (squared) distances in the form of this specificity. Winsberg and Carroll have adduced both
theoretical and strong empirical evidence for the validity of this extended (ordinary or weighted) Euclidean model. They discussed the topic in a series of papers on maximum likelihood methods for
either metric or quasi-nonmetric fitting of both the two- and three-way versions of this extended model. As noted in considerable detail in Carroll and Winsberg (1995), there are theoretical reasons
why the now classical approach to nonmetric analysis pioneered by Kruskal (1964a, 1964b), in which a totally general monotonic function (or functions, in the three-way case) of the data is sought optimizing either of two forms of Kruskal's STRESS measure (or a large class of other STRESS-like fit measures), cannot be used for nonmetric fitting of these extended models. The basis of this
assertion is the existence of theoretical degeneracies or quasidegeneracies (solutions yielding apparent perfect or near-perfect fit, but retaining essentially none or very little of the information
in the original data) that can always be obtained via such a fully nonmetric fitting. Instead, Winsberg and Carroll (1989a, 1989b) and Carroll and Winsberg (1986, 1995) use a form of quasi-nonmetric
fitting in which very (though not
totally) general monotonic functions constrained to be continuous and to have continuous first and possibly second derivatives are applied to the distances derived from the model, rather than to the
data. Winsberg and Carroll use monotone splines, which can be constrained to have any desired degree of continuity of the function and its derivatives, although other classes of functions possessing these
desiderata could also be utilized. For complete details, see Carroll and Winsberg (1995). Also, see the discussion of the primordial model presented later in this chapter. Carroll (1988, 1992) has
also demonstrated that similar degeneracies would affect attempts at fully nonmetric fitting of discrete models (e.g., ADCLUS/INDCLUS, or tree structures), to be discussed later, and that such
quasi-nonmetric fitting would be appropriate here as well. In fact, we argue that even in more well-behaved cases, such as fitting the ordinary two-way Euclidean or three-way INDSCAL model,
quasi-degeneracies tend to occur in the case of fully nonmetric fitting, so that such quasi-nonmetric fitting may be more appropriate even in standard MDS. The essence of such quasi-nonmetric fitting
is twofold: (1) the monotone function is applied on the model side (as seems more appropriate statistically, in any case), not to the data, and (2) a less than totally general class of monotone
functions, such as monotone splines, is utilized so that continuity of the function and at least some of its derivatives can be guaranteed. Concerning the extended simple and weighted Euclidean
models assumed in this work, such extensions, entailing assumptions of dimensions specific to particular stimuli in addition to common dimensions, can be made for such other generalized Euclidean
models as IDIOSCAL, three-mode scaling, and PARAFAC-2, or even to non-Euclidean models such as those based on city-block or other Lp metrics.
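The extended (weighted) Euclidean distances of this section are easy to compute once a common space, subject weights, and specificities are given. The following is a minimal numerical sketch, with all values invented for illustration; it is not any of the EXSCAL software discussed here:

```python
import numpy as np

def exscal_distance(X, w_i, sigma_i):
    """Distances under the extended (EXSCAL) model for one subject i:
    d_jk^2 = sum_r w_ir (x_jr - x_kr)^2 + sigma_ij + sigma_ik  (j != k)."""
    diff2 = (X[:, None, :] - X[None, :, :]) ** 2       # (n, n, R)
    d2 = np.tensordot(diff2, w_i, axes=([2], [0]))     # weighted common part
    d2 += sigma_i[:, None] + sigma_i[None, :]          # add the specificities
    np.fill_diagonal(d2, 0.0)                          # self-distance is zero
    return np.sqrt(d2)

X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])    # n = 3 stimuli, R = 2
w_i = np.array([1.0, 4.0])                            # subject's dimension weights
sigma_i = np.array([0.25, 0.25, 0.0])                 # specificities sigma_ij
D_i = exscal_distance(X, w_i, sigma_i)
print(D_i[0, 1])   # sqrt(1*1 + 4*1 + 0.25 + 0.25) = sqrt(5.5)
```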
E. Discrete and Hybrid Models for Proximities
In addition to the continuous spatial models so closely associated with traditional MDS, nonspatial models (which are still geometric in the generic sense of being distance models) entailing discrete, rather than continuous, parameters can also be used profitably for representing proximity data (see, e.g., Gordon, 1996, and other chapters in Arabie, Hubert, and De Soete, 1996; S. C. Johnson, 1967; Hartigan, 1967; Kruskal and Carroll, 1969; Carroll and Chang, 1973; Carroll, 1976; Carroll and Pruzansky, 1975, 1980, 1983, 1986; De Soete &
Carroll, 1996; Shepard and Arabie, 1979; Carroll and Arabie, 1980; Arabie et al., 1987). As already argued, such discrete (or "feature") representations may be more appropriate than continuous
spatial models for conceptual or cognitive stimuli. A large number of these discrete models are special cases of a model originally formulated by Shepard and Arabie (1979; see also Shepard, 1974)
called ADCLUS, for ADditive CLUStering, and generalized to the three-way, individual differences case by Carroll and Arabie (1983) in the form of the INDCLUS (INdividual Differences CLUStering) model. We can state both models in the single equation

\[ s^{(i)}_{jk} \approx \hat{s}^{(i)}_{jk} = \sum_{r=1}^{R} w_{ir} p_{jr} p_{kr} + g_i, \tag{20} \]

where s^(i)_jk = the proximity (similarity or other measure of closeness) of stimuli (or other objects) j and k for subject (or other source of data) i (j, k = 1, 2, ..., n; i = 1, 2, ..., I). Note that ŝ^(i)_jk is the model estimate of s^(i)_jk, and "≈" means approximately equals except for error terms that will not be further specified here. In addition, p_jr = a binary (0, 1) valued variable defining membership (p_jr = 1) or nonmembership (p_jr = 0) of stimulus (or other object) j in class or cluster r (j = 1, 2, ..., n; r = 1, 2, ..., R); w_ir = a (continuous, nonnegative) importance weight of class or cluster r for the proximity judgments (or other measurements) of subject (or other source of data) i; g_i = an additive constant for subject (source) i or, alternatively, that subject's weight for the universal class or cluster of which all the stimuli are members; and R = the number of classes or clusters (excluding the universal one).
Equation (20) gives the basic form of the three-way INDCLUS model; ADCLUS is simply the two-way special case in which I = 1, so if desired we may drop the "i" subscript. It might be noted immediately that the ADCLUS/INDCLUS model as stated in Eq. (20) is algebraically of the same form as the scalar product form of the INDSCAL model given in Eq. (18). We simply substitute b^(i)_jk = s^(i)_jk and x_jr = p_jr, while we set g_i = 0 for all i; and because we are concerned here with the models themselves, we may, conceptually, remove the hats from Equation (18), of course! In the INDSCAL approach the b's, or (approximate) scalar products, can be interpreted as proximity measures derived from directly judged similarities or dissimilarities. From the purely algebraic perspective, the ADCLUS/INDCLUS models can be viewed as scalar product models for proximities s^(i)_jk, but with dimension coordinates (x_jr = p_jr) constrained to be binary (0, 1) rather than continuous. Thus, this particular discrete model for proximities can be viewed simply as a special case of the scalar product form of the continuous, spatial model discussed earlier, and most typically associated with MDS, albeit with the simple and straightforward constraint that the dimensions' coordinates must be discrete (specifically, binary). An interpretation of the ADCLUS/INDCLUS model was provided by Shepard and Arabie (1979) using what is sometimes called a "common features" model, which can be viewed as a special case of Tversky's (1977)
features of similarity model. Each of the R classes or clusters can potentially be identified with what Shepard (1974) called an attribute or what Tversky (1977) later dubbed a feature, which each stimulus or other object either has or does not have (a kind of all-or-none dimension, that is). The similarity (proximity) of two objects is incremented for subject (source) i by an amount defined by the weight (w_ir) associated with that particular subject/attribute combination if both objects have the attribute, but it is not incremented if either one fails to possess it. This model defines the similarity of a pair of objects as a weighted count of the common attributes of those two objects, an intuitively quite compelling model. As with INDSCAL, in the three-way, individual differences case, the subjects or other sources are differentiated by the profiles of (cluster) weights characterizing the individual subjects. Arabie and Carroll (1980) devised the MAPCLUS algorithm, the
most widely used method for fitting the two-way ADCLUS special case of this model. Published data analyses using MAPCLUS include examples from psychoacoustics (Arabie & Carroll, 1980), marketing
(Arabie, Carroll, DeSarbo, & Wind, 1981), and sociometry (Arabie & Carroll, 1989); other references are given in Arabie and Hubert (1996, p. 14). A more widely used method for the discrete representation of similarity data is hierarchical clustering (Gordon, 1996; Hartigan, 1967; Johnson, 1967; Lance & Williams, 1967), which yields a family of clusters such that either two distinct clusters are disjoint or one includes the other as a proper subset. In the usual representation, the objects being clustered appear as terminal nodes of an inverted tree (known as a dendrogram),
clusters correspond to internal nodes, and the reconstructed distance between two objects is the height of the internal node constituting their meeting point. The model implies that, given two
disjoint clusters, all recovered distances between objects in the same cluster are smaller than distances between objects in the two different clusters, and that for any given pair of clusters these
between-cluster distances are equal; all triangles are therefore acute isosceles (isosceles with the two larger distances equal). This property is equivalent to the ultrametric inequality, and the
tree representation is called an ultrametric tree. The ultrametric inequality (u.i.) states that, for ultrametric distances h, h_jk ≤ max(h_jl, h_kl) for all triples j, k, l. Therefore it is equally valid to write

\[ \hat{\delta}^{(i)*}_{jk} = \sum_{r=1}^{R} w^{*}_{ir} \left( p_{jr} - p_{kr} \right)^{2} \quad \left[ \text{or } \hat{\delta}^{(i)*}_{jk} = \sum_{r=1}^{R} w^{*}_{ir} \left| p_{jr} - p_{kr} \right|^{p}, \text{ for } p > 0 \right], \]

so that δ̂* can with equal validity be viewed as a weighted city-block, or L1, metric defined on the (discrete) space whose coordinates are defined by P = (p_jr), or as a weighted squared Euclidean metric defined on the same space (or, indeed, as any weighted Lp or Minkowski-p metric, raised to the pth power). We now utilize the definition of δ̂* in Eq. (25), as squared Euclidean distances, for mathematical convenience. Expanding,

\[ \hat{\delta}^{(i)*}_{jk} = \sum_{r} w^{*}_{ir} \left( p_{jr}^{2} - 2 p_{jr} p_{kr} + p_{kr}^{2} \right) \]
\[ = \sum_{r} w^{*}_{ir} p_{jr}^{2} + \sum_{r} w^{*}_{ir} p_{kr}^{2} - 2 \sum_{r} w^{*}_{ir} p_{jr} p_{kr} \]
\[ = \sum_{r} w^{*}_{ir} p_{jr} + \sum_{r} w^{*}_{ir} p_{kr} - 2 \sum_{r} w^{*}_{ir} p_{jr} p_{kr} \quad (\text{since } p^{2} = p, \text{ given } p \text{ binary}), \]

or

\[ \hat{\delta}^{(i)*}_{jk} = \hat{\delta}^{(i)}_{jk} + u_{ij} + u_{ik}, \]

where u_ij = u*_ij − (t_i − g_i)/2 = Σ_r w*_ir p_jr − t*_i, with w_ir = 2w*_ir and t*_i = (t_i − g_i)/2. As stated earlier, δ̂^(i)_jk = t_i − ŝ^(i)_jk, with t_i as defined in Eq. (22), whereas ŝ^(i)_jk and g_i are as defined in Eq. (20).
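The algebraic identity just derived (as reconstructed here from the garbled original) can be spot-checked numerically; the following small Python/NumPy experiment uses invented values throughout:

```python
import numpy as np

# Spot-check: delta*_jk = delta_jk + u_ij + u_ik for one subject i.
rng = np.random.default_rng(0)
n, R = 5, 3
P = rng.integers(0, 2, size=(n, R)).astype(float)   # binary features p_jr
w_star = rng.uniform(0.5, 2.0, size=R)              # weights w*_ir
g, t = 0.3, 10.0                                    # constants g_i and t_i
w = 2.0 * w_star                                    # w_ir = 2 w*_ir

s_hat = P @ np.diag(w) @ P.T + g                    # common-features similarities, Eq. (20)
d_hat = t - s_hat                                   # delta_jk = t_i - s_jk

# Distinctive-features (weighted city-block) distances on the binary space:
d_star = np.abs(P[:, None, :] - P[None, :, :]) @ w_star

# Uniquenesses u_ij = sum_r w*_ir p_jr - (t_i - g_i)/2:
u = P @ w_star - (t - g) / 2.0
print(np.allclose(d_star, d_hat + u[:, None] + u[None, :]))  # → True
```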
Thus, the distinctive feature model can be viewed as a common features model supplemented by uniquenesses u_ij and u_ik that have the same mathematical form as the specificities that transform the common space INDSCAL model into the extended INDSCAL model discussed earlier. It should be stressed that the substantive interpretation of the uniquenesses in the present case differs greatly from that of the specificities in the extended Euclidean model/INDSCAL case. In the latter case, specificities are related to dimensions specific to the stimuli j and k, respectively, whereas in the distinctive features model, the uniqueness values pertain to a weighted count of all features that the stimulus possesses and can be viewed in the same spirit as Nosofsky's (1991, p. 98)
stimulus bias. As Sattath and Tversky (1987) have shown, the distinctive features model can always be formulated as a special case of the common features model, however, so that the uniquenesses are not (explicitly) necessary. This conversion is accomplished by supplementing the features in the common features model with a set of additional complementary common features, one complementary feature corresponding to each stimulus or other object. A complementary feature for a particular object is a feature possessed by all objects except that object, such as a class or cluster containing all n − 1 objects excluding that one. (Weights for the common features, including these complementary features, and the additive constants must be adjusted appropriately.) In the case of hierarchically nested features (classes or clusters), a distinctive features model will lead to the family of path length or additive trees, discussed later. Other discrete structures such as multiple trees (either ultrametric or path length/additive) are also special cases of either common or distinctive features models, whereas distinctive features models are special cases of common features models, as we noted earlier, so that all of a very
large class of discrete models to be discussed later are special cases of the ADCLUS/INDCLUS form of common features model. Although any distinctive features model can be formulated as a common features model with a large set of features (including complementary ones), the more parsimonious form (covering both common and distinctive features models) stated for similarity data is

\[ s^{(i)}_{jk} \approx \hat{s}^{(i)}_{jk} = \sum_{r=1}^{R} w_{ir} p_{jr} p_{kr} - u_{ij} - u_{ik}, \]

where for the common features case u_ij = u_ik = −g_i/2 for all i, j, k.

G. The Primordial Model
We can now formulate a general model that includes the INDSCAL model, the two-way (Euclidean) MDS model, and this large class of discrete models, all as special cases. This primordial model will be the linchpin for much of the remaining discussion in Section IV and can be written as

\[ s^{(i)}_{jk} \approx \hat{s}^{(i)}_{jk} = M_i \left( \sum_{r=1}^{R} w_{ir} x_{jr} x_{kr} - u_{ij} - u_{ik} \right), \tag{29} \]

where M_i is a monotone (nondecreasing) function. In the case of the INDSCAL model u_ij = .5 Σ_r w_ir x_jr², so that the expression in parentheses on the right side of Eq. (29) equals −.5(d^(i)_jk)², where, as before,

\[ d^{(i)}_{jk} = \left[ \sum_{r=1}^{R} w_{ir} \left( x_{jr} - x_{kr} \right)^{2} \right]^{1/2}. \]

In the case of the extended INDSCAL (or EXSCAL) model,

\[ u_{ij} = .5 \left( \sum_{r=1}^{R} w_{ir} x_{jr}^{2} + \sigma_{ij} \right), \tag{30} \]

where σ_ij denotes the (i, j)th specificity as defined in that model. Here M_i is a linear function only if similarities are assumed to be (inversely) linearly related to squared (weighted) Euclidean
distances. (The two-way special cases of both of these should be obvious.) If the Mi's are assumed to be monotonic (but nonlinear), we recommend the quasi-nonmetric approach, for reasons discussed
earlier in the case of fitting the extended INDSCAL (i.e., EXSCAL) or the extended Euclidean model in the two-way case. As we have already shown, if x_jr = p_jr (i.e., if the coordinates of the R
dimensions are constrained to binary (0, 1) values), then the model becomes the common or distinctive features model and thus has all the other discrete
models discussed earlier as special cases. Thus, Eq. (29) can be viewed as the primordial model, of which all others are descendants! It might be noted in passing that since, as Joly and Le Calvé (submitted) have shown, a city-block, or L1, metric can always be written as the square of a Euclidean metric (although in a space, generally, of very high dimensionality), the L1 metric models, including the three-way (weighted) version discussed earlier, can also, at least in principle, be included as special cases of this primordial scalar product model. In fact, although this primordial model has the form of a scalar product plus some additive constants, it is easy to show that it can in fact be formulated as an overall scalar product model that requires two additional dimensions to accommodate two additional scalar product terms with special constraints. Although most MDS models are based on distances between points, not scalar products among vectors, we have
shown here that such distance models can easily be converted to this general scalar product form, at least in the case of Euclidean and city-blockbased models. Some have argued, however, that the
processes involved in computing, say, Euclidean distances are very "unnatural" (taking differences between coordinates of two stimuli in an internal spatial representation of the stimuli, squaring
this difference, and then summing these squares of coordinate differences over all dimensions; this is possibly followed by a final step of taking the square root, at least in the case of ratio scale
distance judgments). It is hard to imagine such operations being wired into the human neural apparatus. In contrast, calculating scalar products (simply multiplying the coordinates for the stimulus
pair and summing these products) seems much more plausible as an innate neurological/psychological process. In fact, the general semantic model offered by Landauer and Dumais (1997) assumes the
representation of words in a high-dimensional semantic space (about 300 dimensions for their data). Those authors argue that such scalar products can be computed by a very simple neural network. (The
model assumes that the association of a given word with unordered strings of other words is based on finding the word in this semantic space closest to the centroid of the words in that string, in
the sense of maximizing the cosine of the angle between the word and that centroid. The cosine of an angle in multidimensional space is, in turn, a simple function of the scalar products of vectors.)
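The cosine computation just described reduces entirely to scalar products. A minimal sketch (our own toy illustration in Python/NumPy, with a random stand-in for a semantic space, not the Landauer and Dumais model itself):

```python
import numpy as np

def nearest_by_cosine(vocab, target):
    """Return the index of the row of `vocab` with maximal cosine to `target`;
    after normalization, the cosine is just a scalar product."""
    V = vocab / np.linalg.norm(vocab, axis=1, keepdims=True)
    t = target / np.linalg.norm(target)
    return int(np.argmax(V @ t))      # scalar products with the unit target

rng = np.random.default_rng(1)
words = rng.normal(size=(100, 300))   # toy 300-dimensional "semantic space"
string = words[[3, 7, 9]]             # an unordered string of three words
centroid = string.mean(axis=0)
print(nearest_by_cosine(words, centroid))
```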
Now if we just take the one additional evolutionary step of allowing some x's to be continuous and others discrete (binary, in particular), we immediately generate the hybrid models originally
discussed by Carroll (1976; De Soete & Carroll, 1996; also see Hubert & Arabie, 1995a) as an even more general family of models in which continuous spatial structure is combined with discrete,
nonspatial structure. We discuss some of the discrete and hybrid models that emerge as such special cases of the very broad, general model stated in Eq. (29). See Carroll and Chaturvedi (1995) for a
general approach, called CANDCLUS, that allows fitting of a large class of discrete and hybrid models, including the (two- and three-way) common features models discussed previously, to many types of data that are two-way, three-way, and higher-way, via either a least-squares or a least absolute deviations (LAD) criterion. Chaturvedi and Carroll (1994) apply this approach to provide a more efficient algorithm, called SINDCLUS, for fitting the ADCLUS/INDCLUS models via an OLS criterion, whereas Chaturvedi and Carroll (1997) have extended this work to fit with a LAD criterion in a procedure called LADCLUS.
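To give a flavor of such fitting procedures, one of their ingredients (estimating the continuous cluster weights with the binary memberships held fixed) reduces to an ordinary least-squares regression. The sketch below is our own illustration under that simplification, not the SINDCLUS or CANDCLUS code:

```python
import numpy as np

def fit_weights(S, P):
    """Given fixed binary memberships P (n x R), estimate the weights w_r and
    additive constant g for one subject by OLS on the off-diagonal entries of S:
    s_jk ≈ Σ_r w_r p_jr p_kr + g."""
    n, R = P.shape
    rows, targets = [], []
    for j in range(n):
        for k in range(j + 1, n):
            rows.append(np.append(P[j] * P[k], 1.0))  # regressors p_jr p_kr, plus 1 for g
            targets.append(S[j, k])
    coef, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return coef[:-1], coef[-1]        # (w_1, ..., w_R), g

# Recover known weights from noiseless model similarities.
P = np.array([[1, 0], [1, 1], [0, 1], [0, 0]], dtype=float)
w_true, g_true = np.array([2.0, 3.0]), 0.5
S = P @ np.diag(w_true) @ P.T + g_true
w_hat, g_hat = fit_weights(S, P)
print(np.round(w_hat, 6), round(g_hat, 6))  # recovers w_true and g_true
```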
criterion in a procedure called LADCLUS. A tree with path-length metric (Carroll & Chang, 1973), or simply a path-length tree, is synonymous with what Sattath and Tversky (1977) called an "additive
similarity tree." Unlike ultrametric trees, which have a natural root node, a path-length tree has no unique root. It is not necessary to think of it as being vertically organized into a hierarchy.
(In fact, such a tree, for n objects, is consistent with 2n - 2 different hierarchies, corresponding to rooting the tree along any one of its 2n - 2 distinct branches.) Underlying the structure of a
path-length tree is the four-point condition that must be satisfied by the estimated path-length distances. This condition, which is a relaxation of the ultrametric inequality, is satisfied by a set of distances (π_jk) if and only if, for all quadruples of points j, k, l, and m,

\[ \pi_{jk} + \pi_{lm} \geq \pi_{jl} + \pi_{km} \geq \pi_{kl} + \pi_{jm} \quad \text{implies that} \quad \pi_{jk} + \pi_{lm} = \pi_{jl} + \pi_{km}. \]

That is, the two largest sums of pairs of distances involving the subscripts j, k, l, and m must be equal. See Carroll (1976), Carroll and Pruzansky (1980), or De Soete and Carroll (1996) for a discussion of the rationale for this four-point condition and its relationship to the u.i.
H. Fitting Least-Squares Trees by Mathematical Programming 1. Fitting a Single Ultrametric Tree Carroll and Pruzansky (1975, 1980) pioneered a mathematical programming approach to fitting uhrametric
trees to proximity data via a least-squares criterion. This strategy basically attempts to find a least-squares fit of a distance matrix constrained to satisfy the u.i. by use of a penalty function,
which measures the degree of violation of that inequality, as defined in Eq. (21), to a given matrix of dissimilarities. This approach can be extended easily but indirectly to the fitting of
path-length trees satisfying the four-point condition, as described later. A more direct procedure entailing a generalization of the Carroll and Pruzansky penalty function approach was proposed and
implemented by De Soete (1983) using a penalty function to enforce the four-point condition.
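A toy version of the penalty-function idea can be sketched as follows; this is our own simplified gradient-descent stand-in for the Carroll and Pruzansky mathematical programming procedure, not their algorithm:

```python
import itertools

def ultrametric_penalty(d):
    # Penalty measuring violation of the u.i.: for each triple, the squared
    # gap between the two largest of its three distances (zero iff ultrametric).
    p = 0.0
    for i, j, k in itertools.combinations(range(len(d)), 3):
        s = sorted([d[i][j], d[i][k], d[j][k]])
        p += (s[2] - s[1]) ** 2
    return p

def fit_ultrametric(delta, lam=2.0, step=0.01, iters=2000):
    # Gradient descent on  sum_(i<j) (d_ij - delta_ij)^2 + lam * penalty(d),
    # starting from the data themselves.
    n = len(delta)
    d = [row[:] for row in delta]
    for _ in range(iters):
        g = [[0.0] * n for _ in range(n)]
        for i, j in itertools.combinations(range(n), 2):
            g[i][j] = 2.0 * (d[i][j] - delta[i][j])      # least-squares part
        for i, j, k in itertools.combinations(range(n), 3):
            trip = sorted([(d[i][j], i, j), (d[i][k], i, k), (d[j][k], j, k)])
            gap = trip[2][0] - trip[1][0]
            g[trip[2][1]][trip[2][2]] += lam * 2.0 * gap  # pull largest down
            g[trip[1][1]][trip[1][2]] -= lam * 2.0 * gap  # push runner-up toward it
        for i, j in itertools.combinations(range(n), 2):
            d[i][j] -= step * g[i][j]
            d[j][i] = d[i][j]
    return d
```

Starting from the data, the penalty term steadily pulls each triple's largest distance toward its runner-up, so the fitted matrix moves toward satisfying the u.i. while staying close to the dissimilarities.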
3 Multidimensional Scaling
2. Fitting Multiple Tree Structures Using Mathematical Programming Combined with Alternating Least Squares Many sets of proximity data are not well represented by either simple or hierarchical
clusterings. A general model already discussed is the ADCLUS/INDCLUS model, in which proximity data are assumed to arise from discrete attributes that define overlapping but nonhierarchically
organized sets. It may happen, however, that the attributes can be organized into two or more separate hierarchies, each of which could represent an organized family of subordinate and superordinate
concepts. For example, in the case of animal names one might imagine one hierarchical conceptual scheme based on the phylogenetic scale and another based on function (or relationship to humankind)
involving such categories as domesticated versus wild. The former could be classified as pets, work animals, and animals raised for food; pets could be further broken down into house versus outdoor
pets, and so on. This case requires a method to allow fitting multiple tree structures to data--a multidimensional generalization of the single tree structure, as it were. We now describe a
procedure for fitting such multiple tree structures to a single two-way data matrix of dissimilarities. Consider fitting A, the two-way data matrix, with a mixture of hierarchical tree structures
(HTSs), each satisfying the u.i. In particular, we want to approximate A as a sum

A ≈ H_1 + H_2 + ... + H_Q,
where each H matrix satisfies the u.i. We use an overall alternating least-squares (ALS) strategy to fit the mixture of tree structures. In particular, given current fixed estimates of all H matrices except H_q, we may define

Ã_q = A − Σ_{q'≠q} Ĥ_{q'}

and use the mathematical programming procedure discussed earlier to fit a least-squares estimate, Ĥ_q, of H_q, to Ã_q. 3. Fitting a Single Path-Length Tree J. S. Farris (personal communication), as
Hartigan (1975, p. 162) noted, has shown that it is possible to convert a path-length tree into an ultrametric tree by a simple operation, given the distances from the root node to each of the nodes
corresponding to objects. Letting π_jo represent the distance from the jth object to the root node O and π_jk represent the path-length distance from j to k, it can be shown that h_jk = π_jk − π_jo − π_ko
J. Douglas Carroll and Phipps Arabie
(34) satisfies the u.i. The h_jk will not, however, necessarily satisfy the positivity condition for distances. But both the u.i. and positivity will be satisfied by adding a sufficiently large constant Π, by defining h̃_jk as

h̃_jk = π_jk − π_jo − π_ko + Π = π_jk − u_j − u_k   (j ≠ k)   (35)

where u_j = π_jo − Π/2. An equivalent statement is that

π_jk = h̃_jk + u_j + u_k   (j ≠ k)   (36)

which states that the matrix of path-length distances π_jk is decomposable into a distance matrix H̃ that satisfies the u.i. plus an additive residual (which we shall simply call U) where U_jk = u_j + u_k
for j ≠ k, and the diagonals of U are undefined, or zero if defined. The decomposition can be defined so that the u_j's are nonnegative, in which case U is the distance matrix for a very special
path-length tree, usually called a "bush" by numerical taxonomists or a "star" by graph theorists, and is a path-length tree with only one nonterminal (or internal) node. (We use the more standard
graph-theoretic term star henceforth.) The nonnegative constant u_j is, then, just the length of the branch connecting terminal node j to that single internal node, and the distance between any two
distinct terminal nodes, j and k, of the star tree equals u_j + u_k. Thus we may summarize Eq. (36) verbally as A path-length tree = An ultrametric tree + A star tree. It should be noted that this
decomposition is not unique. Many different ways exist for decomposing a fixed path-length tree (PLT) into such a sum. In the case of multiple PLTs, because the sum of Q star trees is itself just a
single star tree, we have the extended theorem that

Σ_{q=1}^{Q} Π_q = Σ_{q=1}^{Q} H̃_q + U
or, in words, A sum of PLTs = A sum of ultrametric trees + One star tree. It should also be noted that both single and multiple path-length or additive trees are also, by quite straightforward
inference, special cases of the primordial model in Eq. (29). We may thus fit mixtures of path-length trees by simply adding to the ALS strategy defined earlier an additional step in which the
constants u_j, defining the single star component, are estimated via least-squares procedures. Details of this and of the procedure implementing estimation of the u_j's can be found in Carroll and
Pruzansky (1975, 1980). A more computationally efficient, but heuristic (and therefore more likely to be suboptimal),
approach to fitting multiple trees was also devised by Carroll and Pruzansky (1986).
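Farris's operation and the decomposition summarized after Eq. (36) can be verified numerically. A sketch (names and the toy tree are ours):

```python
import itertools

def farris_transform(pi, pi_root, big_pi):
    # Farris's operation: h[j][k] = pi[j][k] - pi_root[j] - pi_root[k] + big_pi
    # (candidate ultrametric distances), with star terms
    # u[j] = pi_root[j] - big_pi / 2, so that
    # pi[j][k] = h[j][k] + u[j] + u[k] for j != k (cf. Eq. (36)).
    n = len(pi)
    h = [[pi[j][k] - pi_root[j] - pi_root[k] + big_pi if j != k else 0.0
          for k in range(n)] for j in range(n)]
    u = [p - big_pi / 2.0 for p in pi_root]
    return h, u

def is_ultrametric(h):
    n = len(h)
    return all(h[j][k] <= max(h[j][l], h[k][l]) + 1e-9
               for j, k, l in itertools.permutations(range(n), 3))

# Path-length distances from a 4-leaf tree rooted at an internal node,
# with root-to-leaf distances pi_root.
pi = [[0, 3, 3, 6], [3, 0, 4, 7], [3, 4, 0, 5], [6, 7, 5, 0]]
pi_root = [1, 2, 2, 5]
h, u = farris_transform(pi, pi_root, big_pi=4.0)
```

With Π = 4 the transformed distances are ultrametric and the original path-lengths are recovered exactly as h̃_jk + u_j + u_k; this particular choice of Π happens to leave one u_j negative, a reminder that the decomposition is not unique.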
I. Hybrid Models: Fitting Mixtures of Tree and Dimensional Structures Degerman (1970) proposed the first formal hybrid model combining elements of continuous dimensional structure and of discrete
class-like structure, using a rotational scheme for high-dimensional MDS solutions, and seeking subspaces with class-like rather than continuous variation. Since then, much has been said but little
done about such mixed or hybrid models. By further generalizing the multiple tree structure model that Carroll and Pruzansky proposed, it is possible to formulate a hybrid model that would include a
continuous spatial component in addition to the tree structure components. To return to our hypothetical animal name example, we might postulate, in addition to the two hierarchical structures
already mentioned, continuous dimensions of the type best captured in spatial models. In the case of animals, obvious dimensions might include size, ferocity, or color (which itself is
multidimensional). Carroll and Pruzansky (1975, 1980), in fact, generalized the multiple tree structure model just discussed in precisely this direction. The model can be formally expressed as

A ≈ D_1 + D_2 + ... + D_Q + D²_{E,R},

where D_1 through D_Q are distance matrices arising from tree structures based on either ultrametric or path-length trees, and D²_{E,R} is a matrix of squared distances arising from an R-dimensional
Euclidean space. (The reason for adding squared rather than first-power Euclidean distances is a technical one largely having to do with mathematical tractability and consistency with the general
primordial model in Eq. (29).) In effect, to estimate this additional continuous component, we simply add an extra phase to our alternating least-squares algorithm that derives conditional
least-squares estimates of these components. Carroll and Pruzansky (1975, 1980) provided details of this additional step. The same reference also provides an illustrative data analysis with a
protracted substantive interpretation. Hubert and Arabie (1995b) and Hubert, Arabie, and Meulman (1997) have provided yet another approach to fitting multiple tree structures.
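The overall alternating-least-squares shell shared by these mixture methods, and the closed-form star-tree step it can include, might be sketched as follows (a simplified illustration with our own names, not any of the cited programs):

```python
import numpy as np

def star_fit(R):
    # Closed-form least-squares star tree U[j][k] = u[j] + u[k] fitted to the
    # off-diagonal entries of R (requires n >= 3).
    n = R.shape[0]
    rows = R.sum(axis=1) - np.diag(R)      # off-diagonal row sums
    S = rows.sum() / (2.0 * (n - 1))       # sum of the u_j
    u = (rows - S) / (n - 2)
    U = u[:, None] + u[None, :]
    np.fill_diagonal(U, 0.0)
    return U

def als_fit(A, fitters, n_cycles=25):
    # Alternating least squares for A ~ H_1 + ... + H_Q: each fitter maps a
    # residual matrix to its constrained least-squares approximation
    # (e.g., an ultrametric fit, a star fit, or squared Euclidean distances).
    H = [np.zeros_like(A) for _ in fitters]
    for _ in range(n_cycles):
        for q, fit in enumerate(fitters):
            residual = A - sum(H[p] for p in range(len(H)) if p != q)
            H[q] = fit(residual)
    return H
```

Each pass re-fits one component to the residual left by the others; plugging in an ultrametric fitter, a star fitter, and a squared-Euclidean fitter would yield the hybrid model of this section.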
J. Other Models for Two- and Three-Way Proximities Another direction, already explored to some extent, involves generalization of the discrete models discussed to the case of nonsymmetric proximity
data, such as two-mode matrices of proximities or nonsymmetric one-mode
proximities (e.g., confusability measures) between pairs of objects from the same set. More extensive discussions of the analysis of nonsymmetric proximities are found in the next section, but we mention some particularly interesting discrete models and methods here. DeSarbo (1982) has devised a model/method called GENNCLUS, for example, which generalizes the ADCLUS/MAPCLUS
approach to nonsymmetric proximity data. Furnas (1980) and De Soete, DeSarbo, Furnas, and Carroll (1984a, 1984b) have done the same for tree structures, in a general approach often called "tree unfolding." Yet another fruitfully explored direction involves three-way extensions of a number of these models, which provide discrete analogues to the INDSCAL generalization (Carroll & Chang,
1970) of two-way multidimensional scaling. One such three-way generalization has already been discussed, namely the Carroll and Arabie (1983) INDCLUS generalization of ADCLUS/MAPCLUS to the three-way
case--including an application of INDCLUS to some of the Rosenberg and Kim (1975) kinship data (where the third way was defined by those authors' various experimental conditions). In the case of tree
structures and multiple tree structures, an obvious direction for individual differences generalization is one in which different individuals are assumed to base their judgments on the same family of
trees, but are allowed to have different node heights (in the case of ultrametric trees) or branch lengths (for path-length or additive trees)--that is, single or multiple trees having identical
topological structures, but different continuous parameters for each individual or other data source. Carroll, Clark, and DeSarbo (1984) implemented an approach called INDTREES, for fitting just such
a model to three-way proximity data. In the hybrid case, a set of continuous stimulus dimensions defining a group stimulus space, together with individual subject weights similar to those assumed in
INDSCAL, could also be introduced. We emphasize that all the models discussed thus far for proximity data (even including IDIOSCAL, PARAFAC-2, Tucker's three-mode scaling model, and DeSarbo's
GENNCLUS, if sufficiently high dimensionality is allowed) are special cases of the general primordial scalar products model in Eq. (29), some with continuous dimensions and others with discrete
valued coordinates on dimensions constrained to binary values and often called "attributes" or "features." The only model discussed not in conformity with this generic framework is Lingoes and Borg's
(1978) PINDIS--a model we have argued is substantively implausible and overparametrized, in any case. Thus, a very large class of continuous, discrete, and hybrid models can all be viewed as special
cases of the primordial model--relatively simple in algebraic form, as well as in its theoretical assumptions concerning psychological processes underlying perception or cognition. Therefore, all can
be viewed as special cases of this generic multidimensional model, with the
different models varying only with respect to the class of continuous or discrete constraints imposed on the structure and interrelations of the dimensions assumed.
K. Models and Methods for Nonsymmetric Proximity Data All the approaches to MDS discussed thus far have involved symmetric models for symmetric proximity data. Several types of proximity data are,
however, inherently nonsymmetric; for example, the similarity/dissimilarity of j to k presented in that order is not necessarily equal to that of k to j when presented in the reverse order, so that
theoretical problems may arise in modeling these data via distance models--which are inherently symmetric, because one of the metric axioms (which by definition is satisfied by all distance
functions) demands that d_jk = d_kj for all j and k. (We prefer the term nonsymmetric to asymmetric, which is often used as a synonym of the former, because some definitions of asymmetric imply antisymmetry--that is, that δ_jk is definitely not equal to δ_kj, or even that δ_jk = a(δ_kj), where a is a decreasing monotonic function [e.g., a(δ) = some constant − δ]). Examples of inherently
nonsymmetric proximities include (1) confusions data, in which the probability of confusing k with j (i.e., responding j when stimulus k is presented) is not necessarily the same as that of confusing
j with k, (2) direct judgments of similarity/dissimilarity in which systematic order effects may affect judgments, and the subject judges both (j, k) and (k, j) pairs (perhaps the best example of this involves auditory stimuli, where there may be systematic order effects, so that stimulus j followed by stimulus k may appear, and be judged, either more or less similar than k followed by j; visual and other psychophysical stimuli may be subject to analogous order and other effects; see Holman, 1979, and Nosofsky, 1991, for impressive theoretical and substantive developments in this area); and (3) brand-switching data, in which the data comprise estimated probabilities (or observed relative frequencies) of consumers who choose brand j on a first occasion but select brand k at
some later time (see Cooper & Nakanishi, 1988). Tversky (1977) argued that even direct judgments of similarity/dissimilarity of conceptual/cognitive stimuli may be systematically nonsymmetric--largely depending (we would argue) on how the similarity or dissimilarity question is phrased--and he provided numerous empirical examples. For instance, if subjects are asked "How similar is
Vietnam to China?" the response will be systematically different than if they are asked "How similar is China to Vietnam?" In this particular case Vietnam will generally be judged more similar to
China than vice versa. Tversky (1977) argued that this occurs because China has more "features" for most subjects than Vietnam does, and that, in this wording of the similarity question, greater
weight is given to
"distinctive features" unique to the second stimulus than to those unique to the first. This example will be discussed in more detail later when we consider Tversky's (1977) "features of similarity"
theoretical framework. We would argue that a slightly different wording of this question, namely "How similar are j and k?" would tend to produce symmetric responses (i.e., that any deviations from
symmetry are not systematic but result only from random error). It is, in fact, this latter wording or some variation of it that is most often used when direct judgments of similarities/
dissimilarities are elicited from human subjects. 1. The Two-Mode Approach to Modeling Nonsymmetric Proximities The first of the two approaches to modeling nonsymmetric proximities is the two-mode
approach, in which the stimuli or other objects being modeled are treated as two sets rather than one--in the two-way case, in effect, the proximity data are treated as two-mode two-way, rather than
one-mode two-way, with one mode corresponding to rows of the proximity matrix and the other to columns. In the case of confusions data, for example, the rows correspond to the stimuli treated as
stimuli, whereas the columns correspond to those same stimuli treated as responses. In the case of psychophysical stimuli for which there are or may be systematic order effects, the two modes
correspond, respectively, to the first and second presented stimulus. More generally, we have the following important principle: any O-mode N-way data nonsymmetric in any modes corresponding to two ways (say, rows and columns) can be accommodated by a symmetric model designed for (O + 1)-mode N-way data. The extra mode arises from considering the rows and columns as corresponding to distinct entities, so that each entity will be depicted twice in the representation from the symmetric model. (One could, of course, generalize this approach to data nonsymmetric in more than one mode--perhaps even to generalized nonsymmetries involving more than two ways for a single mode--but we know of few, if any, actual examples of data of this more general type.) The two-set distance model
approach can be viewed very simply as a special case of Coombs's (1964) unfolding model, which is inherently designed for data having two or more modes. (In the two-mode case, with respective cardinalities of the stimulus sets being n_1 and n_2, the two-mode data can also be regarded as being in the "corner" of an augmented (n_1 + n_2) × (n_1 + n_2) matrix with missing entries for all but the n_1 × n_2 submatrix of observed data--hence the traditional but unhelpful jargon of a "corner matrix.") Because most programs for two-way (and some for three-way) MDS allow for missing data, KYST2A allows the user to provide as input such an n_1 × n_2 matrix. The case with which we are dealing, where n_1 = n_2 = n, leads directly to a representation in which the stimuli (or other objects) are modeled by 2n points--one set of points corresponding to each mode.
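Constructing such an augmented "corner" matrix is mechanical; a sketch (function name ours), using NaN for the missing within-set blocks:

```python
import numpy as np

def corner_matrix(P, fill=np.nan):
    # Embed an n1 x n2 two-mode proximity matrix P in the "corner" of an
    # (n1 + n2) x (n1 + n2) one-mode matrix, with the two within-set blocks
    # treated as missing -- the form accepted by programs, such as KYST2A,
    # that allow missing data.
    n1, n2 = P.shape
    n = n1 + n2
    M = np.full((n, n), fill)
    M[:n1, n1:] = P
    M[n1:, :n1] = P.T            # symmetric placement of the observed block
    return M
```

Only the n1 × n2 block (and its transpose) is observed; everything else is left missing for the scaling program to ignore.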
Coombs's (1964) distance-based unfolding model assumes preference is inversely monotonically related to distance between the subject's ideal point and a point representing the stimulus in a
multidimensional space. Because of the historical association with Coombs's unfolding model, the general problem of analyzing two-mode proximity data (irrespective of whether they are row/column
conditional or unconditional and whether ordinal, interval, or ratio scale) is often referred to as the multidimensional unfolding problem. From a methodological perspective, there are serious problems with the analysis of two-mode proximities, whether of the type discussed previously or of another type more normally associated with preferential choice (or other dominance) data--which in some cases can lead to data that, as defined earlier, are row or column conditional (e.g., an I × n matrix of preference ratings for I subjects on n stimuli). Discussion of the problem of multidimensional unfolding as a special case of MDS, and the associated problems of theoretical degeneracies that make such analyses intractable if great care is not taken, can be found in Kruskal and Carroll (1969) or in Carroll (1972, 1980). To summarize the practical implications for the analyses of two-mode proximities: Either these analyses should be done metrically (i.e., under the assumption of ratio or interval scale data) while assuming row (or column) unconditional off-diagonal data, or they must be done using STRESSFORM2 (or its analogues, in case of other loss functions, such as SSTRESS), whether doing a metric or nonmetric analysis, if row (column) conditional data are entailed. If a fully nonmetric analysis is attempted treating the data as unconditional (whether
using STRESSFORM1 or 2), a theoretical degeneracy can be shown always to exist corresponding to perfect (zero) STRESS, although it will account for essentially none of the ordinal information in the
data. On the other hand, either a metric or nonmetric analysis assuming (row or column) conditional data, but using STRESSFORM1 instead of STRESSFORM2, will always allow another, even more blatant
theoretical degeneracy--as described in Kruskal and Carroll (1969) and Carroll (1972, 1980). Discrete analogues of the two-set approach to the analysis of nonsymmetric data (or, more generally,
rectangular or off-diagonal proximities) are also possible. The tree unfolding approach discussed briefly in the previous section is the most notable example. Note that this analysis was (necessarily) done metrically, assuming row/column unconditional data, for exactly the reasons cited earlier concerning possible degeneracies (which are even more serious in the case of such discrete models as tree structures, where, as discussed earlier, theoretical degeneracies arise in the case of nonmetric analyses--even in the case of symmetric proximities). 5 It should be noted that ALSCAL
should not be used for unfolding analyses, however, because the appropriate analogue to STRESSFORM2 is not available in any version of the ALSCAL software.
Tree unfolding has been generalized to the three-way case by De Soete and Carroll (1989). Various approaches to generalizing spatial unfolding to the three-way case have been pursued by DeSarbo and
Carroll (1981, 1985); all are restricted to the metric case and to unconditional proximity data, for reasons discussed previously. Although fully nonmetric analyses are inappropriate (except under
the conditions mentioned in the case of spatial unfolding models and always in the case of discrete models), the type of quasi-nonmetric analyses described in the case of the extended Euclidean and
INDSCAL models should be permissible, though to our knowledge no one has attempted this approach. Heiser (1989b), however, has pursued some different quasi-nonmetric methods as well as other
approaches to unfolding by imposing various constraints on the configurations or by using homogeneity analysis, which is closely related to correspondence analysis; see Gifi (1990) for a fuller
discussion of this approach to multivariate data analysis, or see Greenacre (1984), Greenacre and Blasius (1994), Lebart, Morineau, and Warwick (1984), and Nishisato (1980, 1993, 1996a, 1996b) for
discussions of correspondence analysis. For reasons why correspondence analysis should not be considered a routine alternative to either metric or nonmetric MDS, see Carroll, Kumbasar, and Romney
(1997) and Hubert and Arabie (1992). We note tangentially that a large number of multidimensional models used for representing preferential choice data and methods for analyzing these data using
these models have been proposed and can be included under the general rubric of multidimensional scaling (broadly defined). If one characterizes preferences, as does Coombs (1964), as measures of
proximity between two sets (stimuli and subjects' ideal points), then the models can be classified as MDS models even if we restrict the domain to geometric models/methods for proximity data. In
fact, as Carroll (1972, 1980) has pointed out, a large class of models called the linear quadratic hierarchy of models, including the so-called vector model for preferences (Tucker, 1960; Chang &
Carroll, 1969a) can all be viewed as special cases or generalizations of the Coombsian unfolding or ideal point model. 6 The vector model, frequently fit by use of the popular MDPREF program (Chang &
Carroll, 1969b, 1989), can be viewed as a special case of the unfolding model corresponding to ideal points at infinity (a subject vector then simply indicates the direction of that subject's
infinitely distant ideal point). Overviews of these and other models/methods for deterministic (i.e., nonstochastic) analyses of preference data are provided by Carroll (1972, 1980), Weisberg (1974),
Heiser (1981, 1987), and DeSarbo and Carroll (1985), whereas discussion of some stochastic models and related methods is found in Carroll and De Soete (1991), De Soete and Carroll (1992), and Marley
(1992). 6 In an important development, the ideal point model has been extended to the technique of discriminant analysis (Takane, Bozdogan, & Shibayama, 1987; Takane, 1989).
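The "ideal point at infinity" reading of the vector model can be illustrated with a toy comparison (names ours): as the ideal point recedes along the subject vector, the unfolding preference order approaches the projection order.

```python
def vector_model_score(x, a):
    # MDPREF-style vector model: a subject's preference for stimulus x is
    # its projection onto (inner product with) the subject vector a.
    return sum(xi * ai for xi, ai in zip(x, a))

def unfolding_score(x, a, t):
    # Ideal-point (unfolding) preference: negative distance from x to an
    # ideal point placed at t * a; as t grows, the preference ORDER
    # approaches that of the vector model (ideal point at infinity).
    return -sum((xi - t * ai) ** 2 for xi, ai in zip(x, a)) ** 0.5
```

For a distant ideal point the two scores rank a set of stimuli identically, which is the sense in which the vector model is the limiting special case of the unfolding model.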
2. The One-Mode Approach to Modeling Nonsymmetric Proximities The other general approach to analyzing nonsymmetric proximities entails a single set representation that assumes a nonsymmetric model.
These models can be viewed as adaptations of either a spatial or a discrete (e.g., feature structure) model, modified to accommodate nonsymmetries. Many of these models are subsumed as special cases
of a nonsymmetric modification of what we called the primordial (symmetric) model for proximities in Eq. (29), which, in its most general (three-way) case, can be written for δ_jk^(i), the proximity between objects j and k for subject i, as

δ_jk^(i) = M_i(b_jk^(i) + u_ij + v_ik),   (39)

where b_jk^(i) = Σ_{r=1}^{R} w_ir x_jr x_kr = weighted symmetric scalar product between j and k (for subject/source i), x_jr = continuous (discrete) value of the jth object on the rth dimension (feature), w_ir = salience weight of the rth dimension/feature for the ith subject, u_ij = uniqueness of the jth object for the ith subject in the first (row) mode, and v_ik = uniqueness of the kth object for the ith subject in the second (column) mode, while M_i is a (nonincreasing or nondecreasing) monotonic function for subject i, depending on whether δ_jk^(i) is, respectively, a similarity or a dissimilarity measure. For nonsymmetric proximities, among the special cases of this model are the following.
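Before turning to the special cases, the general form of Eq. (39) can be transcribed directly (names ours; the identity function stands in for the subject-specific M_i):

```python
def primordial_proximity(x, w, u, v, M=lambda t: t):
    # Three-way nonsymmetric "primordial" model of Eq. (39):
    # delta[i][j][k] = M(sum_r w[i][r]*x[j][r]*x[k][r] + u[i][j] + v[i][k]).
    I, n, R = len(w), len(x), len(x[0])
    return [[[M(sum(w[i][r] * x[j][r] * x[k][r] for r in range(R))
                + u[i][j] + v[i][k])
              for k in range(n)] for j in range(n)] for i in range(I)]
```

When the u and v terms coincide for a subject (u_ij = v_ij), the resulting proximities are symmetric in j and k, which is the constraint noted at the end of this section for symmetric proximities.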
a. Tversky's Features of Similarity Model
A general statement of this model in set-theoretic terms is (Tversky, 1977)

S(j, k) = θf(A ∩ B) − αf(A − B) − βf(B − A)   for θ, α, β ≥ 0,   (40)

where S(j, k) is the similarity of stimuli j and k; A and B are corresponding sets of discrete dimensions/attributes/features (whichever term one prefers); A ∩ B is the intersection of sets A and B (or, the set of features common to j and k); A − B is the set difference between A and B or, in words, the set of features possessed by j but not by k (whereas B − A has the opposite meaning); θ, α, and β are numerical weights to be fitted; and f is a finitely additive function, that is,

f(𝒜) = Σ_{A_r ∈ 𝒜} f(A_r)
(where A_r denotes a feature included in the feature set 𝒜). An MDS algorithm tailored to fit this model is described by DeSarbo, M. D. Johnson, A. K. Manrai, L. A. Manrai, and Edwards (1992). When α = β, this model leads to symmetric proximities S; otherwise it leads to a nonsymmetric model. Tversky (1977) pointed out that the Shepard and Arabie (1979) ADCLUS model corresponds to the special case in which α = β (so that the model is symmetric) and f(A) = f(B), for all A, B (that is, the weights of the feature sets for stimuli j, k, etc. are all equal). We now demonstrate that
the more general model is a special case of the primordial nonsymmetric proximity model expressed in Eq. (39). First, we rewrite Eq. (40) as

S(j, k) = θf(A ∩ B) + (α + β)f(A ∩ B) − αf(A − B) − αf(A ∩ B) − βf(B − A) − βf(A ∩ B)
        = (θ + α + β)f(A ∩ B) − αf(A) − βf(B),   (41)

with the last expression resulting from substitutions of the set identity A = (A ∩ B) + (A − B). Rewriting Eq. (41) with the same notation used in formulating the two-way ADCLUS model results in nonsymmetric (similarities) of the form

s_jk = Σ_{r=1}^{R} w_r p_jr p_kr − u_j − v_k,   (42)

where u_j = α* Σ_r w_r p_jr and v_k = β* Σ_r w_r p_kr, while α* = α/(θ + α + β) and β* = β/(θ + α + β). Here s_jk = [1/(θ + α + β)] S(j, k) (an unimportant scale transformation), and w_r = τ(A_r), where A_r is the rth "feature" and p_jr is a binary indicator variable; p_jr = 1 iff stimulus j has feature r, and τ is a nonnegative function. This formulation, of course, is a two-way special case of Eq. (39). Extending this reinterpretation of the features of similarity model to the three-way case, we have
s_jk^(i) = Σ_{r=1}^{R} w_ir p_jr p_kr − u_ij − v_ik,   (43)

which is a special case of the three-way primordial nonsymmetric scalar product model of Eq. (39), with x_jr = p_jr, that is, with discrete valued dimensions or features (and with δ_jk^(i) = s_jk^(i), with M_i as the identity function). Because Eq. (43) is the three-way generalization of Eq. (42), the u and v terms now have an additional subscript for subject i. Thus, Tversky's (1977) features of similarity model leads to an extended (nonsymmetric) version of the ADCLUS/INDCLUS model--extended by adding the terms u_ij and v_ik. Holman (1979) generalized Tversky's features of
similarity model to include a monotone transformation of the expression on the right side of Eq. (40), making the model more nearly equivalent to Eq. (39), but only in the two-way case. Holman then formulated a general model for nonsymmetric proximities entailing response biases, a special case of which can be viewed as the two-way case of Eq. (39), with the terms u_j and v_k representing the response biases. Holman defined a general symmetric similarity function as part of his response bias model; our interpretation of Eq. (39) as a special two-way case is dependent on a particular
definition of that general similarity function. Krumhansl (1978) proposed a (continuous) model for nonsymmetric proximities based on what she called a distance-density hypothesis, which leads to an
expression for modified distances d̃ of the form

d̃_jk = d_jk + αδ_j + βδ_k,   (44)

where α, β, and δ are unrelated to previous usage in this chapter. The distance-density model has occasioned an impressive algorithmic tradition in two-way MDS. Okada and Imaizumi (1987; Okada,
1990) provide a nonmetric method in which a stimulus is represented as a point and an ellipse (or its generalization) whose center is at that very point in a Euclidean space. Although theirs is a
two-way method, it could readily be extended to the three-way case. Distance between the points corresponds to symmetry, and between the radii to skew-symmetry. Bove and Critchley (1989, 1993) devised a metric method for fitting the same model and related their solution to work by Tobler (1979) and Weeks and Bentler (1982). Saito's approach (1991, 1993; Saito & Takeda, 1990) allows the
useful option of including unequal diagonal values (i.e., disparate self-similarities) in the analysis. DeSarbo and A. K. Manrai (1992) devised an algorithm that, they maintain, links estimated
parameters more closely to Krumhansl's original concept of density. Krumhansl's original justification for her model, in which δ_j and δ_k are
measures of the spatial density of stimuli in the neighborhoods of j and k, respectively, is actually equally consistent with a formulation using squared (Euclidean) distances, namely, modified squared distances d̃² defined as

d̃²_jk = d²_jk + αδ_j + βδ_k,   (45)
which, in the three-way case, is a nonsymmetric generalization of the extended Euclidean model formulated in the symmetric case by Winsberg and Carroll (1989a, 1989b) and extended to the three-way
(EXSCAL) case by Carroll and Winsberg (1986, 1995). It should be clear that this slight reinterpretation of Krumhansl's distance-density model also leads, in the most general three-way case, to a
model with continuous spatial parameters of the same general form defined in Eq. (39).
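The squared-distance reading of the distance-density model is a one-liner; a sketch (names ours) in which the density measure is supplied by the caller rather than estimated:

```python
def density_modified_sqdist(X, alpha, beta, density):
    # Squared-distance version of Krumhansl's distance-density model:
    # dtilde2[j][k] = d2[j][k] + alpha * density[j] + beta * density[k],
    # where density[j] is some caller-supplied measure of local stimulus
    # density around j. Nonsymmetric whenever alpha != beta.
    n = len(X)
    d2 = [[sum((a - b) ** 2 for a, b in zip(X[j], X[k])) for k in range(n)]
          for j in range(n)]
    return [[d2[j][k] + alpha * density[j] + beta * density[k]
             for k in range(n)] for j in range(n)]
```

The difference d̃²_jk − d̃²_kj equals (α − β)(δ_j − δ_k), so the model is nonsymmetric exactly when α ≠ β.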
b. Drift Models As a final class of models leading to this same primordial generalized scalar product form, we now consider two frequently discussed models. One entails "drift" in a fixed direction
(referred to as a slide-vector model in the implementation of Zielman & Heiser, 1993) and the second entails "drift" toward a fixed point. (The first can actually be viewed as a special case of the
second, with the fixed point at infinity in some direction.) Before stating the fixed directional form of the drift model in mathematical terms, we consider a stimulus identification task leading to
confusions data, in which a stimulus is presented and the subject attempts to identify it by naming or otherwise giving a response associated with the stimulus presented. In the drift model, we
assume the presented stimulus is mapped onto a point (in a continuous multidimensional spatial representation) corresponding to the "true" location of that stimulus plus a fixed vector entailing a
drift in a fixed direction (and for a fixed distance). Specifically, if x_j is the vector representing the true position of stimulus j, the effective position of the presented stimulus will be x_j + c, where c is the fixed drift vector. If we then assume a Euclidean metric space, the perceived distance between j and another (nonpresented) stimulus k will be (in the two-way case)

d̃_jk = [Σ_{r=1}^{R} (x_jr + c_r − x_kr)²]^{1/2}   (46)
Now, if we assume that the probability of confusion is a decreasing monotonic function of d̃, then we have

s_jk ∝ Prob(k|j) = M*(d̃_jk)
    = M**(d̃²_jk)
    = M**[Σ_r x²_jr + Σ_r x²_kr + Σ_r c²_r − 2Σ_r x_jr x_kr + 2Σ_r c_r x_jr − 2Σ_r c_r x_kr]
    = M[Σ_r x_jr x_kr + u_j + v_k],   (47)

where M* is (an arbitrary) monotonic function, and M** and M are also monotonic functions (implied by absorbing first the square root transformation and then the multiplicative factor of −2). (If M* is monotone decreasing, of course, M will be a monotone increasing function.) The important point is that Eq. (47) is of the same form as (the two-way case of) Eq. (39), with u_j = −.5(Σ_r x²_jr + 2Σ_r c_r x_jr + ½Σ_r c²_r) and v_k = −.5(Σ_r x²_kr − 2Σ_r c_r x_kr + ½Σ_r c²_r). Clearly, if we assume a separate drift vector c_i for each subject/source in the three-way case, we get exactly the model form assumed in Eq. (39), with u_ij = −.5(Σ_r w_ir x²_jr + 2Σ_r w_ir c_ir x_jr + ½Σ_r w_ir c²_ir) and v_ik = −.5(Σ_r w_ir x²_kr − 2Σ_r w_ir c_ir x_kr + ½Σ_r w_ir c²_ir). In the case of the (two-way) model entailing drift toward a fixed point, we assume that the effective
position of the presented stimulus, whose true location is x_j, will be x_j + \omega(z - x_j), where z is the fixed point toward which stimuli drift, while \omega is a parameter (0 \le \omega \le 1) governing the degree to which x_j will drift toward z. In this two-way case, the modified Euclidean distance will be

d_{jk} = \Big[ \sum_r \big( x_{jr} + \omega(z_r - x_{jr}) - x_{kr} \big)^2 \Big]^{1/2}
       = \Big[ \sum_r \big( (1-\omega)x_{jr} + \omega z_r - x_{kr} \big)^2 \Big]^{1/2}
       = \Big[ (1-\omega)^2 \sum_r x_{jr}^2 + \omega^2 \sum_r z_r^2 + \sum_r x_{kr}^2 + 2\omega(1-\omega) \sum_r x_{jr} z_r - 2(1-\omega) \sum_r x_{jr} x_{kr} - 2\omega \sum_r z_r x_{kr} \Big]^{1/2}
Again, if we assume that the probability of confusion, as a measure of proximity, is a monotonic function of d_{jk}, we have after some simple algebraic manipulations that proximity is of the same form as in Eq. (47) (with M, u_j, and v_k defined appropriately), and, again, the three-way generalization (assuming a possibly different fixed point z_i for each subject) will be of the same primordial form given in Eq. (39). It is important to note that, except for the additive constants u_{ij} and v_{ik}, this generalized (primordial) scalar product model is essentially symmetric (for each subject/source i). To summarize this section, a large number of superficially disparate models for nonsymmetric proximities are of the same general form as the primordial modified three-way scalar product model stated in Eq. (39), whereas a very large class of discrete, continuous, and hybrid models for symmetric proximities are of that same general form but with the constraint that u_{ij} = v_{ij}, leading to the primordial symmetric model stated in Eq. (29). It thus appears that a large class of seemingly unrelated models (both two- and three-way, symmetric and nonsymmetric) that have been proposed for proximity data of widely varying kinds are special cases of this generic three-way model that we call the primordial scalar product model, expressed in its most general form in Eq. (39).
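This reduction is easy to check numerically. The following pure-Python sketch (with our own illustrative coordinates and drift vector, not data from the chapter) verifies that, for the fixed-direction drift model, -(1/2)d_{jk}^2 equals \sum_r x_{jr} x_{kr} + u_j + v_k with u_j and v_k defined as in the derivation of Eq. (47):

```python
# Numeric check that the fixed-direction drift model reduces to the
# generalized scalar product form (illustrative values only).

def drift_dist_sq(xj, xk, c):
    """Squared Euclidean distance between x_j + c and x_k."""
    return sum((a + t - b) ** 2 for a, t, b in zip(xj, c, xk))

def u(xj, c):
    # u_j = -.5 (sum x_jr^2 + 2 sum c_r x_jr + sum c_r^2)
    return -0.5 * (sum(a * a for a in xj)
                   + 2 * sum(t * a for t, a in zip(c, xj))
                   + sum(t * t for t in c))

def v(xk, c):
    # v_k = -.5 (sum x_kr^2 - 2 sum c_r x_kr)
    return -0.5 * (sum(b * b for b in xk)
                   - 2 * sum(t * b for t, b in zip(c, xk)))

xj, xk, c = [1.0, -2.0, 0.5], [0.3, 1.1, -0.7], [0.2, 0.4, -0.1]
lhs = -0.5 * drift_dist_sq(xj, xk, c)
rhs = sum(a * b for a, b in zip(xj, xk)) + u(xj, c) + v(xk, c)
assert abs(lhs - rhs) < 1e-12  # -(1/2) d^2 = scalar product + u_j + v_k
```

The monotone functions M*, M**, and M of the derivation simply absorb the square root and the factor of -2 applied here explicitly.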
3. Three-Way Approaches to Nonsymmetric Proximity Data

In a seminal two-way approach to representing structure underlying nonsymmetric one-mode data, Gower (1977) used areas of triangles and
collinearities for the graphical representation of the skew-symmetric component of a nonsymmetric matrix. (Each stimulus was represented by two points, one for its row and another for its column.)
The degree of nonsymmetry relates to the area (or sum of signed areas) of triangles, defined by pairs of points and the origin, in two-dimensional subspaces corresponding to matched pairs of
eigenvalues in an SVD of the skew-symmetric component of the original matrix of proximity data (after a standard decomposition of the matrix into symmetric and skew-symmetric parts); the direction of
the nonsymmetry depends on the sign of the area or of the summed signed areas. That approach forms the basis for numerous three-way models. Bové and Rocci (1993) generalized Escoufier and Grorud's
(1980) approach, in which nonsymmetries are represented by areas of triangles, to the three-way case. Kiers and Takane (1994) provided algorithmic advances on earlier work by Chino (1978, 1990).
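The symmetric/skew-symmetric decomposition on which Gower's method rests, and the signed triangle areas that encode the direction of nonsymmetry, can be sketched in a few lines of Python (the 3 × 3 matrix is our own illustration, not data from the cited sources):

```python
# Split a nonsymmetric proximity matrix M into its symmetric part S
# and skew-symmetric part K, so that M = S + K and K' = -K.

def sym_skew(M):
    n = len(M)
    S = [[(M[i][j] + M[j][i]) / 2 for j in range(n)] for i in range(n)]
    K = [[(M[i][j] - M[j][i]) / 2 for j in range(n)] for i in range(n)]
    return S, K

def signed_area(p, q):
    """Signed area of the triangle (origin, p, q) in a 2-D subspace;
    its sign gives the direction of the nonsymmetry in Gower's plot."""
    return 0.5 * (p[0] * q[1] - p[1] * q[0])

M = [[0, 3, 1],
     [5, 0, 2],
     [2, 4, 0]]
S, K = sym_skew(M)
assert all(S[i][j] + K[i][j] == M[i][j] for i in range(3) for j in range(3))
assert all(K[i][j] == -K[j][i] for i in range(3) for j in range(3))
```

The SVD of K (not shown) then pairs its dimensions, and the triangle areas are computed within each resulting two-dimensional subspace.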
Similarly, Zielman (1993) provided a three-way approach emphasizing directional planes and collinearities for representing the skew-symmetric component of a nonsymmetric three-way matrix. We have
reviewed elsewhere (Arabie et al., 1987, pp. 50-53) other approaches to this problem (e.g., Kroonenberg & de Leeuw, 1980; also see Kroonenberg, 1983; for developments of Tucker's three-mode three-way principal component analysis, see Tucker, 1972) and will
not repeat the discussion here. But Kroonenberg and de Leeuw's (1980, p. 83) empirical conclusion after a protracted analysis that "symmetrization does not really violate the structure of the data"
they were analyzing is noteworthy. It is our impression that the extensive collective effort to provide MDS algorithms capable of faithfully representing the nonsymmetric psychological structure so
emphasized by Tversky (1977) has borne little substantive fruit.7 Two possible (and nonexclusive) explanations are (1) nonsymmetry is not very important psychologically or is a minor component of
most proximity data, and (2) the extant models are failing to capture the implicit structure. Also see remarks by Nosofsky (1992, p. 38) on this topic. Concerning the former explanation, Hubert and
Baker's (1979) inferential test for detecting significant departures from symmetry has been greatly underemployed. Their examples suggest that the presence of nonsymmetry in psychological data has been
exaggerated. Similarly, Nosofsky's (1991) incisive treatment of the topic suggests that models incorporating terms like those for stimulus uniqueness in Eq. (39) may preclude the need to posit more
fundamental nonsymmetries in similarity data. Concerning the appropriateness of extant models, integrative reviews (e.g., Zielman & Heiser, 1994) and comparative analyses (e.g., Takane & Shibayama, 1986; Molenaar, 1986) should afford a better understanding of exactly what is being captured by models for nonsymmetric data. We now turn to a different class of such models.

4. Nonspatial Models and Methods for Nonsymmetric Proximity Data

The reader who expects to find nonspatial counterparts to the models just discussed will not be disappointed. For the case of one-mode two-way nonsymmetric
data, Hutchinson (1981, 1989) provides a network model, NETSCAL (for NETwork SCALing), in which a reconstructed distance, defined as the minimum path length between vertices corresponding to stimuli,
is assumed to be a generalized power function of the input dissimilarities, and the topology of the network is based only on ordinal information in the data. Hutchinson's illustrative data analyses
provide impressive support for the usefulness of his approach. Klauer and Carroll used a mathematical programming approach to fit network models to one-mode two-way symmetric (1989) and nonsymmetric
(1991) proximity data. Using a shortest path definition for the reconstructed distances, their metric algorithm, MAPNET (for MAthematical Programming NETwork fitting), seeks to provide the connected
network 70kada and lmaizumi (1997) have provided a noteworthy exception to this statement.
with a least-squares fit using a specified number of arcs. Klauer and Carroll (1991) compared their algorithm to Hutchinson's NETSCAL and found the two yielded comparable results, although MAPNET ran
faster and provided better variance accounted for. (MAPNET has also been generalized to the three-way case called INDNET; see Klauer and Carroll, 1995.) We note that neither Gower's (1977) approach
nor these network models are subsumed in the primordial model.

V. CONSTRAINED AND CONFIRMATORY APPROACHES TO MDS

Substantive theory can provide a priori expectations concerning the configuration that
MDS algorithms generate in the course of an analysis. Beyond being useful in interpreting the configuration, such expectations can actually be incorporated in the analysis in the form of constraints,
if the algorithm and software at hand so allow. Most of the literature on constrained MDS considers only two-way one-mode analyses, but the extension to the three-way case is usually fairly
straightforward; thus, we invoke this distinction here much less than in some of the previous sections (also in contrast to our treatment of the topic in Carroll & Arabie, 1980, pp. 619, 628, 633).
A. Constraining the Coordinates
As Heiser and Meulman (1983a, 1983b) noted, most constrained approaches focus either on the coordinates of the configuration or on the function relating the input data to the corresponding recovered
interpoint distances. We now consider the former case. Most of the discussion on this topic in our 1980 review centered on constraining the coordinates, and we will not repeat the coverage here.
Important subsequent contributions include de Leeuw and Heiser (1980), Lee and Bentler (1980), Takane and Carroll (1981), Weeks and Bentler (1982), DeSarbo, Carroll, Lehmann, and O'Shaughnessy
(1982), Heiser and Meulman (1983a, pp. 153-158; 1983b, pp. 387-390), Takane and Sergent (1983), Carroll, De Soete, and Pruzansky (1988, 1989), and Krijnen (1993).

1. Circular/Spherical Configurations
Shepard (1978) masterfully demonstrated the pervasive relevance of spherical configurations in the study of perception. In response, designers of MDS algorithms have made such configurations a
popular form of constrained (two-way) MDS. T. F. Cox and M. A. A. Cox (1991) provided a nonmetric algorithm, and earlier metric approaches were devised by de Leeuw and
Heiser (1980) and Lee and Bentler (1980); also see Hubert and Arabie (1994, 1995a) and Hubert, Arabie, and Meulman (1997).

2. Hybrid Approaches Using Circular Configurations

It is too easy to think
only of orthogonal dimensions in a metric space for representing the structure in proximities data via MDS, despite the emphasis earlier in this chapter on trees and related discrete structures. Yet
other alternatives to dimensions are circles and the matrix form characterized by permuting input data according to a seriation analysis. That is, instead of a series of axes/dimensions or trees (as
in Carroll & Pruzansky's hybrid approach, 1975, 1980, discussed earlier) accounting for implicit structure, a set of circles, for example, could be used to account for successively smaller
proportions of variance (or components in some other decomposition of an overall goodness-of-fit measure). Taking this development a step further in the hybrid direction, one could also fit a circle
as one component, the seriation form as another component, and yet another structure as a third, all in the same analysis of a one-mode symmetric proximities matrix, using the algorithms devised by
Hubert and Arabie (1994) and Hubert, Arabie, and Meulman (1997). Those authors (1995a) subsequently generalized this approach to include two-way two-mode proximity matrices.
B. Constraining the Function Relating the Input Data to the Corresponding Recovered Interpoint Distances

In various programs for nonmetric two-way MDS, the plot of this function is appropriately known as the Shepard diagram, to give due credit to Shepard's emphasis on this function, which before the advent of nonmetric MDS was generally assumed to be linear between derived measures. (Recall that
the subtitle of his two 1962 articles is "Multidimensional scaling with an unknown distance function.") Shepard (1962a, 1962b) and Kruskal (1964a, 1964b) devised algorithms for identifying that
function with assumptions no stronger than weak monotonicity. In later developments, Shepard (1972, 1974) pointed to the advantages of imposing such constraints as convexity on the monotone
regression function. Heiser (1985, 1989b) extended this approach to multidimensional unfolding. Work by Winsberg and Ramsay (1980, 1981, 1984) and Ramsay (1982a, 1988) using splines rather than
Kruskal's (1964b) unconstrained monotone regression to approximate this function has afforded new approaches to imposing constraints on the monotonic function, such as continuity of the function and
its first and possibly second derivatives. As already discussed extensively, these continuity constraints have allowed Winsberg and Carroll (1989a, 1989b) and Carroll and Winsberg (1986, 1995) to
reverse the direction of the monotone function (treating the data as a (perturbed) monotone function of the distances in the underlying model rather than vice versa, as is done almost universally elsewhere in nonmetric, or even other quasi-nonmetric, approaches to MDS) in their quasi-nonmetric approach to fitting the Extended Euclidean model or its generalization, the Extended INDSCAL (or EXSCAL) model, which includes the ordinary two-way Euclidean MDS model or the three-way
INDSCAL models as special cases. The statistical and other methodological advantages of this strategy have already been discussed. The imposition of some mild constraints on various aspects of MDS
models often leads to considerably greater robustness; it also enables fitting, in many cases, models that are essentially impossible to fit without such constraints.
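Kruskal's (1964b) unconstrained monotone regression, central to the function-constraining discussion above, is typically computed by pooling adjacent violators (cf. Ayer, Brunk, Ewing, Reid, & Silverman, 1955). A minimal pure-Python sketch of that idea, not the code of any particular MDS program:

```python
def monotone_regression(y):
    """Least-squares nondecreasing fit to y via pool-adjacent-violators:
    merge adjacent blocks whenever their means violate monotonicity."""
    blocks = []  # each block is [mean, size]
    for value in y:
        blocks.append([float(value), 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, n2 = blocks.pop()
            m1, n1 = blocks.pop()
            blocks.append([(m1 * n1 + m2 * n2) / (n1 + n2), n1 + n2])
    fit = []
    for mean, size in blocks:
        fit.extend([mean] * size)
    return fit

fitted = monotone_regression([1.0, 3.0, 2.0, 4.0])
assert fitted == [1.0, 2.5, 2.5, 4.0]          # the violating pair is averaged
assert all(a <= b for a, b in zip(fitted, fitted[1:]))
```

The spline-based approaches of Winsberg and Ramsay replace this step function with a smooth monotone curve, which is what makes the continuity constraints discussed above possible.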
C. Confirmatory MDS

As Heiser and Meulman (1983b, p. 394) note, "the possibility of constraining the MDS solution in various ways greatly enhances the options for analyzing data in a confirmatory
fashion." Approaches to confirmatory MDS have taken several paths. For example, beginning with a traditional statistical emphasis of looking at the residuals, specifically of a nonmetric two-way
analysis, Critchley (1986) proposed representing stimuli as small regions rather than points in the MDS solution. The advantage of this strategy is that the regions allow better goodness of fit to
the ordinal proximity data. We noted earlier that Ramsay's maximum likelihood approach to two- and three-way MDS allows computing confidence regions for the stimulus mode. An alternative strategy,
used by Weinberg, Carroll, and Cohen (1984), employs resampling (namely, jackknifing and bootstrapping on the subjects' mode in INDSCAL analyses) to obtain such regions. The latter approach is more
computationally laborious but less model-specific than Ramsay's, and the results suggest that Ramsay's estimates based on small samples provide an optimistic view of the actual reliability of MDS
solutions. For resampling in the two-way case, de Leeuw and Meulman (1986) provide an approach for jackknifing by deleting one stimulus at a time. This approach also provides guidelines as to the
appropriate dimensionality for a two-way solution. Heiser and Meulman (1983a) used bootstrapping to obtain confidence regions and assess the stability of multidimensional unfolding solutions.
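The resampling strategy behind these confidence regions is easy to sketch in outline: redraw subjects with replacement, recompute the quantity of interest on each replicate, and take percentile limits. The toy example below bootstraps a single mean rating (hypothetical numbers; the published procedures bootstrap entire INDSCAL or unfolding analyses):

```python
import random

random.seed(0)

# One hypothetical dissimilarity judgment per subject for a single
# stimulus pair; in practice each replicate would rerun the full analysis.
judgments = [2.1, 2.4, 1.9, 2.8, 2.2, 2.6, 2.0, 2.5]

def mean(xs):
    return sum(xs) / len(xs)

boot = []
for _ in range(2000):
    sample = random.choices(judgments, k=len(judgments))  # resample subjects
    boot.append(mean(sample))
boot.sort()
lo, hi = boot[int(0.025 * len(boot))], boot[int(0.975 * len(boot))]
assert lo <= mean(judgments) <= hi  # point estimate falls inside the interval
```

Jackknifing differs only in the resampling rule: each replicate deletes one subject (or, in de Leeuw and Meulman's variant, one stimulus) instead of redrawing with replacement.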
Extending earlier results by Hubert (1978, 1979) to allow significance tests for the correspondence (independent of any model of MDS) between two or more input matrices, Hubert and Arabie (1989)
provided a confirmatory approach to test a given MDS solution against an a priori, idealized structure codified in matrix form. Hubert's (1987) book is essential reading for this topic of research.
3 Multidimensional Scaling
Vocational psychology has recently provided a setting for numerous developments related to confirmatory MDS (Hubert & Arabie, 1987; Rounds, Tracey, & Hubert, 1992; Tracey & Rounds, 1993), including a
clever application of the INDSCAL model in such an analysis (Rounds & Tracey, 1993).
VI. VISUAL DISPLAYS AND MDS SOLUTIONS

A. Procrustes Rotations
It is often desirable to compare two or more MDS solutions based on the same set of stimuli. When the interpoint distances in the solution(s) to be rotated to maximal congruity with a target
configuration are rotationally invariant (as in two-way MDS solutions in the Euclidean metric), the problem of finding the best-fitting orthogonal rotation and a dilation (or overall scale) factor
(and even a possible translation of origin of one of the two to align the centroids of the two configurations, if not already done via normalization) has an analytic least-squares solution. But
devising a canonical measure of goodness of fit between a pair of matched configurations has proven to be a more challenging problem (see Krzanowski and Marriott, 1994, pp. 134-141, for a concise
history of developments). Analogous to the shift in emphasis from two- to three-way MDS, advances in rotational strategies have progressed from an emphasis on comparing two MDS solutions to comparing
more than two. This problem, one variant of which is known as generalized Procrustes analysis (Gower, 1975), has occasioned considerable algorithmic development (e.g., ten Berge, 1977; ten Berge &
Knol, 1984; ten Berge, Kiers, & Commandeur, 1993; see Commandeur, 1991, and Gower, 1995a, for overviews) and can be cast in the framework of generalized canonical correlation analysis (Green &
Carroll, 1988; ten Berge, 1988). As in the case of generalizing many two-way models and associated methods to the three-way (or higher) case, there are a plethora of different approaches to the
multiset (e.g., MDS solutions) case, many (but not all) of which are equivalent in the two-set case. Also, in the case of Procrustes analyses, different techniques are appropriate, depending on the
class of transformations to which the user believes, on theoretical or empirical grounds, the two (or more) configurations can justifiably be subjected. For example, Gower's generalized Procrustes
analysis assumes that each configuration is defined up to an arbitrary similarity transformation (but that the translation component can generally be ignored because of appropriate
normalization--e.g., translation of each so that the origin of the coordinate system is at the centroid of the points in that configuration). The canonical correlation-based approaches, on the other
hand, allow more general affine transformations of the various configurations.
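In the simplest setting (two 2-D configurations, centered at the origin, rotation only), the analytic least-squares solution mentioned above reduces to a single angle, theta = atan2(sum_i (x_{i1} y_{i2} - x_{i2} y_{i1}), sum_i (x_{i1} y_{i1} + x_{i2} y_{i2})). A hedged sketch (function names are ours, not from any cited package):

```python
import math

def best_rotation_angle(X, Y):
    """Angle of the orthogonal (rotation-only) Procrustes fit of X to Y,
    assuming both 2-D configurations are already centered at the origin."""
    num = sum(x1 * y2 - x2 * y1 for (x1, x2), (y1, y2) in zip(X, Y))
    den = sum(x1 * y1 + x2 * y2 for (x1, x2), (y1, y2) in zip(X, Y))
    return math.atan2(num, den)

def rotate(points, theta):
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

X = [(1.0, 0.0), (0.0, 2.0), (-1.0, -2.0)]   # centered configuration
Y = rotate(X, math.pi / 6)                    # target = X rotated 30 degrees
theta = best_rotation_angle(X, Y)
assert abs(theta - math.pi / 6) < 1e-9        # the known rotation is recovered
```

In higher dimensions the same criterion is solved by a singular value decomposition, and a dilation factor and translation can be estimated analogously; generalized Procrustes analysis iterates such fits over more than two configurations.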
J. Douglas Carroll and Phipps Arabie
Yet another approach, first used by Green and Rao (1972, pp. 95-97) as a configuration matching approach (in the case of two as well as of three or more configurations) utilizes INDSCAL, applied to
distances computed from each separate configuration, as a form of generalized configuration matching (or an alternative generalized Procrustes approach, implicitly assuming yet another class of
permissible transformations too complex to be discussed in detail here). This INDSCAL-based approach to configuration matching has been quite useful in a wide variety of situations and has the
advantage, associated with INDSCAL in other applications, of yielding a statistically unique orientation of common coordinates describing all the separate configurations. The general approach of
configuration matching has long been used to assess mental maps in environmental psychology (e.g., Gordon, Jupp, & Byrne, 1989) and has also found many applications in food technology (see
Dijksterhuis & Gower, 1991/1992) and morphometrics (Rohlf & Slice, 1990). In addition to the applications in marketing by Green, cited earlier, a recent approach utilizing either (1) Gower's generalized Procrustes analyses, (2) INDSCAL-based rotation to congruence, or (3) a canonical correlation or generalized canonical correlation-based technique for configuration matching (Carroll,
1968; Green & Carroll, 1989)--or all three--has been quite successfully applied to provide a highly provocative and quite promising new paradigm for marketing analysis, synthesizing elements of a
semantic differential approach in a neo-Kellyian framework with an MDS-type spatial representation (see Steenkamp, van Trijp, & ten Berge, 1994). Although devised in the context of a marketing
problem, this novel methodological hybridization could very profitably be used in several areas of applied psychology. Other aspects of MDS that are applied to marketing and that could have useful
analogues in psychology are discussed in Carroll and Green (1997).
B. Biplots

As Greenacre (1986) succinctly noted, "Biplot" is a generic term for a particular class of techniques which represent the rows and columns of a [two-way two-mode] data matrix Y as points in a low-dimensional Euclidean space. This class is characterized by the property that the display is based on a factorization of the form AB' [notation modified from the original] of a matrix approximation Z of Y. The biplot recovers the approximate elements z_ij as scalar products a_i b_j' of the respective i-th and j-th rows of A and B, which represent row i and column j respectively in the display. (Note: The names of these variables bear no necessary relation to usage elsewhere in this chapter.) Such representations have been available since the advent of MDPREF (Carroll & Chang,
1969), but by emphasizing the
graphical presentation and by naming it a "biplot" (after its two modes), Gabriel (1971) contributed to the display's popularity. For advances in the underlying statistical techniques, see Gower
(1990, 1992, 1995b), Gower and Harding (1988), Meulman and Heiser (1993), and Gower and Hand (1996).

C. Visualization
Young (1984b, p. 77) predicted that "methods for graphically displaying the results of scaling analyses rather than new scaling methods as such" were the new frontier of MDS developments and
emphasized color and interactive graphic hardware. This prophecy has turned out to be highly myopic. Although the graphics capabilities of multivariate statistical packages like SYSTAT's SYSGRAPH
(Wilkinson, 1994) are indeed impressive and will no doubt continue to improve, they are in no way specific to MDS analyses. The most dramatic graphics-based advances in our understanding of MDS
techniques have come from black-and-white graphics portraying results of highly sophisticated investigations that rely on clever and insightful theoretical analyses and simulations (Furnas, 1989; W.
P. Jones & Furnas, 1987; Littman, Swayne, Dean, & Buja, 1992).
VII. STATISTICAL FOUNDATIONS OF MDS

During the 1960s, MDS tended to be ignored in the statistical literature, but in the past 15 years, most comprehensive textbooks on multivariate data analysis have
included at least one chapter on MDS (e.g., Krzanowski & Marriott, 1994, chap. 5). But relatively few papers (e.g., Cuadras, Fortiana, & Oliva, 1996; Groenen, de Leeuw, & Mathar, 1996) have looked
intently at the problem of estimation in MDS. Focusing on the consistency of the Shepard-Kruskal estimator in two-way nonmetric MDS, Brady (1985) reached several interesting conclusions. For example,
in aggregating over sources of data to go from a three-way two-mode matrix to a two-way one-mode matrix (as is typically done when two-way nonmetric MDS is applied), it is better to use medians than
the traditional arithmetic mean when the data are continuous (e.g., collected using a rating scale). If the data are not continuous (e.g., aggregated over same-different judgments or overt
confusions), then accurate recovery of the monotone function typically displayed as the Shepard diagram is unlikely. Brady also developed the beginnings of an hypothesis test for the appropriate
dimensionality of MDS solutions. Ramsay (1982b) provided a scholarly and comprehensive discussion of the underpinnings of his maximum likelihood-based MULTISCALE algorithms (described earlier).
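Brady's recommendation on the aggregation step (medians rather than means over subjects for continuous ratings) is easy to see in a toy example (hypothetical ratings for one stimulus pair):

```python
from statistics import mean, median

# Ratings of one stimulus pair from five subjects (hypothetical), with
# one subject using the response scale very differently from the rest.
ratings = [4.0, 4.2, 3.9, 4.1, 9.0]

assert abs(mean(ratings) - 5.04) < 1e-9   # mean is pulled toward the outlier
assert median(ratings) == 4.1             # median stays with the majority
```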
Using matrix permutation/randomization techniques as the basic engine, Hubert and his collaborators (Hubert, 1985, 1987; Hubert & Arabie, 1989; Hubert & Golledge, 1981; Hubert & Subkoviak, 1979) have
provided a variety of confirmatory tests applicable to MDS analyses. This general approach makes considerably weaker distributional assumptions than the other papers cited in this section. Brady
(1990) studied the statistical properties of ALS and maximum likelihood estimators when applied to two-way unfolding (e.g., Greenacre & Browne, 1986) and reached the unsettling conclusion that "even
after making some strong stochastic assumptions, the ALS estimator is inconsistent (biased) for any squared Euclidean model with an error term." Further statistically based research that could lead
to practical improvements in the everyday use of MDS is sorely needed.
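The flavor of the matrix permutation approach can be conveyed with a minimal sketch (our own toy matrices and statistic, not the procedures of the cited papers): relabel the objects of one matrix in every possible way, recompute a cross-matrix correlation over off-diagonal cells, and locate the observed value in that reference distribution.

```python
import itertools

def offdiag(m):
    n = len(m)
    return [m[i][j] for i in range(n) for j in range(n) if i != j]

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def permutation_test(A, B):
    """Exact one-sided test of correspondence between two proximity matrices."""
    n = len(A)
    observed = corr(offdiag(A), offdiag(B))
    hits = total = 0
    for p in itertools.permutations(range(n)):
        Bp = [[B[p[i]][p[j]] for j in range(n)] for i in range(n)]
        total += 1
        hits += corr(offdiag(A), offdiag(Bp)) >= observed
    return observed, hits / total

A = [[0, 1, 2, 3], [4, 0, 5, 6], [7, 8, 0, 9], [10, 11, 12, 0]]
observed, pvalue = permutation_test(A, A)   # a matrix agrees with itself
assert abs(observed - 1.0) < 1e-12
assert abs(pvalue - 1 / 24) < 1e-12         # only the identity relabeling matches
```

Note the weak assumptions: nothing is posited about the distribution of the data, only about exchangeability of object labels under the null hypothesis, which is what distinguishes this approach from the likelihood-based methods above.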
Acknowledgments

We are indebted to Yuko Minowa and Zina Taran for bibliographic assistance and to Kathleen Power for editorial expertise.
References

Arabie, P. (1973). Concerning Monte Carlo evaluations of nonmetric scaling algorithms. Psychometrika, 38, 607-608.
Arabie, P. (1991). Was Euclid an unnecessarily sophisticated psychologist? Psychometrika, 56, 567-587.
Arabie, P., & Carroll, J. D. (1980). MAPCLUS: A mathematical programming approach to fitting the ADCLUS model. Psychometrika, 45, 211-235.
Arabie, P., & Carroll, J. D. (1989). Conceptions of overlap in social structure. In L. Freeman, D. R. White, & A. K. Romney (Eds.), Research methods of social network analysis (pp. 367-392). Fairfax, VA: George Mason University Press.
Arabie, P., Carroll, J. D., & DeSarbo, W. S. (1987). Three-way scaling and clustering. Newbury Park, CA: Sage. (Translated into Japanese by A. Okada & T. Imaizumi, 1990, Tokyo: Kyoritsu Shuppan)
Arabie, P., Carroll, J. D., DeSarbo, W., & Wind, J. (1981). Overlapping clustering: A new method for product positioning. Journal of Marketing Research, 18, 310-317. (Republished in 1989, Multidimensional scaling, pp. 235-246, by P. E. Green, F. J. Carmone, Jr., & S. M. Smith, Boston: Allyn and Bacon)
Arabie, P., & Hubert, L. (1996). An overview of combinatorial data analysis. In P. Arabie, L. J. Hubert, & G. De Soete (Eds.), Clustering and classification (pp. 5-63). River Edge, NJ: World Scientific.
Arabie, P., Hubert, L. J., & De Soete, G. (Eds.). (1996). Clustering and classification. River Edge, NJ: World Scientific.
Arce, C. (1993). Escalamiento multidimensional [Multidimensional scaling]. Barcelona: Promociones y Publicaciones Universitarias.
Ashby, F. G. (Ed.). (1992). Multidimensional models of perception and cognition. Mahwah, NJ: Erlbaum.
Ashby, F. G., Maddox, W. T., & Lee, W. W. (1994). On the dangers of averaging across subjects when using multidimensional scaling or the similarity-choice model. Psychological Science, 5, 144-151.
Attneave, F. (1950). Dimensions of similarity. American Journal of Psychology, 63, 516-556.
Ayer, M., Brunk, H. D., Ewing, G. M., Reid, W. T., & Silverman, E. (1955). An empirical distribution function for sampling with incomplete information. Annals of Mathematical Statistics, 26, 641-647.
Bloxom, B. (1978). Constrained multidimensional scaling in N spaces. Psychometrika, 43, 397-408.
Blumenthal, L. M., & Menger, K. (1970). Studies in geometry. New York: W. H. Freeman.
Bové, G., & Critchley, F. (1989). The representation of asymmetric proximities. Proceedings of the First Meeting of the IFCS Italian Group of the Italian Statistical Society (pp. 53-68). Palermo: Ila Palma.
Bové, G., & Critchley, F. (1993). Metric multidimensional scaling for asymmetric proximities when the asymmetry is one-dimensional. In R. Steyer, K. F. Wender, & K. F. Widaman (Eds.), Psychometric methodology: Proceedings of the 7th European Meeting of the Psychometric Society in Trier (pp. 55-60). Stuttgart: Gustav Fischer Verlag.
Bové, G., & Rocci, R. (1993). An alternating least squares method to analyse asymmetric two-mode three-way data. Proceedings of the 1993 European Meeting of the Psychometric Society (p. 58). Barcelona: Universidad Pompeu Fabra.
Brady, H. E. (1985). Statistical consistency and hypothesis testing for nonmetric multidimensional scaling. Psychometrika, 50, 509-537.
Brady, H. E. (1990). Statistical properties of alternating least squares and maximum likelihood estimators for vector and squared Euclidean functional preference models. Berkeley: University of California, Department of Political Science.
Carroll, J. D. (1968). Generalization of canonical correlation analysis to three or more sets of variables. Proceedings of the 76th Annual Convention of the American Psychological Association, 3, 227-228.
Carroll, J. D. (1972). Individual differences and multidimensional scaling. In R. N. Shepard, A. K. Romney, & S. B. Nerlove (Eds.),
Multidimensional scaling: Theory and applications in the behavioral sciences: Vol. 1. Theory (pp. 105-155). New York: Seminar Press. (Reprinted in Key texts on multidimensional scaling, by P. Davies & A. P. M. Coxon, Eds., 1984, Portsmouth, NH: Heinemann)
Carroll, J. D. (1976). Spatial, non-spatial and hybrid models for scaling. Psychometrika, 41, 439-463.
Carroll, J. D. (1980). Models and methods for multidimensional analysis of preferential choice (or other dominance) data. In E. D. Lantermann & H. Feger (Eds.), Similarity and choice (pp. 234-289). Bern: Hans Huber.
Carroll, J. D. (1988). Degenerate solutions in the nonmetric fitting of a wide class of models for proximity data. Unpublished manuscript, Rutgers University, Graduate School of Management, Newark, New Jersey.
Carroll, J. D. (1992). Metric, nonmetric, and quasi-nonmetric analysis of psychological data. Presidential Address for Division 5, 1992 American Psychological Association Meeting, Washington, DC. Abstract in October 1992 Score (Division 5 Newsletter).
Carroll, J. D., & Arabie, P. (1980). Multidimensional scaling. In M. R. Rosenzweig & L. W. Porter (Eds.), Annual review of psychology (Vol. 31, pp. 607-649). Palo Alto, CA: Annual Reviews. (Reprinted in Multidimensional scaling: Concepts and applications, pp. 168-204, by P. E. Green, F. J. Carmone, & S. M. Smith, 1989, Needham Heights, MA: Allyn and Bacon)
Carroll, J. D., & Arabie, P. (1983). INDCLUS: An individual differences generalization of the ADCLUS model and the MAPCLUS algorithm. Psychometrika, 48, 157-169. (Reprinted in Research methods for multimode data analysis, pp. 372-402, by H. G. Law, W. Snyder, J. Hattie, & R. P. McDonald, Eds., 1984, New York: Praeger)
Carroll, J. D., & Chang, J. J. (1969). A new method
for dealing with individual differences in multidimensional scaling (Abstract). Proceedings of the 19th International Congress of Psychology. London, England.
Carroll, J. D., & Chang, J. J. (1970). Analysis of individual differences in multidimensional scaling via an N-way generalization of "Eckart-Young" decomposition. Psychometrika, 35, 283-319. (Reprinted in Key texts in multidimensional scaling, by P. Davies & A. P. M. Coxon, Eds., 1984, Portsmouth, NH: Heinemann)
Carroll, J. D., & Chang, J. J. (1972, March). IDIOSCAL (Individual Differences in Orientation SCALing): A generalization of INDSCAL allowing IDIOsyncratic reference systems as well as an analytic approximation to INDSCAL. Unpublished manuscript, AT&T Bell Laboratories, Murray Hill, NJ. Presented at a meeting of the Psychometric Society, Princeton, NJ.
Carroll, J. D., & Chang, J. J. (1973). A method for fitting a class of hierarchical tree structure models to dissimilarities data and its application to some "body parts" data of Miller's. Proceedings of the 81st Annual Convention of the American Psychological Association, 8, 1097-1098.
Carroll, J. D., & Chaturvedi, A. (1995). A general approach to clustering and multidimensional scaling of two-way, three-way, or higher-way data. In R. D. Luce, M. D'Zmura, D. D. Hoffman, G. Iverson, & A. K. Romney (Eds.), Geometric representations of perceptual phenomena (pp. 295-318). Mahwah, NJ: Erlbaum.
Carroll, J. D., Clark, L. A., & DeSarbo, W. S. (1984). The representation of three-way proximities data by single and multiple tree structure models. Journal of Classification, 1, 25-74.
Carroll, J. D., & Corter, J. E. (1995). A graph-theoretic method for organizing overlapping clusters into trees and extended trees. Journal of Classification, 12, 283-313.
Carroll, J. D., & De Soete, G. (1991). Toward a new paradigm for the study of multiattribute choice behavior. American Psychologist, 46, 342-351.
Carroll, J. D., De Soete, G., & Pruzansky, S.
(1988). A comparison of three rational initialization methods for INDSCAL. In E. Diday (Ed.), Data analysis and informatics V (pp. 131-142). Amsterdam: North Holland.
Carroll, J. D., De Soete, G., & Pruzansky, S. (1989). Fitting of the latent class model via iteratively reweighted least squares CANDECOMP with nonnegativity constraints. In R. Coppi & S. Bolasco (Eds.), Multiway data analysis (pp. 463-472). Amsterdam: North Holland.
Carroll, J. D., & Green, P. E. (1997). Psychometric methods in marketing research: Part II, multidimensional scaling [Guest editorial]. Journal of Marketing Research, 34, 193-204.
Carroll, J. D., Kumbasar, E., & Romney, A. K. (1997). An equivalence relation between correspondence analysis and classical metric multidimensional scaling for the recovery of Euclidean distances. British Journal of Mathematical and Statistical Psychology, 50, 81-92.
Carroll, J. D., & Pruzansky, S. (1975). Fitting of hierarchical tree structure (HTS) models, mixtures of HTS models, and hybrid models, via mathematical programming and alternating least squares. Proceedings of the U.S.-Japan Seminar on Multidimensional Scaling, 9-19.
Carroll, J. D., & Pruzansky, S. (1980). Discrete and hybrid scaling models. In E. D. Lantermann & H. Feger (Eds.), Similarity and choice (pp. 108-139). Bern, Switzerland: Hans Huber.
Carroll, J. D., & Pruzansky, S. (1983). Representing proximities data by discrete, continuous or "hybrid" models. In J. Felsenstein (Ed.), Numerical taxonomy (pp. 229-248). New York: Springer-Verlag.
Carroll, J. D., & Pruzansky, S. (1986). Discrete and hybrid models for proximity data. In W. Gaul & M. Schader (Eds.), Classification as a tool of research (pp. 47-59). Amsterdam: North Holland.
Multidimensional Scaling
Carroll, J. D., & Winsberg, S. (1986). Maximum likelihood procedures for metric and quasinonmetric fitting of an extended INDSCAL model assuming both common and specific dimensions. In J. de Leeuw, W. J. Heiser, J. Meulman, & F. Critchley (Eds.), Multidimensional data analysis (pp. 240-241). Leiden: DSWO Press.
Carroll, J. D., & Winsberg, S. (1995). Fitting an extended INDSCAL model to three-way proximity data. Journal of Classification, 12, 57-71.
Carroll, J. D., & Wish, M. (1974a). Models and methods for three-way multidimensional scaling. In D. H. Krantz, R. C. Atkinson, R. D. Luce, & P. Suppes (Eds.), Contemporary developments in mathematical psychology (Vol. 2, pp. 57-105). San Francisco: W. H. Freeman.
Carroll, J. D., & Wish, M. (1974b). Multidimensional perceptual models and measurement methods. In E. C. Carterette & M. P. Friedman (Eds.), Handbook of perception (Vol. 2, pp. 391-447). New York: Academic Press. (Reprinted in Key texts in multidimensional scaling, by P. Davies & A. P. M. Coxon, Eds., 1984, Portsmouth, NH: Heinemann)
Chandon, J. L., & De Soete, G. (1984). Fitting a least squares ultrametric to dissimilarity data: Approximation versus optimization. In E. Diday, M. Jambu, L. Lebart, J. Pagès, & R. Tomassone (Eds.), Data analysis and informatics III (pp. 213-221). Amsterdam: North-Holland.
Chang, J. J., & Carroll, J. D. (1969a). How to use INDSCAL, a computer program for canonical decomposition of N-way tables and individual differences in multidimensional scaling. Murray Hill, NJ: AT&T Bell Laboratories.
Chang, J. J., & Carroll, J. D. (1969b). How to use MDPREF, a computer program for multidimensional analysis of preference data. Murray Hill, NJ: AT&T Bell Laboratories.
Chang, J. J., & Carroll, J. D. (1989). A short guide to MDPREF: Multidimensional analysis of preference data. In P. E. Green, F. J. Carmone, & S. M. Smith, Multidimensional scaling: Concepts and applications (pp. 279-286). Needham Heights, MA: Allyn and Bacon.
Chaturvedi, A., & Carroll, J. D. (1994). An alternating combinatorial optimization approach to fitting the INDCLUS and generalized INDCLUS models. Journal of Classification, 11, 155-170.
Chaturvedi, A., & Carroll, J. D. (1997). An L1-norm procedure for fitting overlapping clustering models to proximity data. In Y. Dodge (Ed.), Statistical data analysis based on the L1-norm and related methods (IMS Lecture Notes Monograph No. 30, pp. 443-456). Hayward, CA: Institute of Mathematical Statistics.
Chino, N. (1978). A graphical technique for representing the asymmetric relationships between N objects. Behaviormetrika, 5, 23-40.
Chino, N. (1990). A generalized inner product model for the analysis of asymmetry. Behaviormetrika, 27, 25-46.
Cliff, N., Pennell, R., & Young, F. W. (1966). Multidimensional scaling in the study of set. American Psychologist, 21, 707.
Commandeur, J. J. F. (1991). Matching configurations. Leiden: DSWO Press.
Coombs, C. H. (1964). A theory of data. New York: Wiley.
Cooper, L. G., & Nakanishi, M. (1988). Market-share analysis. Boston: Kluwer.
Corter, J. E., & Tversky, A. (1986). Extended similarity trees. Psychometrika, 51, 429-451.
Cox, T. F., & Cox, M. A. A. (1994). Multidimensional scaling. London: Chapman & Hall.
Cox, T. F., Cox, M. A. A., & Branco, J. A. (1991). Multidimensional scaling for n-tuples. British Journal of Mathematical and Statistical Psychology, 44, 195-206.
Critchley, F. (1986). Analysis of residuals and regional representation in nonmetric multidimensional scaling. In W. Gaul & M. Schader (Eds.), Classification as a tool of research (pp. 67-77). Amsterdam: North-Holland.
Critchley, F., & Fichet, B. (1994). The partial order by inclusion of the principal classes of dissimilarity on a finite set, and some of their basic properties. In B. Van Cutsem (Ed.), Classification and dissimilarity analysis (pp. 5-66). Heidelberg: Springer-Verlag.
J. Douglas Carroll and Phipps Arabie
Cuadras, C. M., Fortiana, J., & Oliva, F. (1996). Representation of statistical structures, classification and prediction using multidimensional scaling. In W. Gaul & D. Pfeifer (Eds.), From data to knowledge (pp. 20-31). Heidelberg: Springer-Verlag.
Daws, J. T. (1993). The analysis of free-sorting data: Beyond pairwise cooccurrences. (Doctoral dissertation, University of Illinois at Urbana-Champaign). (UMI Dissertation No. 9411601)
Daws, J. T. (1996). The analysis of free-sorting data: Beyond pairwise cooccurrences. Journal of Classification, 13, 57-80.
Degerman, R. L. (1970). Multidimensional analysis of complex structure: Mixtures of class and quantitative variation. Psychometrika, 35, 475-491.
de Leeuw, J. (1977a). Applications of convex analysis to multidimensional scaling. In J. R. Barra, F. Brodeau, G. Romier, & B. van Cutsem (Eds.), Recent developments in statistics (pp. 133-145). Amsterdam: North-Holland.
de Leeuw, J. (1977b). Correctness of Kruskal's algorithms for monotone regression with ties. Psychometrika, 42, 141-144.
de Leeuw, J. (1988). Convergence of the majorization method for multidimensional scaling. Journal of Classification, 5, 163-180.
de Leeuw, J., & Heiser, W. (1977). Convergence of correction-matrix algorithms for multidimensional scaling. In J. C. Lingoes (Ed.), Geometric representations of relational data: Readings in multidimensional scaling (pp. 735-752). Ann Arbor, MI: Mathesis.
de Leeuw, J., & Heiser, W. (1980). Multidimensional scaling with restrictions on the configuration. In P. R. Krishnaiah (Ed.), Multivariate analysis (Vol. 5, pp. 501-522). New York: North Holland.
de Leeuw, J., & Heiser, W. (1982). Theory of multidimensional scaling. In P. R. Krishnaiah & L. N. Kanal (Eds.), Handbook of statistics, Vol. 2: Classification, pattern recognition and reduction of dimensionality (pp. 285-316). Amsterdam: North-Holland.
de Leeuw, J., Heiser, W., Meulman, J., & Critchley, F. (Eds.). (1986). Multidimensional data analysis. Leiden: DSWO Press.
de Leeuw, J., & Meulman, J. (1986). A special jackknife for multidimensional scaling. Journal of Classification, 3, 97-112.
DeSarbo, W. S. (1982). GENNCLUS: New models for general nonhierarchical clustering analysis. Psychometrika, 47, 446-449.
DeSarbo, W. S., & Carroll, J. D. (1981). Three-way metric unfolding. Proceedings of the Third ORSA/TIMS Special Interest Conference on Market Measurement and Analysis, 157-183.
DeSarbo, W. S., & Carroll, J. D. (1985). Three-way metric unfolding via weighted least squares. Psychometrika, 50, 275-300.
DeSarbo, W. S., Carroll, J. D., Lehman, D. R., & O'Shaughnessy, J. (1982). Three-way multivariate conjoint analysis. Marketing Science, 1, 323-350.
DeSarbo, W. S., Johnson, M. D., Manrai, A. K., Manrai, L. A., & Edwards, E. A. (1992). TSCALE: A new multidimensional scaling procedure based on Tversky's contrast model. Psychometrika, 57, 43-69.
DeSarbo, W. S., & Manrai, A. K. (1992). A new multidimensional scaling methodology for the analysis of asymmetric proximity data in marketing research. Marketing Science, 11, 1-20.
De Soete, G. (1983). A least squares algorithm for fitting additive trees to proximity data. Psychometrika, 48, 621-626.
De Soete, G., & Carroll, J. D. (1989). Ultrametric tree representations of three-way three-mode data. In R. Coppi & S. Bolasco (Eds.), Analysis of multiway data matrices (pp. 415-426). Amsterdam: North-Holland.
De Soete, G., & Carroll, J. D. (1992). Probabilistic multidimensional models of pairwise choice data. In F. G. Ashby (Ed.), Multidimensional models of perception and cognition (pp. 61-88). Mahwah, NJ: Erlbaum.
De Soete, G., & Carroll, J. D. (1996). Tree and other network models for representing proximity data. In P. Arabie, L. J. Hubert, & G. De Soete (Eds.), Clustering and classification (pp. 157-197). River Edge, NJ: World Scientific.
De Soete, G., DeSarbo, W. S., Furnas, G. W., & Carroll, J. D. (1984a). The estimation of ultrametric and path length trees from rectangular proximity data. Psychometrika, 49, 289-310.
De Soete, G., DeSarbo, W. S., Furnas, G. W., & Carroll, J. D. (1984b). Tree representations of rectangular proximity matrices. In E. Degreef & J. Van Buggenhaut (Eds.), Trends in mathematical psychology (pp. 377-392). Amsterdam: North-Holland.
De Soete, G., Feger, H., & Klauer, K. C. (Eds.) (1989). New developments in psychological choice modeling. Amsterdam: North-Holland.
Dijksterhuis, G. B., & Gower, J. C. (1991/1992). The interpretation of generalized Procrustes analysis and allied methods. Food Quality and Preference, 3, 67-87.
Easterling, D. V. (1987). Political science: Using the generalized Euclidean model to study ideological shifts in the U.S. Senate. In F. Young & R. M. Hamer (Eds.), Multidimensional scaling: History, theory, and applications (pp. 219-256). Mahwah, NJ: Erlbaum.
Ennis, D. M., Palen, J. J., & Mullen, K. (1988). A multidimensional stochastic theory of similarity. Journal of Mathematical Psychology, 32, 449-465.
Escoufier, Y., & Grorud, A. (1980). Analyse factorielle des matrices carrées non symétriques [Factor analysis of square nonsymmetric matrices]. In E. Diday, L. Lebart, J. P. Pagès, & R. Tomassone (Eds.), Data analysis and informatics (pp. 263-276). Amsterdam: North-Holland.
Fichet, B. (1994). Dimensionality problems in L1-norm representations. In B. Van Cutsem (Ed.), Classification and dissimilarity analysis (pp. 201-224). Heidelberg: Springer-Verlag.
Fitzgerald, L. F., & Hubert, L. J. (1987). Multidimensional scaling: Some possibilities for counseling psychology. Journal of Counseling Psychology, 34, 469-480.
Furnas, G. W. (1980). Objects and their features: The metric analysis of two-class data. Unpublished doctoral dissertation, Stanford University, Stanford, CA.
Furnas, G. W. (1989). Metric family portraits. Journal of Classification, 6, 7-52.
Gabriel, K. R. (1971). The biplot-graphic display of matrices with application to principal component analysis. Biometrika, 58, 453-467.
Gifi, A. (1990). Nonlinear multivariate analysis. New York: Wiley.
Glazer, R., & Nakamoto, K. (1991). Cognitive geometry: An analysis of structure underlying representations of similarity. Marketing Science, 10, 205-228.
Gordon, A. D. (1996). Hierarchical classification. In P. Arabie, L. J. Hubert, & G. De Soete (Eds.), Clustering and classification (pp. 65-121). River Edge, NJ: World Scientific.
Gordon, A. D., Jupp, P. E., & Byrne, R. W. (1989). The construction and assessment of mental maps. British Journal of Mathematical and Statistical Psychology, 42, 169-182.
Gower, J. C. (1966). Some distance properties of latent root and vector methods used in multivariate analysis. Biometrika, 53, 325-338.
Gower, J. C. (1975). Generalized Procrustes analysis. Psychometrika, 40, 33-51.
Gower, J. C. (1977). The analysis of asymmetry and orthogonality. In J. Barra, F. Brodeau, G. Romier, & B. van Cutsem (Eds.), Recent developments in statistics (pp. 109-123). Amsterdam: North-Holland.
Gower, J. C. (1990). Three-dimensional biplots. Biometrika, 77, 773-785.
Gower, J. C. (1995a). Orthogonal and projection Procrustes analysis. In W. J. Krzanowski (Ed.), Recent advances in descriptive multivariate analysis (pp. 113-134). Oxford: Clarendon Press.
Gower, J. C. (1995b). A general theory of biplots. In W. J. Krzanowski (Ed.), Recent advances in descriptive multivariate analysis (pp. 283-303). Oxford: Clarendon Press.
Gower, J. C., & Greenacre, M. J. (1996). Unfolding a symmetric matrix. Journal of Classification, 13, 81-105.
Gower, J. C., & Hand, D. J. (1996). Biplots. New York: Chapman & Hall.
Gower, J. C., & Harding, S. A. (1988). Nonlinear biplots. Biometrika, 75, 445-455.
Green, P. E., Carmone, F. J., Jr., & Smith, S. M. (1989). Multidimensional scaling: Concepts and applications. Boston: Allyn and Bacon.
Green, P. E., & Rao, V. R. (1972). Configural synthesis in multidimensional scaling. Journal of Marketing Research, 9, 65-68.
Greenacre, M. J. (1984). Theory and applications of correspondence analysis. London: Academic Press.
Greenacre, M. J. (1986). Discussion on paper by Gabriel and Odoroff. In J. de Leeuw, W. Heiser, J. Meulman, & F. Critchley (Eds.), Multidimensional data analysis (pp. 113-114). Leiden: DSWO Press.
Greenacre, M. J., & Blasius, J. (Eds.). (1994). Correspondence analysis: Recent developments and applications. New York: Academic Press.
Greenacre, M. J., & Browne, M. W. (1986). An efficient alternating least-squares algorithm to perform multidimensional unfolding. Psychometrika, 51, 241-250.
Groenen, P. J. F. (1993). The majorization approach to multidimensional scaling: Some problems and extensions. Leiden: DSWO Press.
Groenen, P. J. F., de Leeuw, J., & Mathar, R. (1996). Least squares multidimensional scaling with transformed distances. In W. Gaul & D. Pfeifer (Eds.), From data to knowledge (pp. 177-185). Heidelberg: Springer-Verlag.
Groenen, P. J. F., Mathar, R., & Heiser, W. J. (1995). The majorization approach to multidimensional scaling for Minkowski distances. Journal of Classification, 12, 3-19.
Guttman, L. (1968). A general nonmetric technique for finding the smallest coordinate space for a configuration of points. Psychometrika, 33, 465-506.
Hahn, J., Widaman, K. F., & MacCallum, R. (1978). Robustness of INDSCAL and ALSCAL with respect to violations of metric assumptions. Paper presented at the Annual Meeting of the Psychometric Society, Hamilton, Ontario, Canada.
Harshman, R. A. (1972a). Determination and proof of minimum uniqueness conditions for PARAFAC1. University of California at Los Angeles, Working Papers in Phonetics 22.
Harshman, R. A. (1972b). PARAFAC2: Mathematical and technical notes. University of California at Los Angeles, Working Papers in Phonetics 22.
Hartigan, J. A. (1967). Representation of similarity matrices by trees. Journal of the American Statistical Association, 62, 1140-1158.
Hartigan, J. A. (1975). Clustering algorithms. New York: Wiley. (Translated into Japanese by H. Nishida, M. Yoshida, H. Hiramatsu, & K. Tanaka, 1983, Tokyo: Micro Software)
Heiser, W. J. (1981). Unfolding analysis of proximity data. Unpublished doctoral dissertation, University of Leiden.
Heiser, W. J. (1985). Multidimensional scaling by optimizing goodness-of-fit to a smooth hypothesis (Internal Report RR-85-07). University of Leiden: Department of Data Theory.
Heiser, W. J. (1987). Joint ordination of species and sites: The unfolding technique. In P. Legendre & L. Legendre (Eds.), Developments in numerical ecology (pp. 189-221). Heidelberg: Springer-Verlag.
Heiser, W. J. (1988). Multidimensional scaling with least absolute residuals. In H.-H. Bock (Ed.), Classification and related methods of data analysis (pp. 455-462). Amsterdam: North-Holland.
Heiser, W. J. (1989a). The city-block model for three-way multidimensional scaling. In R. Coppi & S. Bolasco (Eds.), Multiway data analysis (pp. 395-404). Amsterdam: North-Holland.
Heiser, W. J. (1989b). Order invariant unfolding analysis under smoothness restrictions. In G. De Soete, H. Feger, & C. Klauer (Eds.), New developments in psychological choice modeling (pp. 3-31). Amsterdam: North-Holland.
Heiser, W. J. (1991). A generalized majorization method for least squares multidimensional scaling of pseudodistances that may be negative. Psychometrika, 56, 7-27.
Heiser, W. J. (1995). Convergent computation by iterative majorization: Theory and applications in multidimensional data analysis. In W. Krzanowski (Ed.), Recent advances in descriptive multivariate analysis (pp. 149-181). New York: Oxford University Press.
Heiser, W. J., & de Leeuw, J. (1979). How to use SMACOF-III (Research Report). Leiden: Department of Data Theory.
Heiser, W. J., & Meulman, J. (1983a). Analyzing rectangular tables by joint and constrained multidimensional scaling. Journal of Econometrics, 22, 139-167.
Heiser, W. J., & Meulman, J. (1983b). Constrained multidimensional scaling, including confirmation. Applied Psychological Measurement, 7, 381-404.
Holman, E. W. (1972). The relation between hierarchical and Euclidean models for psychological distances. Psychometrika, 37, 417-423.
Holman, E. W. (1978). Completely nonmetric multidimensional scaling. Journal of Mathematical Psychology, 18, 39-51.
Holman, E. W. (1979). Monotonic models for asymmetric proximities. Journal of Mathematical Psychology, 20, 1-15.
Hubert, L. J. (1978). Generalized proximity function comparisons. British Journal of Mathematical and Statistical Psychology, 31, 179-192.
Hubert, L. J. (1979). Generalized concordance. Psychometrika, 44, 135-142.
Hubert, L. J. (1985). Combinatorial data analysis: Association and partial association. Psychometrika, 50, 449-467.
Hubert, L. J. (1987). Assignment methods in combinatorial data analysis. New York: Marcel Dekker.
Hubert, L., & Arabie, P. (1986). Unidimensional scaling and combinatorial optimization. In J. de Leeuw, W. Heiser, J. Meulman, & F. Critchley (Eds.), Multidimensional data analysis (pp. 181-196). Leiden: DSWO Press.
Hubert, L., & Arabie, P. (1987). Evaluating order hypotheses within matrices. Psychological Bulletin, 102, 172-178.
Hubert, L. J., & Arabie, P. (1988). Relying on necessary conditions for optimization: Unidimensional scaling and some extensions. In H.-H. Bock (Ed.), Classification and related methods of data analysis (pp. 463-472). Amsterdam: North-Holland.
Hubert, L., & Arabie, P. (1989). Combinatorial data analysis: Confirmatory comparisons between sets of matrices. Applied Stochastic Models and Data Analysis, 5, 273-325.
Hubert, L., & Arabie, P. (1992). Correspondence analysis and optimal structural representations. Psychometrika, 56, 119-140.
Hubert, L., & Arabie, P. (1994). The analysis of proximity matrices through sums of matrices having (anti-)Robinson forms. British Journal of Mathematical and Statistical Psychology, 47, 1-40.
Hubert, L., & Arabie, P. (1995a). The approximation of two-mode proximity matrices by sums of order-constrained matrices. Psychometrika, 60, 573-605.
Hubert, L., & Arabie, P. (1995b). Iterative projection strategies for the least-squares fitting of tree structures to proximity data. British Journal of Mathematical and Statistical Psychology, 48, 281-317.
Hubert, L. J., Arabie, P., & Hesson-McInnis, M. (1992). Multidimensional scaling in the city-block metric: A combinatorial approach. Journal of Classification, 9, 211-236.
Hubert, L. J., Arabie, P., & Meulman, J. (1997). Linear and circular unidimensional scaling for symmetric proximity matrices. British Journal of Mathematical and Statistical Psychology, 50.
Hubert, L. J., & Baker, F. B. (1979). Evaluating the symmetry of a proximity matrix. Quality and Quantity, 13, 77-84.
Hubert, L. J., & Golledge, R. G. (1981). A heuristic method for the comparison of related structures. Journal of Mathematical Psychology, 23, 214-226.
Hubert, L. J., & Schultz, J. R. (1976). Quadratic assignment as a general data analysis strategy. British Journal of Mathematical and Statistical Psychology, 29, 190-241.
Hubert, L. J., & Subkoviak, M. J. (1979). Confirmatory inference and geometric models. Psychological Bulletin, 86, 361-370.
Hutchinson, J. W. (1981). Network representations of psychological relations. Unpublished doctoral dissertation, Stanford University.
Hutchinson, J. W. (1989). NETSCAL: A network scaling algorithm for nonsymmetric proximity data. Psychometrika, 54, 25-51.
Indow, T. (1983). An approach to geometry of visual space with no a priori mapping functions: Multidimensional mapping according to Riemannian metrics. Journal of Mathematical Psychology, 26, 204-236.
Indow, T. (1995). Psychophysical scaling: Scientific and practical applications. In R. D. Luce, M. D'Zmura, D. D. Hoffman, G. Iverson, & A. K. Romney (Eds.), Geometric representations of perceptual phenomena (pp. 1-28). Mahwah, NJ: Erlbaum.
Johnson, R. M. (1975). A simple method for pairwise monotone regression. Psychometrika, 40, 163-168.
Johnson, S. C. (1967). Hierarchical clustering schemes. Psychometrika, 32, 241-254.
Joly, S., & Le Calvé, G. (submitted). Realisable 0-1 matrices and city block distance.
Joly, S., & Le Calvé, G. (1995). Three-way distances. Journal of Classification, 12, 191-205.
Jones, L. E., & Koehly, L. M. (1993). Multidimensional scaling. In G. Keren & C. Lewis (Eds.), A handbook for data analysis in the behavioral sciences: Methodological issues (pp. 95-163). Mahwah, NJ: Erlbaum.
Jones, W. P., & Furnas, G. W. (1987). Pictures of relevance: A geometric analysis of similarity measures. Journal of the American Society for Information Science, 38, 420-442.
Keller, J. B. (1962). Factorization of matrices by least-squares. Biometrika, 49, 239-242.
Kiers, H. A. L. (1990). Majorization as a tool for optimizing a class of matrix functions. Psychometrika, 55, 417-428.
Kiers, H. A. L., & Takane, Y. (1994). A generalization of GIPSCAL for the analysis of nonsymmetric data. Journal of Classification, 11, 79-99.
Kiers, H. A. L., & ten Berge, J. M. F. (1992). Minimization of a class of matrix trace functions by means of refined majorization. Psychometrika, 57, 371-382.
Klauer, K. C., & Carroll, J. D. (1989). A mathematical programming approach to fitting general graphs. Journal of Classification, 6, 247-270.
Klauer, K. C., & Carroll, J. D. (1991). A comparison of two approaches to fitting directed graphs to nonsymmetric proximity measures. Journal of Classification, 8, 251-268.
Klauer, K. C., & Carroll, J. D. (1995). Network models for scaling proximity data. In R. D. Luce, M. D'Zmura, D. Hoffman, G. J. Iverson, & A. K. Romney (Eds.), Geometric representations of perceptual phenomena (pp. 319-342). Mahwah, NJ: Erlbaum.
Krijnen, W. P. (1993). The analysis of three-way arrays by constrained PARAFAC methods. Leiden: DSWO Press.
Kroonenberg, P. M. (1983). Three-mode principal component analysis: Theory and applications. Leiden: DSWO Press.
Kroonenberg, P. M., & de Leeuw, J. (1980). Principal component analysis of three-mode data by means of alternating least squares algorithms. Psychometrika, 45, 69-97.
Krumhansl, C. L. (1978). Concerning the applicability of geometric models to similarity data: The interrelationship between similarity and spatial density. Psychological Review, 85, 445-463.
Kruskal, J. B. (1964a). Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. Psychometrika, 29, 1-27.
Kruskal, J. B. (1964b). Nonmetric multidimensional scaling: A numerical method. Psychometrika, 29, 115-129.
Kruskal, J. B. (1965). Analysis of factorial experiments by estimating monotone transformations of the data. Journal of the Royal Statistical Society, Series B, 27, 251-263.
Kruskal, J. B. (1976). More factors than subjects, tests and treatments: An indeterminacy theorem for canonical decomposition and individual differences scaling. Psychometrika, 41, 281-293.
Kruskal, J. B., & Carroll, J. D. (1969). Geometrical models and badness-of-fit functions. In P. R. Krishnaiah (Ed.), Multivariate analysis II (pp. 639-671). New York: Academic Press.
Kruskal, J. B., & Wish, M. (1978). Multidimensional scaling. Newbury Park, CA: Sage.
Kruskal, J. B., Young, F. W., & Seery, J. B. (1973). How to use KYST, a very flexible program to do multidimensional scaling and unfolding. Murray Hill, NJ: AT&T Bell Laboratories.
Krzanowski, W. J., & Marriott, F. H. C. (1994). Multivariate analysis. Part 1: Distributions, ordination and inference. New York: Wiley.
Lance, G. N., & Williams, W. T. (1967). A general theory of classificatory sorting strategies. I. Hierarchical systems. Computer Journal, 9, 373-380.
Landauer, T. K., & Dumais, S. T. (1997). A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104, 211-240.
Lebart, L., Morineau, A., & Warwick, K. M. (1984). Multivariate descriptive statistical analysis: Correspondence analysis and related techniques for large matrices (E. M. Berry, Trans.). New York: Wiley. (Original work published 1977)
Lee, S.-K., & Bentler, P. M. (1980). Functional relations in multidimensional scaling. British Journal of Mathematical and Statistical Psychology, 33, 142-150.
Levelt, W. J. M., van de Geer, J. P., & Plomp, R. (1966). Triadic comparisons of musical intervals. British Journal of Mathematical and Statistical Psychology, 19, 163-179.
Lingoes, J. C., & Borg, I. (1978). A direct approach to individual differences scaling using increasingly complex transformations. Psychometrika, 43, 491-519.
Littman, L., Swayne, D. F., Dean, N., & Buja, A. (1992). Visualizing the embedding of objects in Euclidean space. In Computing science and statistics: Proceedings of the 24th symposium on the interface (pp. 208-217). Fairfax Station, VA: Interface Foundation of North America.
Lockhead, G. R., & Pomerantz, J. R. (Eds.) (1991). The perception of structure. Arlington, VA: American Psychological Association.
Luce, R. D., & Krumhansl, C. L. (1988). Measurement, scaling, and psychophysics. In R. C. Atkinson, R. J. Herrnstein, G. Lindzey, & R. D. Luce (Eds.), Stevens' handbook of experimental psychology (pp. 3-74). New York: Wiley.
MacCallum, R. C. (1977). Effects of conditionality on INDSCAL and ALSCAL weights. Psychometrika, 42, 297-305.
Marley, A. A. J. (1992). Developing and characterizing multidimensional Thurstone and Luce models for identification and preference. In F. G. Ashby (Ed.), Multidimensional models of perception and cognition (pp. 299-333). Mahwah, NJ: Erlbaum.
McDonald, R. P. (1976). A note on monotone polygons fitted to bivariate data. Psychometrika, 41, 543-546.
Meulman, J. J. (1992). The integration of multidimensional scaling and multivariate analysis with optimal transformations. Psychometrika, 57, 539-565.
Meulman, J. J., & Heiser, W. J. (1993). Nonlinear biplots for nonlinear mappings. In O. Opitz, B. Lausen, & R. Klar (Eds.), Information and classification (pp. 201-213). New York: Springer-Verlag.
Meulman, J. J., & Verboon, P. (1993). Points of view analysis revisited: Fitting multidimensional structures to optimal distance components with cluster restrictions on the variables. Psychometrika, 58, 7-35.
Miller, K. F. (1987). Geometric methods in developmental research. In J. Bisanz, C. J. Brainerd, & R. Kail (Eds.), Formal methods in developmental psychology (pp. 216-262). New York: Springer-Verlag.
Mirkin, B. (1996). Mathematical classification and clustering. Dordrecht: Kluwer.
Mirkin, B. G., & Muchnik, I. (1996). Clustering and multidimensional scaling in Russia (1960-1990): A review. In P. Arabie, L. J. Hubert, & G. De Soete (Eds.), Clustering and classification (pp. 295-339). River Edge, NJ: World Scientific.
Molenaar, I. W. (1986). Deconfusing confusion matrices. In J. de Leeuw, W. Heiser, J. Meulman, & F. Critchley (Eds.), Multidimensional data analysis (pp. 139-145). Leiden: DSWO Press.
Murtagh, F. (Ed.). (1997). Classification Literature Automated Search Service, 26.
Nishisato, S. (1980). Analysis of categorical data: Dual scaling and its applications. Toronto: University of Toronto Press.
Nishisato, S. (1993). Elements of dual scaling: An introduction to practical data analysis. Mahwah, NJ: Erlbaum.
Nishisato, S. (1996a). An overview and recent developments in dual scaling. In W. Gaul & D. Pfeifer (Eds.), From data to knowledge (pp. 73-85). Heidelberg: Springer-Verlag.
Nishisato, S. (1996b). Gleaning in the field of dual scaling. Psychometrika, 61, 559-599.
Nosofsky, R. M. (1991). Stimulus bias, asymmetric similarity, and classification. Cognitive Psychology, 23, 94-140.
Nosofsky, R. M. (1992). Similarity scaling and cognitive process models. Annual Review of Psychology, 43, 25-53.
Okada, A. (1990). A generalization of asymmetric multidimensional scaling. In M. Schader & W. Gaul (Eds.), Knowledge, data and computer-assisted decisions (pp. 127-138). Heidelberg: Springer-Verlag.
Okada, A., & Imaizumi, T. (1980). Nonmetric method for extended INDSCAL model. Behaviormetrika, 7, 13-22.
Okada, A., & Imaizumi, T. (1987). Nonmetric multidimensional scaling of asymmetric proximities. Behaviormetrika, 21, 81-96.
Okada, A., & Imaizumi, T. (1994). Pasokon tajigen shakudo kouseihou [Multidimensional scaling using a personal computer]. Tokyo: Kyoritsu Shuppan.
Okada, A., & Imaizumi, T. (1997). Asymmetric multidimensional scaling of two-mode three-way proximities. Journal of Classification, 14, 195-224.
Pan, G. C., & Harris, D. P. (1991). A new multidimensional scaling technique based upon association of triple objects-Pijk and its application to the analysis of geochemical data. Journal of Mathematical Geology, 23, 861-886.
Pliner, V. (1996). Metric unidimensional scaling and global optimization. Journal of Classification, 13, 3-18.
Poole, K. T. (1990). Least squares metric, unidimensional scaling of multivariate linear models. Psychometrika, 55, 123-149.
Pruzansky, S. (1975). How to use SINDSCAL: A computer program for individual differences in multidimensional scaling. Murray Hill, NJ: AT&T Bell Laboratories.
Pruzansky, S., Tversky, A., & Carroll, J. D. (1982). Spatial versus tree representations of proximity data. Psychometrika, 47, 3-24.
Ramsay, J. O. (1977a). Monotonic weighted power transformations to additivity. Psychometrika, 42, 83-109.
Ramsay, J. O. (1977b). Maximum likelihood estimation in multidimensional scaling. Psychometrika, 42, 241-266.
Ramsay, J. O. (1978a). Confidence regions for multidimensional scaling analysis. Psychometrika, 43, 145-160.
Ramsay, J. O. (1978b). MULTISCALE: Four programs jor multidimensional scaling by the method of maximum likelihood. Chicago: National Educational Resources. Ramsay, J. O. (1980). Some small sample
results for maximum likelihood estimation in multidimensional scaling. Psychometrika, 45, 139-144. Ramsay, J. O. (1981). MULTISCALE. In S. S. Schiffman, M. L. Reynolds, & F. W. Young (Eds.),
Introduction to multidimensional scaling: Theory, method and applications (pp. 389-405). New York: Academic Press. Ramsay, J. O. (1982a). M U L T I S C A L E II manual. Mooresville, IN: International
Educational Services. Ramsay, J. O. (1982b). Some statistical approaches to multidimensional scaling data [with discussion]. Journal of the Royal Statistical Society A, 145, 285-312. Ramsay, J. O.
(1983). MULTISCALE: A multidimensional scaling program. American Statistician, 37, 326-327. Ramsay, J. O. (1988). Monotone splines in action. Statistical Science, 3, 425-441. Rodieck, R. W. (1977).
Metric of color borders. Science, 197, 1195-1196. Rohlf, F. J., & Slice, D. (1990). Extensions of the Procrustes method for the optimal superimposition of landmarks. Systematic Zoology, 39, 40-59.
Rosenberg, S. (1982). The method of sorting in multivariate research with applications selected from cognitive psychology and person perception. In N. Hirschberg & L. Humphreys (Eds.), Multivariate
applications in the social sciences (pp. 117-142). Mahwah, NJ: Erlbaum. Rosenberg, S., & Kim, M. P. (1975). The method of sorting as a data-gathering procedure in multivariate research. Multivariate
Behavioral Research, 10, 489-502. Rounds, J., & Tracey, T.J. (1993). Prediger's dimensional representation of Holland's RIASEC circumplex. Journal of Applied Psychology, 78, 875-890. Rounds, J.,
Tracey, T. J., & Hubert, L. (1992). Methods for evaluating vocational interest structural hypotheses. Journal of Vocational Behavior, 40, 239-259. Saito, T. (1991). Analysis of asymmetric proximity
matrix by a model of distance and additive terms. Behaviormetrika, 29, 45-60. Saito, T. (1993). Multidimensional scaling for asymmetric proximity data. In R. Steyer, K. F. Wender, & K. F. Widaman
(Eds.), Psychometric methodology (pp. 451-456). Stuttgart: Gustav Fischer. Saito, T., & Takeda, S. (1990). Multidimensional scaling of asymmetric proximity: Model and method. Behaviormetrika, 28,
49-80. Sattath, S., & Tversky, A. (1977). Additive similarity trees. Psychometrika, 42, 319-345. Sattath, S., & Tversky, A. (1987). On the relation between common and distinctive feature models.
Psychological Review, 94, 16-22. Shepard, R. N. (1962a). The analysis of proximities: Multidimensional scaling with an unknown distance function. I. Psychometrika, 27, 125-140. Shepard, R. N.
(1962b). The analysis of proximities: Multidimensional scaling with an unknown distance function. II. Psychometrika, 27, 219-246. Shepard, R. N. (1964). Attention and the metric structure of the
stimulus space. Journal of Mathematical Psychology, 1, 54-87. Shepard, R. N. (1972). A taxonomy of some principal types of data and of multidimensional methods for their analysis. In R. N. Shepard,
A. K. Romney, & S. B. Nerlove (Eds.), Multidimensional scaling: Theory and applications in the behavioral sciences: Vol. I. Theory (pp. 24-47). New York: Seminar Press. Shepard, R. N. (1974).
Representation of structure in similarity data: Problems and prospects. Psychometrika, 39, 373-421. Shepard, R. N. (1978). The circumplex and related topological manifolds in the study of perception.
In S. Shye (Ed.), Theory construction and data analysis in the behavioral sciences (pp. 29-80). San Francisco: Jossey-Bass.
J. Douglas Carroll and Phipps Arabie
Shepard, R. N. (1987). Toward a universal law of generalization. Science, 237, 1317-1323. Shepard, R. N. (1988). Toward a universal law of generalization [Letter to editor]. Science, 242, 944.
Shepard, R. N., & Arabie, P. (1979). Additive clustering: Representation of similarities as combinations of discrete overlapping properties. Psychological Review, 86, 87-123. Shiina, K. (1986). A
maximum likelihood nonmetric multidimensional scaling procedure for word sequences obtained in free-recall experiments. Japanese Psychological Research, 28 (2), 53-63. Shoben, E. J., & Ross, B. H.
(1987). Structure and process in cognitive psychology using multidimensional scaling and related techniques. In R. R. Ronning, J. A. Glover, J. C. Conoley, & J. C. Witt (Eds.), The influence of
cognitive psychology on testing (pp. 229-266). Mahwah, NJ: Erlbaum. Srinivasan, V. (1975). Linear programming computational procedures for ordinal regression. Journal of the Association for
Computing Machinery, 23, 475-487. Steenkamp, J.-B. E. M., van Trijp, H. C. M., & ten Berge, J. M. F. (1994). Perceptual mapping based on idiosyncratic sets of attributes. Journal of Marketing
Research, 31, 15-27. Stevens, S. S. (1972). A neural quantum in sensory discrimination. Science, 177, 749-762. Suppes, P., Krantz, D. H., Luce, R. D., & Tversky, A. (1989). Foundations of
measurement: Vol. II. Geometrical, threshold, and probabilistic representations. New York: Academic Press. Takane, Y. (1981). MDSORT: A special-purpose multidimensional scaling program for sorting
data. Behavior Research Methods & Instrumentation, 13, 698. Takane, Y. (1982). The method of triadic combinations: A new treatment and its application. Behaviormetrika, 11, 37-48. Takane, Y. (1989).
Ideal point discriminant analysis and ordered response categories. Behaviormetrika, 26, 31-46. Takane, Y., Bozdogan, H., & Shibayama, T. (1987). Ideal point discriminant analysis. Psychometrika, 52,
371-392. Takane, Y., & Carroll, J. D. (1981). Nonmetric maximum likelihood multidimensional scaling from directional rankings of similarities. Psychometrika, 46, 389-405. Takane, Y., & Sergent, J.
(1983). Multidimensional scaling models for reaction times and same-different judgments. Psychometrika, 48, 393-423. Takane, Y., & Shibayama, T. (1986). Comparison of models for stimulus recognition
data. In J. de Leeuw, W. Heiser, J. Meulman, & F. Critchley (Eds.), Multidimensional data analysis (pp. 119-138, 147-148). Leiden: DSWO Press. Takane, Y., Young, F. W., & de Leeuw, J. (1977).
Nonmetric individual differences multidimensional scaling: An alternating least squares method with optimal scaling features. Psychometrika, 42, 7-67. Tansley, B. W., & Boynton, R. M. (1976). A line,
not a space, represents visual distinctness of borders formed by different colors. Science, 191, 954-957. Tansley, B. W., & Boynton, R. M. (1977). Letter in reply to R. W. Rodieck. Science, 197, 1196.
Tartter, V. C. (in press). Language processes (2nd ed.). Newbury Park, CA: Sage. ten Berge, J. M. F. (1977). Orthogonal procrustes rotation for two or more matrices. Psychometrika, 42, 267-276. ten
Berge, J. M. F. (1988). Generalized approaches to the maxbet problem and the maxdiff problem, with applications to canonical correlations. Psychometrika, 53, 487-494. ten Berge, J. M. F., Kiers, H.
A. L., & Commandeur, J. J. F. (1993). Orthogonal Procrustes rotation for matrices with missing values. British Journal of Mathematical and Statistical Psychology, 46, 119-134. ten Berge, J. M. F., &
Knol, D. L. (1984). Orthogonal rotations to maximal agreement for two or more matrices of different column order. Psychometrika, 49, 49-55.
Tobler, W. (1979). Estimation of attractivities from interactions. Environment and Planning A, 11, 121-127. Torgerson, W. S. (1952). Multidimensional scaling: I. Theory and method. Psychometrika, 17,
401-419. Torgerson, W. S. (1958). Theory and methods of scaling. New York: Wiley. Tracey, T. J., & Rounds, J. (1993). Evaluating Holland's and Gati's vocational-interest models: A structural
meta-analysis. Psychological Bulletin, 113, 229-246. Tucker, L. R (1960). Intra-individual and inter-individual multidimensionality. In H. Gulliksen & S. Messick (Eds.), Psychological scaling: Theory
and applications (pp. 155-167). New York: Wiley. Tucker, L. R (1964). The extension of factor analysis to three-dimensional matrices. In N. Frederiksen & H. Gulliksen (Eds.), Contributions to
mathematical psychology (pp. 109-127). New York: Holt, Rinehart, and Winston. Tucker, L. R (1972). Relations between multidimensional scaling and three-mode factor analysis. Psychometrika, 37, 3-27.
Tucker, L. R, & Messick, S. J. (1963). An individual difference model for multi-dimensional scaling. Psychometrika, 28, 333-367. Tversky, A. (1977). Features of similarity. Psychological Review, 84,
327-352. Van Cutsem, B. (Ed.). (1994). Classification and dissimilarity analysis. Heidelberg, Germany: Springer-Verlag. Waller, N. G., Lykken, D. T., & Tellegen, A. (1995). Occupational interests,
leisure time interests, and personality: Three domains or one? Findings from the Minnesota Twin Registry. In D. Lubinski & R. V. Dawis (Eds.), Assessing individual differences in human behavior: New
concepts, methods, and findings (pp. 232-259). Palo Alto, CA: Consulting Psychologists Press. Weeks, D. G., & Bentler, P. M. (1982). Restricted multidimensional scaling models for asymmetric
proximities. Psychometrika, 47, 201-208. Weinberg, S. L., & Carroll, J. D. (1992). Multidimensional scaling: An overview with applications in educational research. In B. Thompson (Ed.), Advances in
social science methodology (pp. 99-135). Greenwich, CT: JAI Press. Weinberg, S. L., Carroll, J. D., & Cohen, H. S. (1984). Confidence regions for INDSCAL using the jackknife and bootstrap techniques.
Psychometrika, 49, 475-491. Weinberg, S. L., & Menil, V. C. (1993). The recovery of structure in linear and ordinal data: INDSCAL versus ALSCAL. Multivariate Behavioral Research, 28, 215-233.
Weisberg, H. F. (1974). Dimensionland: An excursion into spaces. American Journal of Political Science, 18, 743-776. Wilkinson, L. (1994). SYSTAT for DOS: Advanced applications, Version 6 edition.
Evanston, IL: Systat. Winsberg, S., & Carroll, J. D. (1989a). A quasi-nonmetric method for multidimensional scaling of multiway data via an extended INDSCAL model. In R. Coppi & S. Bolasco (Eds.),
Multiway data analysis (pp. 405-414). Amsterdam: North-Holland. Winsberg, S. & Carroll, J. D. (1989b). A quasi-nonmetric method of multidimensional scaling via an extended Euclidean model.
Psychometrika, 54, 217-229. Winsberg, S., & Ramsay, J. O. (1980). Monotonic transformations to additivity using splines. Biometrika, 67, 669-674. Winsberg, S., & Ramsay, J. O. (1981). Analysis of
pairwise preference data using B-splines. Psychometrika, 46, 171-186. Winsberg, S., & Ramsay, J. O. (1983). Monotone spline transformations for dimension reduction. Psychometrika, 48, 575-595. Wish,
M., & Carroll, J. D. (1974). Applications of individual differences scaling to studies of human perception and judgment. In E. C. Carterette & M. P. Friedman (Eds.), Handbook
of perception: Psychophysicaljudgment and measurement (Vol. 2, pp. 449-491). New York: Academic Press. Wold, H. (1966). Estimation of principal components and related models by iterative least
squares. In P. R. Krishnaiah (Ed.), Multivariate analysis (pp. 391-420). New York: Academic Press. Young, F. W. (1975). Methods for describing ordinal data with cardinal models. Journal of
Mathematical Psychology, 12, 416-436. Young, F. W. (1984a). The general Euclidean model. In H. G. Law, C. W. Snyder, Jr., J. A. Hattie, & R. P. McDonald (Eds.), Research methods for multimode data
analysis (pp. 440-469). New York: Praeger. Young, F. W. (1984b). Scaling. Annual Review of Psychology, 35, 55-81. Young, F. W., & Lewyckyj, R. (1981). ALSCAL-4 user's guide. Unpublished manuscript, L.
L. Thurstone Psychometric Laboratory, University of North Carolina, Chapel Hill. Zielman, B. (1993). Directional analysis of three-way skew-symmetric matrices. In O. Opitz, B. Lausen, & R. Klar
(Eds.), Information and classification (pp. 156-161). New York: Springer-Verlag. Zielman, B., & Heiser, W. J. (1993). Analysis of asymmetry by a slide-vector. Psychometrika, 58, 101-114. Zielman, B., &
Heiser, W. J. (1994). Models for asymmetric proximities. Internal Report RR-94-04. Leiden: Department of Data Theory, University of Leiden.
Stimulus Categorization
F. Gregory Ashby and W. Todd Maddox
The bacterium E. coli tumbles randomly in a molecular sea. When it encounters a stream of molecules that it categorizes as a nutrient, it suppresses tumbling and swims upstream to the nutrient
source. A recently inseminated female mouse sniffs urine near her nest. If she categorizes it as from an unfamiliar male mouse, implantation and pregnancy are prevented (Bruce, 1959; Parkes & Bruce,
1962). A man views a long sequence of portraits taken from high school yearbooks. Even though he graduated almost 50 years ago, he is remarkably accurate at deciding whether an arbitrary face belongs
to the category of his own high school classmates (Bahrick, Bahrick, & Wittlinger, 1975). All organisms divide objects and events in the environment into separate classes or categories. If they did
not, they would die and their species would become extinct. Therefore, categorization is among the most important decision tasks performed by organisms (Ashby & Lee, 1993). Technically, a
categorization or classification task is one in which there are more stimuli than responses. As a result, a number of stimuli are assigned the same response. In contrast, an identification task is
one in which there is a unique response for every stimulus. For example, many humans are in the category "women" and many objects are in the category "bells," but only one human is identified as
"Hillary Clinton" and only one object is Measurement,Judgment, and Decision Making Copyright 9 1998 by Academic Press. All rights of reproduction in any form reserved.
F. Gregory Ashby and W. Todd Maddox
identified as "the Liberty Bell." Although the theories and basic phenomena associated with categorization and identification are similar, this chapter focuses on categorization. A categorization
task is one in which the subject assigns a stimulus to one of the relevant categories. Many other tasks require the subject to access stored category information but not to make a
categorization judgment. For example, in a typicality rating task the subject sees a category exemplar (i.e., a stimulus belonging to the category) and rates how typical or representative of the
category it is. Other experiments might ask the subject to recall all the exemplars of a particular category. Although these related paradigms provide valuable information about category
representation, space limitations prevent us from considering them in detail. Instead, we will focus on the standard categorization experiment. Another important distinction is between categories and
concepts. Although these terms are sometimes used interchangeably, we define a category as a collection of objects belonging to the same group and a concept as a collection of related ideas. For
example, trees form a category and the many alternative types of love form a concept. When Ann Landers tells a reader that he is in lust rather than in love, she is doing something very similar to
categorization. Many of the categorization theories discussed here make definite predictions about the cognitive processes required for such a judgment. Even so, the representations of categories and
concepts are probably quite different and a discussion of the two is beyond the scope of this chapter.
I. THE CATEGORIZATION EXPERIMENT
This discussion may leave the impression that the focus of this
chapter is narrow. However, the standard categorization experiment has many degrees of freedom, which can result in a huge variety of tasks. Some prominent options available to the researcher
designing a categorization task are listed in Table 1. The first choice is the type of stimuli selected. One can choose stimuli that vary continuously along the relevant stimulus dimensions or that
only take on some number of discrete values. The most limiting case is binary-valued dimensions. In many such experiments the two levels are "presence" and "absence." Several categorization theories
make specific predictions only in the special case where the stimulus dimensions are binary valued. This is ironic because in natural settings binary-valued stimulus dimensions are rare, if they
exist at all. For example, in one popular experimental paradigm that consistently uses binary-valued dimensions, subjects learn that a patient received a battery of medical tests, that the outcome of
each test is either positive or negative, and that a certain pattern of test results is characteristic of a particular disease. The subjects then
TABLE 1
Options in the Design of a Categorization Experiment

Experimental components:
Stimulus dimensions: Continuous vs. discrete vs. binary valued; Separable vs. integral
Category structure: Overlapping vs. nonoverlapping; Few vs. many exemplars; Linearly vs. nonlinearly separable; Normal vs. nonnormal category distributions
Instructions to subjects: Supervised vs. unsupervised vs. free sorting; Well defined vs. partially defined vs. undefined categories
discover the outcome of a set of tests and make a diagnosis. Is this realistic? How many medical tests give binary-valued results? For example, high blood pressure could indicate heart disease, but
blood pressure does not have either a single high value or a single low value. Instead, it is continuous valued. A physician might decide on the basis of some continuous-valued blood pressure level
that a patient has high blood pressure, but then it is the decision that is binary valued, not the percept. Even a simple home pregnancy test is not binary valued. For a variety of reasons, the
testing material will display a continuum of hues, even if the woman is pregnant. When selecting stimuli, a second choice is about the interaction between pairs of stimulus dimensions. If the
dimensions are separable it is easy to attend to one and ignore the other, whereas if they are integral it is either difficult or impossible to do so (e.g., Ashby & Maddox, 1994; Ashby & Townsend,
1986; Garner, 1974; Maddox, 1992). Prototypical separable dimensions are hue and shape and prototypical integral dimensions are hue and brightness (e.g., Garner, 1977; Garner & Felfoldy, 1970; Hyman
& Well, 1968). Another set of options concerns the construction of the contrasting categories. For example, they can be overlapping or nonoverlapping. Overlapping categories have at least one
stimulus that is sometimes a member of one category and sometimes a member of another (also called probabilistic categories). Thus, whereas perfect performance is possible with nonoverlapping
categories, if the categories are overlapping, even the optimal classifier will make errors. Although much of the empirical work in categorization has used nonoverlapping categories, many natural
categories are overlapping. For example, a person might look like the prototype of one ethnicity but be a member of another. Overlapping categories are also theoretically important because they
provide a strong test of categorization theories. Virtually all theories of categorization can account for the kind of error-free performance that occurs when subjects categorize typewritten
characters as x's or
o's (nonoverlapping categories), but only a few (if any) can account for the errors that occur when subjects try to categorize handwritten characters as c's or a's (overlapping categories). When
designing the contrasting categories, the experimenter must also decide how many exemplars to place in each category. This factor may have a crucial effect on the strategy the subject uses to solve
the categorization problem. With only a few exemplars in each category the subject literally could memorize the correct response to every stimulus, but if the categories contain many exemplars, the
subject might be forced to use a more efficient strategy. The experimenter must also decide where to place the category exemplars in the stimulus space. Many choices are possible. Often, exemplars
are positioned so that a particular decision rule maximizes categorization accuracy. For example, the exemplars might be positioned so that a dimensional rule is optimal (i.e., ignore all stimulus
dimensions but one). One important choice, which selects between broad classes of rules, is whether to make a pair of categories linearly or nonlinearly separable. If the categories are linearly
separable, then categorization accuracy is maximized by a rule that compares a linear combination of the stimulus dimensional values to some fixed criterion value. One response is given if the
weighted sum exceeds the criterion and the other response is given if it does not. If a pair of categories is nonlinearly separable, then no such linear combination exists. The distinction between
linearly and nonlinearly separable categories is important because several theories predict that linearly separable categories should be significantly easier to categorize than nonlinearly separable
categories. Another prominent solution to the problem of how to position the exemplars in the stimulus space is to allow their position to be normally distributed on each stimulus dimension (Ashby &
Gott, 1988; Ashby & Maddox, 1990, 1992; Kubovy & Healy, 1977; Lee, 1963; Lee & Janke, 1964, 1965). With normally distributed exemplars, the categories are always linearly or quadratically separable.
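The linear decision rule described above can be sketched numerically. The following is a hypothetical illustration, not taken from the chapter: two overlapping categories whose exemplars are bivariate normal with equal spherical variance, classified by comparing a weighted sum of the two dimensional values to a fixed criterion. The category means, weights, and criterion are all invented for the example.

```python
import random

random.seed(1)

# Hypothetical overlapping categories: exemplars normally distributed on each
# dimension; means and standard deviation are invented for illustration.
MEAN_A, MEAN_B, SD = (2.0, 2.0), (4.0, 4.0), 1.0

def sample(mean, n):
    return [(random.gauss(mean[0], SD), random.gauss(mean[1], SD)) for _ in range(n)]

def classify(x, y):
    # Linear decision rule: respond "A" if a weighted sum of the dimensional
    # values falls below a fixed criterion, "B" otherwise.  With equal
    # spherical variances, weights (1, 1) and criterion 6 (the perpendicular
    # bisector of the two means) give the optimal linear bound.
    return "A" if x + y < 6.0 else "B"

trials = [(p, "A") for p in sample(MEAN_A, 5000)] + [(p, "B") for p in sample(MEAN_B, 5000)]
accuracy = sum(classify(*p) == label for p, label in trials) / len(trials)
print(round(accuracy, 2))
```

Because the categories overlap, even this optimal linear classifier makes errors: its accuracy here is roughly .92 rather than 1.0, which is exactly why overlapping categories provide a strong test of categorization theories.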
After the stimuli are selected and the categories are constructed, the experimenter must decide what instructions and feedback to give the subject. In a supervised task, the subject is told the
correct response at the end of each trial. In an unsupervised task, no feedback is given after each response, but the subject is told at the beginning of the experiment how many categories are
relevant. Finally, in a free sorting (or clustering) task, there is no trial-by-trial feedback and the subject is given no information about the number of relevant categories. Instead, the subject is
told to form his or her own categories, using as many as seem required. A task can be supervised only if an objectively correct response can be identified on every trial. In such a case, we say that
the categories are well
defined.¹ Of course, well-defined categories may also be used in an unsupervised or free sorting task, but sometimes these tasks are run with categories in which no objectively correct response
exists on any trial. Such undefined categories are quite common. For example, suppose a subject is shown color patches of varying hue and is asked to categorize them according to whether they make
the subject feel happy or sad. Because the subject's affective state is unobservable, there is no way to decide which response is correct. Thus, this experiment is an example of unsupervised
categorization with undefined categories. (It is not free sorting because the subject was told that the only possible categories are happy and sad.) Finally, partially defined categories are those
for which a correct response is identified on some but not on all trials. The most common use of partially defined categories is in experiments that use training and transfer conditions. During the
training phase, feedback is given on every trial, but during the transfer phase, no feedback is given. If a stimulus is presented for the first time during the transfer phase, it therefore will
usually have no objectively correct response.
II. CATEGORIZATION THEORIES
Categorization theories come in many different types and are expressed in different languages. This makes
them difficult to compare. In spite of their large differences, however, they all make assumptions about (1) representation, (2) category access, and (3) response selection. The representation
assumptions describe the perceptual and cognitive representation of the stimulus and the exemplars of the contrasting categories. The response selection assumptions describe how the subject selects a
response after the relevant information has been collected and the requisite computations have been performed. The category access assumptions delineate the various categorization theories. These
assumptions describe the information that must be collected from the stored category representations and the computations that must be performed on this information before a response can be made. At
least five different kinds of theories have been popular. The classical theory assumes that a category can be represented as a set of necessary and sufficient conditions, so categorization is a
process of testing whether the stimulus possesses each of these conditions (e.g., Bruner, Goodnow, & Austin, 1956; Smith & Medin, 1981).
¹ Our use of the term well defined is different from that of Neisser (1967), who distinguished between well- and ill-defined categories. According to Neisser, well-defined categories are structured according to simple logical rules, whereas ill-defined categories are not. These definitions are somewhat ambiguous because the term simple is not rigorously defined. Rules that are easily verbalized are usually called simple, but, for example, it is unclear whether a rule that can be verbalized, but not easily, is also simple.
Prototype theory assumes that the category representation is dominated by the prototype, or most typical member, and that categorization is a process of comparing the similarity
of the stimulus to the prototype of each relevant category (Posner & Keele, 1968, 1970; Reed, 1972; Rosch, 1973, 1977). Feature-frequency theory assumes the category representation is a list of the
features contained in all exemplars of the category, along with their relative frequency of occurrence (Estes, 1986a; Franks & Bransford, 1971; Reed, 1972). Categorization is a process of analyzing
the stimulus into its component features and computing the likelihood that this particular combination of features was generated from each of the relevant categories. Exemplar theory assumes the
subject computes the similarity of the stimulus to each stored exemplar of all relevant categories and selects a response on the basis of these similarity computations (Brooks, 1978; Estes, 1986a;
Hintzman, 1986; Medin & Schaffer, 1978; Nosofsky, 1986). Finally, decision bound theory (also called general recognition theory) assumes the subject constructs a decision bound that partitions the
perceptual space into response regions (not necessarily contiguous), one for each relevant category. On each trial, the subject determines the region in which the stimulus representation falls, and
then emits the associated response (Ashby & Gott, 1988; Ashby & Lee, 1991, 1992; Ashby & Townsend, 1986; Maddox & Ashby, 1993). Much of the work on testing and comparing these theories has focused on
response accuracy. This is a good dependent variable because it has high ecological validity and is easy to estimate. On the other hand, response accuracy is a fairly crude, global measure of
performance. In the language of Marr (1982), response accuracy is good at testing between models written at the computational level, but it is poor at discriminating between models written at the
algorithmic level. This focus on response accuracy has not yet been a serious problem because the most popular categorization models are computational rather than algorithmic. That is, they specify
what is computed, but they do not specify the algorithms that perform those computations. Currently, however, there is an awakening interest in algorithmic level descriptions of the categorization
process. A test between algorithmic level models often requires a dependent variable more sensitive than overall accuracy to the microstructure of the data. In the categorization literature,
algorithmic level models are most frequently tested against trial-by-trial learning data, although response times could also be used. The most popular architecture within which to implement the
various algorithms that have been proposed has been the connectionist network and virtually all of the current network models instantiate some version of feature-frequency theory (e.g., Gluck &
Bower, 1988) or exemplar theory (e.g., Estes, 1993, 1994; Kruschke, 1992). It is important to realize, however, that network versions
of classical, prototype, or decision bound theories could also be constructed. Thus, there is no such thing as the connectionist theory of categorization. Rather, connectionist networks should be
viewed as an alternative architecture via which any computational theory of categorization can be expressed at the algorithmic level. The next three sections of this chapter examine the
representation, response selection, and category access assumptions in turn. This provides a common language from which to describe and formally compare the various theories. Section VI reviews the
empirical tests of the various theories and the last section identifies some important unsolved problems.
III. STIMULUS, EXEMPLAR, AND CATEGORY REPRESENTATION
Figure 1 illustrates the relations
between the various theories of stimulus and category representation. The most fundamental distinction is whether the theories assume numeric or nonnumeric representation. Nonnumeric models assume a
symbolic or linguistic representation. These models assume that stimuli and category exemplars are described by a generative system of rules, which might be given by a production system or a grammar.
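A toy sketch of this nonnumeric view follows; the grammars and strings are invented for illustration and are not from the chapter. Each category is identified with a regular grammar, here written as a regular expression, and a stimulus is assigned to the category whose rules could have generated it.

```python
import re

# Hypothetical rule systems: category A strings are generated by the regular
# grammar (ab)+, category B strings by the regular grammar (ba)+.
GRAMMARS = {"A": re.compile(r"(?:ab)+"), "B": re.compile(r"(?:ba)+")}

def categorize(stimulus):
    # Categorization = determining which set of rules generated the stimulus.
    generators = [cat for cat, g in GRAMMARS.items() if g.fullmatch(stimulus)]
    return generators[0] if len(generators) == 1 else None

print(categorize("ababab"))  # A
print(categorize("baba"))    # B
print(categorize("abba"))    # None: neither rule system generates this string
```

Testing which grammar generated the stimulus is the nonnumeric analogue of the similarity computations performed by the numeric models discussed below.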
Each category is associated with a unique set of rules, so categorization is equivalent to determining which set of rules generated the stimulus. Early proponents of nonnumeric representation in
psychology were Allen Newell and Herbert Simon (e.g., Newell & Simon, 1972; see, also, Anderson, 1975; Klahr, Langley, & Neches, 1987). Numeric representation is of two types. Dimensional theories
assume a geometric representation that contains a small number of continuous-valued dimensions. The most widely known examples in psychology are multidimensional scaling (MDS; Kruskal, 1964a, 1964b;
Shepard, 1962a, 1962b; Torgerson, 1958; Young & Householder, 1938) and signal detection theory (Ashby & Townsend, 1986; Green & Swets, 1966). Feature theories assume
FIGURE 1 Hierarchical relations among theories of stimulus representation. [Tree diagram; legible node labels include "Representation," "numeric," "featural," "dimensional," and "point."]
the stimulus can be represented as a set of features, where a feature is either present or absent and often has a nested relation that is naturally represented in a treelike connected graph. Perhaps
the most notable feature models in psychology were developed by Amos Tversky (Corter & Tversky, 1986; Sattath & Tversky, 1977; Tversky, 1972, 1977). Although some argue that feature models are
nonnumeric (e.g., Pao, 1989), we classify them as numeric because it is usually possible to depict feature representations geometrically by defining a feature as a binary-valued dimension. In this
case, each dimension contributes only one bit of information (i.e., presence or absence), so the resulting perceptual space frequently has many dimensions. With binary-valued dimensions, the natural
distance measure is the Hamming metric, defined as the number of features on which the two stimuli disagree. The dissimilarity measure proposed by Tversky (1977) in his feature contrast model,
generalizes Hamming distance. Assumptions about stimulus representation are impossible to test in isolation. At the very least, extra assumptions must be added about how the subject uses the
representation. One obvious choice is to assume that representations that are close together are similar or related. A number of attempts to criticize numeric representation, and especially
dimensional representation, have been based on this assumption. As we will see, however, a critical point of contention is how one should define "close together." Most formal theories of
categorization assume a numeric representation. In a number of exemplar models, application is restricted to experiments that use binary-valued stimuli, so the stimulus and category representations
are featural. This includes the context model (Medin & Schaffer, 1978), the array-similarity model (Estes, 1986a), and several network models (Estes, 1993, 1994; Gluck & Bower, 1988; Hurwitz, 1990).
Most other formal models assume a dimensional representation. Most of these use a multidimensional scaling (MDS) representation that assumes (1) the stimuli and category exemplars are represented as
points in a multidimensional space and (2) stimulus similarity decreases with the distance between the point representations. Exemplar models based on an MDS representation include the generalized
context model (GCM; Nosofsky, 1986) and ALCOVE (Kruschke, 1992), whereas MDS-based prototype models include the fuzzy logical model of perception (FLMP; Massaro & Friedman, 1990) and the comparative
distance model (Reed, 1972). The assumption that similarity decreases with psychological distance is controversial. A distance-based perceptual representation is valid only if the psychological
distances satisfy a set of distance axioms (Ashby & Perrin, 1988; Tversky, 1977). These include the triangle inequality, symmetry, minimality, and that all self-distances are equal. Unfortunately,
there is abundant empirical evidence against these axioms. For example, Tversky (1977) reported that subjects rate the similarity of China to North Korea to be less
4 Stimulus Categorization
than the similarity of North Korea to China, an apparent violation of symmetry (see also Krumhansl, 1978). The triangle inequality holds if the psychological distance between stimuli i and j plus the
distance between stimuli j and k is greater than or equal to the distance between stimuli i and k. Although the triangle inequality is difficult to test empirically, Tversky and Gati (1982) proposed
an empirically testable axiom, called the corner inequality, that captures the spirit of the triangle inequality. Tversky and Gati (1982) tested the corner inequality and found consistent violations
for stimuli constructed from separable dimensions. It is important to note, however, that although these results are problematic for the assumption that similarity decreases with psychological
distance, they are not necessarily problematic for the general notion of dimensional representation, or even for the point representation assumption of the MDS model. For example, Krumhansl (1978)
argued that similarity depends not only on the distance between point representations in a low dimensional space, but also on the density of representations around each point. Her distance-density
model can account for violations of symmetry but not for violations of the triangle inequality. Nosofsky (1991b) argued that many violations of the distance axioms are due to stimulus and response
biases and not to violations of the MDS assumptions. Decision bound theory assumes a dimensional representation, but one that is probabilistic rather than deterministic. A fundamental postulate of
the theory is that there is trial-by-trial variability in the perceptual information obtained from every object or event (Ashby & Lee, 1993). Thus, a stimulus is represented as a multivariate
probability distribution. The variability is assumed to come from many sources. First, physical stimuli are themselves intrinsically variable. For example, it is well known that the number of photons
emitted by a light source of constant intensity and constant duration varies probabilistically from trial to trial (i.e., it has a Poisson distribution; Geisler, 1989; Wyszecki & Stiles, 1967).
Second, there is perireceptor noise in all modalities. For example, in vision, the amount of light reflected off the cornea varies probabilistically from trial to trial. Third, there is spontaneous
activity at all levels of the central nervous system that introduces more noise (e.g., Barlow, 1956, 1957; Robson, 1975). One advantage of probabilistic representation is that decision bound theory
is not constrained by any of the distance axioms. When stimuli are represented by probability distributions, a natural measure of similarity is distributional overlap (Ashby & Perrin, 1988). The
distributional overlap similarity measure contains MDS Euclidean distance measures of similarity as a special case but, unlike the distance measures, is not constrained by the distance axioms (Ashby
& Perrin, 1988; Perrin, 1992; Perrin & Ashby, 1991). In a categorization task, the stimulus is usually available to the subject up to the time that a response is made. Thus, long-term memory has
little or
F. Gregory Ashby and W. Todd Maddox
no effect on the representation of the stimulus. In contrast, exemplars of the competing categories are not available, so a decision process requiring exemplar information must access the exemplar
representations from memory. As a consequence, the representation of category exemplars is affected critically by the workings of memory. Therefore, unlike a theory of stimulus representation, a
complete theory of exemplar representation must model the effects of memory. Nevertheless, most categorization theories represent stimuli and exemplars identically. Recently, a few attempts have been made to model the effects of memory on exemplar representation, but these have been mostly limited to simple models of trace-strength decay (e.g., Estes, 1994; Nosofsky, Kruschke, & McKinley, 1992). Clearly, more work is needed in this area. Another area where more sophisticated modeling is needed is in category representation. In exemplar theory, a category is represented simply as the union or set of representations of all exemplars belonging to that category. Prototype theory assumes the category representation is dominated by the category prototype. Feature-frequency theory assumes the category representation is a list of all features found in the category exemplars. Classical theory assumes a category is represented by a set of necessary and sufficient conditions required for category membership. Thus, there is considerable disagreement among the theories about how much consolidation of the category representation is performed by the memory processes over time. Exemplar
theory takes the extreme view that there is little or no consolidation, whereas classical theory posits so much consolidation that exemplar information is no longer available. Although decision bound
theory makes no concrete assumptions about category representation, several applications have tested the hypothesis that subjects assume categories are normally distributed (e.g., Ashby & Gott, 1988;
Ashby & Lee, 1991; Ashby & Maddox, 1990, 1992; Maddox & Ashby, 1993). There is good evidence that many natural categories share properties of the normal distribution or at least that subjects assume
that they do. First, natural categories generally contain a large number of exemplars (e.g., there are many trees). Second, the dimensions of many natural categories are continuous valued. Third,
many natural categories overlap. Finally, there is evidence that people naturally assume category exemplars are unimodally and symmetrically distributed around some prototypical value (Fried & Holyoak, 1984; Flannagan, Fried, & Holyoak, 1986). As early as 1954, Black argued that "if we examine instances of the application of any biological term, we shall find ranges, not classes--specimens (i.e., individuals or species) arranged according to the degree of their variation from certain typical or 'clear' cases" (p. 28). The normal distribution has all these properties. It assumes an
unlimited number of exemplars, dimensions that are
continuous valued, a small number of atypical exemplars (so it overlaps with other nearby categories), and it is unimodal and symmetric. According to this interpretation, subjects initially assume
the exemplars of an unfamiliar category have a multivariate normal distribution in stimulus space. Gaining experience with a category is a process of estimating the mean exemplar value on each
dimension, the variances, and the correlations between dimensions. These estimates allow the subject to compute the likelihood that any stimulus belongs to this category. In fact, subjects need not
even assume normality. Suppose they estimate the exemplar means, variances, correlations, and category base rates and then try to infer the correct distribution. If they do not know the appropriate
family of distributions, it turns out that the multivariate normal is an excellent choice because it takes maximal advantage of the information available (technically, it is the maximum entropy
inference; Myung, 1994). Given estimates of the category means, variances, correlations, and base rates, to infer that the category distribution is anything other than normal requires extra
assumptions. In other words, the normal distribution is the appropriate noncommittal choice in such situations. Thus, the multivariate normal distribution is an attractive model of category
representation (Ashby, 1992a; Fried & Holyoak, 1984; Flannagan, Fried, & Holyoak, 1986).

IV. RESPONSE SELECTION

There are two types of response selection models. Deterministic models assume that, if
on different trials the subject receives the same perceptual information and accesses the same information from memory, then the subject will always select the same response. Probabilistic models
assume the subject always guesses, although usually in a sophisticated fashion. In other words, if the evidence supports the hypothesis that the stimulus belongs to category A, then a deterministic
model predicts that the subject will respond A with probability 1, whereas a probabilistic model predicts that response A will be given with probability less than 1 (but greater than 0.5). In many
categorization experiments, observable responding is not deterministic. It is not uncommon for a subject to give one response the first time a stimulus is shown and a different response the second
time, even if the subject is experienced with the relevant categories (e.g., Estes, 1995). It is important to realize that such data do not necessarily falsify deterministic response selection models.
The observable data may be probabilistic because of noise in the subject's perceptual and memory systems. For example, perceptual noise may cause the subject to believe that a different stimulus was
presented, a stimulus belonging to the incorrect category. Thus, the distinction between deterministic and probabilistic response selection models
does not apply at the observable level of the data but at the unobservable level of the subject's decision processes. The question of whether response selection is deterministic or probabilistic is
not limited to categorization tasks but may be asked about any task requiring an overt response from the subject. In many tasks, the evidence overwhelmingly supports deterministic response selection.
For example, if subjects are asked whether individual rectangles are taller than they are wide or wider than they are tall, then, even at the data level, responding is almost perfectly deterministic
(Ashby & Gott, 1988). In other tasks, the evidence overwhelmingly supports probabilistic response selection. In a typical probability matching task, a subject sits in front of two response keys. The
right key is associated with a red light and the left key is associated with a green light. On each trial, one of these two lights is turned on. The red light is turned on with probability p and the
green light is turned on with probability 1 - p. The subject's task is to predict which light will come on by pressing the appropriate button. Consider the case in which p is considerably greater
than one-half. A deterministic rule predicts that the subject will always press the right key. This choice also maximizes the subject's accuracy. However, the data clearly indicate that subjects
sometimes press the right key and sometimes the left. In fact, they approximately match the objective stimulus presentation probabilities by pressing the right key on about 100p% of the trials (e.g.,
Estes, 1976; Herrnstein, 1961, 1970). This behavior is known as probability matching. Therefore, the consensus is that humans use deterministic response selection rules in some tasks and
probabilistic rules in other tasks. In categorization, however, the controversy is still unresolved. It is even possible that subjects use deterministic rules in some categorization tasks and probabilistic rules in others (Estes, 1995). Virtually all models assuming a probabilistic response selection rule assume a rule of the same basic type. Consider a categorization task with categories A and B. Let S_ij denote the strength of association between stimulus i and category j (j = A or B). The algorithm used to compute this strength will depend on the specific categorization theory. For example, in some prototype models, S_iA is the similarity between stimulus i and the category A prototype. In an exemplar model, S_iA is the sum of the similarities between the stimulus and all exemplars of category A. In many connectionist models, S_iA is the sum of weights along paths between nodes activated by the stimulus and output nodes associated with category A. Virtually all categorization models assuming a probabilistic response selection rule assume the probability of responding A on trials when stimulus i is presented equals
P(R_A | i) = β_A S_iA / (β_A S_iA + β_B S_iB)          (1)
where β_j is the response bias toward category j (with β_j ≥ 0). Without loss of generality, one can assume that β_B = 1 − β_A. In many categorization models the response biases are set to β_A = β_B = 0.5. Equation (1) has various names. It was originally proposed by Shepard (1957) and Luce (1963), so it is often called the Luce-Shepard choice model. But it is also called the similarity-choice model, the biased-choice model, or the relative-goodness rule. If S_iA is interpreted as the evidence favoring category A, and if there is no response bias, then Eq. (1) is also equivalent to
probability matching. Deterministic decision rules are also of one basic type. Let h(i) be some function of the stimulus representation with the property that stimulus i is more likely to be a member
of category A when h(i) is negative and a member of category B when h(i) is positive. For example, in prototype or exemplar models h(i) might equal S_iB − S_iA. The deterministic decision rule is to

respond A if h(i) < δ + ε_c; respond B if h(i) > δ + ε_c.          (2)

In the unlikely event that h(i) exactly equals δ + ε_c, the subject is assumed to guess. As with the β parameter in the similarity-choice model, δ is a response bias. Response A is favored when δ > 0 and response B is favored when δ < 0. The random variable ε_c represents criterial noise; that is, variability in the subject's memory of the criterion δ. It is assumed to have a mean of 0 and a variance of σ_c² and is usually assumed to be normally distributed (e.g., Maddox & Ashby, 1993). Although the similarity-choice model and the deterministic decision rule of Eq. (2) appear very
different, it is well known that the similarity-choice model is mathematically equivalent to a number of different deterministic decision rules (e.g., Marley, 1992; Townsend & Landon, 1982). Ashby
and Maddox (1993) established another such equivalence that is especially useful when modeling categorization data. Suppose the subject uses the deterministic response selection rule of Eq. (2) and
he or she defines the discriminant function h(i) as h(i) = log(S_iB) − log(S_iA). Assume the criterial noise ε_c has a logistic distribution. Ashby and Maddox (1993) showed that under these conditions the probability of responding A on trials when stimulus i is presented is equal to

P(R_A | i) = β_A (S_iA)^γ / [β_A (S_iA)^γ + β_B (S_iB)^γ]          (3)

where γ = π/(σ_c √3) and β_A = e^(γδ) / (1 + e^(γδ)) = 1 − β_B.
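The equivalence between the deterministic rule of Eq. (2) and the probabilistic rule of Eq. (3) can be checked numerically. The sketch below simulates the deterministic rule with logistic criterial noise and compares the resulting choice probability with the closed-form Eq. (3); the similarity values S_iA, S_iB, the bias δ, and the noise level σ_c are all hypothetical.

```python
# Monte Carlo check: a deterministic rule "respond A if h(i) < delta + eps"
# with h(i) = ln(S_iB) - ln(S_iA) and logistic criterial noise eps
# (mean 0, sd sigma_c) matches Eq. (3) with gamma = pi / (sigma_c * sqrt(3)).
# All parameter values below are hypothetical.
import math
import random

S_iA, S_iB = 2.0, 1.0      # hypothetical stimulus-category association strengths
delta = 0.3                # response bias (delta > 0 favors response A)
sigma_c = 1.5              # sd of criterial noise

gamma = math.pi / (sigma_c * math.sqrt(3))
beta_A = math.exp(gamma * delta) / (1 + math.exp(gamma * delta))
beta_B = 1 - beta_A

# Closed-form Eq. (3)
p_closed = beta_A * S_iA**gamma / (beta_A * S_iA**gamma + beta_B * S_iB**gamma)

# Simulated deterministic rule of Eq. (2) with logistic criterial noise
rng = random.Random(1)
scale = sigma_c * math.sqrt(3) / math.pi          # logistic scale giving sd sigma_c
h = math.log(S_iB) - math.log(S_iA)
n = 200000
hits = 0
for _ in range(n):
    u = rng.random()
    eps = scale * math.log(u / (1 - u))           # logistic(0, scale) variate
    if h < delta + eps:
        hits += 1
p_sim = hits / n

print(round(p_closed, 3), round(p_sim, 3))        # the two probabilities agree closely
```

With these parameter values γ ≈ 1.21, so the simulated subject "overmatches": responding is less variable than probability matching (γ = 1) would predict.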
In other words, the probability matching behavior of the similarity-choice model, which results when γ = 1, is indistinguishable from a deterministic decision rule in which σ_c = π/√3. On the other hand, if the subject uses a deterministic decision rule, but σ_c < π/√3, then there is an equivalent probabilistic decision rule of the Eq. (3) type in which γ > 1. In this case, the observable responding is less variable than predicted by probability matching and the subject is said to be overmatching (Baum, 1974). Similarly, if σ_c > π/√3, then there is an equivalent probabilistic decision rule in which γ < 1. In this case, the observable responding is more variable than predicted by probability matching and the subject is said to be undermatching (Baum, 1974). These results
indicate that for any deterministic response selection rule there is a probabilistic rule that is mathematically equivalent (and vice versa). Despite this fact, there is some hope for discriminating
between these two strategies. This could be done by fitting the Eq. (3) model to categorization data from a wide variety of experiments and comparing the resulting estimates of γ. For example, suppose a subject is using the deterministic rule of Eq. (2). If so, there is no good reason to expect σ_c to turn out to equal π/√3 exactly (the value equivalent to probability matching). Also, it is reasonable to expect σ_c to vary with the nature of the stimuli and the complexity of the rule that separates the contrasting categories. Therefore, if the estimates of γ are consistently close to 1.0 or even if they are consistently close to any specific value, then a probabilistic rule is more likely than a deterministic rule. On the other hand, if the γ estimates vary across experiments and especially if they are larger in those tasks where less criterial noise is expected, then a deterministic rule is more likely than a probabilistic rule. Another possibility for testing between deterministic and probabilistic response selection rules is to examine the γ estimates as a function of the subject's experience in the task. With probabilistic rules, γ might change with experience, but there is no reason to expect a consistent increase or decrease with experience. On the other hand, as the subject gains experience in the task, criterial noise should decrease because the strength of the subject's memory trace for the rule that separates the contrasting categories should increase. Thus, deterministic decision rules predict a consistent increase of γ with experience
(Koh, 1993). A number of studies explicitly tried to test whether subjects use deterministic or probabilistic response selection rules in a simple type of categorization experiment called the
numerical decision task (Hammerton, 1970; Healy & Kubovy, 1977; Kubovy & Healy, 1977; Kubovy, Rapoport, & Tversky, 1971; Lee & Janke, 1964, 1965; Ward, 1973; Weissmann, Hollingsworth, & Baird, 1975).
In these experiments, stimuli are numbers and
two categories are created by specifying two different normal distributions. On each trial, a number is sampled from one of the distributions and shown to the subject. The subject's task is to name
the category (i.e., the distribution) from which it was drawn. In general, these studies have favored deterministic rules over probabilistic rules. For example, Kubovy et al. (1971) found that a
fixed cutoff accounted for the data significantly better than a probability matching model, even when the probability matching model was allowed a response bias parameter. Maddox and Ashby (1993) fit
the Eq. (3) response selection model to data from 12 different categorization experiments. Category similarity S_ij was computed from a powerful exemplar model. In several of these experiments the stimuli were rectangles. The category prototypes for two of these experiments are shown in Figure 2. In both cases the contrasting categories were linearly separable. In the first case, the rule that maximized accuracy was as follows: Respond A if the stimulus rectangle is higher than it is wide. Respond B if it is wider than it is high.
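Both optimal rules amount to simple deterministic decision bounds on the rectangle's height and width. A minimal sketch (the criterion value in the second rule is hypothetical):

```python
# The two deterministic rules for the rectangle tasks: the first compares
# height with width directly (no remembered criterion needed); the second
# compares height + width with a criterion held in memory. The criterion
# value is hypothetical.

def rule_task1(height, width):
    """Respond A if the rectangle is taller than it is wide, else B."""
    return "A" if height > width else "B"

def rule_task2(height, width, criterion=10.0):
    """Respond A if height + width exceeds a remembered criterion, else B."""
    return "A" if height + width > criterion else "B"

print(rule_task1(5, 3), rule_task1(3, 5))      # A B
print(rule_task2(6, 6), rule_task2(4, 3))      # A B
```

Only the second rule carries a memory load (the stored criterion), which is why criterial noise is expected in the second task but not the first.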
FIGURE 2
Category prototypes for two categorization experiments.
In the second case, the optimal rule was as follows: Respond A if the height plus the width is greater than some criterion amount. Respond B if it is less than this criterion amount. A major
difference between these two tasks is that the second task requires the subject to maintain a criterion in memory, whereas the first task does not. Thus, if the subject is using a deterministic
response selection rule there should be virtually no criterial noise in the first task but a significant amount in the second, and as a result γ should be much larger in the first task than the second. On the other hand, the Eq. (3) probabilistic rule must predict that γ is the same in the two tasks. The stimuli were the same, the instructions were the same, and optimal accuracy was the same. The categories even had the same amount of variability in the two tasks. As it turned out, however, in the first task the median γ estimate was 2.59, whereas in the second task it was 1.00 (the
median was taken across subjects). This is compelling evidence that subjects used a deterministic response selection rule in these tasks. Estes (1995) argued that subjects may have used deterministic
response selection rules in these tasks because "when stimuli are defined on only one or two sensory dimensions, subjects can discover a criterion that defines category membership (e.g., all angles
greater than 45° belong to Category A) and recode stimuli in terms of their relation to the criterion, whereas with complex, multiattribute stimuli such recoding may be difficult or impossible" (p.
21). Several other data sets fit by Maddox and Ashby (1993) provide at least a partial test of this hypothesis. Six experiments used categories in which the optimal decision rule was highly nonlinear
(it was quadratic). In at least four of these cases, there was no straightforward verbal description of this rule, so it would be extremely difficult for subjects to perform the kind of recoding that
Estes describes. The subjects in these experiments all completed several experimental sessions and Maddox and Ashby (1993) fit the data of each individual session separately. Across the six
experiments, the median γ estimates (computed across subjects) ranged from 1.13 to 4.29 on the first experimental session and from 1.51 to 5.67 on the last session. Thus, even when the subjects were inexperienced, observable responding was less variable than predicted by probability matching. More interesting, however, is a comparison of the γ estimates from the first session to the last. In all six experiments, the estimated value of γ increased from the first session to the last. These results favor the deterministic response selection hypothesis. It is true, however, that the stimuli in
these six experiments varied on only two physical dimensions. Stimuli were rectangles that varied in height and width or circles that varied in size and orientation of a radial line. Thus, the Maddox
and Ashby (1993) results do
not rule out the possibility that subjects switch to a probabilistic response selection rule when the stimuli vary on many dimensions. Koh (1993) also examined the effects of experience on the amount
of variability in observable responding. Her stimuli were lines that varied in length and orientation and she used categories that were overlapping and linearly separable. As a measure of response
variability, she estimated a parameter that is essentially equivalent to γ. For all subjects that were able to learn the task, γ increased with practice and eventually asymptoted at values significantly larger than 1. Perhaps more interesting, however, was that when the model was refit after the data were averaged across subjects, the best fitting value of γ was very close to 1. In
other words, although almost all subjects were overmatching, the averaged data satisfied probability matching. This apparent paradox occurred because, although each subject consistently used some
decision rule, different subjects settled on different rules. Thus, there was no single rule that described the averaged data. This discussion indicates that experimental conditions can have a large
effect on whether the resulting data seem to support deterministic or probabilistic response selection rules. Some of the more important experimental factors are listed in Table 2. Responding will
usually be less variable if the rule that maximizes categorization accuracy has a simple verbal description, if the subjects are highly practiced, and if single subject analyses are performed. Also,
any factors that reduce perceptual or criterial noise should make responding less variable. Perceptual noise can be reduced by using high contrast rather than low contrast displays and response
terminated rather than tachistoscopic presentation. Criterial noise can be reduced if the task uses an external rather than an internal criterion or referent. On the other hand, responding will
usually be more variable if the optimal rule has no straightforward verbal description, subjects are inexperienced in the task, and the data are averaged across subjects. Although these experimental
factors might affect the appearance of the data, there is no reason to believe that they will induce a subject to switch, say, from a deterministic to a probabilistic response selection rule.

TABLE 2
Experimental Conditions Most Likely to Produce Data That Appear to Support Deterministic or Probabilistic Response Selection Rules

Deterministic response selection:
Optimal rule is simple (optimal rule has verbal analogue)
Optimal rule uses external criterion (limited memory requirement)
Experienced subjects
Single subject analyses

Probabilistic response selection:
Optimal rule is complex (optimal rule has no verbal analogue)
Optimal rule uses internal criterion (extensive memory requirement)
Inexperienced subjects
Averaging across subjects

Until there is good evidence to the contrary, the simplest hypothesis is that subjects use the same type of response selection
rule in virtually all categorization tasks. Because of the identifiability problems, deterministic and probabilistic rules are difficult but not impossible to discriminate between. Currently, the
best available evidence favors deterministic rules, but the debate is far from resolved.
V. CATEGORY ACCESS

This section reviews five major theories about the type of category information that is accessed and the computations that are performed on that information during categorization. Before beginning, however, it is instructive to consider the optimal solution to the category access problem. The optimal classifier uses the decision rule that maximizes categorization accuracy. Consider a task with two categories A and B. Suppose the stimulus is drawn from category A with probability P(C_A) and from category B with probability P(C_B). Let f_A(i) and f_B(i) be the likelihood that stimulus i is a member of category A or B, respectively. Then on trials when stimulus i is presented, the optimal classifier uses the deterministic rule:

if f_A(i)/f_B(i) > P(C_B)/P(C_A) then respond A;
if f_A(i)/f_B(i) = P(C_B)/P(C_A) then guess;
if f_A(i)/f_B(i) < P(C_B)/P(C_A) then respond B.

The set of all stimuli for which f_A(i)/f_B(i) = P(C_B)/P(C_A) is a decision bound because it partitions the perceptual space into response regions. In general, the optimal
decision bound can have any shape, but if each category representation is a multivariate normal distribution, then the optimal decision bound is always linear or quadratic. A subject who would like
to respond optimally in a categorization task must solve several problems. First, in a real experiment, the subject will have experience with only a limited sample of exemplars from the two
categories. Therefore, even with perfect memory and an error-free perceptual system it is impossible to estimate perfectly the category likelihoods f_A(i) and f_B(i). At best, the subject could compute imperfect estimates of f_A(i) and f_B(i) (and also of the base rates) and use these in the optimal decision rule (Ashby & Alfonso-Reese, 1995). In this case, the subject's decision bound will not agree
with the optimal bound. Assuming the subject chooses this path, the next problem is to select an estimator.
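The optimal likelihood-ratio rule above can be sketched for the simple case of univariate normal category likelihoods; all means, variances, and base rates below are hypothetical.

```python
# Optimal classifier for two categories with normal likelihoods: respond A
# when f_A(i)/f_B(i) exceeds P(C_B)/P(C_A), respond B when it falls below,
# and guess at equality. Category parameters and base rates are hypothetical.
import math

def normal_pdf(x, mu, sigma):
    """Density of a normal distribution with mean mu and sd sigma at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def optimal_response(x, mu_A, sd_A, mu_B, sd_B, p_A):
    """Likelihood-ratio rule of the optimal classifier."""
    ratio = normal_pdf(x, mu_A, sd_A) / normal_pdf(x, mu_B, sd_B)
    crit = (1 - p_A) / p_A                      # P(C_B) / P(C_A)
    if ratio > crit:
        return "A"
    if ratio < crit:
        return "B"
    return "guess"

# With equal base rates and equal variances, the decision bound is the
# midpoint of the two category means (here, x = 5).
print(optimal_response(4.0, mu_A=3.0, sd_A=1.0, mu_B=7.0, sd_B=1.0, p_A=0.5))  # A
print(optimal_response(6.0, mu_A=3.0, sd_A=1.0, mu_B=7.0, sd_B=1.0, p_A=0.5))  # B
```

With unequal variances the same rule yields a quadratic bound, consistent with the multivariate normal case described in the text.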
In statistics, the likelihoods f_A(i) and f_B(i) are called probability density functions. Density function estimators are either parametric or nonparametric. Parametric estimators assume the unknown
density function is of a specific type. In our language, this is equivalent to assuming some a priori category structure. For example, if the subject assumes that the category A distribution is
normal, then the best method of estimating fA (i) is to estimate separately the category mean and variance and insert these estimates into the equation that describes the bell-shaped normal density
function (assuming only one relevant dimension). Nonparametric estimators make few a priori assumptions about category structure. The best known example is the familiar relative frequency histogram,
but many far superior estimators have been discovered (e.g., Silverman, 1986). We will return to the idea of categorization as probability density function estimation later in this section. Note that
a subject who uses the optimal decision rule with estimates of f_A(i) and f_B(i) need not retrieve any exemplar information from memory. No matter what estimators are used, the updating required after each new stimulus could be done between trials. If it is, then when a new stimulus is presented the estimators would be intact and the two relevant likelihoods, that is, the estimates of f_A(i) and f_B(i), could be retrieved directly. We turn now to an overview of the five major theories and then discuss the many empirical comparisons that have been conducted.
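The parametric route described above (assume normality, estimate the category mean and variance, and insert the estimates into the normal density) can be sketched as follows; the exemplar sample is hypothetical.

```python
# Parametric estimate of a category likelihood f_A(i): assume the category
# distribution is normal, estimate its mean and variance from the
# experienced exemplars, and plug the estimates into the normal density
# (one relevant dimension). The exemplar values are hypothetical.
import math

exemplars_A = [4.1, 5.0, 4.6, 5.5, 4.8, 5.2, 4.4]   # hypothetical category A exemplars

n = len(exemplars_A)
mu_hat = sum(exemplars_A) / n
var_hat = sum((x - mu_hat) ** 2 for x in exemplars_A) / (n - 1)

def f_A(x):
    """Estimated likelihood that stimulus x is a member of category A."""
    return math.exp(-0.5 * (x - mu_hat) ** 2 / var_hat) / math.sqrt(2 * math.pi * var_hat)

# The estimated likelihood peaks at the estimated category mean.
print(f_A(mu_hat) > f_A(mu_hat + 1.0))   # True
```

A nonparametric estimator (e.g., a histogram or kernel estimator) would replace the normal-density assumption with a shape read directly from the exemplar sample.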
A. Classical Theory

The oldest theory of categorization is classical theory, which dates back to Aristotle, but in psychology was popularized by Hull (1920). Much of the recent work on classical
theory has been conducted in psycholinguistics (Fodor, Bever, & Garrett, 1974; Miller & Johnson-Laird, 1976) and psychological studies of concept formation (e.g., Bourne, 1966; Bruner, Goodnow, &
Austin, 1956). Classical theory makes unique assumptions about category representation and about category access. All applications of classical theory have assumed a deterministic response selection
rule. First, the theory assumes that every category is represented as a set of singly necessary and jointly sufficient features (Smith & Medin, 1981). A feature is singly necessary if every member of
the category contains that feature. For example, "four sides" is a singly necessary feature of the "square" category because every square has four sides. A set of features is jointly sufficient if
any entity that contains the set of features is a member of the category. The features (1) four sides, (2) sides of equal length, (3) equal angles, and (4) closed figure are jointly sufficient to
describe the "square" category, because every entity with these four attributes is a square.
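Classical category access, testing whether a stimulus possesses the full set of necessary and sufficient features, can be sketched directly; the feature encodings below are hypothetical renderings of the "square" example.

```python
# Classical-theory category membership: a stimulus belongs to the category
# if and only if it possesses every singly necessary, jointly sufficient
# feature. The feature sets are hypothetical encodings of the "square" case.

SQUARE = {"four sides", "equal sides", "equal angles", "closed figure"}

def is_member(stimulus_features, defining_features):
    """True iff the stimulus contains every defining feature of the category."""
    return defining_features <= stimulus_features   # subset test

square = {"four sides", "equal sides", "equal angles", "closed figure"}
rectangle = {"four sides", "equal angles", "closed figure"}   # sides unequal

print(is_member(square, SQUARE), is_member(rectangle, SQUARE))  # True False
```

Note that the rule is all-or-none: every member is treated identically, which is precisely why classical theory cannot accommodate the graded structure discussed below.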
The category access assumptions follow directly from the representation assumptions. When a stimulus is presented, the subject is assumed to retrieve the set of necessary and sufficient features
associated with one of the contrasting categories. The stimulus is then tested to see whether it possesses exactly this set of features. If it does, the subject emits the response associated with
that category. If it does not, the process is repeated with a different category. Although classical theory accurately describes many categorization tasks (e.g., classifying squares versus
triangles), the theory is associated with a number of predictions that are known to be false. First, classical theory excludes categories that are defined by disjunctive features, whereas subjects
can learn tasks in which the optimal rule is disjunctive, such as the biconditional or exclusive-or problems (e.g., Bourne, 1970; Bruner, Goodnow, & Austin, 1956; Haygood & Bourne, 1965). Second, it
is difficult to list the defining features of many categories. For example, Wittgenstein (1953) argued that no set of necessary and sufficient features exists for the category "game." Some games have
a "winner" (e.g., football), but others do not (e.g., ring-around-the-rosie). The third, and perhaps strongest, evidence against classical theory, is the finding that categories possess graded
structure--that is, members of a category vary in how good an example (or how typical) they are of the category. Graded structure has been found in nearly every category (see Barsalou, 1987, for a
review). For example, when asked to judge the typicality of different birds, subjects reliably rate the robin as very typical, the pigeon as moderately typical, and the ostrich as atypical (Mervis,
Catlin, & Rosch, 1976; Rips, Shoben, & Smith, 1973; Rosch, 1973). In addition, if subjects are asked to verify whether a stimulus belongs to a particular category, response accuracy increases and
response time decreases as typicality increases (although only on YES trials; e.g., Ashby, Boynton, & Lee, 1994; Rips et al., 1973; Rosch, 1973). Interesting typicality effects have also been found
in the developmental literature. For example, typical category members are learned first by children (Rosch, 1973; Mervis, 1980) and are named first when children are asked to list members of a
category (Mervis et al., 1976). Classical theory, on the other hand, predicts that all members of a category are treated equally, because they all share the same set of necessary and sufficient
features. Note that all three of these criticisms are directed at the category representation assumptions of classical theory, not at the category access assumptions. Thus, none of these results rule
out the possibility that the category access assumptions of classical theory are basically correct. Especially relevant to this observation is the fact that most of the data on which the criticisms
are based were not collected in categorization tasks (but rather, e.g., in typicality rating tasks). The major exception is the fact that subjects
4 Stimulus Categorization
can learn categorization tasks in which the optimal rule is disjunctive. The simplest way to handle this is to generalize the classical theory to allow a category to be defined as the union of
subcategories, 2 each of which is defined by a set of necessary and sufficient features (e.g., Ashby, 1992a, 1992b; Smith & Medin, 1981; see also, Huttenlocher & Hedges, 1994). For example, the
category "games" could be defined as the union of "competitive games" and "noncompetitive games." A classical theorist could respond to the other criticisms by arguing that an exemplar-based graded
category representation exists, and whereas this graded representation is used in recall and typicality rating tasks, it is not accessed on trials of a categorization task. Instead, when
categorizing, the subject only needs to retrieve the categorization rule, which according to classical theory is a list of necessary and sufficient features for each relevant category or subcategory.
This more sophisticated version of classical theory can only be falsified by data from categorization experiments. As it turns out, such data is not difficult to collect (e.g., Ashby & Gott, 1988;
Ashby & Maddox, 1990), but as we will see, the notion that some category related tasks access a graded category representation whereas other such tasks do not is more difficult to disconfirm.
B. Prototype Theory
The abundant evidence that category representations have a graded structure led to the development of prototype theory (e.g., Homa, Sterling, & Trepel, 1981; Posner & Keele, 1968,
1970; Reed, 1972; Rosch, 1973; Rosch, Simpson, & Miller, 1976). Instead of representing a category as a set of necessary and sufficient features, prototype theory assumes that the category
representation is dominated by the prototype, which is the most typical or ideal instance of the category. In its most extreme form, the prototype is the category representation, but in its weaker
forms, the category representation includes information about other exemplars (Busemeyer, Dewey, & Medin, 1984; Homa, Dunbar, & Nohre, 1991; Shin & Nosofsky, 1992). In all versions, however, the
prototype dominates the category representation. Much of the early work on prototype theory focused on recall and typicality rating experiments; that is, on tasks other than categorization. Two
alternative prototype models have been developed for application to categorization tasks. The first, developed by Reed (1972), assumes a multidimensional scaling (MDS) representation of the stimuli
and category prototypes. On each trial, the subject is assumed to compute the psychological distance between the stimulus and the prototype of each relevant category. Reed's model assumed a deterministic response selection rule (i.e., respond with the category that has the nearest prototype), but versions that assume probabilistic response selection have also been proposed (e.g., Ashby & Maddox, 1993; Nosofsky, 1987; Shin & Nosofsky, 1992). We refer to all of these as MDS-prototype models.
2 More formally, the distribution of exemplars in the superordinate category is a probability mixture of the exemplars in the subordinate categories.
The other prominent prototype model is called the fuzzy-logical model
of perception (FLMP; Cohen & Massaro, 1992; Massaro & Friedman, 1990). The FLMP assumes a featural, rather than a dimensional, representation of the stimuli and category prototypes. It also assumes
that a stimulus is compared to each prototype by computing the fuzzy-truth value (Zadeh, 1965) of the proposition that the two patterns are composed of exactly the same features. 3 Recently,
Crowther, Batchelder, and Hu (1995) questioned whether the fuzzy-logical interpretation purportedly offered by the FLMP is warranted. Response selection in the FLMP is probabilistic [the Eq. (1)
similarity-choice model with all response biases set equal]. Although the FLMP appears to be quite different from the MDS-prototype model, Cohen and Massaro (1992) showed that the two models make
similar predictions. Although prototype theory was seen as a clear improvement over classical theory, it quickly began to suffer criticisms of its own. If the prototype is the only item stored in
memory, then all information about category variability and correlational structure is lost. Yet several lines of research suggested that nonprototypical category exemplars can have a pronounced
effect on categorization performance (e.g., Brooks, 1978; Hayes-Roth & Hayes-Roth, 1977; Medin & Schaffer, 1978; Medin & Schwanenflugel, 1981; Neumann, 1974; Reber, 1976; Reber & Allen, 1978;
Walker, 1975). In particular, subjects are highly sensitive to the correlational structure of the categories (e.g., Ashby & Gott, 1988; Ashby & Maddox, 1990, 1992; Medin, Altom, Edelson, & Freko,
1982; Medin & Schwanenflugel, 1981; Nosofsky, 1986, 1987, 1989). Note that this criticism is directed at the category access assumptions of prototype theory, not at the category representation
assumptions. Rosch (1975, 1978) understood the importance of the criticisms against prototype theory and in an attempt to strengthen the theory argued that almost all categories contain multiple
prototypes. In fact, she argued that "in only some artificial categories is there by definition a literal single prototype" (p. 40). For example, both robin and sparrow seem to be prototypes for the
category "bird." In this spirit, Anderson (1990, 1991) proposed a multiple prototype model, called the rational model. The rational model 3 The overall fuzzy-truth value of this proposition is equal
to the product of the fuzzy-truth values of the propositions that each specific feature of the stimulus is equal to the analogous feature in the prototype (Massaro & Friedman, 1990). Fuzzy-truth
value has many of the properties of similarity, so the FLMP product rule is analogous to the assumption that similarity is multiplicative across dimensions (Nosofsky, 1992b).
4 Stimulus Categorization
assumes that the category representation is a set of clusters of exemplars, each of which is dominated by a prototype. The probability that an exemplar is grouped into a particular cluster is
determined by (1) the similarity of the exemplar to the cluster's prototype and (2) a prior probability that is determined by the number of exemplars in each cluster and by the value of a coupling
parameter. When presented with a stimulus to be categorized, the subject is assumed to compute the similarity between the stimulus and the prototype of each cluster and to select a response on the
basis of these similarity computations. Few empirical tests of this model have been conducted (however, see Ahn & Medin, 1992; Nosofsky, 1991a).
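The cluster-assignment step just described can be sketched computationally. This is a simplified sketch, not Anderson's exact model: the exponential similarity function, the city-block distance, and the precise form of the size-based prior are our own illustrative assumptions (see Anderson, 1991, for the full specification):

```python
from math import exp

def assignment_probs(stimulus, clusters, coupling):
    """Sketch of rational-model cluster assignment: the probability of
    grouping a stimulus into a cluster combines (1) similarity to the
    cluster's prototype and (2) a size-based prior governed by a
    coupling parameter (simplified form)."""
    n = sum(c["size"] for c in clusters)
    scores = []
    for c in clusters:
        prior = coupling * c["size"] / ((1 - coupling) + coupling * n)
        dist = sum(abs(a - b) for a, b in zip(stimulus, c["prototype"]))
        scores.append(prior * exp(-dist))
    # probability of starting a new cluster for this stimulus
    scores.append((1 - coupling) / ((1 - coupling) + coupling * n))
    total = sum(scores)
    return [s / total for s in scores]

# Illustrative clusters (coordinates and sizes are not from the chapter):
clusters = [
    {"prototype": (1.0, 1.0), "size": 4},
    {"prototype": (3.0, 3.0), "size": 2},
]
p = assignment_probs((1.1, 0.9), clusters, coupling=0.7)
```

Note that when the coupling parameter is small, the new-cluster term dominates and each exemplar tends to form its own cluster, the limiting case discussed in Section D below.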
C. Feature-Frequency Theory
Feature-frequency theory (Estes, 1986a; Franks & Bransford, 1971; Reed, 1972) has its roots in feature-analytic models of pattern perception, which assume that a visual
stimulus is perceived as the set of its constituent features (Geyer & DeWald, 1973; Gibson, Osser, Schiff, & Smith, 1963; Townsend & Ashby, 1982). A key assumption of feature-analytic models is
feature-sampling independence, which states that the probability of perceiving features fa and fb equals the probability of perceiving feature fa times the probability of perceiving feature fb
(Townsend & Ashby, 1982; Townsend, Hu, & Ashby, 1981). In other words, feature-analytic models assume separate features are perceived independently. Townsend and Ashby (1982) found strong evidence
against this assumption and, as a consequence, feature-analytic models of pattern perception are no longer popular. Feature-frequency theories of categorization borrow their stimulus representation
assumptions from the feature-analytic models of pattern perception. Suppose stimulus i is constructed from features f_1, f_2, . . . , f_m. Then the strength of association of stimulus i to category J, denoted as before by S_iJ, is assumed to equal

S_iJ = P̂(C_J) P̂(i | C_J) = P̂(C_J) ∏_{k=1}^{m} P̂(f_k | C_J),

where P̂(C_J) is the subject's estimate of the a priori probability that a random stimulus in the experiment is from category J, P̂(i | C_J) is an estimate of the probability (or likelihood) that the
presented stimulus is from category J, and P̂(f_k | C_J) is an estimate of the probability (or likelihood) that feature f_k occurs in an exemplar from category J. The latter equality holds because of the
sampling independence assumption. Some feature-frequency models assume the probabilistic response selection rule of Eq. (1) (e.g., Estes, 1986a; Gluck & Bower, 1988), and some assume the
deterministic rule of Eq. (2) [with h(i) = S_iB - S_iA; e.g., Reed, 1972].
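The strength-of-association computation, combined with a probabilistic response rule of the Eq. (1) type with equal biases, can be sketched as follows. The priors and feature probabilities are illustrative numbers, not values from the chapter:

```python
from math import prod

def strength(features, prior, feature_probs):
    """S_iJ = P(C_J) * prod_k P(f_k | C_J): the category prior times the
    product of the estimated probabilities of each observed feature."""
    return prior * prod(feature_probs[f] for f in features)

def choice_probabilities(features, categories):
    """Probabilistic response selection: respond J with probability
    proportional to S_iJ (the similarity-choice rule with equal biases)."""
    s = {j: strength(features, p, fp) for j, (p, fp) in categories.items()}
    total = sum(s.values())
    return {j: v / total for j, v in s.items()}

# Illustrative categories: (prior, per-feature probability estimates)
cats = {
    "A": (0.5, {"f1": 0.9, "f2": 0.2}),
    "B": (0.5, {"f1": 0.3, "f2": 0.8}),
}
probs = choice_probabilities(["f1", "f2"], cats)
```

Because the feature probabilities multiply, this sketch embodies the feature-sampling independence assumption criticized in the text.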
F. Gregory Ashby and W. Todd Maddox
Feature-frequency theory can take on many forms depending on what assumptions are made about how the subject estimates the feature frequencies, that is, the P̂(f_k | C_J). The original, and perhaps the
most natural, interpretation is that the category J representation is a list of the features occurring in all exemplars of category J, along with the relative frequency with which each feature
occurred (Franks & Bransford, 1971; Reed, 1972). Another possibility is that the feature frequencies of the presented stimulus are estimated by doing a feature-by-feature comparison of the stimulus
to the category prototypes. This type of feature-frequency model is equivalent to a prototype model that assumes feature-sampling independence. A third possibility is that the category J
representation is the set of all exemplars that belong to category J. To estimate P̂(f_k | C_J) the subject scans the list of stored category J exemplars and computes the proportion that contain
feature f_k. This interpretation leads to a special case of exemplar theory (Estes, 1986a). Gluck (1991) argued against the feature-frequency model offered by Gluck and Bower (1988; i.e., the adaptive
network model) on the basis of its failure to account for the ability of subjects to learn nonlinearly separable categories. It is important to note that not all feature-frequency models are so
constrained. For example, suppose all features are continuous-valued and that the subject assumes the values of each feature are normally distributed within each category, with a mean and variance
that varies from category to category. To estimate P̂(f_k | C_J), the subject first estimates the mean and variance of the values of feature f_k within category J and then inserts these estimates into the
equation for the probability density function of a normal distribution. Then categories A and B are nonlinearly separable and the subject will learn these categories (i.e., respond with a nonlinear
decision bound) if the following conditions hold: (1) all feature values are normally distributed, (2) the values of all feature pairs are statistically independent (so that feature sampling
independence is valid), (3) the values of at least one feature have different variances in the two categories, and (4) the subject uses the response selection rule of the optimal classifier, i.e.,
Eq. (4). Given the strong evidence against the feature-sampling independence assumption (e.g., Townsend & Ashby, 1982) and the fact that so many of the feature-frequency models are special cases of
the more widely known categorization models, we will have little else to say about feature-frequency theory.
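The normal-distribution variant described above is straightforward to sketch. The parameter values are illustrative assumptions; with equal means but unequal variances on one feature, the resulting decision bound is nonlinear (quadratic), which is the point of the example:

```python
from statistics import NormalDist

def likelihood(x, dists):
    """Feature-sampling independence: the category likelihood is the
    product of per-feature normal densities."""
    p = 1.0
    for xi, d in zip(x, dists):
        p *= d.pdf(xi)
    return p

def classify(x, cat_a, cat_b, prior_a=0.5):
    """Deterministic rule of the optimal classifier: respond with the
    category having the larger prior-weighted likelihood."""
    sa = prior_a * likelihood(x, cat_a)
    sb = (1 - prior_a) * likelihood(x, cat_b)
    return "A" if sa >= sb else "B"

# Illustrative parameters: equal means, unequal variances on feature 1,
# so the implied decision bound between A and B is quadratic.
A = [NormalDist(0, 1), NormalDist(0, 1)]
B = [NormalDist(0, 3), NormalDist(0, 1)]
```

Stimuli near the common mean are assigned to the low-variance category A, while extreme stimuli on feature 1 are assigned to B, so the two response regions are separated by a nonlinear bound.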
D. Exemplar Theory
Perhaps the most popular approach to dealing with the criticisms against prototype theory is embodied in exemplar theory (Brooks, 1978; Estes, 1986a, 1994; Hintzman, 1986; Medin &
Schaffer, 1978; Nosofsky, 1986). Exemplar theory is based on two key assumptions:
1. Representation: a category is represented in memory as the set of representations of all category exemplars that have been encountered.4
2. Access: categorization decisions are based on similarity comparisons between the stimulus and the memory representation of every exemplar of each relevant category.
Two aspects of these assumptions are especially controversial. First, the assumption that categorization depends exclusively on exemplar or episodic memory has recently been called into question. A
series of neuropsychological studies has shown that amnesic patients, with impaired episodic memory, can perform normally on a number of different category learning tasks (Knowlton, Ramus, & Squire,
1992; Knowlton & Squire, 1993; Kolodny, 1994). Second, the assumption that the similarity computations include every exemplar of the relevant categories is often regarded as intuitively unreasonable.
For example, Myung (1994) argued that "it is hard to imagine that a 70 year-old fisherman would remember every instance of fish that he has seen when attempting to categorize an object as a fish" (p.
348). Even if the exemplar representations are not consciously retrieved, a massive amount of activation is assumed by exemplar theory. One possibility that would retain the flavor of the exemplar
approach is to assume that some random sample of exemplars are drawn from memory and similarity is only computed between the stimulus and this reduced set. However, one advantage of exemplar theory
over prototype theory is that it can account for the observed sensitivity of people to the correlational structure of a category (Medin et al., 1982; Medin & Schaffer, 1978). It displays this
sensitivity because the entire category is sampled on every trial. If only a subset of the category exemplars are sampled, the resulting model must necessarily be less sensitive to correlational
structure. Ennis and Ashby (1993) showed that if only a single random sample is drawn, then exemplar models are relatively insensitive to correlational structure. To date, no one has investigated the
question of how small the random sample can be, before adequate sensitivity to correlation is lost. Many different exemplar models have been proposed. Figure 3 presents the hierarchical relations
among some of these, as well as the relations among other types of categorization models. Models that are higher up in the tree are more general, whereas those below are special cases. Perhaps the
most prominent of the early exemplar models are the context model (Medin & Schaffer, 1978), the array-similarity model (Estes, 1994; also called the basic exemplar-memory model, Estes, 1986a), and
the generalized context model (GCM; Nosofsky, 1985, 1986). In addition to the two
4 This assumption does not preclude the possibility of decay in the information with time or that only partial exemplar information is stored.
FIGURE 3 Hierarchical relations among models of category access. (Adaptive-Net = Adaptive Network Model, HPU = Hidden Pattern Unit Network Model, Sim-Net = Similarity-Network Model, Ex-Sim = Exemplar-Similarity Model, GCM = Generalized Context Model, Array-Sim = Array-Similarity Model, FLMP = Fuzzy Logical Model of Perception, MDS = MDS Prototype Model, GQC = General Quadratic Classifier, GLC = General Linear Classifier, OC = Optimal Classifier, IDC = Independent Decisions Classifier.)
assumptions listed earlier, these models all assume the probabilistic response selection rule of the similarity-choice model--that is, Eq. (1)--although the context and array-similarity models allow
no response bias. These latter two models also assume that all psychological dimensions are binary valued, and that the similarity between a stimulus and a stored exemplar is multiplicative across
dimensions.5 In other words, if there are m dimensions, the similarity between stimulus i and stored exemplar j equals

s_ij = ∏_{k=1}^{m} s_k(i, j),

where s_k(i, j) is the similarity between the dimension k values of stimulus i and exemplar j. From Assumption (2), all exemplar models assume the overall similarity of stimulus i to category J equals

S_iJ = Σ_{j ∈ C_J} s_ij.
5 It is possible to define an exemplar model in which similarity is additive. However, as shown by Nosofsky (1992b), an additive similarity exemplar model is equivalent to an additive similarity
prototype model.
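The two similarity equations above can be sketched directly. Everything concrete here is an illustrative assumption: the exponential-decay component similarity (one standard choice in GCM-style models), the city-block distance, the attention weights, and the exemplar coordinates; response selection follows the similarity-choice rule with equal biases:

```python
from math import exp

def similarity(x, exemplar, w, c):
    """Component similarity with exponential decay of weighted city-block
    distance: exp(-c * sum_k w_k * |x_k - y_k|). Because the exponent is a
    sum, this similarity is multiplicative across dimensions."""
    d = sum(wk * abs(xk - yk) for wk, xk, yk in zip(w, x, exemplar))
    return exp(-c * d)

def response_prob(x, cat_a, cat_b, w, c):
    """P(respond A | x) under the similarity-choice rule with equal biases:
    summed similarity to category A exemplars over the total."""
    sa = sum(similarity(x, e, w, c) for e in cat_a)
    sb = sum(similarity(x, e, w, c) for e in cat_b)
    return sa / (sa + sb)

# Illustrative two-dimensional exemplars (not from the chapter):
A = [(1.0, 1.0), (1.2, 0.8)]
B = [(3.0, 3.0), (2.8, 3.2)]
p = response_prob((1.1, 1.0), A, B, w=(0.5, 0.5), c=2.0)
```

Raising the overall discriminability parameter c shrinks all similarities and drives the response probabilities toward 0 or 1, which is how these models capture effects of experience.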
In the context model, the component similarity function is defined as

s_k(i, j) = 1 if stimulus i and exemplar j match on dimension k, and s_k(i, j) = q_k (with 0 < q_k < 1) if they mismatch.

The array-similarity model6 uses the same definition, except the similarity parameter q_k is assumed to be the same on every dimension (i.e., q = q_1 = q_2 = . . . = q_m). The generalized context
model (GCM) assumes continuous-valued dimensions. Similarity is defined flexibly and, as a result, only some versions of the GCM assume similarity is multiplicative across dimensions (Nosofsky,
1984). In all versions, however, the component similarity function, that is, sk(i, j), decreases symmetrically with distance from the stimulus representation. The model has two types of parameters
that can stretch or shrink the psychological space. An overall discriminability parameter, c, expands or contracts the space uniformly in all directions. The attention weight parameters, wk,
selectively stretch or shrink dimension k only. As a subject gains experience with a stimulus set, individual stimuli begin to look more distinct and, as a result, the similarity between a fixed pair
of stimuli should decrease with experience. The GCM models this phenomenon by increasing the overall discriminability parameter c with the subject's level of experience (e.g., Nosofsky et al., 1992).
Increasing a specific attention weight, say, wk, stretches dimension k relative to the other psychological dimensions. This selective stretching acts to decrease the dimension k component
similarities. The idea is that with more attention allocated to dimension k, the subject is better able to discriminate between stimuli that have different values on that dimension. Although the
context model has no attention weight or overall discriminability parameters, it is able to mimic the effects of these parameters by changing the magnitudes of the component similarity parameters
(i.e., the qk). Decreasing all qk by the same proportion is equivalent to increasing overall discriminability, and decreasing a single qk is equivalent to increasing a single attention weight. Under
this interpretation, the assumption of the array-similarity model that all qk are equal is equivalent to assuming that equal amounts of attention are allocated to all stimulus dimensions. The context
and generalized context models have been used to account for asymptotic categorization performance from tasks in which the categories (1) were linearly or nonlinearly separable (Medin &
Schwanenflugel, 1981; Nosofsky, 1986, 1987, 1989), (2) differed in base rate (Medin & Edelson, 1988), (3) contained correlated or uncorrelated features (Medin et al.,
6 Hintzman's (1986) MINERVA2 is similar to the context and array-similarity models. All three make identical representation assumptions, although MINERVA2 does not assume a multiplicative similarity rule.
1982), (4) could be distinguished using a simple verbal rule (or a conjunction of simple rules; Nosofsky, Clark, & Shin, 1989), and (5) contained differing exemplar frequencies (Nosofsky, 1988a). The
array-similarity model was developed primarily to predict category learning. The model has been applied to learning data in which the categories (1) were defined by independent or correlated features
(Estes, 1986b) and (2) differed in base rate (Estes, Campbell, Hatsopoulis, & Hurwitz, 1989). Recently, Estes (1994; see also Nosofsky et al., 1992) elaborated the model to predict a wider range of
category learning phenomena. In experiments where the stimuli are constructed from continuous-valued dimensions, unique parameter estimation problems are encountered. For example, in the GCM, the
coordinates in psychological space of every category exemplar are free parameters (as well as the attention weights, overall discriminability, and response biases). Estimation of all these parameters
requires many more degrees of freedom than are found in a typical categorization experiment. Nosofsky (1986) discovered an interesting solution to this problem. In a typical application, the
coordinates of the stimuli in the psychological space are first estimated from data collected in a similarity judgment or stimulus identification task. Next, a recognition memory, typicality rating,
or categorization task is run with the same stimuli. The GCM is then fit to this new data under the assumption that the stimulus coordinates are the same in the two experiments (see Nosofsky, 1992a
for a review). Ashby and Alfonso-Reese (1995) showed that the context model, the array-similarity model, and the GCM are all mathematically equivalent to a process in which the subject estimates the
category likelihoods with a powerful nonparametric probability density estimator that is commonly used by statisticians (i.e., a Parzen, 1962, kernel estimator). This means that with a large enough
sample size, these models can recover any category distribution, no matter how complex. The only requirements are that the subject does not completely ignore any stimulus dimensions and that overall
discriminability slowly increases with sample size (as in Nosofsky et al., 1992). Thus, in most applications, the only suboptimality in these exemplar models that cannot be overcome with training is
that they assume a probabilistic decision rule instead of the deterministic rule of the optimal classifier. In other words, the assumptions of exemplar theory, as embodied in the context model, the
array-similarity model, or the GCM, are equivalent to assuming that the subject estimates all the relevant category distributions with an extremely powerful probability density estimator. The
estimator is so powerful that it is bound to succeed, so these exemplar models predict that subjects should eventually learn any categorization problem, no matter how complex. Recently, there has
been a surge of interest in developing and testing models of category learning. The context, array-similarity, and generalized
context models provide adequate descriptions of asymptotic categorization performance, but these models are severely limited in their ability to account for the dynamics of category learning. The
models have two main weaknesses. First, they all assume that the memory strength of an item presented early in the learning sequence remains unchanged throughout the course of the experiment. Thus,
the influence of early items on performance during the last few trials is just as strong as the influence of items presented late in the learning sequence. Yet recency effects are well established in
the memory literature--that is, a recently presented item will have a larger effect on performance than an item presented early in learning. To account for recency effects in category learning,
exemplar theorists proposed that the memory strength of an exemplar decreases with the number of trials since it was last presented as a stimulus (Estes, 1993, 1994; Nosofsky et al., 1992). A second
problem with the context, array-similarity, and generalized context models is that they predict categorization response probabilities early in the learning sequence that are more extreme (i.e.,
closer to 0 and 1) than those observed in the empirical data. To see this, consider a categorization task with two categories, A and B. If the first stimulus in the experiment is from category A,
then the models predict that the probability of responding A on the second trial is 1. The empirical data, on the other hand, suggest that early in the learning sequence, response probabilities are
close to 0.5. To deal with this weakness, exemplar theorists postulated that subjects enter a category learning task with some information already present in the memory array that they will use for
the representation of the contrasting categories. This background noise (Estes, 1993, 1994; Nosofsky et al., 1992) is assumed to be constant across categories and remains unchanged throughout the
learning sequence. Early in the sequence, exemplar information is minimal and the background noise dominates, so categorization response probabilities are near 0.5. As more exemplars are experienced,
the exemplar information in the memory array begins to dominate the background noise in the computation of the category response probabilities. Following Estes (1993, 1994), we will refer to context
or array-similarity models that have been augmented with memory decay and background-noise parameters as the exemplar-similarity model (Ex-Sim; also called the sequence-sensitive context model by
Nosofsky et al., 1992). Another set of category learning models have been implemented in connectionist networks (e.g., Estes, 1993, 1994; Estes et al., 1989; Gluck & Bower, 1988; Gluck, Bower, & Hee,
1989; Hurwitz, 1990; Kruschke, 1992; Nosofsky et al., 1992). These models assume that a network of nodes and interconnections is formed during learning. The nodes are grouped into layers and
information from lower layers feeds forward to the next higher layer. The input layer consists of nodes that correspond to individual
features, or collections of features (possibly even complete exemplar patterns). The output layer has a node associated with each of the contrasting categories. The amount of activation of a category
(i.e., output) node is taken as the strength of association between the stimulus and that particular category. In most models, response selection is probabilistic. Whereas the exemplar-similarity
model learns through a gradual accumulation of exemplar information, the connectionist models learn by modifying the weights between nodes as a function of error-driven feedback. One of the earliest
connectionist models of category learning was Gluck and Bower's (1988) adaptive network model. This is a feature-frequency model instantiated in a two-layer network. Gluck et al. (1989) proposed a
configural-cue network model that generalizes the adaptive network model by including input layer nodes that correspond to single features, pairs of features, triples of features, and so on. Several
exemplar-based connectionist models have also been developed. Estes (1993, 1994) proposed a two-layer connectionist model, called the similarity-network model (or Sim-Net), in which the input layer
consists of exemplar nodes only. The hidden pattern unit model (HPU; Hurwitz, 1990, 1994) and ALCOVE (Kruschke, 1992) are three-layer networks in which the input layer consists of stimulus feature
nodes, the second layer consists of exemplar nodes, and the output layer consists of category nodes. Gluck and Bower (1988) reported data exhibiting a form of base-rate neglect that is predicted by
the adaptive network models, but which has proved troublesome for the exemplar-similarity models. Subjects were presented with a list of medical symptoms and were asked to decide whether the
hypothetical patient had one of two diseases. One of the diseases occurred in 75% of the hypothetical patients and the other disease occurred in 25% of the patients. After a training session,
subjects were asked to estimate the probability that a patient exhibiting a particular symptom had one of the two diseases. On these trials, Gluck and Bower found that subjects neglected to make full
use of the base-rate differences between the two diseases (see, also, Estes et al., 1989; Medin & Edelson, 1988; Nosofsky et al., 1992). This result is compatible with several different adaptive
network models and incompatible with the exemplar-similarity model. For several years, it was thought that ALCOVE could account for the Gluck and Bower form of base-rate neglect (Kruschke, 1992;
Nosofsky et al., 1992), but Lewandowsky (1995) showed that this prediction holds only under a narrow set of artificial circumstances. Thus, it remains a challenge for exemplar-based learning models
to account for the results of Gluck and Bower (1988). Although the exemplar-similarity model and the exemplar-based connectionist models have each had success, neither class has been found to be
uniformly superior. In light of this fact, Estes (1994) attempted to identify experimental conditions that favor one family of models over the other by
comparing their predictions across a wide variety of experimental situations. Although a review of this extensive work is beyond the scope of this chapter, some of the experimental conditions
examined by Estes (1994) include manipulations of category size, training procedure, category confusability, repetition and lag effects, and prototype learning. Estes (1986b; 1994; see also Estes &
Maddox, 1995; Maddox & Estes, 1995) also extended the models to the domain of recognition memory. Figure 3 shows Anderson's (1990, 1991) rational model as a special case of exemplar theory. This is
not exactly correct because the rational model does not assume that the subject automatically computes the similarity between the stimulus and all exemplars stored in memory. When the coupling
parameter of the rational model is zero, however, each exemplar forms its own cluster (Nosofsky, 1991a) and, under these conditions, the rational model satisfies our definition of an exemplar model.
If, in addition, the similarity between the category labels is zero, and the subject bases his or her decision solely on the stored exemplar information, then the rational model is equivalent to the
context model (Nosofsky, 1991a). Figure 3 also shows the prototype models to be special cases of the rational model. When the value of the coupling parameter in the rational model is one, and the
similarity between the category labels is zero, the rational model reduces to a multiplicative similarity prototype model (Nosofsky, 1991a), such as Massaro's (1987) fuzzy logical model of perception.
E. Decision Bound Theory
The final theoretical perspective that we will discuss is called decision bound theory or sometimes general recognition theory (Ashby, 1992a; Ashby & Gott, 1988; Ashby & Lee,
1991, 1992; Ashby & Maddox, 1990, 1992, 1993; Ashby & Townsend, 1986; Maddox & Ashby, 1993). As described in the representation section, decision bound theory assumes the stimuli can be represented
numerically but that there is trial-by-trial variability in the perceptual information associated with each stimulus, so the perceptual effects of a stimulus are most appropriately represented by a
multivariate probability distribution (usually a multivariate normal distribution). During categorization, the subject is assumed to learn to assign responses to different regions of the perceptual
space. When presented with a stimulus, the subject determines which region the perceptual effect is in and emits the associated response. The decision bound is the partition between competing
response regions. Thus, decision bound theory assumes no exemplar information is needed to make a categorization response; only a response label is retrieved. Even so, the theory assumes that
exemplar information is available. For example, in recall and typicality rating experiments, exemplar information must be accessed on every trial. Even in categorization tasks, the subject
F. Gregory Ashby and W. Todd Maddox
might use exemplar information between trials to update the decision bound. Different versions of decision bound theory can be specified depending on how the subject divides the perceptual space into
response regions. The five versions that have been studied are (1) the independent decisions classifier (IDC), (2) the minimum distance classifier (MDC), (3) the optimal classifier (OC), (4) the
general linear classifier (GLC), and (5) the general quadratic classifier (GQC). An example of each of these models is presented in Figure 4 for the special case in which the category exemplars vary
on two
FIGURE 4 Decision bounds from five different decision bound models. (a) independent decisions classifier, (b) minimum distance classifier, (c) optimal decision bound model, (d) general linear classifier, and (e) general quadratic classifier.
4 Stimulus Categorization
perceptual dimensions. Figure 4 also assumes the category representations are bivariate normal distributions, but the models can all be applied to any category representation. The ellipses are
contours of equal likelihood for the two categories. Every point on the same ellipse is an equal number of standard deviation units from the mean (i.e., the category prototype) and is equally likely
to be selected if an exemplar is randomly sampled from the category. The exemplars of category A have about equal variability on perceptual dimensions x and y but the values on these two dimensions
are positively correlated. In category B there is greater variability on dimension x and the x and y values are uncorrelated. The independent decisions classifier (Figure 4a; Ashby & Gott, 1988;
Shaw, 1982) assumes a separate decision is made about the presence or absence of each feature (e.g., the animal flies or does not fly) or about the level of each perceptual dimension (e.g., the
stimulus is large or small). A categorization response is selected by examining the pattern of decisions across the different dimensions. For example, if category A was composed of tall, blue
rectangles, then the subject would separately decide whether a stimulus rectangle was tall and whether it was blue by appealing to separate criteria on the height and hue dimensions, respectively. If
the rectangle was judged to be both tall and blue, then it would be classified as a member of category A. Using a set of dimensional criteria is equivalent to defining a set of linear decision
bounds, each of which is parallel to one of the coordinate axes (as in Figure 4a). The decision rule of the independent decisions classifier is similar to the rule postulated by classical theory. In
both cases the subject is assumed to make a separate decision about the presence or absence of each stimulus dimension (or feature). Thus, classical theory is a special case of the independent
decisions classifier (Ashby, 1992a). Recently, Nosofsky, Palmeri, and McKinley (1994) proposed a model that assumes subjects use independent decisions bounds but then memorize responses to a few
exemplars not accounted for by these bounds (i.e., exceptions to the independent decisions rule). Category learning follows a stochastic process, so different subjects may adopt different independent
decisions bounds and may memorize different exceptions. The model was developed for binary-valued dimensions. It is unclear how it could be generalized to continuous-valued dimensions, since there is
an abundance of continuous-valued data that is incompatible with virtually all independent decisions strategies, even those that allow the subject to memorize exceptions (e.g., Ashby & Gott, 1988;
Ashby & Maddox, 1990, 1992). The minimum distance classifier (Figure 4b) assumes the subject responds with the category that has the nearest centroid. An A response is given to every stimulus
representation that is closer to the category A centroid, and a B response is given to every representation closer to the B centroid. The decision bound is the line that bisects and is orthogonal to
the line segment connecting the two category centroids. If the category centroid is interpreted as the prototype and if the similarity between the stimulus and a prototype decreases with distance, then
the minimum distance classifier is an MDS prototype model. The optimal decision bound model (Figure 4c) was proposed as a yardstick against which to compare the other models. It assumes the subject
uses the optimal decision rule, that is, Eq. (4), and that the category likelihoods are estimated without error. Suboptimality occurs only because of perceptual and criterial noise. The most
promising decision bound models are the general linear classifier (Figure 4d) and the general quadratic classifier (Figure 4e), which assume that the decision bound is some line or quadratic
curve, respectively. These models are based on the premise that the subject uses the optimal decision rule and estimates the category density functions (i.e., the stimulus likelihoods) using a
parametric estimator that assumes the category distributions are normal. The assumption of normality is made either because experience with natural categories causes subjects to believe that most
categories are normally distributed or because subjects estimate category means, variances, correlations, and base rates and infer a category distribution in the optimal fashion (see the
representation section). If the estimated category structures are reasonably similar, the resulting decision bound will be linear (hence the general linear classifier), and if the estimated
covariance structures are different, the bound will be quadratic (hence the general quadratic classifier). Specialized versions of the general linear classifier also have been developed to
investigate the optimality of human performance when category base rates are unequal (Maddox, 1995). One of the greatest strengths of decision bound theory is that it can be applied in a
straightforward fashion to a wide variety of cognitive tasks. For example, different versions of the theory have been developed for application to speeded classification (Ashby & Maddox, 1994; Maddox
& Ashby, 1996), identification and similarity judgment (Ashby & Lee, 1991; Ashby & Perrin, 1988), preference (Perrin, 1992), and same-different judgment (Thomas, 1994). In addition, decision bound
theory provides a powerful framework within which to study and understand interactions during perceptual processing (Ashby & Maddox, 1994; Ashby & Townsend, 1986; Kadlec & Townsend, 1992). Although
the theory allows for changes in the perceptual representation of stimuli across different tasks, it is assumed that the most important difference between, say, categorization and identification is
that the two tasks require very different decision bounds. Ashby and Lee (1991, 1992) used this idea successfully to account for categorization data from the results of an identification task that
used the same subjects and stimuli.
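The general quadratic classifier can be sketched in a few lines of code. The following Python sketch is purely illustrative: the two hypothetical bivariate normal categories are loosely patterned after Figure 4, but the means, variances, and correlation are invented values, not parameters from any study cited here. The subject is assumed to respond with the category whose estimated normal likelihood is higher, which implicitly defines a quadratic decision bound.

```python
import math

def bivariate_normal_logpdf(x, y, mean, var_x, var_y, rho):
    """Log density of a bivariate normal with correlation rho."""
    dx, dy = x - mean[0], y - mean[1]
    sx, sy = math.sqrt(var_x), math.sqrt(var_y)
    z = (dx / sx) ** 2 - 2 * rho * (dx / sx) * (dy / sy) + (dy / sy) ** 2
    det = 1.0 - rho ** 2
    return -math.log(2 * math.pi * sx * sy * math.sqrt(det)) - z / (2 * det)

def general_quadratic_classifier(x, y):
    """Respond with the category whose assumed-normal likelihood is higher.
    Category A: equal variances, positively correlated dimensions;
    category B: larger variance on x, uncorrelated (hypothetical values
    echoing the category structures drawn in Figure 4)."""
    log_fa = bivariate_normal_logpdf(x, y, (0.0, 0.0), 1.0, 1.0, 0.6)
    log_fb = bivariate_normal_logpdf(x, y, (3.0, 0.0), 4.0, 1.0, 0.0)
    return "A" if log_fa > log_fb else "B"
```

Because the two covariance structures differ, the set of points where the two log likelihoods are equal is a quadratic curve; if the estimated covariance structures were identical, the same comparison would reduce to a linear bound, that is, to the general linear classifier.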
VI. EMPIRICAL COMPARISONS

This section reviews empirical data collected in traditional categorization tasks that test the validity of the categorization theories described in section V. We
will also try to identify general properties of the stimuli, category structure, and training procedures that are most favorable to each theory. These properties are outlined in Table 3. Although it
is usually the case that conditions opposite to those described in Table 3 are problematic for the various theories, this is not always true. As a result, the conclusions expressed in Table 3 must be
interpreted with care.

A. Classical Theory
Classical theory assumes categorization is a process of testing a stimulus for the set of necessary and sufficient features associated with each relevant category. As described earlier, classical
models are a special case of the independent decisions classifier of decision bound theory (Ashby, 1992a, Ashby & Gott, 1988). Ashby and Gott (1988, Experiments 1 and 3) and Ashby and Maddox (1990,
Experiments 3 and 4; see, also Ashby & Maddox, 1992) tested the hypothesis that subjects always use independent decisions classification by designing tasks in which another strategy, such as minimum
distance classification, yielded higher accuracy than the independent decisions classifier. The results convincingly showed that subjects were
TABLE 3
Experimental Conditions Most Likely to Favor Particular Categorization Theories

Classical theory:
- Stimuli constructed from a few separable dimensions
- Inexperienced subjects
- Optimal rule is independent decisions
- Taxonomic or logically defined categories

Prototype theory:
- Stimuli constructed from many integral dimensions
- Inexperienced subjects
- Optimal rule is complex or minimum distance
- More than two categories

Exemplar theory:
- Experienced subjects
- Optimal rule is simple
- Few category exemplars

Decision bound theory:
- Optimal rule is linear or quadratic
not constrained to use the independent decisions classifier. Instead, the best first approximation to the data was the optimal decision bound model. Although classical theory is easily rejected, it
has a certain intuitive attractiveness. For many tasks it seems the correct theory (e.g., categorizing squares versus pentagons). What properties do tasks that seem to favor classical theory possess?
First, the task should use stimuli constructed from a few perceptually separable components. A pair of components are perceptually separable if the perceptual effect of one component is unaffected by
the level of the other, and they are perceptually integral if the perceptual effect of one component is affected by the level of the other (e.g., Ashby & Maddox, 1994; Ashby & Townsend, 1986; Garner,
1974; Maddox, 1992). Classical theory predicts that a subject's decision about a particular component is unaffected by the level of other components. Thus, the independent decisions postulated by
classical theory is a natural decision strategy when the stimulus dimensions are perceptually separable. Ashby and Maddox (1990) showed that experienced subjects are not constrained to use an
independent decisions strategy, even when the stimulus dimensions are separable. Independent decisions is rarely optimal, and as subjects gain experience in a task, their performance naturally
improves. Thus, a second experimental prerequisite is that the subjects are inexperienced (or unmotivated). One way to prevent a subject from gaining the kind of detailed category representation that
comes from experience is to withhold feedback on each trial as to the correct response. Indeed, in unsupervised categorization tasks, subjects almost always use simple dimensional rules of the type
assumed by classical theory (Ahn & Medin, 1992; Imai & Garner, 1965; Medin, Wattenmaker, & Hampson, 1987; Wattenmaker, 1992). Finally, there are a few rare cases in which the independent decisions
classifier is nearly optimal. For example, Ashby (1992a, Fig. 16.5, p. 473) proposed a task with normally distributed categories in which independent decisions is optimal. In such cases, we expect
the data of experienced, motivated subjects to conform reasonably well to the predictions of the independent decisions classifier (and hence, to classical theory). In most experiments, of course, no
effort is made to select categories for which independent decisions is optimal. If categories are selected without regard to this property, then chances are poor that independent decisions will be
optimal. The best chances, however, although still poor, occur with categories that are taxonomic (e.g., as are many in the animal kingdom) or logically defined (e.g., as is the category "square"),
because such categories are frequently defined by a list of characteristic features.
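As a minimal sketch of the independent decisions rule, suppose category A consists of tall, blue rectangles, as in the example above. The criterion values below, and the mapping of every other decision pattern to response B, are illustrative assumptions, not parameters from any of the studies cited.

```python
def independent_decisions_classifier(height, hue,
                                     height_criterion=5.0,
                                     hue_criterion=0.5):
    """A separate decision is made on each dimension: is the rectangle
    tall, and is it blue? Respond A only if both criteria are exceeded.
    Each criterion defines a decision bound parallel to one coordinate
    axis, as in Figure 4a."""
    is_tall = height > height_criterion
    is_blue = hue > hue_criterion
    return "A" if (is_tall and is_blue) else "B"
```

Because each decision depends on the level of only one dimension, no information is ever integrated across dimensions, which is why this rule is rarely optimal for arbitrarily chosen categories.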
B. Prototype Theory

Prototype theory assumes the strength of association between the stimulus and a category equals the strength of association between the stimulus and the category prototype. All
other exemplars or characteristics of the category are assumed to be irrelevant to the categorization process. Dozens of studies have shown convincingly that subjects are not constrained to use only
the category prototypes (e.g., Ashby & Gott, 1988; Ashby & Maddox, 1990, 1992; Maddox & Ashby, 1993; Medin & Schaffer, 1978; Medin & Schwanenflugel, 1981; Nosofsky, 1987, 1992a; Shin & Nosofsky,
1992). Some of these studies compared prototype models with exemplar models, others compared prototype models with decision bound models. In every case, the prototype models were rejected in favor of
either an exemplar or decision bound model. These studies show that prototype theory does not provide a complete description of human categorization performance, but they do not rule out the
possibility that the prototype has some special status within the category representation. For example, the prototype is sometimes classified more accurately (e.g., Homa & Cultice, 1984; Homa et al.,
1981) and seems less susceptible to memory loss with delay of transfer tests than other category exemplars (e.g., Goldman & Homa, 1977; Homa, Cross, Cornell, Goldman, & Schwartz, 1973; Homa &
Vosburgh, 1976). In addition, as described in sections V.A and V.B, large prototype effects are almost always found in recall and typicality rating experiments. Two recent categorization studies,
however, found few, if any, special advantages for the prototype (Ashby et al., 1994; Shin & Nosofsky, 1992). Ashby et al. (1994) examined categorization response time (RT) in five separate
experiments that used three different kinds of stimuli. No prototype effects on RT were found in any experiment. Hypotheses that assumed RT was determined by absolute or by relative distance to the
two prototypes were both rejected. The prototypes did not elicit the fastest responses and even among stimuli that were just as discriminative as the prototypes with respect to the categorization
judgment, the prototypes showed no RT advantage. The best predictor of the data was an assumption that RT decreased with distance from the decision bound. Thus, the fastest responses were to the most
discriminative stimuli (which are furthest from the bound). Shin and Nosofsky (1992) conducted a series of experiments using random dot stimuli that manipulated experimental factors such as category
size, time between training and test, and within-category exemplar frequency. Each of these manipulations has been found to affect categorization accuracy for the prototype (e.g., Goldman & Homa,
1977; Homa & Chambliss, 1975; Homa & Cultice, 1984; Homa, Dunbar, & Nohre, 1991; Homa
et al., 1973, 1981; Homa & Vosburgh, 1976). Shin and Nosofsky (1992) replicated some prototype effects (e.g., increases in accuracy to the prototype with category size) but not others (e.g.,
differential forgetting for the prototype). In each experiment, Shin and Nosofsky tested a combined exemplar-prototype model, which contained a mixture parameter that determined the separate
contribution of exemplar and prototype submodels to the predicted response probability. The mixture model provided a significant improvement in fit over a pure exemplar model in only one case,
suggesting little, if any, contribution of the prototype abstraction process. How can the results of Ashby et al. (1994) and Shin and Nosofsky (1992) be explained in light of the large empirical
literature showing prototype effects? First, the failure to find prototype effects in categorization tasks is not necessarily damaging to prototype theory. Ashby and Maddox (1994) showed that in a
categorization task with two categories, the most popular versions of prototype theory predict no prototype effects. Specifically, in most prototype models the predicted probability of responding A
on trials when stimulus i is presented increases with the similarity ratio s_iA/s_iB, whereas the response time decreases. Ashby and Maddox (1994) showed that this ratio increases with the distance from stimulus i to the
minimum distance bound. Because the prototype is usually not the furthest stimulus from the decision bound in two-category tasks, prototype theory predicts that the highest accuracy and the fastest
responding will not be to the category prototypes, but to the stimuli that are furthest from the minimum distance bound. In a task with a single category, in which the subject's task is to decide
whether the stimulus is or is not a member of that category, the category prototype is often the furthest exemplar from the decision bound. Thus, prototype enhancement effects found in such tasks may
not be due to an overrepresentation of the prototype but instead to the coincidental placement of the prototype with respect to the subject's decision bound. This hypothesis was strongly supported by Ashby et al. (1994). Before deciding that prototype enhancement has occurred in a categorization task, it is vital to rule out the possibility that the superior performance to the prototype was simply an artifact of the structure of the contrasting categories. Most studies reporting prototype enhancement in categorization tasks have not included analyses of this type, so the magnitude,
and perhaps even the existence, of prototype effects in categorization is largely unknown. In addition to category structure, other experimental conditions may make prototype effects more or less
likely. Table 3 lists several properties that might bias an experiment in favor of prototype theory and thus might make prototype effects more likely. The minimum distance classifier, which uses the
decision rule of prototype theory, generally requires the subject to integrate information across dimensions (e.g., Ashby & Gott, 1988). Although subjects can integrate information when the stimulus
dimensions are either integral or separable (Ashby & Maddox, 1990), integration should be easier when the stimulus dimensions are perceptually integral. Thus, it seems plausible that prototype theory
might perform better when the category exemplars vary along perceptually integral dimensions (however, see Nosofsky, 1987). Minimum distance classification is rarely optimal. Thus, as with
independent decisions classification, any experimental conditions that facilitate optimal responding will tend to disconfirm prototype theory. Optimal responding is most likely when subjects are
experienced (and motivated), the stimuli are simple, and the optimal rule is simple. Therefore, subjects should be most likely to use a suboptimal rule such as minimum distance classification if they
are inexperienced, if the stimuli vary on many dimensions, if the optimal rule is complex, and if there are more than two categories (thus further complicating the optimal rule). Much of the
empirical support for prototype theory is from experiments that used random dot patterns as stimuli (e.g., Goldman & Homa, 1977; Homa et al., 1973, 1981; Homa & Cultice, 1984; Homa & Vosburgh, 1976;
Posner & Keele, 1968, 1970). These stimuli vary along many dimensions that are most likely integral. In addition, most of these experiments tested subjects for only a single experimental session, and
thus the subjects were relatively inexperienced. Finally, a number of these experiments used more than two categories (e.g., Homa et al., 1973; Homa & Cultice, 1984), so the experimental conditions
were favorable for prototype theory. Exemplar theorists have also questioned whether the existence of prototype effects necessarily implies that the category representation is dominated by the
prototype. They argue that prototype effects are the natural consequence of exemplar-based processes of the kind hypothesized by exemplar theory. For example, Shin and Nosofsky (1992) found that the
small prototype effects found in their categorization task could be predicted by an exemplar model. In addition, a number of investigators have shown that many of the prototype effects found in
typicality rating, recognition, and recall tasks are qualitatively consistent with predictions from exemplar models (e.g., Busemeyer, Dewey, & Medin, 1984; Hintzman, 1986; Hintzman & Ludlam, 1980;
Nosofsky, 1988b). Another hypothesis that explains the failure of Ashby et al. (1994) and Shin and Nosofsky (1992) to find prototype effects is that prototype effects are small or nonexistent in the
majority of categorization tasks, but are robust in other types of cognitive and perceptual tasks. For example, the prevalence of prototype effects (or graded structure) in typicality rating tasks is
uncontested. Graded structure has been found in a wide range of category types, for example, in taxonomic categories such as fruit (Rips et al., 1973; Rosch, 1973, 1975, 1978; Rosch & Mervis, 1975;
Smith, Shoben, & Rips, 1974), logical categories such as odd number (Armstrong, Gleitman,
& Gleitman, 1983), linguistic categories (see Lakoff, 1986 for a review), and many others (Barsalou, 1983, 1985).7 As discussed earlier, recognition memory (Omohundro, 1981), and recall (Mervis et
al., 1976) also show marked prototype effects. One thing that typicality rating, recognition memory, and recall tasks have in common is that they all require the subject to access exemplar
information from memory. It is a matter of debate whether traditional categorization tasks require exemplar information (e.g., Ashby & Lee, 1991, 1992, 1993; Maddox & Ashby, 1993). Thus, one
plausible hypothesis is that the prototype dominates the category representation, so any task requiring the subject to access the category representation will show prototype effects. Categorization
experiments usually do not show prototype effects because categorization does not require information about individual exemplars. Why is the prototype overrepresented? One possibility is that the
dominance of the prototype within the category representation is a consequence of consolidation processes. During periods of time in which the subject is gaining no new information about the
category, the subject's memory for the category consolidates. The prototype begins to dominate the representation. According to this hypothesis, the prototype's prominence should increase with time
because the consolidation process would have longer to operate. This prediction is consistent with the result that the few prototype effects that are found in categorization tasks tend to increase
with the length of the delay between the training and testing conditions (e.g., Homa & Cultice, 1984). This result is important because it seems inconsistent with the hypothesis that prototype effects
are the result of exemplar-based similarity computations.
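To make the prototype response rule discussed in this section concrete, the following sketch implements an MDS prototype model in which similarity decays exponentially with Euclidean distance to each category prototype, and the probability of responding A is the relative similarity to prototype A. The sensitivity parameter c and the exponential similarity gradient are illustrative assumptions.

```python
import math

def prototype_response_prob(stimulus, prototype_a, prototype_b, c=1.0):
    """P(respond A | stimulus) under a minimum distance prototype model:
    similarity to each prototype decays exponentially with Euclidean
    distance, and responses follow the relative goodness (similarity
    ratio) rule."""
    s_a = math.exp(-c * math.dist(stimulus, prototype_a))
    s_b = math.exp(-c * math.dist(stimulus, prototype_b))
    return s_a / (s_a + s_b)
```

Note that the ratio s_a/s_b grows with distance from the minimum distance bound, which is why models of this form predict the best performance for the stimuli farthest from the bound rather than for the prototypes themselves.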
C. Exemplar and Decision Bound Theory

In virtually every empirical comparison, exemplar models and decision bound models have both outperformed classical models and prototype models. There have been
only a few attempts to compare the performance of exemplar and decision bound models, however. When the category distributions were bivariate normal and the data were analyzed separately for each
subject, Maddox and Ashby (1993) found that the general linear and general quadratic decision bound models consistently outperformed Nosofsky's (1986) generalized context model. In cases where the
optimal decision bound was linear, the advantage of the decision bound model was due entirely to response selection assumptions. A more general form of the generalized context model that used the
deterministic Eq. (3) response selection rule, rather than the relative goodness rule of Eq. (1), gave fits that were indistinguishable from the general linear classifier.

[7. Although nearly all categories show graded structure, Barsalou (1983, 1985, 1987) showed convincingly that graded structure is unstable and can be greatly influenced by context.]

When the optimal decision bound was quadratic, however, the
general quadratic classifier outperformed this deterministic version of the generalized context model. McKinley and Nosofsky (1995) added a memory decay parameter to the deterministic version of the
generalized context model, and a parameter that generalizes the representation assumptions. In this way, they produced an exemplar model that fit the quadratic bound data as well as the general
quadratic classifier of decision bound theory. Two studies have fit decision bound and exemplar models to data from experiments with nonnormally distributed categories. Maddox and Ashby (1993) fit a
number of models to the data from Nosofsky's (1986) criss-cross and interior-exterior conditions. In all cases, there was a substantial advantage for the general quadratic classifier over the best
exemplar model (i.e., the deterministic version of the generalized context model). McKinley and Nosofsky (1995) created categories that were each probability mixtures of two bivariate normal
distributions (i.e., in each category, an exemplar was sampled either from one bivariate normal distribution or another). The resulting complex optimal bounds could not be approximated by a quadratic
equation. The deterministic version of the generalized context model with extra memory decay and representation parameters fit the data better than the general quadratic classifier. In the more
complex of the two experiments, however, neither model fit well. In fact, for 8 of the 11 subjects, the data were best fit by a model that assumes subjects used two quadratic decision bounds (instead
of the one assumed by the general quadratic classifier). Although more empirical testing is needed, much is now known about how to design an experiment to give either exemplar or decision bound
models the best opportunity to provide excellent fits to the resulting data. With respect to the exemplar models, the key theoretical result is Ashby and Alfonso-Reese's (1995) demonstration that
most exemplar models (e.g., the context, array-similarity, and generalized context models) essentially assume the subject is an extremely sophisticated statistician with perfect memory. Exemplar
models almost always predict that with enough training, subjects will perform optimally, no matter how complex the task. Thus, exemplar theory will provide a good account of any data in which the
subject responds nearly optimally. Optimal responding is most likely when subjects are experienced and the optimal categorization rule is simple. When the optimal rule is complex, the subject might
be able to memorize the correct response to individual stimuli if there are only a few exemplars in each category. This would allow nearly optimal responding, even in cases where the subject never
learns the optimal rule. These conditions are summarized in Table 3. Presumably, exemplar models performed poorly in
Experiment 2 of McKinley and Nosofsky (1995) because of the many category exemplars and the complex optimal decision rule. The general quadratic classifier of decision bound theory will give a good
account of any data in which the subject uses a linear or quadratic decision bound (because the general linear classifier is a special case). Thus, if the optimal rule is linear or quadratic and the
subjects are experienced, the general quadratic classifier should always perform at least as well as the best exemplar models. If the subjects respond optimally in a task where the optimal rule is
more complex than any quadratic equation, then the general quadratic classifier will fit poorly (as in the McKinley & Nosofsky, 1995, experiments).

VII. FUTURE DIRECTIONS

Much is now known about
the empirical validity of the various categorization theories and about their theoretical relations. As a result, the direction of research on human categorization is likely to change dramatically
during the next decade. For example, advances in the neurosciences may make it possible to test directly some of the fundamental assumptions of the theories. In particular, through the use of various
neuroimaging techniques and the study of selective brain-damaged populations, it may be possible to test whether subjects access exemplar or episodic memories during categorization, as assumed by
exemplar theory, or whether they access some abstracted representation (e.g., a semantic or procedural memory), as assumed by decision bound theory. The early results seem problematic for exemplar
theory, but the issue is far from resolved (Kolodny, 1994; Knowlton et al., 1992). A second major distinction between exemplar theory and, say, the general quadratic classifier of decision bound
theory is which a priori assumptions about category structure the subject brings to the categorization task. Exemplar theory assumes the subject makes almost no assumptions. When learning about a new
category, exemplar theory assumes the subject ignores all past experience with categories. The general quadratic classifier assumes the subject brings to a new categorization task the expectation
that each category has some multivariate normal distribution. Therefore, an extremely important research question is whether subjects make a priori assumptions about category structure, and if they
do, exactly what assumptions they make. A third important research question concerns optimality. Exemplar theory essentially assumes optimality for all categorization tasks, at least if the subjects
have enough experience and motivation. The general quadratic classifier assumes optimality is possible only in some tasks (i.e., those in which the optimal bound is linear or quadratic, or possibly
piecewise linear
or piecewise quadratic). Are there categorization problems that humans cannot learn? If so, how can these problems be characterized? A fourth important direction of future research should be to
explicate the role of memory in the categorization process. Because prototypicality effects seem to depend on the delay between training and test and on category size, it seems likely that memory
processes play a key role in the development of the category prototype. A likely candidate is consolidation of the category representation. If so, then it is important to ask whether other observable
effects of consolidation exist. A fifth goal of future research should be to develop process models of the categorization task (i.e., algorithmic level models). The major theories reviewed in this
chapter all have multiple process interpretations. A test between the various interpretations requires fitting the microstructure of the data. Simply fitting the overall response proportions is
insufficient. For example, a process model should be able to predict trial-by-trial learning data and also categorization response times. Finally, we believe that theories of human categorization
would benefit greatly from the study of categorization in animals, and even in simple organisms. The first living creatures to evolve had to be able to categorize chemicals they encountered as
nutritive or aversive. Thus, there was a tremendous evolutionary pressure that favored organisms adept at categorization. Because it is now believed that all organisms on Earth evolved from the same
ancestors (e.g., Darnell, Lodish, & Baltimore, 1990), it makes sense that the categorization strategies used by all animals evolved from a common ancestral strategy. If so, then it is plausible that
the fundamental nature of categorization is the same for all animals and that the main difference across the phylogenetic scale is in the degree to which this basic strategy has been elaborated
(Ashby & Lee, 1993).
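The general quadratic classifier discussed above can be illustrated with a small sketch (our own illustration, not code from the authors): in decision bound theory, a stimulus represented as a point in perceptual space is assigned to a category according to the sign of a quadratic discriminant function, and the decision bound is the set of points where that function is zero. All parameter values below are hypothetical, chosen only for illustration.

```python
# A minimal sketch of a general quadratic classifier for two-dimensional
# percepts: respond "A" when h(x) = x'Ax + b'x + c > 0, and "B" otherwise.
# The decision bound is the set of points where h(x) = 0. All parameter
# values here are hypothetical.
A = [[1.0, 0.0],
     [0.0, -1.0]]   # quadratic coefficients
b = [0.5, -0.5]     # linear coefficients
c = -0.25           # constant

def h(x):
    """Quadratic discriminant h(x) = x'Ax + b'x + c."""
    quad = sum(A[i][j] * x[i] * x[j] for i in range(2) for j in range(2))
    lin = sum(b[i] * x[i] for i in range(2))
    return quad + lin + c

def categorize(x):
    """A deterministic decision rule: respond 'A' iff h(x) > 0."""
    return "A" if h(x) > 0 else "B"

print(categorize([2.0, 0.0]))  # h = 4.75 > 0, so 'A'
print(categorize([0.0, 2.0]))  # h = -5.25 < 0, so 'B'
```

When the quadratic coefficients are all zero, h is linear and the bound reduces to the linear case mentioned above; piecewise linear or piecewise quadratic bounds combine several such functions, one per region of the perceptual space.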
Acknowledgments
Preparation of this chapter was supported in part by National Science Foundation Grant DBS92-09411 to F. Gregory Ashby and by a Faculty-Grant-in-Aid from Arizona State University to
W. Todd Maddox. We thank Kyunghee Koh and Leola Alfonso-Reese for their helpful comments on an earlier draft of this chapter.
References
Ahn, W. K., & Medin, D. L. (1992). A two-stage model of category construction. Cognitive
Science, 16, 81-121.
Anderson, J. R. (1975). Language, memory, and thought. Hillsdale, NJ: Erlbaum. Anderson, J. R. (1990). The adaptive character of thought. Hillsdale, NJ: Erlbaum. Anderson, J. R. (1991). The adaptive
nature of human categorization. Psychological Review, 98, 409-429.
Armstrong, S. L., Gleitman, L. R., & Gleitman, H. (1983). On what some concepts might not be. Cognition, 13, 263-308. Ashby, F. G. (1992a). Multidimensional models of categorization. In F. G. Ashby
(Ed.), Multidimensional models of perception and cognition (pp. 449-483). Hillsdale, NJ: Erlbaum. Ashby, F. G. (1992b). Pattern recognition by human and machine. Journal of Mathematical Psychology,
36, 146-153. Ashby, F. G., & Alfonso-Reese, L. A. (1995). Categorization as probability density estimation. Journal of Mathematical Psychology, 39, 216-233. Ashby, F. G., Boynton, G., & Lee, W. W.
(1994). Categorization response time with multidimensional stimuli. Perception & Psychophysics, 55, 11-27. Ashby, F. G., & Gott, R. (1988). Decision rules in the perception and categorization of
multidimensional stimuli. Journal of Experimental Psychology: Learning, Memory and Cognition, 14, 33-53. Ashby, F. G., & Lee, W. W. (1991). Predicting similarity and categorization from
identification. Journal of Experimental Psychology: General, 120, 150-172. Ashby, F. G., & Lee, W. W. (1992). On the relationship between identification, similarity, and categorization: Reply to
Nosofsky and Smith (1992). Journal of Experimental Psychology:
General, 121, 385-393.
Ashby, F. G., & Lee, W. W. (1993). Perceptual variability as a fundamental axiom of perceptual science. In S. C. Masin (Ed.), Foundations of perceptual theory (pp. 369-399). Amsterdam: North Holland.
Ashby, F. G., & Maddox, W. T. (1990). Integrating information from separable psychological dimensions. Journal of Experimental Psychology: Human Perception and Performance, 16, 598-612. Ashby, F. G.,
& Maddox, W. T. (1992). Complex decision rules in categorization: Contrasting novice and experienced performance. Journal of Experimental Psychology: Human Perception and Performance, 18, 50-71.
Ashby, F. G., & Maddox, W. T. (1993). Relations among prototype, exemplar, and decision bound models of categorization. Journal of Mathematical Psychology, 37, 372-400. Ashby, F. G., & Maddox, W. T.
(1994). A response time theory of separability and integrality in speeded classification. Journal of Mathematical Psychology, 38, 423-466. Ashby, F. G., & Perrin, N. A. (1988). Toward a unified
theory of similarity and recognition. Psychological Review, 95, 124-150. Ashby, F. G., & Townsend, J. T. (1986). Varieties of perceptual independence. Psychological Review, 93, 154-179. Bahrick, H.
P., Bahrick, P. O., & Wittlinger, R. P. (1975). Fifty years of memory for names and faces: A cross-sectional approach. Journal of Experimental Psychology: General, 104, 54-75. Barlow, H. B. (1956).
Retinal noise and absolute threshold. Journal of the Optical Society of America, 46, 634-639. Barlow, H. B. (1957). Increment thresholds at low intensities considered as signal/noise discrimination.
Journal of Physiology, 136, 469-488. Barsalou, L. W. (1983). Ad hoc categories. Memory & Cognition, 11, 211-227. Barsalou, L. W. (1985). Ideals, central tendency, and frequency of instantiation.
Journal of
Experimental Psychology: Learning, Memory, and Cognition, 11, 629-654.
Barsalou, L. W. (1987). The instability of graded structure: implications for the nature of concepts. In U. Neisser (Ed.), Concepts and conceptual development: Ecological and intellectual factors in
categorization. Cambridge, England: Cambridge University Press. Baum, W. M. (1974). On two types of deviation from the matching law: Bias and undermatching. Journal of the Experimental Analysis of
Behavior, 22, 231-242. Black, M. (1954). Problems of analysis (Collected essays). Ithaca, NY: Cornell University Press. Bourne, L. E. (1966). Human conceptual behavior. Boston: Allyn and Bacon.
Bourne, L. E. (1970). Knowing and using concepts. Psychological Review, 77, 546-556. Brooks, L. (1978). Nonanalytic concept formation and memory for instances. In E. Rosch & B. B. Lloyd (Eds.),
Cognition and categorization. Hillsdale, NJ: Erlbaum. Bruce, H. M. (1959). An exteroceptive block to pregnancy in the mouse. Nature, 184, 105. Bruner, J. S., Goodnow, J., & Austin, G. (1956). A study
of thinking. New York: Wiley. Busemeyer, J. R., Dewey, G. I., & Medin, D. L. (1984). Evaluation of exemplar-based generalization and the abstraction of categorical information. Journal of
Experimental Psychology: Learning, Memory, and Cognition, 10, 638-648. Cohen, M. M., & Massaro, D. W. (1992). On the similarity of categorization models. In F. G. Ashby (Ed.), Multidimensional models
of perception and cognition (pp. 395-447). Hillsdale, NJ: Erlbaum. Corter, J. E., & Tversky, A. (1986). Extended similarity trees. Psychometrika, 51, 429-451. Crowther, C. W., Batchelder, W. H., & Hu,
X. (1995). A measurement-theoretic analysis of the fuzzy logical model of perception. Psychological Review, 102, 396-408. Darnell, J., Lodish, H., & Baltimore, D. (1990). Molecular cell biology. New
York: Freeman. Ennis, D. M., & Ashby, F. G. (1993). The relative sensitivities of same-different and identification judgment models to perceptual dependence. Psychometrika, 58, 257-279. Estes, W. K.
(1976). The cognitive side of probability learning. Psychological Review, 83, 37-64. Estes, W. K. (1986a). Array models for category learning. Cognitive Psychology, 18, 500-549. Estes, W. K. (1986b).
Memory storage and retrieval processes in category learning. Journal of Experimental Psychology: General, 115, 155-174. Estes, W. K. (1993). Models of categorization and category learning. Psychology
of learning and motivation, Vol. 29. San Diego: Academic Press. Estes, W. K. (1994). Classification and cognition. Oxford: Oxford University Press. Estes, W. K. (1995). Response processes in
cognitive models. In R. F. Lorch, Jr., & E. J. O'Brien (Eds.), Sources of coherence in text comprehension. Hillsdale, NJ: Erlbaum. Estes, W. K., Campbell, J. A., Hatsopoulos, N., & Hurwitz, J. B.
(1989). Base-rate effects in category learning: A comparison of parallel network and memory storage-retrieval models. Journal of Experimental Psychology: Learning, Memory, and Cognition, 15, 556-571.
Estes, W. K., & Maddox, W. T. (1995). Interactions of stimulus attributes, base-rate, and feedback in recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21, 1075-1095.
Flannagan, M. J., Fried, L. S., & Holyoak, K. J. (1986). Distributional expectations and the induction of category structure. Journal of Experimental Psychology: Learning, Memory, and Cognition, 12,
241-256. Fodor, J. A., Bever, T. G., & Garrett, M. F. (1974). The psychology of language: An introduction to psycholinguistics and generative grammar. New York: McGraw-Hill. Franks, J. J., & Bransford,
J. D. (1971). Abstraction of visual patterns. Journal of Experimental Psychology, 90, 65-74. Fried, L. S., & Holyoak, K. J. (1984). Induction of category distributions: A framework for classification
learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 10, 234-257. Garner, W. R. (1974). The processing of information and structure. New York: Wiley. Garner, W. R. (1977).
The effect of absolute size on the separability of the dimensions of size and brightness. Bulletin of the Psychonomic Society, 9, 380-382. Garner, W. R., & Felfoldy, G. L. (1970). Integrality of
stimulus dimensions in various types of information processing. Cognitive Psychology, 1, 225-241. Geisler, W. S. (1989). Sequential ideal-observer analysis of visual discriminations. Psychological
Review, 96, 267-314. Geyer, L. H., & DeWald, C. G. (1973). Feature lists and confusion matrices. Perception & Psychophysics, 14, 471-482.
Gibson, E., Osser, H., Schiff, W., & Smith, J. (1963). An Analysis of Critical Features of Letters, Tested by a Confusion Matrix. A Basic Research Program on Reading. (Cooperative Research Project
No. 639). Washington: U.S. Office of Education. Gluck, M. A. (1991). Stimulus generalization and representation in adaptive network models of category learning. Psychological Science, 2, 50-55.
Gluck, M. A., & Bower, G. H. (1988). From conditioning to category learning: An adaptive network model. Journal of Experimental Psychology: General, 117, 225-244. Gluck, M. A., Bower, G., & Hee, M.
R. (August, 1989). A configural-cue network model of animal and human associative learning. Paper presented at the Eleventh Annual Conference of the Cognitive Science Society. Ann Arbor, Michigan.
Goldman, D., & Homa, D. (1977). Integrative and metric properties of abstracted information as a function of category discriminability, instance variability, and experience. Journal of Experimental
Psychology: Human Learning and Memory, 3, 375-385. Green, D. M., & Swets, J. A. (1966). Signal detection theory and psychophysics. New York: Wiley. Hammerton, M. (1970). An investigation into changes
in decision criteria and other details of a decision-making task. Psychonomic Science, 21, 203-204. Hayes-Roth, B., & Hayes-Roth, F. (1977). Concept learning and the recognition and classification of
exemplars. Journal of Verbal Learning and Verbal Behavior, 16, 119-136. Haygood, R. C., & Bourne, L. E. (1965). Attribute and rule-learning aspects of conceptual behavior. Psychological Review, 72,
175-195. Healy, A. F., & Kubovy, M. A. (1977). A comparison of recognition memory to numerical decision: How prior probabilities affect cutoff location. Memory & Cognition, 5, 3-9. Herrnstein, R. J.
(1961). Relative and absolute strength of response as a function of frequency of reinforcement. Journal of the Experimental Analysis of Behavior, 4, 267-272. Herrnstein, R. J. (1970). On the law of
effect. Journal of the Experimental Analysis of Behavior, 13, 243-266. Hintzman, D. L. (1986). "Schema abstraction" in a multiple-trace memory model. Psychological Review, 93, 411-428. Hintzman, D.
L., & Ludlam, G. (1980). Differential forgetting of prototypes and old instances: Simulations by an exemplar-based classification model. Memory & Cognition, 8, 378-382. Homa, D., & Chambliss, D.
(1975). The relative contributions of common and distinctive information on the abstraction from ill-defined categories. Journal of Experimental Psychology: Human Learning and Memory, 1, 351-359.
Homa, D., & Cultice, J. (1984). Role of feedback, category size, and stimulus distortion on the acquisition and utilization of ill-defined categories. Journal of Experimental Psychology: Learning,
Memory, and Cognition, 10, 83-94. Homa, D., Cross, J., Cornell, D., Goldman, D., & Schwartz, S. (1973). Prototype abstraction and classification of new instances as a function of number of instances
defining the prototype. Journal of Experimental Psychology, 101, 116-122. Homa, D., Dunbar, S., & Nohre, L. (1991). Instance frequency, categorization, and the modulating effect of experience.
Journal of Experimental Psychology: Learning, Memory, and Cognition, 17, 444-458. Homa, D., Sterling, S., & Trepel, L. (1981). Limitations of exemplar-based generalization and the abstraction of
categorical information. Journal of Experimental Psychology: Human Learning and Memory, 7, 418-439. Homa, D., & Vosburgh, R. (1976). Category breadth and the abstraction of prototypical information.
Journal of Experimental Psychology: Human Learning and Memory, 2, 322-330. Hull, C. L. (1920). Quantitative aspects of the evolution of concepts. Psychological Monographs (No. 123). Hurwitz, J. B.
(1990). A hidden-pattern unit network model of category learning. Unpublished doctoral dissertation, Harvard University, Cambridge, Massachusetts.
Hurwitz, J. B. (1994). Retrieval of exemplar and feature information in category learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20, 887-903. Huttenlocher, J., &
Hedges, L. V. (1994). Combining graded categories: Membership and typicality. Psychological Review, 101, 157-163. Hyman, R., & Well, A. (1968). Perceptual separability and spatial models. Perception
& Psychophysics, 3, 161-165. Imai, S., & Garner, W. R. (1965). Discriminability and preference for attributes in free and constrained classification. Journal of Experimental Psychology, 69,
596-608. Kadlec, H., & Townsend, J. T. (1992). Implications of marginal and conditional detection parameters for the separabilities and independence of perceptual dimensions. Journal of Mathematical
Psychology, 36, 325-374. Klahr, D., Langley, P., & Neches, R. (Eds.). (1987). Production system models of learning and development. Cambridge, MA: MIT Press. Knowlton, B. J., Ramus, S. J., & Squire,
L. R. (1992). Intact artificial grammar learning in amnesia: Dissociation of classification learning and explicit memory for specific instances. Psychological Science, 3, 172-179. Knowlton, B. J., &
Squire, L. R. (1993). The learning of categories: Parallel brain systems for item memory and category level knowledge. Science, 262, 1747-1749. Koh, K. (1993). Response variability in
catee,orization" Deterministic versusprobabilistic decision rules. University of California at Santa Barbara, unpublished manuscript. Kolodny, J. A. (1994). Memory processes in classification
learning: An investigation of amnesic performance in categorization of dot patterns and artistic styles. Psychological Science, 5, 164-169. Krumhansl, C. L. (1978). Concerning the applicability of
geometric models to similarity data: The interrelationship between similarity and spatial density. Psychological Review, 85, 445-463. Kruschke, J. K. (1992). ALCOVE: An exemplar-based connectionist
model of category learning. Psychological Review, 99, 22-44. Kruskal, J. B. (1964a). Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. Psychometrika, 29, 1-27.
Kruskal, J. B. (1964b). Nonmetric multidimensional scaling: A numerical method. Psychometrika, 29, 115-129. Kubovy, M., & Healy, A. F. (1977). The decision rule in probabilistic categorization: What
it is and how it is learned. Journal of Experimental Psychology: General, 106, 427-446. Kubovy, M., Rapoport, A., & Tversky, A. (1971). Deterministic vs. probabilistic strategies in detection.
Perception & Psychophysics, 9, 427-429. Lakoff, G. (1986). Women, fire, and dangerous things. Chicago: University of Chicago Press. Lee, W. (1963). Choosing among confusably distributed stimuli with
specified likelihood ratios. Perceptual and Motor Skills, 16, 445-467. Lee, W., & Janke, M. (1964). Categorizing externally distributed stimulus samples for three continua. Journal of Experimental
Psychology, 68, 376-382. Lee, W., & Janke, M. (1965). Categorizing externally distributed stimulus samples for unequal molar probabilities. Psychological Reports, 17, 79-90. Lewandowsky, S. (1995).
Base-rate neglect in ALCOVE: A critical reevaluation. Psychological Review, 102, 185-191. Luce, R. D. (1963). Detection and recognition. In R. D. Luce, R. R. Bush, & E. Galanter (Eds.), Handbook of
mathematical psychology (pp. 103-189). New York: Wiley. Maddox, W. T. (1992). Perceptual and decisional separability. In F. G. Ashby (Ed.), Multidimensional models of perception and cognition (pp.
147-180). Hillsdale, NJ: Erlbaum. Maddox, W. T. (1995). Base-rate effects in multidimensional perceptual categorization. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21,
Maddox, W. T., & Ashby, F. G. (1993). Comparing decision bound and exemplar models of categorization. Perception & Psychophysics, 53, 49-70. Maddox, W. T., & Ashby, F. G. (1996). Perceptual
separability, decisional separability, and the identification-speeded classification relationship. Journal of Experimental Psychology: Human Perception and Performance, 22, 795-817. Maddox, W. T., &
Estes, W. K. (1995). On the role of frequency in "word-frequency" and mirror effects in recognition. Journal of Experimental Psychology: General. Manuscript submitted for publication. Marley, A. A. J.
(1992). Developing and characterizing multidimensional Thurstone and Luce models for identification and preference. In F. G. Ashby (Ed.), Multidimensional models of perception and cognition (pp.
299-333). Hillsdale, NJ: Erlbaum. Marr, D. (1982). Vision. New York: W. H. Freeman. Massaro, D. W. (1987). Speech perception by ear and eye: A paradigm for psychological inquiry. Hillsdale, NJ:
Lawrence Erlbaum Associates. Massaro, D. W., & Friedman, D. (1990). Models of integration given multiple sources of information. Psychological Review, 97, 225-252. McKinley, S. C., & Nosofsky, R. M.
(1995). Investigations of exemplar and decision bound models in large, ill-defined category structures. Journal of Experimental Psychology: Human Perception and Performance, 21, 128-148. Medin, D. L.,
Altom, M. W., Edelson, S. M., & Freko, D. (1982). Correlated symptoms and simulated medical classification. Journal of Experimental Psychology: Learning, Memory, and Cognition, 8, 37-50. Medin, D.
L., & Edelson, S. M. (1988). Problem structure and the use of base-rate information from experience. Journal of Experimental Psychology: General, 117, 68-85. Medin, D. L., & Schaffer, M. M. (1978).
Context theory of classification learning. Psychological Review, 85, 207-238. Medin, D. L., & Schwanenflugel, P. J. (1981). Linear separability in classification learning.
Journal of Experimental Psychology: Human Learning and Memory, 7, 355-368.
Medin, D. L., Wattenmaker, W. D., & Hampson, S. E. (1987). Family resemblance, conceptual cohesiveness, and category construction. Cognitive Psychology, 19, 242-279. Mervis, C. B. (1980). Category
structure and the development of categorization. In R. Spiro, B. C. Bruce, & W. F. Brewer (Eds.), Theoretical issues in reading comprehension. Hillsdale, NJ: Lawrence Erlbaum Associates. Mervis, C.
B., Catlin, J., & Rosch, E. (1976). Relationships among goodness-of-example, category norms, and word frequency. Bulletin of the Psychonomic Society, 7, 283-284. Miller, G. A., & Johnson-Laird, P.
N. (1976). Language and perception. Cambridge, MA: Harvard University Press. Myung, I. J. (1994). Maximum entropy interpretation of decision bound and context models of categorization. Journal of
Mathematical Psychology, 38, 335-365. Neisser, U. (1967). Cognitive Psychology. New York: Appleton-Century-Crofts. Neumann, P. G. (1974). An attribute frequency model for the abstraction of
prototypes. Memory & Cognition, 2, 241-248. Newell, A., & Simon, H. A. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall. Nosofsky, R. M. (1984). Choice, similarity, and the context
theory of classification. Journal of Experimental Psychology: Learning, Memory, and Cognition, 10, 104-114. Nosofsky, R. M. (1985). Overall similarity and the identification of separable-dimension
stimuli: A choice model analysis. Perception & Psychophysics, 38, 415-432. Nosofsky, R. M. (1986). Attention, similarity, and the identification-categorization relationship. Journal of Experimental
Psychology: General, 115, 39-57. Nosofsky, R. M. (1987). Attention and learning processes in the identification and categoriza-
tion of integral stimuli. Journal of Experimental Psychology: Learning, Memory, and Cognition, 13, 87-108. Nosofsky, R. M. (1988a). Exemplar-based accounts of relations between classification,
recognition, and typicality. Journal of Experimental Psychology: Learning, Memory, and Cognition, 14, 700-708. Nosofsky, R. M. (1988b). Similarity, frequency, and category representations. Journal of
Experimental Psychology: Learning, Memory, and Cognition, 14, 54-65. Nosofsky, R. M. (1991a). Relations between the rational model and the context model of categorization. Psychological Science, 2,
416-421. Nosofsky, R. M. (1991b). Stimulus bias, asymmetric similarity, and classification. Cognitive Psychology, 23, 91-140. Nosofsky, R. M. (1992a). Exemplar-based approach to relating
categorization, identification and recognition. In F. G. Ashby (Ed.), Multidimensional models of perception and cognition (pp. 363-393). Hillsdale, NJ: Erlbaum. Nosofsky, R. M. (1992b). Exemplars,
prototypes, and similarity rules. In A. Healy, S. Kosslyn, & R. Shiffrin (Eds.), Festschrift for William K. Estes. Hillsdale, NJ: Erlbaum. Nosofsky, R. M., Clark, S. E., & Shin, H. J. (1989). Rules and
exemplars in categorization, identification, and recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 15, 282-304. Nosofsky, R. M., Kruschke, J. K., & McKinley, S. C.
(1992). Combining exemplar-based category representations and connectionist learning rules. Journal of Experimental Psychology: Learning, Memory, and Cognition, 18, 211-233. Nosofsky, R. M., Palmeri,
T. J., & McKinley, S. C. (1994). Rule-plus-exception model of classification learning. Psychological Review, 101, 53-79. Omohundro, J. (1981). Recognition vs. classification of ill-defined category
exemplars. Memory & Cognition, 9, 324-331. Pao, Y. H. (1989). Adaptive pattern recognition and neural networks. Reading, MA: Addison-Wesley. Parkes, A. S., & Bruce, H. M. (1962). Pregnancy-block of
female mice placed in boxes soiled by males. Journal of Reproduction and Fertility, 4, 303-308. Parzen, E. (1962). On estimation of a probability density function and mode. The Annals of Mathematical
Statistics, 33, 1065-1076. Perrin, N. (1992). Uniting identification, similarity, and preference: General recognition theory. In F. G. Ashby (Ed.), Multidimensional models of perception and cognition
(pp. 123-146). Hillsdale, NJ: Erlbaum. Perrin, N. A., & Ashby, F. G. (1991). A test of perceptual independence with dissimilarity data. Applied Psychological Research, 15, 79-93. Posner, M. I., &
Keele, S. W. (1968). On the genesis of abstract ideas. Journal of Experimental Psychology, 77, 353-363. Posner, M. I., & Keele, S. W. (1970). Retention of abstract ideas. Journal of Experimental
Psychology, 83, 304-308. Reber, A. S. (1976). Implicit learning of synthetic languages: The role of instructional set. Journal of Experimental Psychology: Human Memory and Learning, 2, 88-94. Reber,
A. S., & Allen, R. (1978). Analogical and abstraction strategies in synthetic grammar learning: A functionalist interpretation. Cognition, 6, 189-221. Reed, S. K. (1972). Pattern recognition and
categorization. Cognitive Psychology, 3, 382-407. Rips, L. J., Shoben, E. J., & Smith, E. E. (1973). Semantic distance and the verification of semantic relations. Journal of Verbal Learning and
Verbal Behavior, 12, 1-20. Robson, J. G. (1975). Receptive fields: Neural representation of the spatial and intensive attributes of the visual image. Handbook of Perception, 5, 81-116. Rosch, E.
(1973). Natural categories. Cognitive Psychology, 4, 328-350.
Rosch, E. (1975). Cognitive reference points. Cognitive Psychology, 7, 532-547. Rosch, E. (1977). Human categorization. In N. Warren (Ed.), Studies in cross-cultural psychology. London: Academic
Press. Rosch, E. (1978). Principles of categorization. In E. Rosch & B. B. Lloyd (Eds.), Cognition and categorization (pp. 27-48). Hillsdale, NJ: Erlbaum. Rosch, E., & Mervis, C. (1975). Family
resemblances: Studies in the internal structure of categories. Cognitive Psychology, 7, 573-605. Rosch, E., Simpson, & Miller, R. S. (1976). Structural bases of typicality effects. Journal of
Experimental Psychology: Human Perception and Performance, 2, 491-502. Sattath, S., & Tversky, A. (1977). Additive similarity trees. Psychometrika, 42, 319-345. Shaw, M. L. (1982). Attending to
multiple sources of information. I: The integration of information in decision making. Cognitive Psychology, 14, 353-409. Shepard, R. N. (1957). Stimulus and response generalization: A stochastic
model relating generalization to distance in psychological space. Psychometrika, 22, 325-345. Shepard, R. N. (1962a). The analysis of proximities: Multidimensional scaling with an unknown distance
function I. Psychometrika, 27, 125-140. Shepard, R. N. (1962b). The analysis of proximities: Multidimensional scaling with an unknown distance function II. Psychometrika, 27, 219-246. Shin, H. J., &
Nosofsky, R. M. (1992). Similarity-scaling studies of "dot-pattern" classification and recognition. Journal of Experimental Psychology: General, 121, 278-304. Silverman, B. W. (1986). Density
estimation for statistics and data analysis. London: Chapman and Hall. Smith, E. E., & Medin, D. L. (1981). Categories and concepts. Cambridge, MA: Harvard University Press. Smith, E. E., Shoben,
E. J., & Rips, L. J. (1974). Structure and process in semantic memory: A featural model for semantic decisions. Psychological Review, 81, 214-241. Thomas, R. (1994, August). Assessing perceptual
properties via same-different judgments. Paper presented at the Twenty-Seventh Annual Mathematical Psychology Meetings. Seattle, Washington. Torgerson, W. S. (1958). Theory and methods of scaling. New
York: Wiley. Townsend, J. T., & Ashby, F. G. (1982). Experimental tests of contemporary mathematical models of visual letter recognition. Journal of Experimental Psychology: Human Perception and
Performance, 8, 834-864. Townsend, J. T., Hu, G. G., & Ashby, F. G. (1981). Perceptual sampling of orthogonal straight line features. Psychological Research, 43, 259-275. Townsend, J. T., & Landon,
D. E. (1982). An experimental and theoretical investigation of the constant-ratio rule and other models of visual letter confusion. Journal of Mathematical
Psychology, 25, 119-162.
Tversky, A. (1972). Elimination by aspects: A theory of choice. Psychological Review, 79, 281-299. Tversky, A. (1977). Features of similarity. Psychological Review, 84, 327-352. Tversky, A., & Gati,
I. (1982). Similarity, separability and the triangle inequality. Psychological Review, 89, 123-154. Walker, J. H. (1975). Real-world variability, reasonableness judgments, and memory representations
for concepts. Journal of Verbal Learning and Verbal Behavior, 14, 241-252. Ward, L. M. (1973). Use of Markov-encoded sequential information in numerical signal detection. Perception & Psychophysics,
14, 337-342. Wattenmaker, W. D. (1992). Relational properties and memory-based category construction. Journal of Experimental Psychology: Learning, Memory, and Cognition, 15, 282-304. Weissmann, S.
M., Hollingsworth, S. R., & Baird, J. C. (1975). Psychophysical study of numbers. III: Methodological applications. Psychological Research, 38, 97-115.
Wittgenstein, L. (1953). Philosophical investigations. New York: Macmillan. Wyszecki, G., & Stiles, W. S. (1967). Color science: Concepts and methods, quantitative data and formulas. New York: Wiley.
Young, G., & Householder, A. S. (1938). Discussion of a set of points in terms of their mutual distances. Psychometrika, 3, 19-21. Zadeh, L. A. (1965). Fuzzy sets. Information and Control, 8, 338-353.
Behavioral Decision Research: An Overview
John W. Payne, James R. Bettman, and Mary Frances Luce
I. INTRODUCTION
This chapter concerns an area of inquiry referred to as behavioral decision research (BDR), which has grown rapidly over the past four decades. To illustrate the types of
decisions with which BDR is concerned, consider the following examples. Sue Terry is faced with a decision about where to go to college. She is an excellent swimmer and wishes to attend a school with
a good women's athletic program. Sue has received letters from a large number of schools that are interested in giving her an athletic scholarship. She decides to apply to the four schools with the
top-ranked women's swimming teams in the country. She is undecided about which school is her first choice. One school's swimming team has dominated women's intercollegiate swimming for several years.
However, Sue wonders if she would be among the top swimmers in her event at that school. A second school does not have quite as good a team but has an excellent overall women's athletic program. Sue
is certain she can be the top or second best swimmer in her event on this team. A third school has just brought in a new swimming coach who had coached two Olympic medalists in Sue's event. That
school also has an excellent academic reputation. Sue cannot decide which of the schools she prefers and finds it very difficult to trade off a school with a better team (which will
Measurement, Judgment, and Decision Making. Copyright © 1998 by Academic Press. All rights of reproduction in any form reserved.
probably win the national championship) against another school where she is more assured of competing, and against a third school with the best academics. Sue also wishes she had a better idea about her
odds of getting the kind of coaching attention she wants at the school with the best swimming program. Jim Johnson, an assistant professor in psychology, is concerned about how to invest a modest
amount of money he has just received from an uncle as a gift. His stockbroker friend has given him lots of information about the investment options available to him; in fact, Jim thinks that perhaps
he has been given too much information. Some stock options seem to offer high returns along with substantial risk. Other options, like money market funds, don't seem to have as much risk; however,
the potential return with such funds also seems low. Which option seems best also appears to depend on whether the general state of the economy improves, stays the same, or gets worse. Jim vacillates
about which investment option he prefers. The goals of BDR are to describe and explain judgment and choice behavior and to determine how knowledge of the psychology of decision making can be used to
aid and improve decision-making behavior. Although psychological concepts and methods have played a major role in the development of BDR as a field, BDR is intensely interdisciplinary, often using
concepts and methods from economics, statistics, and other fields. In addition, BDR is nearly unique among subdisciplines in psychology because it often proceeds by using psychological concepts in
general, and perceptual and cognitive mechanisms in particular, to test the descriptive adequacy of normative theories of judgment and choice. This chapter provides an overview of the field of
behavioral decision research. We hope not only to arouse interest in decision making as a research focus, but also to suggest approaches and directions for new research. To keep the scope of the
chapter manageable, we focus mostly on the topics of multiattribute preferences (values), the formation of beliefs about uncertain events, and the making of risky decisions. Further, we focus on
decision tasks that are relatively well structured in terms of the objectives and alternatives available. Research on the structuring of decision problems is more limited (however, for work on
identifying possible options, see Adelman, Gualtieri, & Stanford, 1995; Gettys, Plisker, Manning, & Casey, 1987; Keller & Ho, 1988; and Klein, Wolf, Militello, & Zsambok, 1995; for work on
identifying cues for inference, see Klayman, 1988; and for recent work on representing decision problems in various ways, see Coupey, 1994, and Jones & Schkade, 1995). A recurring theme of the
chapter is the important role that considerations of information processing limitations and cognitive effort play in explaining decision behavior.
5 Behavioral Decision Research: An Overview
II. DECISION TASKS AND DECISION DIFFICULTY What makes a decision difficult? The difficulty of a decision seems to depend on a number of cognitive and emotional elements. For example, decisions are
often complex, containing many alternatives and several possible outcomes. As we will discuss later, a good deal of research suggests that people respond to cognitively difficult, complex decisions
by trying to simplify those decisions in various ways. Decisions can also be difficult because of uncertainties about the possible outcomes. Generally, the more uncertain you are about what will
happen if you choose various options, the more difficult the decision. We discuss research focusing on how people deal with decisions characterized by uncertainties about possible outcomes. We also
consider research on using cues (e.g., a student's GPA in high school, SAT scores, and letters of recommendation) to make predictions (e.g., a judgment about how well someone will do in college).
Even when we feel we know what we will receive when we choose an option, we may not know how we feel about it. A prestigious Ivy League school may offer a competitive and high-pressure undergraduate
program, but we might be uncertain about how we would like that environment. Thus, there can be uncertainty in values as well as uncertainty in outcomes (March, 1978). Related to the uncertainty in
values is the fact that decisions often involve conflicting values, where we must decide how much we value one attribute relative to another. In other words, no single option may be best on all our
valued objectives. Conflict among values has long been recognized as a major source of decision difficulty (Hogarth, 1987; Shepard, 1964). Consequently, we will also review work on how people make
decisions among options with conflicting values. A variety of other factors affect decision difficulty. One major factor is the emotional content of the decision. For most people, there is an
enormous difference between choosing a brand of mayonnaise and buying an automobile for family use. In the former case, the decision is often routine, has relatively few consequences, and is made
almost automatically, with little effort. In the latter case, the decision maker may devote a great deal of effort, search for a large amount of information, solicit advice, and agonize over
difficult trade-offs in making a choice, such as deciding between additional safety features in a car and an increased price. The decision maker may even try to avoid making trade-offs at all (Baron
& Spranca, 1997). Although there is relatively little research on how the emotional content of the decision affects how judgments and choices are made, we discuss that issue and provide a conceptual
framework. Finally, although we will not discuss them in order to keep the scope of the chapter reasonable, other factors that influence decision difficulty extend beyond the individual decision
John W. Payne, James R. Bettman, and Mary Frances Luce
maker and the specific task at hand to include such factors as whether the decision maker is accountable to others for the decision that is made (e.g., Tetlock, 1985; Tetlock & Boettger, 1994;
Siegel-Jacobs & Yates, 1996). See also Hinsz, Tindale, and Vollrath (1997) and Kerr, MacCoun, and Kramer (1996) for recent examples of the extensive literature on group decision making. We have argued
that decisions can become quite difficult for a variety of reasons. How do people cope with such difficulty? To what extent do people solve difficult decisions by obtaining complete information,
making trade-offs, and always selecting the alternative that maximizes their values? One approach to decision making, favored by many economists, argues that decision makers are exquisitely rational
beings in solving judgment and choice problems. The rational or economic person is assumed to have knowledge of the relevant aspects of his environment which, if not absolutely complete, is at least
impressively clear and voluminous. He is assumed also to have a well-organized and stable system of preferences and a skill in computation that enables him to calculate, for the alternative courses
of action that are available to him, which of these will permit him to reach the highest attainable point on his preference scale. (Simon, 1955, p. 99) Neoclassical economics argues that models of
rational, optimizing behavior also describe actual human behavior: "The same model is used as a normative definition of rational choice and a descriptive predictor of observed choice" (Thaler, 1987,
p. 99). Specific models that have been used both as normative definitions of behavior and as descriptive predictors of actual judgments and choices are Bayes' theorem in the area of probabilistic
judgment and the expected utility model in the area of risky decision making, described later in this chapter. Another approach to characterizing decision making, which most psychologists feel is
more descriptive of actual decision making, is that of bounded rationality. III. BOUNDED RATIONALITY "Human rational behavior is shaped by a scissors whose two blades are the structure of task environments and the computational capabilities of the actor" (H. Simon, 1990, p. 7). In one of the most important papers in the history of BDR, Simon (1955) argued that understanding actual decision behavior would require examining how perceptual, learning, and
cognitive factors cause human decision behavior to deviate from that predicted by the normative "economic man" model. In contrast to the normative assumptions, Simon argued that the decision maker's
limited computational capabilities would interact with the
complexity of task environments to produce bounded rationality--that is, decision behavior that reflects information processing limitations. As a result, Simon suggested that actual decision behavior
might not even approximate the behavior predicted by normative models of decision tasks (Simon, 1978). For example, Simon (1955) suggested that people often select among options such as those facing
Sue Terry and Jim Johnson in the introduction by identifying an option that is "good enough." That is, instead of trying to select the optimal or best option, which may be too daunting a task, people
may just try to "satisfice" and select the first option that meets their minimum requirements to be satisfactory. The information processing capacity limitations emphasized by Simon may explain
results showing that preferences for and beliefs about objects or events are often constructed--not merely revealed--in responding to a judgment or choice task. The concept of constructive
preferences and beliefs is that people do not have well-defined values for most objects, questions, and so on. Instead, they may construct such preferences on the spot when needed, such as when they
are asked how much they like an option (Bettman, 1979; Slovic, 1995). The notion of constructive preferences does not simply deny that observed preferences result from reference to a master list in
memory; it also implies that expressed judgments or choices are not necessarily generated by using some invariant algorithm such as expected utility calculation. Individuals may construct preferences
on the spot because they do not have the cognitive resources to generate well-defined preference orderings. According to March (1978), "Human beings have unstable, inconsistent, incompletely evoked,
and imprecise goals at least in part because human abilities limit preference orderliness" (p. 598). The theme of constructive judgments and choices underlies much current behavioral decision
research. The constructive nature of preferences and beliefs also implies, and is implied by, the fact that expressed judgments and choices often appear to be highly contingent upon a variety of task
and context factors, such as the order in which options are examined. Task factors are general characteristics of a decision problem (such as response mode [e.g., judgment or choice], information
format, or order of alternative search) that do not depend on particular values of the alternatives. Context factors, such as similarity of alternatives, on the other hand, are associated with the
particular values of the alternatives. One of the major findings from behavioral decision research is that the information and strategies used to construct preferences or beliefs are highly
contingent on and predictable from a variety of task and context factors. We will review some of this research as we continue. The effects of bounded rationality are also evident in the observation
that people are sometimes relatively insensitive to factors that should matter
from a normative perspective. For example, people sometimes ignore normatively relevant information such as base-rates in making probability judgments. People may also be sensitive to factors that
should not matter from a normative perspective, for example, equivalent response modes. More generally, task and context factors cause different aspects of the problem to be salient and evoke
different processes for combining information. Thus, seemingly unimportant characteristics of the decision problem can at least partially determine the preferences and beliefs we observe. An emphasis
on decision behavior as a highly contingent form of information processing is stressed throughout this chapter. The rest of this chapter is organized as follows. First, we review research on choice
with conflicting values and preferences. We consider various strategies used to make decisions among multiobjective (multiattribute) alternatives and discuss research showing how the use of such
strategies varies depending on properties of the choice task. We assume in this section that people know what they will get when they choose an option, but the problem is a difficult one because no
option best meets all of their objectives. Second, we review research dealing with how people judge the probabilities or likelihoods of uncertain events. Next, we briefly review the extensive
literature on how people make risky choices that involve trade-offs between the desirability of consequences and the likelihoods of those consequences (choices among gambles), for example, deciding
among investment options. We then describe research methods useful for studying decision behavior. Finally, we explore how concepts such as emotion, affect, and motivation may be combined with the
more cognitive notions of bounded rationality to further our understanding of decision behavior. IV. CONFLICTING VALUES AND PREFERENCES Conflict among values arises because decisions like
the ones illustrated at the beginning of this chapter generally involve a choice among options where no single option best meets all of our objectives. In fact, when one option dominates the others
(i.e., is better on all objectives), selection of that option is perhaps the most widely accepted principle of rational choice. As noted earlier, conflict is a major source of decision difficulty. If
conflict is present and a rule for resolving the conflict is not readily available in memory, decision making is often characterized by tentativeness and the use of relatively simple methods or
heuristic strategies, even in well-defined laboratory tasks. In this section we briefly describe some of the strategies that people use to make preferential choices. A major observation of behavioral
decision research is that people use a wide variety of strategies in making preference
judgments, some of which can be thought of as confronting conflict and others as avoiding conflict (Hogarth, 1987). After presenting descriptions of various strategies, we discuss research that
demonstrates how the use of such strategies is contingent on the nature and context of the task facing the decision maker.
A. Decision Strategies 1. Weighted Additive Value A common assumption about decisions among multiattribute alternatives is that individuals confront and resolve conflicts among values by considering
the trade-off of more of one valued attribute (e.g., economy) against less of another valued attribute (e.g., safety). The weighted additive value model (WADD) is often used to represent the
trading-off process. A measure of the relative importance (weight) of an attribute is multiplied by the attribute's value for a particular alternative and the products are summed over all attributes
to obtain an overall value for that alternative, WADD(X); that is,

WADD(X) = Σ (i = 1 to n) Wi Xi,

where Xi is the value of option X on attribute i, n is the total number of relevant attributes, and Wi is the weight given to attribute i. Consistent with normative procedures for dealing with
multiattribute problems (Keeney & Raiffa, 1976), the WADD model uses all the relevant problem information, explicitly resolves conflicting values by considering trade-offs, and selects the
alternative with the highest overall evaluation. Almost 20 years ago, Edwards and Tversky (1967) stated that this notion of additive composition "so completely dominates the literature on riskless
choice that it has no competitors" (p. 255). As we shall see, such competitors now exist in abundance. How do people think of "weights" within the context of the WADD rule? Weights are sometimes
interpreted locally; that is, the relative weights reflect the ranges of attribute values over the options in the choice set so that the greater the range, the greater the attribute's importance
(Goldstein, 1990). At other times, subjects interpret the weight given to an attribute more globally; for example, safety may be considered much more important than cost, regardless of the local
range of values (Beattie & Baron, 1991). Whether the influence of the weights on preferences reflects an adding or
averaging process is also at issue. In an averaging model, the weights are normalized, or constrained to sum to one. Perhaps the key distinction between an adding or averaging process is what happens
to a judgment when new information is obtained. For instance, assume you have received two strongly positive pieces of information about an applicant for a job. On the basis of that information you
have formed a favorable overall impression of the applicant. Now assume that you receive a third piece of information about the applicant that is positive, but only moderately so. What happens to
your overall impression of the applicant? Under a strict adding process, your impression should be even more favorable, because you have received more positive information. Under an averaging
process, your overall impression may be less favorable than it was, even though the new information is positive, because you will average two strongly positive pieces of information with a moderately
positive piece of information. See Jagacinski (1995) for an example of a study that distinguishes between adding and averaging models in the context of a personnel selection task. Research suggests
that the averaging model better describes judgments in many situations (Anderson, 1981). Variations on the adding and averaging models include versions of each model that allow for an initial
impression of an option or versions of each model that allow for configural terms. Configural terms allow for possible interactions among attributes. For example, a worker who is both prompt and
works at a high level of efficiency might be given extra credit in a performance evaluation, that is, a positive configural term; see Birnbaum, 1974; Birnbaum, Coffey, Mellers, and Weiss (1992), and
Champagne and Stevenson (1994) for discussions of configural strategies for combining information into an evaluation of an alternative. Three strategies related to the additive rule--the expected
value, expected utility, and subjective expected utility (SEU) rules--may be used in making decisions under risk. To calculate expected value, the value Xi (i.e., the consequence or monetary amount)
of each possible outcome of a lottery or gamble is multiplied by its probability of occurrence (Pi), and these value-probability products are summed over all the outcomes to obtain the expected
value. Then the lottery or gamble with the highest EV is selected. The expected utility rule is similar, but it substitutes the utility of each outcome, U(Xi), for its monetary value in the
calculation. The EU rule thus applies to a broader domain than monetary gambles, but at the cost of additional processing effort. In general, however, the processing characterizing both of these
models is very similar. The SEU rule allows for a subjective probability function S(Pi) to be used along with a utility function to represent risky decisions. The subjective expected utility of a
risky option X is then given by
SEU(X) = Σ (i = 1 to n) S(Pi) U(Xi).
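The additive family of rules just described lends itself to a direct computational sketch. The following Python fragment is purely illustrative (the car ratings, weights, gamble, and the square-root utility function are invented here, not taken from the chapter); it computes a WADD score for two hypothetical options and a subjective expected utility for a hypothetical gamble:

```python
def wadd(values, weights):
    """Weighted additive value: sum over attributes of Wi * Xi."""
    return sum(w * x for w, x in zip(weights, values))

def seu(outcomes, probabilities, utility, subjective_prob):
    """Subjective expected utility: sum over outcomes of S(Pi) * U(Xi)."""
    return sum(subjective_prob(p) * utility(x)
               for p, x in zip(probabilities, outcomes))

# Two hypothetical cars rated on economy and safety (0-10 scale),
# with safety weighted more heavily than economy.
weights = [0.3, 0.7]
car_a = [9, 5]   # economical but less safe
car_b = [4, 8]   # safer but less economical
print(round(wadd(car_a, weights), 2))  # 0.3*9 + 0.7*5 = 6.2
print(round(wadd(car_b, weights), 2))  # 0.3*4 + 0.7*8 = 6.8, so B wins

# A hypothetical 50/50 gamble over $100 or $0, evaluated with a
# concave (risk-averse) utility and an identity S(Pi) function.
g = seu([100, 0], [0.5, 0.5],
        utility=lambda x: x ** 0.5,
        subjective_prob=lambda p: p)
print(round(g, 2))  # 0.5*10 + 0.5*0 = 5.0
```

Note that EV is the special case with U(Xi) = Xi and S(Pi) = Pi, and EU the case with S(Pi) = Pi, mirroring the text's point that the three rules share very similar processing.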
The EV, EU, and SEU rules, especially the latter two, are considered normative rules for choice, so these rules are often used in the literature as both descriptions of actual behavior and as
normative prescriptions for behavior. In fact, a great deal of research on decision making has been motivated by ascertaining the extent to which such normative rules describe actual choice behavior;
for example, see Fox, Rogers, and Tversky (1996). As we outline later in the chapter, choice behavior often departs substantially from these normative prescriptions. 2. Probabilistic Models of Choice
Before we review some of the other strategies for choice that have been identified, we briefly discuss the idea that choice behavior can be modeled as a probabilistic process. The idea that the
alternative with the highest overall evaluation (perhaps derived from a WADD or EV) rule is always chosen is a deterministic one. That is, if V(X) represents the value of alternative X, it is assumed
that A will be chosen from the two options A and B if V(A) > V(B). However, when faced with the same alternatives under seemingly identical conditions, people do not always make the same choice.
Thus, some researchers have argued that the deterministic notion of preference should be replaced by a probabilistic one, which focuses on the probability of choosing A from the total set of
options--that is, choosing A rather than B, denoted P(A; {A, B}) if the total set of options is A and B. Such a probability is often viewed as a measure of the degree to which A is preferred to B.
There is extensive literature on probabilistic choice models (see Meyer & Kahn, 1991, for a recent review of that literature). Perhaps the best known probabilistic choice model is the multinomial logit model (McFadden, 1981), in which the probability of choosing an option Xi from the choice set {X1, . . . , Xn} is given by the equation

P(Xi; {X1, . . . , Xn}) = e^V(Xi) / Σj e^V(Xj),

where V(Xi) = bi + Σk bk Xik, Xik is the value of option i on attribute k, bk is a scaling parameter (weight) for attribute k, and bi is a constant meant to capture those aspects of the attractiveness of option i not captured by the values on the attributes. Note that the V(Xi) function is essentially an additive composition rule similar to the WADD strategy mentioned earlier.
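As a rough illustration of the logit rule, the sketch below maps additive evaluations V(Xi) to choice probabilities. The three options, their attribute values, and the coefficients are all hypothetical, invented for this example:

```python
import math

def logit_probabilities(values):
    """P(Xi) = exp(V(Xi)) / sum over j of exp(V(Xj))."""
    exps = [math.exp(v) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

# V(Xi) = bi + sum over k of bk * Xik, for three hypothetical options.
b0 = [0.0, 0.2, -0.1]               # option-specific constants bi
b = [0.5, 1.0]                      # attribute weights bk
options = [[6, 4], [3, 7], [5, 5]]  # attribute values Xik
v = [bi + sum(bk * x for bk, x in zip(b, xs))
     for bi, xs in zip(b0, options)]

probs = logit_probabilities(v)
print([round(p, 3) for p in probs])
assert abs(sum(probs) - 1.0) < 1e-9  # probabilities sum to one
```

Because every option's probability shares the same denominator, the ratio P(Xi)/P(Xj) is unaffected by the other options in the set; this is exactly the IIA property discussed next.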
An important implication of the multinomial logit model, and many other probabilistic choice models, is the property called independence of irrelevant alternatives (IIA). The basic idea of the IIA
principle is that the relative preference between options does not depend on the presence or absence of other options and is thus independent of the context of choice as defined by the offered choice
set. For example, the IIA principle means that the probability of a decision maker's selecting steak over chicken for dinner from a menu is the same for all menus containing both entrees (Coombs,
Dawes, & Tversky, 1970). More generally, Tversky and Simonson (1993) have argued that the IIA assumption is essentially equivalent to the idea of "value maximization" and the belief that "the decision
maker has a complete preference order of all options, and that--given an offered set--the decision maker always selects the option that is highest in that order" (p. 1179). Although it is clear that
people sometimes make decisions in ways consistent with the WADD, EV, and EU models, and probabilistic versions of those models such as the multinomial logit model, it has also become obvious over
the past 20 years that people often make decisions using simpler decision processes (heuristics) more consistent with the idea of bounded rationality. Further, at least partially as a result of the
use of those heuristics, people often exhibit choices that are context dependent. That is, principles such as IIA are systematically violated (e.g., see Simonson & Tversky, 1992). Tversky and Simonson
(1993) have proposed a componential context model of such effects that specifically considers the relative advantage of each alternative when compared to other options in the set. We describe some of
the more common heuristics next. Each heuristic represents a different method for simplifying decision making by limiting the amount of information processed or by making the processing of that
information easier. In addition, these heuristics often avoid conflict by not making trade-offs among attributes. That is, many of the heuristics are noncompensatory, meaning that a good value on one
attribute cannot compensate for a bad value on another. Given space limitations, we focus on deterministic versions of these heuristics, although probabilistic forms of some of these strategies
exist. 3. Satisficing (SAT) One of the oldest heuristics in the decision-making literature is the satisficing strategy described by Simon (1955) and mentioned earlier. Alternatives are considered one
at a time, in the order they occur in the set, and the value of each attribute of the alternative is compared to a predefined cutoff, often viewed as an aspiration level. The alternative is rejected
if any attribute value is below the cutoff, and the first option with all values surpassing the cutoffs is selected. If no alternatives pass all the cutoffs, the process can be
repeated with lower cutoffs or an option can be selected randomly. A major implication of the satisficing heuristic is that choice depends on the order in which alternatives are considered. No
comparison is made of the relative merits of alternatives; rather, if alternative A and alternative B both pass the cutoffs, then whether A or B is chosen depends on whether A or B is evaluated
first. 4. The Equal Weight (EQW) Heuristic The equal weight strategy considers all alternatives and all the attribute values for each alternative but simplifies the decision by ignoring information
about the relative importance or probability of each attribute (outcome). Assuming that the attribute values can be expressed on a common scale of value, this heuristic is a special case of the
weighted additive rule, which obtains an overall value for each option by summing the values for each attribute for that alternative. Several researchers have argued that the equal weight rule is
often a highly accurate simplification of the decision-making process (Dawes, 1979; Einhorn & Hogarth, 1975). 5. The Majority of Confirming Dimensions (MCD) Heuristic The MCD heuristic chooses between
pairs of alternatives by comparing the values for each of the two alternatives on each attribute and retaining the alternative of the pair with a majority of winning (better) attribute values. The
retained alternative is then compared in a similar fashion to the next alternative among the set of alternatives and such pairwise comparisons repeat until all alternatives have been processed and
the final winning alternative has been identified (Russo & Dosher, 1983). The MCD heuristic is a simplified version of Tversky's (1969) additive difference (ADDIF) model. Tversky's model takes the
difference between the subjective values of the two alternatives on each dimension. A weighting function is applied to each of these differences, and the results are summed over all the dimensions,
yielding an overall relative evaluation of the two options. Under certain conditions, the preference orderings produced by the additive difference rule and the WADD rule are identical, even though
the two rules differ in their processing details (see Tversky, 1969, for a further discussion of the relationship between the ADDIF and WADD models). Aschenbrenner, Bockenholt, Albert, and
Schmalhofer (1986) have proposed a variation on the additive difference process. In their model, attribute differences are processed sequentially, with the summed differences accumulating until the
advantage of one option over the other exceeds some criterion value (this value may reflect the decision maker's desired balance between the effort involved and the quality of the decision process;
see Bockenholt, Albert, Aschenbrenner, & Schmalhofer, 1991).
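The satisficing and MCD heuristics described above can be sketched in a few lines. This is only an illustrative rendering of the verbal descriptions; the apartment ratings, attribute scales, and cutoffs are invented here:

```python
def satisfice(options, cutoffs):
    """Satisficing (Simon, 1955): take the first option, in the order
    encountered, whose every attribute value meets its cutoff."""
    for name, values in options:
        if all(v >= c for v, c in zip(values, cutoffs)):
            return name
    return None  # nothing passes; cutoffs could be lowered and retried

def mcd(options):
    """Majority of confirming dimensions (Russo & Dosher, 1983):
    pairwise tournament keeping the option that wins more attributes."""
    winner_name, winner_vals = options[0]
    for name, values in options[1:]:
        wins = sum(v > w for v, w in zip(values, winner_vals))
        losses = sum(v < w for v, w in zip(values, winner_vals))
        if wins > losses:
            winner_name, winner_vals = name, values
    return winner_name

# Three hypothetical apartments rated on rent, location, and size.
apartments = [("A", [6, 9, 4]), ("B", [7, 7, 7]), ("C", [9, 6, 8])]
print(satisfice(apartments, cutoffs=[5, 5, 5]))  # B: first to pass all cutoffs
print(mcd(apartments))                           # C: survives the tournament
```

The sketch makes the text's point about order dependence concrete: satisficing stops at B without ever examining C, whereas MCD, which compares alternatives against each other, ends up retaining C.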
6. The Lexicographic (LEX) Heuristic The lexicographic heuristic is quite simple: the alternative with the best value on the most important attribute is selected. If two alternatives are tied for the
best value, the second most important attribute is considered the tiebreaker, and this process continues until the tie is broken. Sometimes the notion of a just-noticeable difference (JND) is added
to the LEX rule; that is, options are considered to be tied on an attribute if they are within a JND of the best alternative on that attribute (Tversky, 1969). This version of the LEX rule is
sometimes called lexicographic-semiorder (LEXSEMI). One implication of using a lexicographic-semiorder decision rule is that a person may exhibit intransitivities in preferences in which X > Y, Y >
Z, and Z > X, as shown in the following example, adapted from Fishburn (1991). Suppose that Professor P is about to change jobs and feels that if two offers are far apart on salary (e.g., more than
$10,000 apart), then she will choose the job with the higher salary. Otherwise, the prestige of the university will be more important to her. Suppose her three offers are as follows:
Offer    Salary     Prestige
X        $65,000    Low
Y        $50,000    High
Z        $58,000    Medium
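Professor P's pairwise preferences under this rule can be checked with a short sketch. The $10,000 just-noticeable difference, salaries, and prestige ordering come from the example itself; the encoding of prestige as numbers is an implementation convenience:

```python
JND = 10_000  # offers within $10,000 count as tied on salary
PRESTIGE = {"Low": 0, "Medium": 1, "High": 2}
offers = {"X": (65_000, "Low"), "Y": (50_000, "High"), "Z": (58_000, "Medium")}

def prefers(a, b):
    """True if offer a is preferred to offer b under lexicographic-semiorder:
    salary decides only when the gap exceeds the JND; otherwise prestige."""
    (sal_a, pr_a), (sal_b, pr_b) = offers[a], offers[b]
    if abs(sal_a - sal_b) > JND:
        return sal_a > sal_b
    return PRESTIGE[pr_a] > PRESTIGE[pr_b]

# The intransitive cycle: X > Y, Y > Z, and Z > X.
print(prefers("X", "Y"), prefers("Y", "Z"), prefers("Z", "X"))  # True True True
```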
In this case she will prefer X to Y on the basis of X's better salary, will prefer Y to Z because they are less than $10,000 apart in salary and Y has greater prestige, and she will prefer Z to X on
the basis of prestige. Overall, therefore, she exhibits an intransitive pattern of preferences. The general assumption is that choice rationality requires transitive preferences, although Fishburn
(1991) has presented arguments for the reasonableness of sometimes violating transitivity. 7. The Elimination-by-Aspects (EBA) Heuristic First suggested by Tversky (1972), an EBA choice strategy
first considers the most important attribute, retrieves the cutoff value for that attribute, and eliminates all alternatives with values below the cutoff for that attribute (Tversky actually assumed
that attribute selection was probabilistic, with the probability of attribute selection a function of its weight or importance). This process eliminates options that do not possess an aspect, defined
as that which meets or exceeds the cutoff level on the selected attribute. The EBA process continues with the second most important attribute, and so on, until one option remains. Although the EBA
process violates the normative notion that one should use all relevant information to make a decision, it reflects rationality in using attribute weight or importance to order the attributes. Such
"partial" rationality in processing characterizes most choice heuristics.
8. Combined Strategies Individuals sometimes combine strategies, typically with an initial phase where poor alternatives are eliminated and a second phase where the remaining alternatives are
examined in more detail (Payne, 1976). One combined heuristic that is frequently observed in decision behavior is an elimination-by-aspects strategy in the initial phase to reduce the set of
alternatives, followed by use of a weighted additive strategy on the reduced set of options. See Russo and Leclerc (1994) for another view of phases in decision processes. Beach (1990, 1993) has
advanced a theory of decision making called image theory that emphasizes the prechoice screening of options. In image theory, prechoice screening of options both prevents the choice of an option that
is "too unacceptable" and reduces the workload of the decision maker. According to Beach, screening involves testing the compatibility of a particular option with the decision maker's standards,
which reflect morals, goals, values, and beliefs relevant to the decision problem. The degree of fit or compatibility of an option depends on the number of standards that are violated by the option's
various features. The compatibility testing process is noncompensatory; nonviolations cannot compensate for violations. Beach (1993) reviewed some of the research in support of image theory. A major
implication of those results is that screening may play a more important role in decision making than has been generally accepted. We have now described many decision-making strategies, but we have
not yet specified the conditions leading to the use of one strategy as opposed to another. In the next section, we first review work showing multiple strategy use in how people adapt to the
complexity of decisions. Then we consider the extensive research showing that decision behavior is highly contingent on seemingly minor changes in how preferences are expressed and how options are displayed.
B. Contingent Decision Behavior 1. Task Complexity Although many striking examples exist of multiple strategy use and contingent judgment and choice, some of the most compelling and earliest to be
demonstrated concern how people adapt their decision processes to deal with decision complexity. The primary hypothesis for this research is that people use simplifying decision heuristics to a
greater extent for more complex decision problems. This hypothesis has been supported by a number of studies manipulating decision complexity using the number of alternatives, number of attributes,
and time pressure, among other factors. Perhaps the most well-established task-complexity effect is the impact of changes in the number of alternatives available (Payne, 1976). When faced
with two alternatives, people use compensatory decision strategies which involve trading off a better value on one attribute against a poorer value on another (e.g., weighted adding). However, when
faced with multialternative decision tasks, people prefer noncompensatory choice strategies (Billings & Marcus, 1983; Johnson, Meyer, & Ghose, 1989; Klayman, 1985; Onken, Hastie, & Revelle, 1985).
Another way to manipulate decision complexity is to vary the amount of attribute information. Several studies, though not all, find that decision quality can decrease as the number of attributes is
increased above a certain level of complexity (Keller & Staelin, 1987; Sundstrom, 1987). Such "information overload" studies have been criticized on a variety of methodological grounds (e.g., Meyer &
Johnson, 1989), and Grether and Wilde (Grether & Wilde, 1983; Grether, Schwartz, & Wilde, 1986) argue that in "real" tasks people are able to ignore the less-relevant information so that overload is
not a serious issue. On the other hand, Gaeth and Shanteau (1984) found that judgments were adversely influenced by irrelevant factors, although training reduced that influence. The crucial question
appears to be how people selectively focus on the most important information and avoid getting distracted by irrelevant information. People also respond to decision problems varying in time pressure
using several coping mechanisms, including acceleration of processing, selectivity in processing, and shifts in decision strategies. As time constraints become more severe, the time spent processing
an item of information decreases substantially (Ben Zur & Breznitz, 1981), processing focuses on the more important or more negative information about alternatives (Ben Zur & Breznitz, 1981; Payne,
Bettman, & Johnson, 1988; Svenson & Edland, 1987; Wallsten & Barton, 1982), and decision strategies may shift (Payne et al., 1988; Payne, Bettman, & Luce, 1996; Zakay, 1985). Finally, there may be a
hierarchy among these responses to time pressure. Payne et al. (1988) found that under moderate time pressure subjects accelerated processing and to a lesser extent became more selective. Under more
severe time pressure, people accelerated processing, selectively focused on a subset of the available information, and changed processing strategies. Similar effects were found by Payne, Bettman, and
Luce (1996) when time stress was manipulated by varying the opportunity cost of delaying decisions and by Pieters, Warlop, and Hartog (1997) in a naturalistic consumer choice domain. See Svenson and
Maule (1993) for a collection of papers dealing with time pressure effects on decision making. Studies of people's contingent responses to complex decisions provide clear examples of constructive
decision processes. However, many other striking cases of constructive processes exist, including differential responses to what might seem trivial changes in task or information presentation. We
consider several cases of this sort next.
5 Behavioral Decision Research: An Overview
2. Response Mode and Procedure Invariance
One of the most important characteristics of a decision task is the method by which the decision maker is asked to respond. Figure 1 provides examples of
different response modes used in decision research. Decision research has generally used two types of response modes: (1) A choice task involves presenting two or more alternatives and asking the
subject to select the alternative(s) that is most preferred, most risky, and so forth; (2) A judgment task usually involves successively presenting individual alternatives and requesting that the
subject assign a value (e.g., the option's worth or riskiness) to each. A matching task is a variant of a judgment task involving the presentation of two alternatives and requiring the subject to
fill in a missing value for one option in the pair so as to make the two options in the pair equal in value. Procedure invariance is a fundamental principle of rational decision making: Strategically
equivalent ways of eliciting a decision maker's preferences should result in the same revealed preferences. However, the use of different response modes can lead to differential weighting of
attributes and can change how people combine information, resulting in different preference assessments. Research on the effects of choice versus matching tasks and on preference reversals documents
such response mode effects.
[Figure 1 presents the same pair of gambles (H: a 32/36 chance to win; L: a 4/36 chance to win) under four response modes. Choice mode: "Which gamble do you prefer?" Matching mode: "Complete the missing value so that the two gambles are equal in value." Bidding mode: "What is the minimum amount for which you would sell the gamble?" Rating mode: "How attractive is this gamble?"]
FIGURE 1 Examples of response modes. From Figure 2.1 of The Adaptive Decision Maker, by J. W. Payne, J. R. Bettman, and E. J. Johnson, Cambridge: Cambridge University Press, 1993, p. 41. Reprinted with the permission of Cambridge University Press.
a. Choice versus Matching
The so-called prominence effect (Tversky, Sattath, & Slovic, 1988) provides an excellent example of the contingent weighting of attributes as a function of response mode. The prominence effect is the
finding that the predominant or more important attribute (e.g., lives saved in comparison to the cost of a safety program) is given even more weight when preferences are assessed using choice than
when preferences are assessed using a matching task. To illustrate the difference between matching and choice tasks, imagine that you must consider two programs for dealing with traffic accidents.
The programs are both described to you in terms of their yearly dollar costs and the number of fatalities per year. In a matching task, you are given all of the values but one. For example, suppose
that Program A is expected to lead to 570 fatalities and cost $12 million, whereas Program B is expected to lead to 500 fatalities and cost $X. Then for the matching task you would be asked to give a
value $X for the cost of Program B (presumably an amount greater than $12 million, since Program B leads to fewer fatalities) that would equate the overall values of Programs A and B according to your
preferences. In a choice task, on the other hand, you would be given all of the cost and fatality values for both programs (e.g., all the values in the example above plus a cost of $55 million for
Program B) and be asked to choose the program you most prefer. For these specific values, most people choose Program B over Program A. This implies that saving 70 lives is more important than saving
$43 million. In a matching task, on the other hand, people often provide values for $X that are less than $55 million, implying that a cost difference of less than $43 million is equivalent to 70
fewer fatalities. Therefore, the trade-off between cost and fatalities differs depending on whether it is assessed with a choice task or a matching task. Tversky et al. (1988) have suggested that the
two tasks encourage use of different heuristics or computational schemes. Choice, they argue, involves more qualitative, ordinal, lexicographic reasoning (i.e., one selects the option that is
ordinally superior on the most important attribute). Such lexicographic reasoning is easier cognitively than explicit trade-offs, avoids rather than confronts conflict, and is easier to justify to
oneself and others. Matching tasks, on the other hand, require a more cardinal, quantitative assessment in which one must consider the size of the differences for both attributes and the relative
weights of the attributes. More generally, Tversky et al. (1988) suggested that there is strategy compatibility between the nature of the required response--ordinal or cardinal--and the types of
reasoning employed by a decision maker. They argue that choice, for example, requires an ordinal response and evokes
arguments (processes) based on the ordering of the attribute values. Hawkins (1994) provided evidence that processing characteristics do vary systematically across response modes and that these
processing variations are predictive of preference reversals.
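The implied trade-off in the traffic-program example above can be worked out directly; this is just the arithmetic behind the prominence effect, not a model of the choice process.

```python
# Program A: 570 fatalities, $12 million; Program B: 500 fatalities, $55 million.
lives_saved = 570 - 500                      # 70 fewer fatalities with B
extra_cost = 55_000_000 - 12_000_000         # $43 million more for B

# Choosing B implies that saving 70 lives is worth more than $43 million,
# i.e., more than about $614,000 per life saved.
implied_floor_per_life = extra_cost / lives_saved
print(f"${implied_floor_per_life:,.0f} per life")   # $614,286 per life

# Matching responses below $55 million imply a lower per-life value,
# so choice and matching reveal inconsistent trade-offs.
```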
b. Preference Reversals
Fischer and Hawkins (1993) discuss the notion of scale compatibility, an idea related to, but distinct from, the concept of strategy compatibility. Scale compatibility states
that enhanced weight is given to a stimulus attribute to the extent it is compatible with the response scale. The general idea of scale compatibility has played a major role for some time in
understanding the classic preference reversal phenomenon (Lichtenstein & Slovic, 1971). In the standard preference-reversal paradigm, individuals evaluate two bets of comparable expected value. One
of the bets offers a high probability of winning a small amount of money, whereas the other bet offers a low probability of winning a large amount of money. Most people prefer the bet with the higher
probability of winning when asked to choose between the two bets. However, if they are asked to bid for (assign a cash equivalent to) each bet, most people assign a higher value to the
low-probability, high-payoff bet. Thus, preferences "reverse" between the two response modes. Tversky, Slovic, and Kahneman (1990) have shown that a major cause of preference reversals is such
overpricing of the low-probability, high-payoff bet, perhaps due to the scale compatibility between the payoff amount and the bid response mode (see also Bostic et al., 1990). Finally, Schkade and
Johnson (1989), using computer-based monitoring of information-acquisition behavior, also support scale compatibility as a factor underlying preference reversals. Delquié (1993) provided strong
evidence of scale compatibility effects using both risky and nonrisky decision stimuli. Although scale compatibility plays a role in preference reversals, other mechanisms may also be contributing
factors. For example, Goldstein and Einhorn (1987) have argued that the evaluation process is the same for all response modes and have claimed that reversals are mainly due to expressing the
underlying internal evaluation on different response scales. How individuals reframe decisions under certain response modes also may lead to preference reversals (Bell, 1985; Casey, 1991; Hershey &
Schoemaker, 1985). Suppose that a person is given one option, which is a sure thing, and a second option, which is a gamble offering either a specific greater amount with probability p or a specific
lesser amount with probability 1 - p. Suppose further that the person is asked to set (match) the probability p of obtaining the greater amount in order to make the sure-thing option and the gamble
equivalent in value. Hershey and Schoemaker (1985), for instance, suggested that this matching task encourages the person to use the amount
of the sure thing as a reference point, with the two outcomes of the gamble then coded as a gain and as a loss. Preference reversals may also be due to changes in evaluation processes across response
modes (e.g., Johnson, Payne, & Bettman, 1988; Mellers, Ordóñez, & Birnbaum, 1992; Schkade & Johnson, 1989), as suggested by the strategy-compatibility hypothesis discussed previously. If different
strategies are used to generate each type of response, reversals can easily result. Fischer and Hawkins (1993) found in a series of experiments that strategy compatibility effects were stronger than
scale compatibility as explanations of procedural variance. An obvious hypothesis is that the more ambiguity in one's existing preferences, perhaps due to a lack of familiarity with the objects to be
valued, the more one's expressed preferences will be subject to task factors such as how you ask the question. There is support for this hypothesis. For example, Coupey, Irwin, and Payne (1998) have
reported that the difference between choice and matching responses is greater for unfamiliar consumer products than for more familiar product categories. Further, familiarity exhibits a stronger
influence on matching responses than on choice responses. The data suggest that subjects tend to weight attributes more equally, and perhaps depend more on the information presented as part of the
task itself rather than on information brought to the task, when constructing preferences using a matching response in an unfamiliar product category. In summary, either framing, strategy selection,
weighting of information, or expression of preferences can explain preference reversals. However, preference reversals may be as prevalent and robust as they are because there are multiple underlying
causes, each operative in some situations but not others (e.g., Creyer & Johar, 1995; Goldstein & Einhorn, 1987; Hsee, 1996). Regardless of which particular cause is operative, it is now abundantly
clear that the answer to how much you like a decision option can depend greatly on how you are asked the question.
3. Descriptive Invariance
Although the principle of descriptive invariance (i.e., that different representations of the same choice problem should lead to equivalent preferences) seems reasonable, research has shown consistently that how problems are presented affects
preferences. Not only how you ask the question but also how the options are described affects preferences, even when the descriptions or presentations are normatively equivalent (Tversky & Kahneman,
1986). Two major streams of research that demonstrate descriptive variance are investigations of framing and the effects of information presentation.
a. Framing Effects
Framing affects how the acts, contingencies, and outcomes of a decision are determined. Framing can be influenced by both the presentation of the decision problem and by the
decision maker's norms, habits, and expectations (Tversky & Kahneman, 1986). Tversky and Kahneman (1981), for example, showed that simple changes in wording--for example, describing outcomes in terms
of lives saved rather than describing them in terms of lives lost--can lead to vastly different preferences (for other demonstrations of such wording effects, see also Huber, Neale, & Northcraft,
1987; Kramer, 1989; Levin & Gaeth, 1988; Paese, 1995; Puto, 1987; Schneider, 1992). There appears to be a crucial distinction between (1) framing that leads to coding outcomes as gains and (2)
framing that results in outcomes' being coded as losses, because people clearly treat negative consequences and positive consequences differently. Tversky and Kahneman's (1991) concept of loss
aversion (the impact of a difference on a dimension is greater when that difference is seen as a loss than when it is seen as a gain) stresses the importance of this difference. A theory of framing
has proven difficult to formalize, however, although some progress has been made. For instance, Thaler (1985; Thaler & Johnson, 1990) suggested that framing is an active process rather than simply a
passive response to the given decision problem, and he examined the hypothesis that people frame outcomes to make them appear the most pleasant or the least unpleasant. In particular, Thaler (1985)
argued that people generally prefer to keep gains separate (segregated) and to integrate (package together) all negative outcomes. Thaler and Johnson (1990) called this view
hedonic editing.
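A minimal sketch of why hedonic editing predicts segregating gains and integrating losses, using an illustrative prospect-theory-style value function (the functional form and parameters are standard textbook choices, not values given in this chapter):

```python
def value(x, alpha=0.88, lam=2.25):
    """Illustrative S-shaped value function: concave for gains,
    convex and steeper (loss-averse) for losses."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

# Two gains feel better segregated than integrated
# (concavity for gains: v(50) + v(50) > v(100)).
print(value(50) + value(50) > value(100))     # True

# Two losses feel less bad integrated than segregated
# (convexity for losses: v(-100) > v(-50) + v(-50)).
print(value(-100) > value(-50) + value(-50))  # True
```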
Linville and Fischer (1991) suggested that framing is driven by the need to conserve the limited, but renewable, psychological, cognitive, and social resources available for coping with emotional
events. They showed that the original hedonic-editing hypothesis does not fully account for people's preferences for temporally separating or combining good and bad news; rather, people prefer to
segregate bad news but to combine a positive and negative event on the same day. Thus, reference points, target levels, or aspiration levels can contribute to framing effects (see Schneider, 1992)
and to procedural variance. As noted in our descriptions of heuristics presented earlier, this idea is also of long standing in theories of decision making (Siegel, 1957; Simon, 1955). For instance,
Simon suggested that individuals simplify choice problems by coding an outcome as satisfactory if the outcome is above the aspiration level or unsatisfactory if it is below. Such codings play a
crucial role in his notion of satisficing. Finally, there is a great deal of evidence that choice depends on the reference level used in coding outcomes (Fischer, Kamlet,
Fienberg, & Schkade, 1986; Highhouse & Johnson, 1996; Payne, Laughhunn, & Crum, 1984; Tversky & Kahneman, 1991). One particularly important type of reference-level effect is the status quo bias
(Kahneman, Knetsch, & Thaler, 1990; Samuelson & Zeckhauser, 1988), in which the retention of the status quo option is favored over other options.
b. Information Presentation Effects
Information presentation differences also influence decision behavior. Slovic (1972) suggested that decision makers tend to use information in the form it is
displayed, without transforming it, as a way to conserve cognitive effort. This "concreteness" principle is the basis for predicting several types of information format effects. For example, Russo
(1977) showed in a classic study that the use of unit price information in a supermarket increased when the information was displayed in lists ranking brands by unit price. He argued that standard
displays using a shelf tag for each item made items hard to compare. Information must be easily processable as well as available. In other demonstrations of concreteness effects, Aschenbrenner (1978)
inferred that subjects used the dimensions of gambles as presented, and Bettman and Kakkar (1977) showed that individuals acquired information in a fashion consistent with the format of the display.
Jarvenpaa (1989) extended the Bettman and Kakkar (1977) results by showing that information was processed in a manner consistent with how graphic displays were organized, that is, by alternative or
by attribute. MacGregor and Slovic (1986) showed that people will use a less important cue simply because it is more salient in the display. Finally, Schkade and Kleinmuntz (1994) examined the
differential influence of the organization and sequence of information on decision processes. Although the finding that information acquisition proceeds in a fashion consistent with display format is
perhaps not surprising, it has important implications both for using relatively simple changes in information presentation to aid decision makers and for the design of graphics for computer-based
decision support systems. Other work has examined the effects of different representations of values. For example, Stone and Schkade (1991) found that using words to represent attribute values led to
less compensatory processing than numerical representation of the values (see also Schkade & Kleinmuntz, 1994). Wallsten and his colleagues (Budescu, Weinberg, & Wallsten, 1988; Erev & Cohen, 1990;
Wallsten, 1990) have carried out an important series of experiments testing differences between representing probability information in numerical or verbal form. People prefer to receive information
about probabilities in numerical form, but they prefer to use words (e.g., doubtful, likely) to express event probabilities to others. González-Vallejo and Wallsten (1992) have shown that preference
reversals are also impacted by whether
probability information is given in numerical or verbal form; reversals are less frequent with a verbal format. Another series of experiments has dealt with the completeness of information displays (Dubé-Rioux
& Russo, 1988; Highhouse & House, 1995; Jagacinski, 1995; Weber, Eisenführ, & von Winterfeldt, 1988). Individuals may respond differently to the problem if they do not realize that information is
missing, and the apparent completeness of a display can blind a decision maker to the possibility that important information is lacking (a result earlier obtained by Fischhoff, Slovic, &
Lichtenstein, 1978). Finally, Russo, Medvec, and Meloy (1996) have shown that preexisting preferences can lead to distortion of the new information in favor of the preferred alternative. This last
result suggests dynamic aspects of information use in the construction of preferences.
4. Asymmetric Dominance Effects
Number of alternatives, response mode, and information display are examples of
task factors. Contingent decision behavior has also been shown for context factors reflecting the particular values of the alternatives. One striking example of context-dependent preferences is the
asymmetric dominance effect. An alternative is asymmetrically dominated if it is dominated by at least one option in the choice set and is also not dominated by at least one other option (e.g., for
the case of three options, A, B, and C, if B were dominated by A but not by C, B would be asymmetrically dominated). The striking effect of asymmetric dominance (Heath & Chatterjee, 1995; Huber,
Payne, & Puto, 1982; Simonson & Tversky, 1992) is that adding an asymmetrically dominated option to a choice set increases the choice share of the dominating option (e.g., A in our example). This
violates the principle of regularity, that is, that adding a new option cannot increase the probability of choosing one of the original options. Regularity is a necessary condition for most
probabilistic choice models. Explanations for the asymmetric dominance effect include agenda effects relating to the order of comparison of pairs of options (Huber, Payne, & Puto, 1982), simplifying
choice by searching for dominant alternatives (Montgomery, 1983), and problems with simplicity of the stimuli (Ratneshwar, Shocker, & Stewart, 1987; see Wedell, 1991, for evidence inconsistent with
this view, however). A recent explanation that has received some support is the notion that the effect arises because people use the relations among options as reasons for justifying their choices;
that is, one can justify the choice of the dominating option by saying it is clearly better than the asymmetrically dominated option (Simonson, 1989). Wedell (1991) also reported data consistent with
this explanation.
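The dominance relations in the three-option example (A, B, and C, with B asymmetrically dominated) can be made concrete; the attribute values below are hypothetical:

```python
def dominates(x, y):
    """x dominates y if x is at least as good on every attribute
    and strictly better on at least one (higher is better)."""
    return all(x[a] >= y[a] for a in x) and any(x[a] > y[a] for a in x)

# Hypothetical attribute values (higher is better on both attributes).
A = {"quality": 8, "price_value": 5}
C = {"quality": 5, "price_value": 8}
B = {"quality": 7, "price_value": 4}   # decoy: worse than A on both

# B is asymmetrically dominated: dominated by A but not by C.
print(dominates(A, B), dominates(C, B))   # True False
```

The asymmetric dominance effect is the finding that adding a decoy like B to the set {A, C} increases the choice share of A, violating regularity.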
The task and context effects discussed so far have illustrated that decision behavior is contingent upon a variety of factors. As we have also discussed, one explanation for a number of task and
context effects is that individuals use different decision strategies in different situations. Why would this be so? One explanation is that an individual's contingent use of multiple strategies is
an adaptive response of a limited-capacity information processor to the demands of a complex world (Payne, Bettman, & Johnson, 1993). The basic idea is that using multiple strategies allows a
decision maker to adaptively trade off the accuracy or quality of the decision against the cognitive effort needed to reach a judgment or choice. Svenson (1996) outlined another framework for
explaining contingent decision behavior; Hammond, Hamm, Grassia, and Pearson (1987), Montgomery (1983), and Tversky and Kahneman (1986) provided other frameworks that do not emphasize cognitive
effort to such an extent. The general question of how to account for contingent decision behavior remains an active area of research. So far in this chapter we have focused on decisions that are
difficult due to conflicting objectives (that is, there are multiple objectives and no option is best on all of them). Decisions can also be difficult because of the need to make guesses about the
future consequences of current actions. A great deal of research has been concerned with the question of how people judge the likelihoods or probabilities of uncertain events. As we will see, recent
work on how, and how well, people assess the probability of an event has adopted many of the same concepts used to explain preferential decisions. In particular, people have available several
different strategies for assessing beliefs about uncertain events, and individuals use these different modes of probabilistic reasoning in a highly contingent fashion. People also often construct
probability responses (Curley, Browne, Smith, & Benson, 1995). In the following sections, we consider different strategies for probabilistic reasoning and then consider evidence for the contingent
use of such strategies.
V. BELIEFS ABOUT UNCERTAIN EVENTS
A focus of much of the early work on probabilistic reasoning was the extent to which intuitive judgments about probabilities matched the
normative predictions made by the rules of statistics. For example, many studies have examined the extent to which people revise their opinions about uncertain events in ways consistent with such
statistical laws as Bayes' theorem. Bayes' theorem deals with problems of the following type: Imagine that you are a physician trying to make a diagnosis concerning whether or not one of your
patients has cancer. Denote the hypothesis that the patient has cancer as H. Before you collect any new information about this patient, you have a prior probability that the patient has cancer.
Denote that prior probability as P(H). The prior probability that the patient does not have cancer will then
be denoted 1 - P(H). This prior will likely be based on whatever you know about the patient up to this point. Now assume that you collect some new evidence about the patient's condition by conducting
some diagnostic, but imperfect, test. Thus, there is some probability that the test will be positive if the patient has cancer but there is also some probability that the test will be positive even
if the patient does not have cancer. Denote the first probability as P(DIH), or the probability of a test result indicating cancer given that the true condition of the patient is that he or she has
cancer. Denote the second probability as P(DI not H), or the probability of a test result indicating cancer given that the patient does not have cancer. Given your prior probability P(H) and the
probabilities of the two test results, P(D/H) and P(DI not H), what is the revised probability that your hypothesis that the patient has cancer is true after the data are observed, that is, P(HID)?
More than 200 years ago, an English clergyman, Reverend Bayes, offered the solution to this type of problem. His solution is given by the following equation:
P(H|D) = P(D|H)P(H) / [P(D|H)P(H) + P(D|not H)(1 - P(H))]
Essentially, Bayes' theorem is a way of combining what you already believe about an uncertainty with new information that is available to you to
yield an updated belief about the likelihood of that uncertainty. See Yates (1990) for a good discussion of Bayes' theorem from the perspective of psychology and decision making. For more than 30
years, people have been using Bayes' theorem as a standard against which to compare people's actual probability judgments (e.g., Dawes, Mirels, Gold, & Donahue, 1993; Phillips & Edwards, 1966). The
general result has been that people's probability judgments deviate substantially from the predictions of Bayes' theorem, with the nature of the deviation dependent on the situation (see Fischhoff &
Beyth-Marom, 1983). Although the fact that intuitive judgments often deviate from laws of probability such as Bayes' theorem is now widely accepted, some investigators question both the meaning and
relevance of errors in intuitive judgments (see von Winterfeldt & Edwards, 1986, for example). Nevertheless, Bayes' theorem has been, and continues to be, a useful benchmark against which to compare
intuitive human judgments. Another benchmark for intuitive judgment is simply the accuracy of the judgment. That is, does the predicted event (judgment) correspond to the actual event that occurs? A
related accuracy question is the ability of a judge to know how likely it is that his or her judgments are correct. The hope is that stated confidence matches expected accuracy, that is, confidence
judgments are well "calibrated" (Yates, 1990). See Hammond (1996) for a discussion of alternative standards against which to compare intuitive judgments
and Wallsten (1996) for a discussion of methodological issues in analyzing the accuracy of human judgment.
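The cancer-diagnosis example above can be worked through numerically; the prior and test probabilities used here are hypothetical, chosen only to show how the equation combines them:

```python
def bayes_posterior(prior, p_d_given_h, p_d_given_not_h):
    """P(H|D) via Bayes' theorem, following the equation in the text."""
    numerator = p_d_given_h * prior
    return numerator / (numerator + p_d_given_not_h * (1 - prior))

# Hypothetical values: P(H) = 0.01, P(D|H) = 0.80, P(D|not H) = 0.10.
posterior = bayes_posterior(prior=0.01, p_d_given_h=0.80, p_d_given_not_h=0.10)
print(round(posterior, 3))   # 0.075
```

Even a fairly diagnostic positive test leaves the posterior below 8% here, because the prior (the base rate) is so low; this foreshadows the base-rate discussion later in the chapter.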
A. Strategies for Probabilistic Reasoning
If people are not reasoning in ways consistent with the laws of probability when making intuitive judgments, how are they thinking about uncertainties?
Roughly 20 years ago, Kahneman and Tversky (1973) argued that people use a variety of heuristics to solve probability judgment tasks. The specific heuristics suggested included availability,
representativeness, and anchoring and adjustment. The availability heuristic refers to assessing the probability of an event based on how easily instances of that event come to mind. Kahneman and
Tversky argued that availability is a useful procedure for assessing probabilities because instances of more frequent events are usually retrieved faster and better; however, availability is affected
by factors like vividness and recency that do not impact relative frequency and probability. Consequently, the use of the availability heuristic can lead to predictable errors in judgment. The
representativeness heuristic assesses the probability of an event by judging the degree to which that event corresponds to an appropriate mental model for that class of events, such as a sample and a
population, an instance and a category, or an act and an actor. For example, a manager is using representativeness as a heuristic when he or she predicts the success of a new product based on the
similarity of that product to past successful and unsuccessful product types. As with the use of the availability heuristic, the representativeness heuristic can be useful in probabilistic judgment.
As with availability, however, the representativeness heuristic can lead people to ignore or misuse information that affects actual probabilities. Finally, anchoring and adjustment is a general
judgment process in which an initially generated or given response serves as an anchor; that anchor is adjusted based on other information, but the adjustment is generally insufficient (see Chapman &
Johnson, 1994, for an investigation of some of the limits on anchoring). An example of anchoring and adjustment is when a manager uses this year's sales to forecast next year's sales. The notion of
insufficient adjustment means that the forecast for next year may not reflect the differences to be expected next year as much as it reflects this year's sales. The availability heuristic has been
investigated for judgments about political events (Levi & Pryor, 1987), perceptions of the risk of consumer products (Folkes, 1988), accountants' hypothesis generation (Libby, 1985), and judgments
about others (Shedler & Manis, 1986). The relationship between memory access and judgment has been examined more generally by Lichtenstein and Srull (1985), Hastie and Park (1986), and MacLeod and
Campbell (1992).
The representativeness heuristic has been studied in detail by Bar-Hillel (1984), and Camerer (1987) showed in an innovative study that representativeness affects prices in experimental markets,
although the effect is smaller for more experienced subjects. Finally, anchoring and adjustment has been investigated in a variety of domains, including accounting (Butler, 1986), marketing (Davis,
Hoch, & Ragsdale, 1986; Yadav, 1994), the assessment of real estate values (Northcraft & Neale, 1987), negotiations (White & Sebenius, 1997), and as a general process for updating beliefs (Hogarth &
Einhorn, 1992).
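Insufficient adjustment from an anchor, as in the sales-forecasting example above, can be sketched as a partial move from the anchor toward the warranted value; the adjustment rate is hypothetical:

```python
def anchored_estimate(anchor, target, adjustment_rate=0.5):
    """Move from the anchor toward the normatively warranted target,
    but only part of the way -- insufficient adjustment leaves the
    final estimate biased toward the anchor."""
    return anchor + adjustment_rate * (target - anchor)

# This year's sales (the anchor) pull next year's forecast below
# the level the evidence would warrant.
this_year, warranted = 100.0, 130.0
print(anchored_estimate(this_year, warranted))   # 115.0
```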
B. Contingent Assessments of Uncertainty
As noted earlier, heuristics often ignore potentially relevant problem information. Using heuristics adaptively, even though some information may be
neglected, can save substantial cognitive effort and still produce reasonably good solutions to decision problems (Gigerenzer & Goldstein, 1996; Payne, Bettman, & Johnson, 1993). It is still the case,
however, that people make systematic errors in forming probability judgments in many situations (Kahneman & Tversky, 1996). Much decision research over the past two decades has tried to identify
biases (errors) in probabilistic reasoning. Others have argued that there has been an overemphasis on biases in judgment (e.g., Beach, Barnes, & Christensen-Szalanski, 1986). As we illustrate next
with reference to two of the most studied "errors" in judgment, the relevant question is not whether biases exist, but under what conditions relevant information will or will not be used when
responding to a probability judgment task.
1. The Use/Misuse of Base-Rate Information
More than 20 years ago, Kahneman and Tversky (1973) reported a series of studies in which subjects were presented
with a brief personality description of a person along with a list of different categories to which the person might belong, and were asked to indicate to which category the person was most likely to
belong. Their findings were clear and striking; subjects essentially ignored the relative sizes of the different categories (i.e., the base rates) and based their judgments almost exclusively on the
extent to which the description matched their various stereotypes about the categories (representativeness). Since then, many researchers have investigated the utilization of base-rate information in
decision making (see Bar-Hillel, 1990, for an overview of base-rate studies; see Birnbaum, 1983, and Koehler, 1996, for criticisms of some of these studies). Overall, it appears that base-rate
information is sometimes ignored and at times used appropriately. For example, Medin and Edelson (1988) stated that in their studies "participants use base-rate information appropriately,
John W. Payne, James R. Bettman, and Mary Frances Luce
ignore base-rate information, or use base-rate information inappropriately (predict that the rare disease is more likely to be present)" (p. 68). Such variability in the use of base-rate information
to assess the probability of an event has led to a contingent processing view of probabilistic reasoning. Gigerenzer, Hell, and Blank (1988) and Ginossar and Trope (1987) provided two examples of
contingent-processing approaches to base-rate, both of which show that the use of base-rate information is highly sensitive to a variety of task and context variables. For example, Gigerenzer et al.
found greater use of base-rate information when the problem context changed from guessing the profession of a person to predicting the outcome of a soccer game. They argued that "the content of the
problem strongly influenced both subjects' performance and their reported strategies [emphasis added]" (p. 523). Ginossar and Trope (1987) proposed that people have a variety of strategies, both
statistical and nonstatistical, for making probabilistic judgments. Which heuristic is used for a particular judgment task is contingent upon the recency and frequency of prior activation of the
rules, the relationship between the rules and task goals, and the applicability of the rules to the problem givens. They concluded that the appropriate question is not whether people are inherently
good or bad statisticians, but what cognitive factors determine when different inferential rules, statistical or nonstatistical, will be applied. The Ginossar and Trope viewpoint is consistent with
much of the research on preferences reported earlier in this chapter and is one we share.
2. The Conjunction Fallacy
Research on the conjunction fallacy has also argued that the same person may use a
variety of strategies for solving probabilistic reasoning problems. Tversky and Kahneman (1983) distinguished between intuitive (holistic) reasoning about the probabilities of events and extensional
(decomposed) reasoning, where events are analyzed into exhaustive lists of possibilities and compound probabilities are evaluated by aggregating elementary ones. One law of probability derived from
extensional logic is that the probability of a conjunction of events, P(A and B), cannot exceed the probability of any one of its constituent events, P(A) and P(B). Tversky and Kahneman have argued
that intuitive reasoning, on the other hand, is based on "natural assessments" such as representativeness and availability, which "are often neither deliberate nor conscious" (1983, p. 295). Tversky
and Kahneman demonstrated numerous instances in which people violate the conjunction rule by stating that the probability of A and B is greater than the probability of B, consistent with their
hypothesis that probabilistic reasoning is often intuitive. Crandall and Greenfield (1986), Fisk (1996), Thuring and Jungermann (1990), Wells (1985), and Yates and Carlson (1986) have provided additional evidence for violations of the conjunction rule. Although Tversky and Kahneman argued that violations of the
conjunction rule are both systematic and sizable, they note that "availability judgments are not always dominated by nonextensional heuristics . . . [and] judgments of probability vary in the degree
to which they follow a decompositional or holistic approach" (1983, p. 310). Thus, it is critical to understand when the decision maker will use one approach or another in solving problems under
uncertainty, as was the case for understanding differential strategy use in assessing preferences. Reeves and Lockhart (1993), for instance, have shown that conjunctive fallacies vary as a function
of whether probability problems were presented in a frequency versus case-specific form. Examples of frequency and case-specific versions, respectively, are (1) Jimmy will probably get a birthday
present from his Uncle Marvin because Uncle Marvin has sent him a present many times in the past, and (2) Jimmy will probably get a birthday present from his Uncle Marvin because Uncle Marvin is
conscientious and has often remarked that Jimmy is his favorite nephew (p. 207). Reeves and Lockhart show that violations of the conjunctive rule are generally greater with case-specific versions of
problems. Jones, Jones, and Frisch (1995) have extended this important line of reasoning by showing that representativeness effects occur primarily when people are making judgments about single cases,
whereas availability effects occur primarily in judgments of relative frequency. Tversky and Koehler (1994; see also Rottenstreich & Tversky, 1997) have developed a theory of subjective probability
judgment that helps explain the conjunction fallacy and other biases. This theory, called "support theory," asserts that subjective probability is attached to descriptions of events, called
hypotheses, rather than to events. Judged probability then reflects the strength of evidence or support for the focal relative to the alternative hypothesis. A key implication of support theory is
that the judged probability of an event, such as a plane crash, can be increased by "unpacking" the description into disjoint components, such as an accidental plane crash or a nonaccidental plane
crash caused by sabotage. Unpacking thus relates to the notion discussed earlier that different descriptions of the same event (i.e., different ways of framing that event) can lead to different judgments.
C. Expertise and Uncertainty Judgments
Although thus far we have emphasized properties of the task as determinants of behavior, the processes used to construct a solution to a decision problem
clearly may differ as a function of individual differences as well. One particularly important individual difference factor is the degree of knowledge or expertise an individual possesses.
One question of great interest is the extent to which expertise improves the assessment of uncertainty. Experience does not necessarily improve judgment. Garb (1989), for example, reviewed the
effects of training and experience on the validity of clinical judgments in mental health fields and concluded that "the results on validity generally fail to support the value of experience in
mental health fields. However, the results do provide limited support for the value of training" (p. 391). Garb did argue that experienced judges know to a greater extent which of their judgments are
more likely to be correct; that is, their judgments are better calibrated. Wright, Rowe, Bolger, and Gammack (1994) have shown that self-rated expertise is a good predictor of probability forecasting
performance. See also Winkler and Poses (1993) for evidence of good probability assessment by physicians. Gigerenzer, Hoffrage, and Kleinbolting (1991) have strongly made the related argument that
performance on probabilistic reasoning tasks depends on whether the problem refers to a natural environment known to an individual, with performance much better in natural environments. On the other
hand, Griffin and Tversky (1992) have argued that when the knowledge of experts is high, and consequently, the predictability of tasks is reasonably high, experts will do better than lay people in
terms of calibration. However, when predictability is very low, experts may do worse on some measures of probability assessment. In particular, experts may be overconfident about their ability to
predict (see Spence, 1996, for a comparison of expertise differences on problems differing in complexity). Expertise, however, is not a panacea for making assessments of uncertain events; experts
also use heuristics, such as representativeness, and show biases in the use of base-rate information. Cox and Summers (1987), for example, found that experienced retail buyers used representativeness
heuristics when making sales forecasts. Why might expertise not lead to better assessments? Because the prediction of future events often depends on learning from and understanding past events, the
hindsight bias (Fischhoff, 1975), or the "I knew it all along" phenomenon, may cause people to learn less from experience than they should. Indeed, Hawkins and Hastie (1990) concluded that hindsight
issues affect the judgments of experts in "real" tasks. For other reasons why expertise may not lead to better assessments, see Camerer and Johnson (1991). Finally, see Dawes (1994) for a very
thought-provoking discussion of the "myth of expertise" in the context of psychotherapy. Although not directly an expertise issue, a growing topic of interest deals with individual differences in
probability judgments due to natural or cultural variations. Examples of this program of research include studies by Whitcomb, Onkal, Curley, and Benson (1995), Wright and Phillips (1980), and Yates,
Zhu, Ronis, Wang, Shinotsuka, and Toda (1989). Next we examine in more detail how people make decisions under risk
and uncertainty, which draws on studies of both preferences and judgments about uncertain events.
VI. DECISIONS UNDER RISK AND UNCERTAINTY
How people choose among gambles, which involves trade-offs
between the desirability of consequences and the likelihood of consequences, has been one of the most active areas of decision research. Understanding decision making under risk and uncertainty not
only provides insight into basic psychological processes of judgment and choice, but also is directly relevant for improving decisions in a wide range of contexts (e.g., medical care, public policy,
business). It is increasingly clear that decisions under risk are sensitive to the same types of influences described earlier for preferences among multiattribute alternatives and for the assessment
of uncertainties. In the following sections, we consider generalizations of expected-utility models (how values depend on the specific set of available options and interactions between payoffs and
probabilities), responses to repeated-play gambles, and ambiguity and risky choice.
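As a schematic of what these generalizations modify, the sketch below contrasts a plain expected-utility computation with a Quiggin-style rank-dependent evaluation in which decision weights are increments of a transformed cumulative probability. The weighting functions and the gamble are illustrative assumptions, not any published parameterization:

```python
# Minimal sketch contrasting expected utility with a rank-dependent
# evaluation (schematic; not a specific published parameterization).

def expected_utility(outcomes, probs, u=lambda x: x):
    return sum(p * u(x) for x, p in zip(outcomes, probs))

def rank_dependent_value(outcomes, probs, w, u=lambda x: x):
    """Rank outcomes from worst to best; the decision weight of each
    outcome is the increment of w applied to the probability of getting
    that outcome or anything better (Quiggin-style, schematic form)."""
    pairs = sorted(zip(outcomes, probs), key=lambda t: t[0])  # worst .. best
    value, tail = 0.0, 1.0  # tail = probability of this outcome or better
    for x, p in pairs:
        value += (w(tail) - w(tail - p)) * u(x)
        tail -= p
    return value

gamble = ([0.0, 100.0], [0.5, 0.5])
linear = rank_dependent_value(*gamble, w=lambda q: q)          # reduces to EU
pessimistic = rank_dependent_value(*gamble, w=lambda q: q**2)  # overweights worst outcome
print(expected_utility(*gamble), linear, pessimistic)
```

With the identity weighting function the rank-dependent value collapses to expected utility; a convex weighting function gives the lowest-ranked outcome relatively greater weight, as described for Quiggin (1982) and Segal (1989) below.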
A. Generalizations of Expected-Utility Models
Although their descriptive validity has long been questioned, expected utility (EU) theory (von Neumann & Morgenstern, 1947) and subjective expected
utility (SEU) theory have been the standard models for decisions under risk. Mark Machina recently summarized risky-decision research by noting that "choice under uncertainty is a field in flux"
(Machina, 1987, p. 121). Evidence of violations of the standard EU and SEU models has accumulated to such an extent that numerous theorists have developed alternatives to the standard models that
allow the trade-offs between probabilities and values to reflect contextual factors. For example, one possibility is to allow the value of an outcome of one gamble to depend on the outcome that would
have been received if a different gamble had been chosen instead and the same random event had occurred; that is, the notion of regret (Bell, 1982; Loomes & Sugden, 1987). Many proposed generalizations of EU and SEU also depart from the notion that it is essential to disentangle belief and value (Shafer, 1986). For example, the probabilities (decision weights) of outcomes could be
weighted by the rank order of the attractiveness of the outcomes so that the lowest-ranked, least-attractive outcomes could be given relatively greater weight (Quiggin, 1982; Segal, 1989). Because
people appear to respond differently to gains and losses, as noted earlier, one could also allow decision weights to differ for gain outcomes and loss outcomes (Einhorn & Hogarth, 1986). Other
generalizations of the expected utility model allow the decision
weights assigned to the outcomes to vary as a function of both the rank and the sign of the payoffs (Luce, 1990; Luce & Fishburn, 1991; Tversky & Kahneman, 1992) or allow configural weights (Weber,
Anderson, & Birnbaum, 1992). Such weights can also vary depending on whether the decision maker is evaluating a prospect from the perspective of a buyer or seller (Birnbaum & Beeghley, 1997). Rank- and sign-dependent models demonstrate impressive predictive power (however, see Wakker, Erev, & Weber, 1994). Nonetheless, Tversky and Kahneman have argued that formal models of the valuation of
risky options are at best approximate and incomplete, and that choice is a constructive and contingent process. When faced with a complex problem, people employ a variety of heuristic procedures in
order to simplify the representation and the evaluation of prospects. The heuristics of choice do not readily lend themselves to formal analysis because their application depends on the formulation
of the problem, the method of elicitation, and the context of choice. (1992, p. 317) Should we attempt further generalizations of EU beyond those already proposed, or should we move away from such models in the attempt to understand risky decision behavior (Camerer, 1989)? Fennema and Wakker (1997) have argued that the mathematical form of such generalized utility models as cumulative prospect
theory (Tversky & Kahneman, 1992) is well suited for modeling psychological phenomena associated with risky choice. Shafir, Osherson, and Smith (1993) suggested that the absolute approach of
expectation models, in which the attractiveness of a gamble is assumed to be independent of other alternatives, should be combined with a comparative approach, in which the attractiveness of a gamble
depends on the alternatives to which it is compared. Lopes (1987; Schneider & Lopes, 1986) argued that we should move away from expectation models in favor of models that more directly reflect the
multiple and conflicting goals that people may have in making risky decisions (e.g., maximizing security, maximizing potential gain, and maximizing the probability of coming out ahead). This focus on
multiple goals is similar in spirit to the early idea of characterizing gambles by risk dimensions rather than moments (Slovic & Lichtenstein, 1968).
1. Repeated-Play Gambles
The notion that multiple
goals can underlie risky choice may affect how people respond to gambles involving single play versus repeated-play gambles. People may emphasize different goals depending on how often a gamble will
be played (Lopes, 1981) or whether the decision involves a single individual or a group of comparable individuals (Redelmeier & Tversky, 1990). Recent work shows that risky-choice behavior can differ
for unique
and repeated gambles (Joag, Mowen, & Gentry, 1990; Keren & Wagenaar, 1987; Koehler, Gibbs, & Hogarth, 1994). Wedell and Bockenholt (1990), for example, showed that there are fewer preference
reversals under repeated-play conditions. There may be an interesting connection between the repeated play of gambles and when people will reason statistically. Framing an apparently unique risky
decision as part of a much larger set of risky choices may lead to behavior more in line with a considered trade-off of beliefs and values (Kahneman & Lovallo, 1992).
2. Ambiguity and Risky Choice
It
is generally assumed that in decision making under risk, decision makers have well-specified probabilities representing their uncertainties about events. However, ambiguity often characterizes event
probabilities. A decision maker might tell you, for example, that his or her best guess is that the probability of an event is .4, but the estimate is shaky. The standard theory of subjective
expected utility states that an expected probability is adequate to represent the individual's uncertainty about an event; however, people respond differently, even when the expectations of the
probabilities are the same, if some probabilities are more uncertain than others. In particular, individuals are often averse to ambiguity, at least when the probabilities of the events are moderate
(e.g., .5) or larger (Ellsberg, 1961). In fact, Frisch and Baron (1988) argued that it may be reasonable to show such ambiguity aversion. However, ambiguity seeking can occur for lower-probability
events (Curley & Yates, 1989), a result Ellsberg also suggested. Einhorn and Hogarth (1985) modeled how people adjust probabilities under ambiguity to reflect what might be imagined and compared
imagination to a mental simulation process. The adjustment is made from an initial estimate of the probability of an event and the size of the adjustment depends on both the amount of ambiguity and
the initial probability value. Hogarth and Kunreuther (1985, 1989) used this ambiguity model to try to understand when, and at what prices, insurance coverage will be offered for different
uncertainties. Hogarth and Kunreuther (1995) proposed that people deal with situations involving no relevant probability information by generating arguments that allow them to resolve choice
conflicts such as potential feelings of regret if an action is not taken. Concern about others' evaluations of one's decisions may be a partial explanation for ambiguity avoidance. In the standard
Ellsberg task, where there is one urn containing 50 red balls and 50 black balls and another urn containing 100 red and black balls in unknown proportions, the preference for a bet based on the known
50:50 urn is enhanced when subjects anticipate that the contents of the unknown urn will be shown to others (Curley, Yates, & Abrams, 1986).
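The puzzle the Ellsberg task poses for subjective expected utility can be checked with simple arithmetic: under a uniform prior over the unknown urn's composition, the expected chance of drawing red exactly matches the known urn, so an SEU maximizer should be indifferent between the two bets (the uniform prior is an illustrative assumption):

```python
# The two-urn Ellsberg setup: a known 50:50 urn versus an urn with 100
# red and black balls in unknown proportions. With a uniform prior over
# compositions, the expected probability of drawing red is identical.
from fractions import Fraction

n = 100
# Prior: each composition (r red, n - r black), r = 0..n, equally likely.
p_red_unknown = sum(Fraction(r, n) for r in range(n + 1)) / (n + 1)
p_red_known = Fraction(1, 2)

# Expectations match, yet most people strictly prefer betting on the
# known urn (Ellsberg, 1961) -- the ambiguity-aversion pattern.
print(p_red_unknown == p_red_known)
```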
Heath and Tversky (1991) extended the study of ambiguity to situations where the probabilities are based on knowledge rather than chance. They argued that the willingness to bet on an uncertain event
depends not only on the estimated likelihood of that event and the precision of that estimate but also on the degree to which one feels knowledgeable or competent in a given context. They found that
more knowledgeable subjects in a domain (e.g., politics) were more likely to prefer a bet based on their judged probability than on a matched chance bet, but that the chance bet was preferred over a
matched judgmental bet in domains where one felt less competent. Heath and Tversky concluded that the effect of knowledge or competence far outweighs that of ambiguity or vagueness in understanding
how beliefs and preferences interact to determine risky decisions. Factors beyond beliefs about values and likelihoods (e.g., personal feelings of competence) may have a major influence on
risk-taking behavior. We have briefly and selectively reviewed the extensive literature on the psychology of decision making. However, an interesting feature of behavioral decision research that we
have not yet addressed is the richness of the methods used to investigate judgment and choice behavior. In the next section, we examine some of the methods used to study decision processes.
VII. METHODS FOR STUDYING DECISION MAKING
"The theory of bounded rationality requires close, almost microscopic, study of how people actually behave." (H. A. Simon, 1991, p. 364)
There are two basic categories of methods for studying decision
making: input-output methods and process-tracing approaches. In this section of the chapter we briefly compare and discuss each approach. For more details on the methods of decision research see
Carroll and Johnson (1990). For a theoretical argument regarding different approaches to decision research, see Svenson (1996).
A. Input-Output Approaches
Rather than attempting to directly measure the decision process, input-output approaches postulate an underlying decision process and select factors that should affect the
process in certain ways. Then an experiment is carried out to manipulate those factors (the input) and if the effects (the output) are as predicted, the researcher might claim that the experiment
supported the hypothesized process. Abelson and Levi (1985) referred to this approach as the "structural" approach; that is, it determines the structure of the relationship between inputs and outputs. To illustrate an input-output or structural approach to investigating judgment, consider work conducted by Lusk and Hammond (1991). The context of that work was severe weather forecasting,
specifically the short-term forecasting (0-30 minutes) of microbursts (brief, localized windstorms that are a potentially fatal hazard to aircraft). As part of the work described by Lusk and Hammond,
forecasters were presented with a set of hypothesized precursors to microbursts (cues) and were asked to judge the probability of the occurrence of a microburst based on the set of cue values
representing a hypothetical storm. See Figure 2 for an example of a profile of storm cues. By giving the forecasters a series of such profiles (the inputs) and recording the judged probabilities of
microbursts for the various profiles (the outputs), a number of interesting questions about the structure of the judgments could be answered. For example, a common question is whether the judgments
can be "captured" by a simple linear model in which the judged probabilities are a weighted function of the cue values. Typically, the observed judgments are related to the input values through the
use of statistical techniques such as regression or analysis of variance. Lusk and Hammond (1991) reported that the judgments of their expert forecasters could be adequately represented by a linear
model. Lusk and Hammond's finding that a simple linear combination of cues fit the observed judgments well is quite common. Many studies have shown that judgments can be captured very successfully by
a simple linear model based on a weighted combination of cue values (Slovic & Lichtenstein, 1971), even though in many of these studies, as in Lusk and Hammond, the subjects believed that they were
in fact employing a more complex nonlinear model. Mellers et al. (1992) provided another good example of an input-output approach to decision research. In that study, subjects were asked to state
their preferences for simple gambles using a variety of response modes. In addition to variations in response modes, experiments differed with respect to whether the gambles involved gains or losses
and the presence of financial incentives. The gambles were constructed from a 6 x 6 (amount by probability) factorial design. Amounts ranged from $3 to $56.70 and the probabilities ranged from .05 to
.94. In the actual task the only data point collected on each experimental trial was the preference expressed using one of the response modes. After the test trials, the subjects were asked to write
a paragraph describing what they did. By examining the responses obtained and by fitting alternative models to the data using statistical procedures, Mellers et al. were able to show that the
preference reversals they observed seemed to be due to changes in decision strategies as a function of response mode. One feature of the Mellers et al. experiments was that each subject generated
approximately 108 judgments of single gambles and 225 comparative judgments of pairs of gambles, thus providing a large amount of data that could be used to relate changes in input (gambles and response modes) to changes in output (expressed preferences).
[Figure 2 in the original shows a microburst cue profile for a hypothetical storm: descending motion, collapsing storm, organized convergence (divergence) above cloud base (x 10^-3 s^-1), reflective notch, and rotation, with a response scale for the probability (0.0-100) of a microburst within 5-10 minutes.]
FIGURE 2 Example of microburst profile. From Figure 2 of "Example of a Microburst Profile," by C. A. Lusk and K. R. Hammond, 1991, Journal of Behavioral Decision Making, 4, p. 58. Copyright John Wiley & Sons Limited. Reproduced with permission.
As noted earlier, there is a large body of research demonstrating that human judgment can be successfully captured by relatively simple linear models. There is a great deal of doubt,
however, about whether the linear
model, or simple variants like averaging, accurately reflects the underlying decision process (Hoffman, 1960). Dawes and Corrigan (1974), for example, argued that the characteristics of most decision
tasks that have been studied almost ensure that the linear model will provide a good fit. It has also been shown that a simple linear model will fit simulated data generated by nonlinear rules
reasonably well (Yntema & Torgerson, 1961). This is not to say that input-output analyses cannot be used to investigate how information is being processed in making a judgment or choice. Many studies
have contributed greatly to our understanding of the psychology of decision making using input-output or structural approaches. However, a number of researchers have concluded that data reflecting
more than just the end product of the decision process are needed. In the words of Pitz (1976), "If a theorist is seriously interested in the processes used by a subject in arriving at a decision, it is essential to devise a technique for exploring the predecisional behavior." A description of some of the techniques used in decision research to investigate process at a detailed level is presented next.
B. Process-Tracing Approaches
In process-tracing approaches, the researcher attempts to measure the ongoing decision process directly. The basic idea is to increase the density of observations about
a decision process over the time course of that process. We will consider three major process-tracing methods: verbal protocols, information acquisition approaches, and, to a lesser extent,
chronometric methods.
1. Verbal Protocols
Protocol analysis is one approach to gathering detailed process-tracing data on decision making (Adelman, Gualtieri, & Stanford, 1995; Payne, 1994; Payne, Braunstein, & Carroll, 1978; Schkade & Payne, 1994). To use this approach, the subject is asked to think out loud as he or she is actually performing the task of interest, such as choosing among
several alternatives. Such a verbal record is called a protocol. Protocols differ from introspection or retrospective reports about decision processes because the subject is asked to verbalize
thoughts as they occur in the course of making a decision. The protocol data are then analyzed to attempt to gain insights into the subject's decision processes. For example, Bettman and Park (1980a, 1980b) developed an extensive scheme for coding protocols, which was used and expanded on by Biehal and Chakravarti (1982a, 1982b, 1983, 1989). The major advantage of protocol collection and analysis
is that a great deal of data on internal events is made available. The four panels of Figure 3 provide excerpts from the protocols of two subjects (A and D) faced with tasks characterized by two
levels of complexity: (1) choice problems with two alternatives (Panels a and b), and (2) multialternative choice problems (Panels c and d) (Payne, 1976).
Panel a (Additive Utility, Subject A):
A24: O.K., the decision is now between the two rent prices
A25: in accordance with the other qualities
A26: Now apartment A has the advantage
A27: because the noise level is low
A28: and the kitchen facilities are good
A29: even though the rent is $30 higher than B
Panel b (Additive Difference, Subject D):
D238: O.K., we have an A and a B
D239: First look at the rent for both of them
D240: The rent for A is $170
D241: The rent for B is $140
D242: $170 is a little steep
D243: but it might have a low noise level
D244: So we'll check A's noise level
D245: A's noise level is low
D246: We'll go to B's noise level
D247: It's high
D248: Gee, I can't really study very well with a lot of noise
D249: So I'll ask myself the question, is it worth spending that extra $30 a month to be able to study in my apartment?
Panel c (Satisficing, Subject A):
A163: The rent for apartment E is $140
A164: which is a good note
A165: The noise level for this apartment is high
A166: That would almost deter me right there
A167: Ah, I don't like a lot of noise
A168: And, if it's high, it must be pretty bad
A169: Which means, you couldn't sleep
A170: I would just put that one aside right there. I wouldn't look any further than that
A171: Even though, the rent is good
Panel d (Elimination-by-Aspects, Subject D):
D289: Since we have a whole bunch here,
D290: I'm going to go across the top and see which noise levels are high
D291: If there are any high ones, I'll reject them immediately
D295: Go to D
D296: It has a high noise level
D297: So, we'll automatically eliminate D
D303: So, we have four here
D304: that are O.K. in noise level
FIGURE 3 Verbal protocols of choice strategy. From Figure 4.4 of The Adaptive Decision Maker, by J. W. Payne, J. R. Bettman, and E. J. Johnson. Cambridge: Cambridge University Press, p. 152. Reprinted with the permission of Cambridge University Press.
The verbal protocols illustrate a variety of decision strategies (e.g., satisficing and elimination by aspects). Further, by comparing Panels a and b to Panels c and d, we see that how people decide how to
decide may be a function of the number of alternatives available (i.e., processing appears to be simplified in the case of several alternatives). Although protocol analysis often allows the
researcher to gain important insights into decision making, there are disadvantages. Collecting protocol data in quantity is extremely time-consuming, so small samples of subjects have typically been
used. In addition, protocol data may not be entirely
reflective of subjects' decision processes. The protocols may reflect subjects' biases or may be censored by subjects while they are being reported. In addition, subjects may be unable to verbalize
retrospectively some internal processes (Nisbett & Wilson, 1977). Finally, protocols may not provide insights into all of the processing performed, and there may not be output corresponding to all
internal states (Lindsay & Norman, 1972, pp. 517-520). Subjects may select aspects of processing to verbalize based upon what they believe is important and may not verbalize those data most valuable
to the researcher (Frijda, 1967). Although such problems with selectivity in verbal reporting may exist, several researchers have argued and have provided convincing evidence that decision makers do
have self-insight (e.g., Ericsson & Simon, 1993). For further discussion, see Lynch and Srull (1982); Biehal and Chakravarti (1989); Russo, Johnson, and Stephens (1989); Ericsson and Simon (1993);
and Payne (1994). There is also concern that attempting to observe the details of choice processes may affect those processes. For example, having to use processing capacity to verbalize ongoing
thoughts might make subjects simplify their processing. Ericsson and Simon (1993) reported many studies showing no effects of taking protocols on decision processes. In studies of decision making,
however, the results have been more mixed. Although Smead, Wilcox, and Wilkes (1981) and Biehal and Chakravarti (1983) reported no significant differences between protocol and no-protocol conditions,
Biehal and Chakravarti (1989) found differences in the extent of alternative-based processing and problem framing due to verbal protocols. Therefore, although verbal protocols can provide invaluable
data on choice processes, one must be very careful to control for any effects of taking protocols (see Biehal & Chakravarti, 1989, and Russo, Johnson, & Stephens, 1989, for suggestions).

2. Information Acquisition Approaches

Early attempts to monitor the amount and sequence of information acquired during decision making (e.g., Jacoby, 1975; Payne, 1976) used an information display
board, often a matrix with brands as rows and attributes as columns. In each cell of the matrix, information cards were placed giving the value for the particular attribute and brand appearing in
that row and column (e.g., the price for Brand X). Subjects were asked to examine as many cards as desired, one at a time, and choose a brand. The amount of information acquired and the order in
which it was acquired were the major data provided. Therefore, by exerting control over the selection process, a detailed record of the sequence of information examined was obtained. This technique
has been updated by using computer displays (e.g., Brucks, 1988; Dahlstrand & Montgomery, 1984; Jacoby, Mazursky, Troutman, &
John W. Payne, James R. Bettman, and Mary Frances Luce
Kuss, 1984; Payne, Bettman, & Johnson, 1993; Payne & Braunstein, 1978), which can also be programmed to provide data on the time taken for each piece of information in addition to data on amount and
sequences. An example of a computer-based information display used to monitor processing is given in Figure 4. Information monitoring approaches have several disadvantages as measures of decision
processes. First, the monitoring process is relatively obtrusive; subjects may bias or change their information-seeking behavior since it is so obviously under observation. Second, only external
responses (namely which information is selected) are examined. Not only is internal processing not studied directly, but only a subset of the internal processing has an explicit trace in the
information-seeking sequence. For example, the researcher does not observe any internal memory search that may take place in parallel with the external search through the matrix. However, the amount
of time spent on an information acquisition may provide some insights on the amount of internal processing. Third, the normal matrix format for presenting the information makes it equally easy for a
decision maker to process by alternative or by attribute, unlike many actual decision tasks in which information is organized by alternative (e.g., brands on supermarket shelves) and processing by
alternative is thus relatively easier than attribute processing. A matrix display also helps structure the decision problem by providing the alternatives and attributes. Brucks (1988) has
addressed this problem by not presenting the alternatives and attributes to subjects.
FIGURE 4 Example of a Mouselab stimulus display with time pressure clock. From Figure 2 of "Adaptive Strategy Selection in Decision Making," by J. W. Payne, J. R. Bettman, and E. J. Johnson, 1988, Journal of Experimental Psychology: Learning, Memory, and Cognition, 14, p. 543. Copyright © 1988 by the American Psychological Association. Reprinted with permission.
Instead, subjects make inquiries about the attributes of the alternatives of interest using their own words, and the requested information is provided by artificial intelligence programs or
unobtrusive human intervention. Another approach for studying information acquisition is the analysis of eye movements (e.g., Pieters et al., 1997; Russo & Dosher, 1983; Russo & Leclerc, 1994; Russo
& Rosen, 1975; van Raaij, 1977). Typically, the alternatives are displayed in tabular format on a screen in front of the subject (Russo & Dosher, 1983; Russo & Rosen, 1975) or as separate options
(van Raaij, 1977). Specialized equipment records the sequence of eye movements used by the subject to examine the choice objects. The recording process may entail some restrictions to prevent large
head movements, and the researcher must prevent subjects from using peripheral vision by providing relatively large separations between items in the visual display. Eye movement data have several
advantages: they provide a very detailed and dense trace of the information search; eye movements may be relatively more difficult for subjects to censor than verbal protocols; and eye movement data
may be useful when protocols fail, such as studying processes that occur rapidly or which involve nonverbal representations or automated processes (Ericsson & Simon, 1993). Eye movement data also
pose problems, however. Collecting and analyzing such data is time-consuming, expensive, and usually uses small sample sizes. Also, the apparatus is obtrusive, so subjects are aware that their eye
movements are being monitored. The choice stimuli used in eye movement studies have often been simplistic arrays because researchers must localize eye movements. Finally, eye movement data directly reveal only external search, not necessarily internal processes. We have developed a computer-based information display that employs a mouse to control information acquisition (Payne, Bettman, and
Johnson, 1993). Because pointing with the mouse is a relatively effortless response, Payne et al. have argued that it can approximate the detail provided by eye movements with much less cost. This
acquisition system measures both the sequence and timing of information gathering in several different types of task environments.

3. Chronometric Analysis

Analysis of response times, or chronometric
analysis, has also been used to study choice (e.g., Johnson & Russo, 1978; Sujan, 1985). The times taken to complete a response are the basic data collected, so that in a sense this is a form of
input-output analysis where the output is total time. Researchers usually assume that the time taken directly reflects the amount of processing effort used in completing the task. By comparing the
mean response times over different experimental conditions, it is hoped that one can learn about
the information processing characterizing such tasks. For example, researchers have used such analyses to study the structure of memory (e.g., Johnson & Russo, 1978); to examine the usage of various
heuristics for evaluating alternatives (e.g., Sujan, 1985); and to test models of cognitive effort in decision processes (Bettman, Johnson, & Payne, 1990). This discussion of methods for studying
decision making has been quite brief; more detail can be found in Carroll and Johnson (1990) and in Ford, Schmitt, Schechtman, Hults, and Doherty (1989). What is clear from even our brief discussion
is that no method is perfect; rather, each has its own biases and disadvantages. Because the various methods have different strengths and weaknesses, using several complementary approaches in the
same study seems to hold the greatest promise for separating the effects of the research method from those associated with the phenomenon under study. Having discussed some basic concepts in
behavioral decision research and some of the methods used to study decision making, we conclude with a discussion of an area we believe is particularly promising for future research, namely, how such
"hot" psychological concepts as emotion, affect, and motivation may be combined with the more "cold" cognitive notions of bounded rationality to further our understanding of decision behavior.
VIII. EMOTIONAL FACTORS AND DECISION BEHAVIOR

Over the past 40 years, the emerging field of BDR has progressed from the neoclassical economic conceptualization that choice behavior follows
normative, utility-maximizing guidelines to a conceptualization of choice behavior as a compromise between task environments and humans' limited information processing abilities. The work reviewed in
this chapter demonstrates that these ideas of cognitive limitations, or bounded rationality, have been well integrated into our understanding of choice. However, at least one potentially relevant
psychological aspect of decision makers (that they experience emotion, particularly negative emotion) has been less so. Negative emotion experienced by decision makers may influence reactions to decision task environments, and even the very prospect of having to make a decision may arouse negative emotion (e.g., Hogarth, 1987; Janis & Mann, 1977; Kahneman, 1993; Lopes, 1987; Simon, 1987).1 Emotional or motivational influences on decision behavior have been gaining increasing attention (e.g., Beattie & Barlas, 1993; Larrick, 1993; Lopes, 1987; Simonson, 1992), although this work has yet to be well integrated into the majority of behavioral decision research.

1 Positive emotions are also relevant to, and may also impact, decision processes. We focus on negative emotion because negative emotion is often proposed to be commonly, or even inherently, associated with all but the most trivial decisions (e.g., Festinger, 1957; Janis & Mann, 1977; Larrick, 1993; Shepard, 1964; Tversky & Shafir, 1992).

Like cognitive
limitations or bounded rationality, emotional factors may cause decision behavior to deviate from normative optimality. For instance, decision makers may cope with emotionally difficult decisions by
using simplified heuristics, just as they often do with cognitively complex decisions. In fact, it seems that the concept of decision complexity, or any purely cognitive analysis, cannot fully
explain the phenomenology of or the typical reactions to truly difficult decisions. For instance, Hogarth (1987) noted that conflict-avoidant heuristic decision rules may be attractive in part
because they protect the decision maker from the distressing prospect of explicitly confronting value trade-offs. Some classic works also have noted that the resolution of these conflicts between
important goals, which is necessary for the resolution of many decision situations, is inherently unpleasant (Festinger, 1957; Janis & Mann, 1977; Shepard, 1964). Thus, at least one of the variables
that has been important in understanding cognitive limitations and decision difficulty, conflict among decision attributes, has also been recognized as potentially encouraging negative emotion. Next
we outline the types of affect, including conflict among important goals, that may influence decision behavior. Then, we offer some suggestions regarding how to incorporate an understanding of
emotion into theoretical approaches to decision behavior.
A. Sources of Emotion During Decision Making

Following Lazarus (1991), we conceptualize negative emotions as resulting from threats to important or valued goals (see also Frijda, 1988; Oatley &
Jenkins, 1992). Thus, we consider how and when goals may be threatened during a decision task in order to define sources of emotion that may potentially influence the decision maker. We identify
three such sources. Specifically, goals may become threatened, and hence negative emotion may be elicited (1) through the specific characteristics or consequences of relevant alternatives, (2)
through more general characteristics of the decision task, and (3) through background characteristics influencing the decision maker but unrelated to the decision task itself. Each of these sources
of decision-related emotion is discussed next. 1. Specific characteristics or aspects of decision alternatives may arouse emotion. These emotion-arousing context factors include the possible outcomes
or consequences of considered alternatives, such as when a person investing a sum of money is distressed over the possibility of losing it (Janis & Mann, 1977; Simon, 1987). They also include
conflict among goal-relevant
attributes, such as when an automobile purchaser feels she must sacrifice safety in return for monetary savings (Festinger, 1957; Janis & Mann, 1977; Tversky & Shafir, 1992). More generally,
emotion-arousing context factors are present when the potential consequences associated with considered alternatives threaten a decision maker's goals. 2. More general characteristics of decision
problems, such as time pressure or the amount of information to be considered, may cause negative emotion. By interfering with decision makers' abilities to process decision information, these task
factors may threaten goals regarding decision accuracy. Emotional task factors may also indirectly threaten goals related to decision outcomes (e.g., a decision maker may worry about possible
decision consequences if task factors seem to be compromising her ability to accurately resolve a decision). Thus, task factors should be particularly distressing when potential decision consequences
involve high stakes or goal relevance. Emotion-arousing task factors have been studied under the rubric of "stress" and decision making (e.g., Ben Zur & Breznitz, 1981; Hancock & Warm, 1989; Keinan,
1987). However, it seems this research can be integrated into theory involving decision making under emotion, given recent movements to conceptualize psychological stress as simply a class of
negative emotion (Lazarus, 1993). 3. Background or ambient sources of emotion, for instance an uncomfortable room temperature or a lingering negative mood, may influence decision processing. Thus, a
decision maker may feel emotional when goals unrelated to the decision itself are threatened, and this emotion may influence her cognitive functioning during the decision task. Much of the literature
on stress and decision making actually involves these ambient emotion sources. However, our interest is in emotion that is more directly related to the task itself, so we will now concentrate on the
way that emotional task and context factors can be integrated into a theoretical understanding of choice. We believe this is an important new area of inquiry for BDR.

B. Influences of Emotion on Decision Behavior
It seems that emotion aroused by both context variables (e.g., potential decision consequences) and task variables (e.g., time pressure) may influence the process by which a decision is resolved.
Explaining this influence of emotion on decision strategy selection will likely necessitate that current theoretical approaches to BDR be broadened. Recent movements in both psychology (e.g.,
Kruglanski & Webster, 1996; Kunda, 1990; Lazarus, 1991; Zajonc, 1980) and behavioral decision research (e.g., Kahneman, 1993; Larrick, 1993; Lopes, 1987) have encouraged such a broadening of
theoretical approaches to encompass the interaction of emotion and cognition in determining behavior. We now discuss two possible approaches to broadening BDR theories to account for the influence of emotional task and context factors. First, it seems possible that emotion will
generally interfere with decision processes, degrading cognitive performance (e.g., Hancock & Warm, 1989). Thus, one could model the effects of negative emotion on decision processing by assuming
that any cognitive operation will both take more time and contain more error as negative emotion is increased (see Eysenck, 1986, for the related idea that anxiety reduces short-term storage capacity
and attentional control). This viewpoint implies that decision makers adapting to negative emotion will shift to easier-to-implement decision strategies in order to compensate for emotion-induced
decreases in cognitive efficiency. Thus, this viewpoint implies that increasing negative emotion will function in a manner similar to increasing task complexity, causing a shift to simpler,
easier-to-implement decision strategies. A second possible theoretical view involves the idea that decision makers may more directly adapt to negative emotion itself, in addition to adapting to the
effects of negative emotion on processing efficiency. Specifically, as the potential for negative emotion increases, people may choose decision strategies in the interest of coping with negative
emotion, as well as choosing strategies in the interest of satisfying goals such as minimizing expended cognitive effort. This view is consistent with some of the more general literature on emotion
and coping (e.g., Folkman & Lazarus, 1988; Lazarus, 1991). A coping approach leads to questions regarding which decision strategies will satisfy a proposed desire to cope and how they will do so. By
considering this question, we have derived two more specific predictions regarding how decision strategies may be altered under an increasing potential for negative emotion. Each prediction is
discussed next. One way to cope with negative emotion is to make efforts to solve the environmental problem that is the source of that emotion. For instance, someone worried about the possibility of
having cancer may cope with those emotions by making an appointment to see a doctor (i.e., see Folkman and Lazarus's problem-focused coping). Consistent with this coping mechanism, decision makers
adapting to negative emotion may devote increased attention and effort to the decision task itself, attempting to ensure that the best possible decision is made. Thus, coping goals aroused by
negative emotion may motivate a decision maker to work harder. This effect seems particularly likely when the experienced negative emotion is directly tied to possible decision consequences (i.e.,
when an emotional context factor is present), as it is when maximizing decision accuracy will potentially guard against undesired outcomes (e.g., Eysenck, 1986). At the same time that they try to
solve environmental problems, indirectly minimizing experienced negative emotions, decision makers may
more directly act to mitigate or minimize negative emotion. For example, a person with health concerns may cope with the associated negative emotion by concentrating on a distracting hobby (i.e., see
Folkman & Lazarus's emotion-focused coping). Consistent with this coping mechanism, individuals may process such that they avoid the most distressing aspect(s) of decision processing, especially as
the emotional potential of the decision increases. Making explicit trade-offs between attributes is often considered to be particularly distressing (e.g., Hogarth, 1987). Thus, it seems that decision
makers attempting to protect themselves from negative emotion may shift to simpler, more conflict-avoidant decision strategies. This may actually happen at the same time that the decision maker works
harder (perhaps by processing more information or prolonging deliberation time), consistent with Folkman and Lazarus's (1980, 1988) findings that both problem-focused and emotion-focused coping
strategies tend to be brought to bear on any emotional situation. In conclusion, the two possible theoretical approaches outlined here yield similar, but not perfectly overlapping, predictions
regarding decision behavior in negatively emotional environments. The processing efficiency approach argues that individuals will shift to simpler strategies under emotion, whereas the coping
approach argues that decision makers will use strategies that simplify in some respects (e.g., shifting to conflict-avoidant rules) but that are more complex in other respects (e.g., processing
information more completely or vigorously). Luce, Bettman, and Payne (1997) tested these predictions in three experiments and found evidence more consistent with the coping approach--decision
processing became both more extensive and more conflict-avoidant under negative emotion.

IX. SUMMARY

In this chapter we have provided an overview of the field of behavioral decision research. As an
area of active psychological inquiry, BDR is relatively young. The vast proportion of research on the psychology of decision making has occurred in the past two or three decades. Nevertheless, we
have achieved a number of important insights into decision behavior. For example, it is clear that the classical economic man model of decision making is seriously flawed as a description of actual
decision behavior. On a more positive note, we now understand a number of the strategies used to make judgments and choices and some of the task and context factors that determine when various
strategies are used. We have also identified properties of decisions, such as loss aversion and the importance of the gain-versus-loss distinction, that are important for understanding risky choice.
The field of BDR has also developed a rich set of tools for investigating decisions and decision processing. Those tools are being applied to understand both cognitive and emotional influences on decision making. Another exciting trend in BDR is the fact that the results and methods of the field are increasingly being used to inform a wide
variety of applied areas of study, including health, business, and public policy. Behavioral decision research holds promise for both a better understanding of how people make decisions and the
development of better methods to aid decision making.
Acknowledgments

The research reported in this chapter was supported by a grant from the Decision, Risk, and Management Science Program of the National Science Foundation.
References

Abelson, R. P., & Levi, A. (1985). Decision making and decision theory. In G. Lindzey & E. Aronson (Eds.), The handbook of social psychology, Vol. 1 (pp. 231-309). New York: Random House.
Adelman, L., Gualtieri, J., & Stanford, S. (1995). Effects of causal focus on the option generation process: An experiment using protocol analysis. Organizational Behavior and Human Decision
Processes, 61, 54-66. Anderson, N. H. (1981). Foundations of information integration theory. New York: Academic Press. Aschenbrenner, K. M. (1978). Single-peaked risk preferences and their
dependability on the gambles' presentation mode. Journal of Experimental Psychology: Human Perception and Performance, 4, 513-520. Aschenbrenner, K. M., Bockenholt, U., Albert, D., & Schmalhofer, F.
(1986). The selection of dimensions when choosing between multiattribute alternatives. In R. W. Scholz (Ed.), Current issues in West German decision research (pp. 63-78). Frankfurt: Lang. Bar-Hillel,
M. (1984). Representativeness and fallacies of probability judgment. Acta Psychologica, 55, 91-107. Bar-Hillel, M. (1990). Back to base-rates. In R. M. Hogarth (Ed.), Insights in decision making: Theory and applications--A tribute to Hillel J. Einhorn (pp. 200-216). Chicago: University of Chicago Press. Baron, J., & Spranca, M. (1997). Protected values. Organizational Behavior and Human
Decision Processes, in press. Beach, L. R. (1990). Image theory: Decision making in personal and organizational contexts. Chichester: John Wiley. Beach, L. R. (1993). Broadening the definition of
decision making: The role of prechoice screening of options. Psychological Science, 4, 215-220. Beach, L. R., Barnes, V. E., & Christensen-Szalanski, J. J. J. (1986). Beyond heuristics and biases: A
contingency model of judgmental forecasting. Journal of Forecasting, 5, 143-157. Beattie, J., & Barlas, S. (1993). Predicting perceived differences in tradeoff difficulty. Working paper, University of
Sussex, Sussex, England. Beattie, J., & Baron, J. (1991). Investigating the effect of stimulus range on attribute weight. Journal of Experimental Psychology: Human Perception and Performance, 17,
571-585. Bell, D. E. (1982). Regret in decision making under uncertainty. Operations Research, 30, 961-981. Bell, D. E. (1985). Disappointment in decision making under uncertainty. Operations
Research, 33, 1-27.
Ben Zur, H., & Breznitz, S. J. (1981). The effects of time pressure on risky choice behavior. Acta Psychologica, 47, 89-104. Bettman, J. R. (1979). An information processing theory of consumer choice. Reading, MA: Addison-Wesley. Bettman, J. R., Johnson, E. J., & Payne, J. W. (1990). A componential analysis of cognitive effort in choice. Organizational Behavior and Human Decision Processes, 45, 111-139. Bettman, J. R., & Kakkar, P. (1977). Effects of information presentation format on consumer information acquisition strategies. Journal of Consumer Research, 3, 233-240. Bettman, J. R., &
Park, C. W. (1980a). Effects of prior knowledge and experience and phase of the choice process on consumer decision processes: A protocol analysis. Journal of Consumer Research, 7, 234-248. Bettman,
J. R., & Park, C. W. (1980b). Implications of a constructive view of choice for analysis of protocol data: A coding scheme for elements of choice processes. In J. C. Olson (Ed.), Advances in consumer
research, Vol. 7 (pp. 148-153). Ann Arbor, MI: Association for Consumer Research. Biehal, G. J., & Chakravarti, D. (1982a). Experiences with the Bettman-Park verbal protocol coding scheme. Journal of
Consumer Research, 8, 442-448. Biehal, G. J., & Chakravarti, D. (1982b). Information presentation format and learning goals as determinants of consumers' memory retrieval and choice processes. Journal of Consumer Research, 8, 431-441. Biehal, G. J., & Chakravarti, D. (1983). Information accessibility as a moderator of consumer choice. Journal of Consumer Research, 10, 1-14. Biehal, G. J., &
Chakravarti, D. (1989). The effects of concurrent verbalization on choice processing. Journal of Marketing Research, 26, 84-96. Billings, R. S., & Marcus, S. A. (1983). Measures of compensatory and
noncompensatory models of decision behavior: Process tracing versus policy capturing. Organizational Behavior and Human Performance, 31, 331-352. Birnbaum, M. H. (1974). The nonadditivity of
personality impressions. Journal of Experimental Psychology, 102, 543-561. Birnbaum, M. H. (1983). Base rates in Bayesian inference: Signal detection analysis of the cab problem. American Journal of
Psychology, 96, 85-94. Birnbaum, M. H., & Beeghley, D. (1997). Violations of branch independence in judgments of the value of gambles. Psychological Science, 8, 87-94. Birnbaum, M. H., Coffey, G.,
Mellers, B. A., & Weiss, R. (1992). Utility measurement: Configural-weight theory and the judge's point of view. Journal of Experimental Psychology: Human Perception & Performance, 18, 331-346.
Bockenholt, U., Albert, D., Aschenbrenner, M., & Schmalhofer, F. (1991). The effects of attractiveness, dominance, and attribute differences on information acquisition in multiattribute binary
choice. Organizational Behavior and Human Decision Processes, 49, 258-281. Bostic, R., Herrnstein, R. J., & Luce, R. D. (1990). The effect on the preference-reversal phenomenon of using choice
indifferences. Journal of Economic Behavior and Organization, 13, 193-212. Brucks, M. (1988). Search monitor: An approach for computer-controlled experiments involving consumer information search.
Journal of Consumer Research, 15, 117-121. Budescu, D. V., Weinberg, S., & Wallsten, T. S. (1988). Decisions based on numerically and verbally expressed uncertainties. Journal of Experimental
Psychology: Human Perception and Performance, 14, 281-294. Butler, S. A. (1986). Anchoring in the judgmental evaluation of audit samples. Accounting
Review, 61, 101-111.
Camerer, C. F. (1987). Do biases in probability judgment matter in markets? Experimental evidence. American Economic Review, 77, 981-997.
Camerer, C. F. (1989). An experimental test of several generalized utility theories. Journal of Risk and Uncertainty, 2, 61-104. Camerer, C., & Johnson, E. J. (1991). The process-performance paradox in expert judgment: How can experts know so much and predict so badly? In A. Ericsson and J. Smith (Eds.), The study of expertise: Prospects and limits (pp. 195-207). Cambridge: Cambridge University Press. Carroll, J. S., & Johnson, E. J. (1990). Decision research: A field guide. Newbury Park, CA: Sage. Casey, J. T. (1991). Reversal of the preference reversal phenomenon. Organizational
Behavior and Human Decision Processes, 48, 224-251. Champagne, M., & Stevenson, M. K. (1994). Contrasting models of appraisals judgments for positive and negative purposes using policy modeling.
Organizational Behavior and Human Decision Processes, 59, 93-123. Chapman, G. B., & Johnson, E. J. (1994). The limits of anchoring. Journal of Behavioral Decision Making, 7, 223-242. Coombs, C. H., Dawes, R. M., & Tversky, A. (1970). Mathematical psychology: An elementary introduction. Englewood Cliffs, NJ: Prentice-Hall. Coupey, E. (1994). Restructuring: Constructive processing of information
displays in consumer choice. Journal of Consumer Research, 21, 83-99. Coupey, E., Irwin, J. R., & Payne, J. W. (in press). Product category familiarity and preference construction. Journal of
Consumer Research. Cox, A. D., & Summers, J. D. (1987). Heuristics and biases in the intuitive projection of retail sales. Journal of Marketing Research, 24, 290-297. Crandall, C. S., & Greenfield,
B. (1986). Understanding the conjunction fallacy: A conjunction of effects? Social Cognition, 4, 408-419. Creyer, E. H., & Johar, G. V. (1995). Response mode bias in the formation of preference:
Boundary conditions of the prominence effect. Organizational Behavior and Human Decision Processes, 62, 14-22. Curley, S. P., Browne, G. J., Smith, G. F., & Benson, P. G. (1995). Arguments in the
practical reasoning underlying constructed probability responses. Journal of Behavioral Decision Making, 8, 1-20. Curley, S. P., & Yates, J. F. (1989). An empirical evaluation of descriptive models
of ambiguity reactions in choice situations. Journal of Mathematical Psychology, 33, 397-427. Curley, S. P., Yates, J. F., & Abrams, R. A. (1986). Psychological sources of ambiguity avoidance.
Organizational Behavior and Human Decision Processes, 38, 230-256. Dahlstrand, U., & Montgomery, H. (1984). Information search and evaluative processes in decision making: A computer based process
tracing study. Acta Psychologica, 56, 113-123. Davis, H. L., Hoch, S. J., & Ragsdale, E. K. (1986). An anchoring and adjustment model of spousal predictions. Journal of Consumer Research, 13, 25-37.
Dawes, R. M. (1979). The robust beauty of improper linear models in decision making. American Psychologist, 34, 571-582. Dawes, R. M. (1994). House of cards: Psychology and psychotherapy. New York:
The Free Press. Dawes, R. M., & Corrigan, B. (1974). Linear models in decision making. Psychological Bulletin, 81, 95-106. Dawes, R. M., Mirels, H. L., Gold, E., & Donahue, E. (1993). Equating
inverse probabilities in implicit personality judgments. Psychological Science, 4, 396-400. Delquié, P. (1993). Inconsistent trade-offs between attributes: New evidence in preference assessment
biases. Management Science, 39, 1382-1395. Dube-Rioux, L., & Russo, J. E. (1988). An availability bias in professional judgment. Journal of
Behavioral Decision Making, 1, 223-237.
Edwards, W., & Tversky, A. (Eds.) (1967). Decision making. Harmondsworth, UK: Penguin. Einhorn, H. J., & Hogarth, R. M. (1975). Unit weighting schemes for decision making. Organizational Behavior and
Human Performance, 13, 171-192.
John W. Payne, James R. Bettman, and Mary Frances Luce
Einhorn, H. J., & Hogarth, R. M. (1985). Ambiguity and uncertainty in probabilistic inference. Psychological Review, 93, 433-461. Einhorn, H. J., & Hogarth, R. M. (1986). Decision making under
ambiguity. Journal of Business, 59, $225-$250. Ellsberg, D. (1961). Risk, ambiguity, and the Savage axioms. QuarterlyJournal of Economics, 75, 643-669. Erev, I., & Cohen, B. L. (1990). Verbal versus
numerical probabilities: Efficiency, biases, and the preference paradox. Organizational Behavior and Human Decision Processes, 45, 1-18. Ericsson, K. A., & Simon, H. A. (1993). Protocol analysis:
Verbal reports as data (Rev. ed.). Cambridge, MA: MIT Press. Eysenck, M. W. (1986). A handbook of cognitive psychology. London: Erlbaum. Fennema, H., & Wakker, P. (1997). Original and cumulative
prospect theory: A discussion of empirical differences. Journal of Behavioral Decision Making, 10, 53-64. Festinger, L. (1957). A theory of cognitive dissonance. Evanston, IL: Row, Peterson. Fischer,
G. W., & Hawkins, S. A. (1993). Strategy compatibility, scale compatibility, and the prominence effect. Journal of Experimental Psychology: Human Perception and Performance, 19, 580-597. Fischer, G.
W., Kamlet, M. S., Fienberg, S. E., & Schkade, D. (1986). Risk preferences for gains and losses in multiple objective decision making. Management Science, 32, 1065-1086. Fischhoff, B. (1975).
Hindsight 4 foresight: The effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance, 1, 288-299. Fischhoff, B., & Beyth-Marom,
R. (1983). Hypothesis evaluation from a Bayesian perspective. Psychological Review, 90, 239-260. Fischhoff, B., Slovic, P., & Lichtenstein, S. (1978). Fault trees: Sensitivity of estimated failure
probabilities to problem representation. Journal of Experimental Psychology: Human Perception and Performance, 4, 330-344. Fishburn, P. (1991). Nontransitive preferences in decision theory. Journal
of Risk and Uncertainty, 4, 113-124. Fisk, J. E. (1996). The conjunction effect: Fallacy or Bayesion inference? Organizational Behavior and Human Decision Processes, 67, 76-90. Folkes, V. S. (1988).
The availability heuristic and perceived risk. Journal of Consumer Research, 15, 13-23. Folkman, S., & Lazarus, R. S. (1980). An analysis of coping in a middle-aged community sample. Journal of
Health and Social Behavior, 21, 219-239. Folkman, S., & Lazarus, R. S. (1988). Coping as a mediator of emotion. Journal of Personality and Social Psychology, 54, 466-475. Ford, J. K., Schmitt, N.,
Schechtman, S. L., Hults, B. M., & Doherty, M. L. (1989). Process tracing methods: Contributions, problems, and neglected research questions. Organizational Behavior and Human Decision Processes, 43,
75-117. Fox, C. R., Rogers, B. A., & Tversky, A. (1996). Options traders exhibit subadditive decision weights. Journal of Risk and Uncertainty, 13, 5-19. Frijda, N. H. (1967). Problems of computer
simulation. Behavioral Science, 12, 59-67. Frijda, N. H. (1988). The emotions. Cambridge, England: Cambridge University Press. Frisch, D., & Baron, J. (1988). Ambiguity and rationality. Journal of
Behavioral Decision Making, 1, 149-157. Gaeth, G. J., & Shanteau, J. (1984). Reducing the influence of irrelevant information on experienced decision makers. Organizational Behavior and Human
Performance, 33, 263-282. Garb, H. N. (1989). Clinical judgment, clinical training and professional experience. Psychological Bulletin, 105, 387-396. Gettys, C. F., Pliske, R. M., Manning, C., &
Casey, J. T. (1987). An evaluation of human act
Behavioral Decision Research: An O v e r v i e w
generation performance. Organizational Behavior and Human Decision Processes, 39, 23-51. Gigerenzer, G., & Goldstein, D. G. (1996). Reasoning the fast and frugal way: Models of bounded rationality.
Psychological Review, 103, 650-669. Gigerenzer, G., Hell, W., & Blank, H. (1988). Presentation and content: The use of base rates as a continuous variable. Journal of Experimental Psychology: Human
Perception and Performance, 14, 513-525. Gigerenzer, G., Hoffrage, U., & Kleinbohing, H. (1991). Probabilistic mental models: A Brunswikian theory of confidence. Psychological Review, 98, 506-528.
Ginossar, Z., & Trope, Y. (1987). Problem solving in judgment under uncertainty. Journal of Personality and Social Psychology, 52, 464-474. Goldstein, W. M. (1990). Judgments of relative importance
in decision making: Global vs. local interpretations of subjective weight. Organizational Behavior and Human Decision Processes, 47, 313-336. Goldstein, W. M., & Einhorn, H. J. (1987). Expression
theory and the preference reversal phenomena. Psychological Review, 94, 236-254. Gonz~lez-Vallejo, C., & Wallsten, T. S. (1992). Effects of probability mode on preference reversal. Journal of
Experimental Psychology: Learning, Memory, and Cognition, 18, 855-864. Grether, D. M., Schwartz, A., & Wilde, L. L. (1986). The irrelevance of information overload: An analysis of search and
disclosure. Southern California Law Review, 59, 277-303. Grether, D. M., & Wilde, L. L. (1983). Consumer choice and information: New experimental evidence. Information Economics and Policy, 1,
115-144. Griffin, D., & Tversky, A. (1992). The weighing of evidence and the determinants of confidence. Cognitive Psychology, 24, 411-435. Hammond, K. R. (1996). Humanjudgmem and social policy.
Oxford: Oxford University Press. Hammond, K. R., Hamm, R. M., Grassia, J., & Pearson, T. (1987). Direct comparison of the efficacy of intuitive and analytical cognition in expert judgment. IEEE
Transactions on Systems, Man, and Cybernetics, 17, 753-770. Hancock, P. A., & Warm, J. S. (1989). A dynamic model of stress and sustained attention. Human Factors, 31, 519-537. Hastie, R., & Park, B.
(1986). The relationship between memory and judgment depends on whether the judgment task is memory-based or on-line. Psychological Review, 93, 258268. Hawkins, S. A. (1994). Information processing
strategies in riskless preference reversals: The prominence effect. Organizational Behavior and Human Decision Processes, 59, 1-26. Hawkins, S. A., & Hastie, R. (1990). Hindsight: Biased judgments of
past events after the outcomes are known. Psychological Bulletin, 107, 311-327. Heath, C., & Tversky, A. (1991). Preference and belief: Ambiguity and competence in choice under uncertainty. Journal
of Risk and Uncertainty, 4, 5-28. Heath, T. B., & Chatterjee, S. (1995). Asymmetric decoy effects on lower-quality versus higher-quality brands: Meta-analytic and experimental evidence. Journal of
Consumer Research, 22, 268-284. Hershey, J. C., & Schoemaker, P. J. H. (1985). Probability versus certainty equivalence methods in utility measurement: Are they equivalent? Management Science, 31,
1213-1231. Highhouse, S., & House, E. L. (1995). Missing information in selection. An application of the Einhorn-Hogarth Ambiguity Model. Journal of Applied Psychology, 80, 81-93. Highhouse, S., &
Johnson, M. A. (1996). Gain/loss asymmetry and riskless choice: Loss aversion among job finalists. Organizational Behavior and Human Decision Processes, 68, 225-233. Hinz, V. B., Tinsdale, R. S., &
Vollrath, D. A. (1997). The emerging conceptualization of groups as information processors. Psychological Bulletin, 121, 43-64.
John W. Payne, James R. Bettman, and Mary Frances Luce
Hoffman, P. J. (1960). The paramorphic representation of clinical judgment. Psychological Bulletin, 57, 116-131. Hogarth, R. M. (1987). Judgment and choice (2nd ed.). New York: John Wiley. Hogarth,
R. M., & Einhorn, H.J. (1992). Order effects in belief updating: The belief-adjustment model. Cognitive Psychology, 24, 1-55. Hogarth, R. M., & Kunreuther, H. (1985). Ambiguity and insurance
decisions. American Economic Review, 75, 386-390. Hogarth, R. M., & Kunreuther, H. (1989). Risk, ambiguity, and insurance. Journal of Risk and Uncertainty, 2, 5-35. Hogarth, R. M., & Kunreuther, H.
(1995). Decision making under ignorance: Arguing with yourself. Journal of Risk and Uncertainty, 10, 15-36. Hsee, C. K. (1996). The evaluability hypothesis: An explanation for preference reversals
between joint and separate evaluations of alternatives. Organizational Behavior and Human Decision Processes, 67, 247-257. Huber, J., Payne, J. w., & Puto, C. P. (1982). Adding asymmetrically
dominated alternatives. Violations of regularity and the similarity hypothesis. Journal of Consumer Research, 9, 9098. Huber, V. L., Neale, M. A., & Northcraft, G. B. (1987). Decision bias and
personnel selection strategies. Organizational Behavior and Human Decision Processes, 40, 136-147. Jacoby, J. (1975). Perspectives on a consumer information processing research program. Communication
Research, 2, 203-215. Jacoby, J., Mazursky, D., Troutman, T., & Kuss, A. (1984). When feedback is ignored: Disutility of outcome feedback. Journal qfApplied Psychology, 69, 531-545. Jagacinski, C. M.
(1995). Distinguishing adding and averaging models in a personnel selection task: When missing information matters. Organizational Behavior and Human Decision Processes, 61, 1-15. Janis, I. L., &
Mann, L. (1977). Decision making. New York: The Free Press. Jarvenpaa, S. L. (1989). The effect of task demands and graphical format on information processing strategies. Management Science, 35,
285-303. Joag, S. G., Mowen, J. C., & Gentry, J. W. (1990). Risk perception in a simulated industrial purchasing task: The effects of single versus multi-play decisions. Journal of Behavioral
Decision Making, 3, 91-108. Johnson, E. J., Meyer, R. M., & Ghose, S. (1989). When choice models fail: Compensatory representations in negatively correlated environments. Journal of Marketing
Research, 26, 255-270. Johnson, E. J., Payne, j. w., & Bettman, J. R. (1988). Information displays and preference reversals. Organizational Behavior and Human Decision Process, 42, 1-21. Johnson, E.
J., & Russo, J. E. (1978). The organization of product information in memory identified by recall times. In H. K. Hunt (Ed.), Advances in consumer research, Vol. 5 (pp. 79-86). Chicago: Association
for Consumer Research. Jones, S. K., Jones, K. T., & Frisch, D. (1995). Baises of probability assessment: A comparison of frequency and single-case judgments. Organizational Behavior and Human
Decision Processes, 61, 109-122. Jones, D. R., & Schkade, D. A. (1995). Choosing and translating between problem representations. Organizational Behavior and Human Decision Processes, 61,213-223.
Kahneman, D. (1993, November). J/DM President's Address. Paper presented at the meeting of the Judgment/Decision Making Society, Washington, D.C. Kahneman, D., Knetsch, J. L., & Thaler, R. (1990).
Experimental tests of the endowment effect and the Coase theorem. Journal of Political Economy, 98, 1325-1348. Kahneman, D., & Lovallo, D. (1992). Timid decisions and bold forecasts: A cognitive
perspective on risk taking. Management Science, 39, 17-31.
Behavioral Decision Research: An O v e r v i e w
Kahneman, D., & Tversky, A. (1973). On the psychology of prediction. Psychological Review, 80, 237-251. Kahneman, D., & Tversky, A. (1996). On the reality of cognitive illusions. Psychological
Review, 103, 582-591. Keeney, R. L., & Raiffa, H. (1976). Decisions with multiple objectives: Preferences and value tradeoffs. New York: Wiley. Keinan, G. (1987). Decision making under stress:
Scanning of alternatives under controllable and uncontrollable threats. Journal qf Personality and Social Psychology, 52, 639-644. Keller, K. L., & Staelin, R. (1987). Effects of quality and quantity
of information on decision effectiveness. Journal of Consumer Research, 14, 200-213. Keller, L. R., & Ho, J. L. (1988). Decision problem structuring: Generating options. IEEE Transactions on Systems,
Man, and Cybernetics, 18, 715-728. Keren, G., & Wagenaar, W. A. (1987). Violation of utility theory in unique and repeated gambles. Journal of Experimental Psychology: Learning, Memory, and
Cognition, 13, 387-391. Kerr, N. L., MacCoun, R.J., & Kramer, G. P. (1996). Bias in judgment: Comparing individuals and groups. Psychological Review, 103, 687-719. Klayman, J. (1985). Children's
decision strategies and their adaptation to task characteristics. Organizational Behavior and Human Decision Processes, 35, |79-201. Klayman, J. (1988). Cue discovery in probabilistic environments:
Uncertainty and experimentation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 14, 317-330. Klein, G., Wolf, S., Militello, L., & Zsambok, C. (1995). Characteristics of skilled
option generation in chess. Organizational Behavior and Human Decision Processes, 62, 62-69. Koehler, J. J. (1996). The base rate fallacy reconsidered: Descriptive, normative, and methodological
challenges. Behavioral and Brain Sciences, 19, 1-53. Koehler, J. J., Gibbs, B.J., & Hogarth, R. M. (1994). Shattering illusion of control: Multi-shot versus single-shot gambles. Journal qfBehavioral
Decision Making, 7, 183-191. Kramer, R. M. (1989). Windows of vulnerability or cognitive illusions? Cognitive processes and the nuclear arms race. Journal of Experimental Social Psychology, 25, 79-1
(10. Kruglanski, A. W., & Webster, D. M. (1996). Motivated closing of the mind: "Seizing" and "Freezing." Psychological Review, 103, 263-283. Kunda, Z. (1990). The case for motivated reasoning.
Psychological Bulletin, 108, 480-498. Larrick, R. P. (1993). Motivational factors in decision theories: The role of self-protection. Psychological Bulletin, 113, 440-450. Lazarus, R. S. (1991).
Progress on a cognitive-motivational-relational theory of emotion. American Psychologist 46, 819-834. Lazarus, R. S. (1993). From psychological stress to the emotions: A history of changing outlooks.
Annual Review qfl Psychology, 44, 1-21. Levi, A. S., & Pryor, J. B. (1987). Use of the availability heuristic in probability estimates of future events: The effects of imaging outcomes versus
imagining reasons. Organizational Behavior and Human Decision Processes, 40, 219-234. Levin, I. P., & Gaeth, G. J. (1988). How consumers are affected by the framing of attribute information before
and after consuming the product. Journal qfConsumer Research, 15, 374-378. Libby, R. (1985). Availability and the generation of hypotheses in analytical review. Journal of Accounting Research, 23,
648-667. Lichtenstein, M., & Srull, T. K. (1985). Conceptual and methodological issues in examining the relationship between consumer memory and judgment. In L. F. Alwitt & A. A. Mitchell (Eds.),
Psychological processes and advertising effects: Theory, research, and application (pp. 113-128). Hillsdale, NJ: Erlbaum. Lichtenstein, S., & Slovic, P. (1971). Reversals of preference between bids
and choices in gambling decisions. Journal of Experimental Psychology, 89, 46-55.
John W. Payne, James R. Bettman, and Mary Frances Luce
Lindsay, P. H., & Norman, D. A. (1972). Human information processing. New York: Academic Press. Linville, P. W., & Fischer, G. W. (1991). Preferences for separating or combining events. Journal of
Personality and Social Psychology, 59, 5-21. Loomes, G., & Sugden, R. (1987). Some implications of a more general form of regret. Journal of Economic Theory, 92, 805-824. Lopes, L. L. (1981).
Decision making in the short run. Journal of Experimental Psychology: Human Learning and Memory, 7, 377-385. Lopes, L. L. (1987). Between hope and fear: The psychology of risk. Advances in
Experimental Social Psychology, 20, 255-295. Luce, M. F., Bettman, J. R., & Payne, J. W. (1997). Choice processing in emotionally difficult decisions. Journal of Experimental Psychology: Learning,
Memory, and Cognition, 23, 384-405. Luce, R. D. (1990). Rational versus plausible accounting equivalences in preference judgments.
Psychological Science, 1,225-234.
Luce, R. D., & Fishburn, P. C. (1991). Rank- and sign-dependent linear utility models for finite first-order gambles. Journal of Risk and Uncertainty, 1, 29-59. Lusk, C. M., & Hammond, K. R. (1991).
Judgment in a dynamic task: Microburst forecasting. Journal of Behavioral Decision Making, 4, 55-73. Lynch, J. G., & Srull, T. K. (1982). Memory and attentional factors in consumer choice: Concepts
and research methods. Journal of Consumer Research, 9, 18-37. MacGregor, D., & Slovic, P. (1986). Graphical representation of judgmental information. Human-Computer Interaction, 2, 179-200. Machina,
M.J. (1987). Decision-making in the presence of risk. Science, 236, 537-543. MacLeod, C., & Campbell, L. (1992). Memory accessibility and probability judgments: An experimental evaluation of the
availability heuristic. Journal of Personality and Social Psychology, 63, 890-902. McFadden, D. (1981). Econometric models of probabilistic choice. In C. F. Manski & D. McFadden (Eds.), Structural
analysis of discrete data with econometric applications (pp. 198272). Cambridge, MA: MIT Press. March, J. G. (1978). Bounded rationality, ambiguity, and the engineering of choice. Bell Journal of
Economics, 9, 587-608. Medin, D. L., & Edelson, S. M. (1988). Problem structure and the use of base-rate information from experience. Journal of Experimental Psychology: General, 117, 68-85. Mellers,
B. A., Ord6fiez, L. D., & Birnbaum, M. H. (1992). A change of process theory for contextual effects and preference reversals in risky decision making. Organizational Behavior and Human Decision
Processes, 52, 331-369. Meyer, R. J., & Johnson, E. J. (1989). Information overload and the nonrobustness of linear models: A comment on Keller and Staelin. Journal of Consumer Research, 15, 498-503.
Meyer, R. J., & Kahn, B. (1991). Probabilistic models of consumer choice behavior. In T. S. Robertson & H. H. Kassarjian (Eds.), Handbook of consumer behavior (pp. 85-123). Englewood Cliffs, NJ:
Prentice-Hall. Montgomery, H. (1983). Decision rules and the search for a dominance structure: Towards a process model of decision making. In P. C. Humphreys, O. Svenson, & A. Vari (Eds.), Analyzing
and aiding decision processes (pp. 343-369). North Holland: Amsterdam. Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review,
84, 231-259. Northcraft, G. B., & Neale, M. A. (1987). Experts, amateurs, and real estate: An anchoringand-adjustment perspective on property pricing decisions. Organizational Behavior and Human
Decision Processes, 39, 84-97. Oatley, K., & Jenkins, J. M. (1992). Human emotions: Function and dysfunction. Annual Review of Psychology, 43, 55-85.
Behavioral Decision Research: An O v e r v i e w
Onken, J., Hastie, R., & Revelle, W. (1985). Individual differences in the use of simplification strategies in a complex decision-making task. Journal of Experimental Psychology: Human Perception and
Performance, 11, 14-27. Paese, P. W. (1995). Effects of framing on actual time allocation decisions. Organizational Behavior and Human Decision Processes, 61, 67-76. Payne, J. W. (1976). Task
complexity and contingent processing in decision making: An information search and protocol analysis. Organizational Behavior and Human Performance, 16, 366-387. Payne, J. W. (1994). Thinking aloud:
Insights into information processing. Psychological Science, 5, 241, 245-248. Payne, J. W., Bettman, J. R., & Johnson, E. J. (1988). Adaptive strategy selection in decision making. Journal of
Experimental Psychology: Learning, Memory, and Cognition, 14, 534-552. Payne, J. W., Bettman, J. R., & Johnson, E.J. (1993). The adaptive decision maker. Cambridge: Cambridge University Press. Payne,
J. W., Bettman, J. R., & Luce, M. F. (1996). When time is money: Decision behavior under opportunity-cost time pressure. Organizational Behavior and Human Decision Processes, 66, 131-152. Payne, J.
W., & Braunstein, M. L. (1978). Risky choice: An examination of information acquisition behavior. Memory & Cognition, 6, 554-561. Payne, J. W., Braunstein, M. L., & Carroll, J. S. (1978). Exploring
predecisional behavior: An alternative approach to decision research. Organizational Behavior and Human Performance, 22, 17-44. Payne, J. W., Laughhunn, D.J., & Crum, R. (1984). Muhiattribute risky
choice behavior: The editing of complex prospects. Management Science, 30, 1350-1361. Phillips, L. C., & Edwards, W. (1966). Conservatism in a simple probability inference task. Journal of
Experimental Psychology, 72, 346-354. Pieters, R., Warlop, L., & Hartog, M. (1997). The effects of time pressure and task motivation in visual attention to brands. Advances in Consumer Research, 24,
281-287. Pitz, G. F. (1976). Decision making and cognition. In H. Jungermann & G. de Zeeuw (Eds.), Decision making and change in human affairs (pp. 403-424). Dordrecht, Netherlands: D. Reidel. Puto,
C. P. (1987). The framing of buying decisions.Journal qfConsumer Research, 14, 301-315. Quiggin, J. (1982). A theory of anticipated utility. Journal of Economic Behavior and Organizations, 3,
323-343. Ratneshwar, S., Shocker, A. D., & Stewart, D. W. (1987). Toward understanding the attraction effect: The implications of product stimulus meaningfulness and familiarity. Journal of Consumer
Research, 13, 520-533. Redelmeir, D. A., & Tversky, A. (1990). Discrepancy between medical decisions for individual patients and for groups. New England Journal of Medicine, 322, 1162-1164. Reeves,
T., & Lockhart, R. S. (1993). Distributional vs. singular approaches to probability and errors in probabilistic reasoning. Journal of Experimental Psychology: General, 122, 207226. Rottenstreich, Y.,
& Tversky, A. (1997). Unpacking, repacking, and anchoring: Advances in support theory. Psychological Review, 104, 406-415. Russo, J. E. (1977). The value of unit price information.Journal of
Marketing Research, 14, 193-201. Russo, J. E., & Dosher, B. A. (1983). Strategies for muhiattribute binary choice. Journal of Experimental Psychology: Learning, Memory, and Cognition, 9, 676-696.
Russo, J. E., Johnson, E. J., & Stephens, D. M. (1989). The validity of verbal protocols. Memory & Cognition, 17, 759-769. Russo, J. E., & Leclerc, F. (1994). An eye-fixation analysis of choice
processes for consumer nondurables. Journal of Consumer Research, 21,274-290.
John W. Payne, James R. Bettman, and Mary Frances Luce
Russo, J. E., Medvec, V. H., & Meloy, M. G. (1996). The distortion of information during decisions. Organizational Behavior and Human Decision Processes, 66, 102-! 10. Russo, J. E., & Rosen, L. D.
(1975). An eye fixation analysis of muhiahernative choice. Memory and Cognition, 3, 267-276. Samuelson, W., & Zeckhauser, R. (1988). Status quo bias in decision making. Journal of Risk and
Uncertainty, 1, 7-59. Schkade, D. A., & Johnson, E.J. (1989). Cognitive processes in preference reversals. Organizational Behavior and Human Decision Processes, 44, 203-231. Schkade, D. A., &
Kleinmuntz, D. N. (1994). Information displays and choice processes: Differential effects of organization form and sequence. Organizational Behavior and Human Decision Processes, 57, 319-337.
Schkade, D. A., & Payne, J. W. (1994). How people respond to contingent valuation questions: A verbal protocol analysis of willingness-to-pay for an environmental regulation. Journal of Environmental
Economics and Management, 26, 88-109. Schneider, S. L. (1992). Framing and conflict: Aspiration level contingency, the status quo, and current theories of risky choice. Journal qf Experimental
Psychology: Human Learning and Memory, 18, 1040-1057. Schneider, S. L., & Lopes, L. L. (1986). Reflection in preferences under risk: Who and when may suggest why. Journal of Expeiimental Psychology:
Human Perception and Performance, 12, 535-548. Segal, U. (1989). Axi
|
{"url":"https://epdf.pub/measurement-judgment-and-decision-makingb6efd9cab8a4eacbdbb3f1b0538edd1b63370.html","timestamp":"2024-11-05T16:05:49Z","content_type":"text/html","content_length":"1049649","record_id":"<urn:uuid:9f2b07ab-a4d3-4df4-881e-dd9b5696c6f4>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00159.warc.gz"}
|
Vishal - Techgeek Innovation
Formula of Area and Perimeter The formula of area and perimeter is very simple. It's just Area = π * R^2 and Perimeter = 2 * π * R, where "R" is the radius of the circle and "π" (pi) is the ratio of a circle's circumference to its diameter. The value for pi…
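As a quick sketch, both formulas in Python (the helper names are illustrative, not from the article):

```python
import math

def circle_area(radius: float) -> float:
    """Area of a circle: A = pi * r**2."""
    return math.pi * radius ** 2

def circle_perimeter(radius: float) -> float:
    """Perimeter (circumference) of a circle: P = 2 * pi * r."""
    return 2 * math.pi * radius

area = circle_area(3.0)            # ~28.27
perimeter = circle_perimeter(3.0)  # ~18.85
```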
Definition of what is computer
Define Computer: What You Know and What You Don’t Computer is a word that means a lot of different things to a lot of different people. To some, it’s a pad of paper and a keyboard. To others, it’s
an…
Causes of the ozone layer depletion
Causes of the ozone layer depletion The depletion of the ozone layer is one of the most well-known and catastrophic effects of human activity. However, as with climate change, there are a number of
possible causes for this depletion that…
What are A List Of Prime Numbers? and What does it Mean for Mathematics
What are A List Of Prime Numbers? and What does it Mean for Mathematics? Prime numbers aren't the least bit mysterious: they are simply the whole numbers greater than 1 whose only divisors are 1 and themselves…
Accounting Projects Topics and ideas
Accounting project topics and research documents for final-year undergraduates and master's students. This blog article will provide you with information on several accounting project themes that students might develop for their research work. When selecting a study subject, ensure that…
what is electrical instruments and its Classification
Electrical instruments are devices that measure voltage, power, and current using the mechanical movement of an electromagnetic meter. To evaluate electrical activity and identify the existence of
voltage or current, electrical specialists use electrical measuring equipment. We can measure electrical…
What Is Reverse Polarity in outlet ?
When a receptacle is connected backward, it is known as reverse polarity. When the “hot” wire, commonly known as the black or red wire, is placed on the neutral side and the neutral wire is wired on
the “hot” side,… Continue Reading
What are Analogue and Digital Signals?
Analogue and digital signals are the two forms of information-carrying signals. The primary distinction between the two is that analogue signals are continuous electrical signals, whereas digital signals are discrete (non-continuous) electrical signals. The many examples of different types of…
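A toy sketch of the distinction: sampling a continuous signal and quantizing it to a few discrete levels. The function and parameters are illustrative, not from the article:

```python
import math

def digitize(analogue_value: float, levels: int, v_min: float, v_max: float) -> int:
    """Quantize a continuous (analogue) value into one of `levels` discrete codes
    using a uniform quantizer over [v_min, v_max]."""
    clamped = max(v_min, min(v_max, analogue_value))
    step = (v_max - v_min) / (levels - 1)
    return round((clamped - v_min) / step)

# Sample a continuous sine wave at 8 points, then quantize to 4 discrete levels.
samples = [math.sin(2 * math.pi * t / 8) for t in range(8)]
codes = [digitize(s, levels=4, v_min=-1.0, v_max=1.0) for s in samples]
```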
Different Types of Sensors in Cars
At the moment, several types of Sensors in Cars may be used in current automotive design. These are organized in the automobile engine to notice and address any problems such as repairs, service, and
so on. The sensors in autos…
List out the car engine parts names
Overview of car engine parts names There are several automobile car engine parts names that work together to provide power for vehicles on the road. The primary purpose of a car engine is to create
power from fuel in order…
Classical limit of black hole quantum N-portrait and BMS symmetry
Black hole entropy, denoted by N, is infinite in the (semi-)classical limit. This scaling reveals very important information about the qubit degrees of freedom that carry black hole entropy. Namely, the multiplicity of qubits scales as N, whereas their energy gap and their coupling scale as 1/N. Such a behavior is indeed exhibited by the Bogoliubov-Goldstone degrees of freedom of a quantum-critical state of N soft gravitons (a condensate or a coherent state) describing the black hole quantum portrait. They can be viewed as the Goldstone modes of a broken symmetry acting on the graviton condensate. In this picture Minkowski space naturally emerges as a coherent state of N = ∞ gravitons of infinite wavelength, and it carries an infinite entropy. In this paper we ask what is the geometric meaning (if any) of the classical limit of this symmetry. We argue that the infinite-N limit of the Bogoliubov-Goldstone modes of the critical graviton condensate is described by the recently discussed classical BMS super-translations broken by the black hole geometry. However, the full black hole information can only be recovered for finite N, since the recovery time becomes infinite in the classical limit in which N is infinite.
Non-first-break technology to remove effects of shallow velocity anomalies
As was analytically shown in the first paper (Blias, 2006a), shallow velocity anomalies can cause large lateral variations in stacking velocities. Non-removed shallow velocity anomalies (SVAs) can reduce the quality of the post-stack image and create time distortions in seismic horizons. A conventional approach to dealing with SVAs utilizes first breaks to determine shallow velocity structures (Hampson and Russell, 1984; Yilmaz, 1987; Cox, 1999). In some cases the first-break approach does not work, because of poor first-break determination, a shallow low-velocity layer, or permafrost, among other possible problems. In these cases we can sometimes use reflections to restore SVAs and to remove their effects on seismic data.
There are three main approaches to building a depth velocity model:
1. Direct inversion of NMO velocities and zero-offset times: Dix-type methods, including generalizations of the Dix formula.
2. Different types of reflection tomography.
3. Prestack depth migration velocity analysis.
The first type of approach (direct inversion) works as a layer-stripping method, except for a 1D model, where the Dix formula gives the solution using only two reflections – at the top and at the bottom of the layer. If we estimate an interval velocity for 2D and 3D models (that is, when the local 1D assumption does not hold), we have to know the velocity distribution in the overburden. In the presence of unresolved (that is, unknown) shallow velocity anomalies, any kind of direct inversion will lead to large errors, because the difference between NMO velocities for the models with and without shallow velocity anomalies is large (and increases with reflector depth).
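For reference, the Dix conversion from RMS (stacking) velocities to an interval velocity can be sketched in a few lines. This is the generic textbook form, valid under the locally 1D, flat-layer assumption; it is not code from this paper, and the example numbers are illustrative:

```python
import math

def dix_interval_velocity(v_rms1: float, t1: float, v_rms2: float, t2: float) -> float:
    """Dix formula: interval velocity between two reflectors, from RMS velocities
    v_rms1, v_rms2 and zero-offset two-way times t1 < t2 (1D assumption)."""
    if t2 <= t1:
        raise ValueError("t2 must exceed t1")
    numerator = v_rms2 ** 2 * t2 - v_rms1 ** 2 * t1
    return math.sqrt(numerator / (t2 - t1))

# Illustrative values: 2000 m/s at t = 1.0 s and 2236 m/s at t = 2.0 s
v_int = dix_interval_velocity(2000.0, 1.0, 2236.0, 2.0)
```

As the text notes, this inversion degrades quickly once unresolved shallow anomalies distort the input NMO velocities.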
Figure 1. Two descriptions of shallow velocity anomalies.
The second and third types of approach (any kind of reflection tomography, prestack depth migration velocity analysis) require a reasonable initial approximate velocity model. To find an initial model, we can try a Dix-type inversion, which does not require an initial model, but in the presence of unresolved shallow anomalies it will not give reasonable interval velocities. That is why, before using reflection tomography, prestack depth migration velocity analysis, or ray-based stacking velocity inversion, we need to develop a method to obtain an appropriate initial depth velocity model, including the shallow part of the subsurface. After that, we can use a reflection tomography approach to improve this model. We can say that tomography and prestack depth migration approaches were developed for the situation in which the shallow part of the subsurface is simple and the complex structures are relatively deep. In this paper, I consider the opposite case: the shallow velocity distribution is complicated (shallow velocity anomalies) and we have no shallow reflections, but the deep geology is gentle.
There are two possibilities to describe shallow velocity anomalies. We can use a curvilinear boundary z = F(x), which separates two layers with constant velocities (Fig. 1a). We may also use a laterally inhomogeneous layer with velocity u(x) (Fig. 1b). Let's assume that the interval velocity in the second model gives the same vertical time down to the depth h as the first model. It means that we have a connection between the boundary F(x) and the velocity u(x) (Fig. 1):
where s(x) is the slowness in the layer: s(x) = 1/u(x). After differentiating this equation twice, we come to the connection between the second-order derivatives of the two descriptions:
Comparing formulas (1) and (2) from the paper (Blias, 2006a) and taking into account the last formula, we see that both shallow-velocity-anomaly descriptions give nearly the same NMO velocities. It means that whether we describe a shallow velocity anomaly using a laterally changing interval velocity or a curvilinear boundary, we will come to very nearly the same result.
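A hypothetical numerical sketch of this equivalence: assuming constant velocities v1 above and v2 below the curvilinear boundary F(x), equating vertical times down to depth h gives the laterally varying velocity u(x) of the equivalent flat layer (the function and parameter names are illustrative, not taken from the paper):

```python
def equivalent_velocity(F, v1, v2, h):
    """Velocity u(x) of a flat layer of thickness h whose vertical time
    matches a model with boundary depths F(x) separating constant
    velocities v1 (above) and v2 (below):
        h / u(x) = F(x) / v1 + (h - F(x)) / v2
    """
    return [h / (f / v1 + (h - f) / v2) for f in F]

# Boundary varying between 100 m and 200 m inside a 300 m layer:
F = [100.0, 150.0, 200.0]
u = equivalent_velocity(F, v1=1400.0, v2=1800.0, h=300.0)
# Where the slow upper layer is thicker (larger F), u(x) is lower.
```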
To illustrate this, let’s consider two models with shallow velocity anomalies, as shown in Figure 2. In this example, we use the two descriptions of the shallow velocity anomalies. Both models have
two shallow velocity anomalies. For the first model, these anomalies are described with a curvilinear boundary F(x), as in Figure 2a. The second model, with a laterally changing interval velocity in the
first layer, is shown in Figure 2b. Both descriptions give the same vertical time in the first layer, that is, equation (3) holds. Figure 2c shows NMO parameters for these two models. We see that the
NMO parameters for these two models are almost the same. It means that we can use either of the descriptions to build a shallow velocity model from time arrivals. Which one to use depends on a priori
information about shallow velocities.
Figure 2. Two descriptions of shallow velocity anomaly. Numerical example.
Technology for non-first-break removal of shallow velocity anomaly effects
Based on these results, a special technology has been developed (Blias, 2005a,b). Most of the coding has been developed by Valentina Khatchatrian. This technology includes five main steps.
1. Automatic high-density non-hyperbolic constrained velocity analysis. To run this velocity analysis, we first pick velocities at one CDP gather and put some constraints on how rapidly these
velocities can change laterally and on the possible values of stacking velocities (Blias, 2006b). The result of this step is a velocity section for each CDP point and each time sample. NMO
gathers are considered as a QC for the velocity analysis result.
2. Building initial depth velocity model. For this we use the stacking velocities after the first hyperbolic velocity analysis. First we determine the shallow velocity model using deep reflections
(Blias, 2005b). After that we use a generalization of the Dix formula for a layered medium with laterally varying velocities (Blias, 2003a)
3. Travel-time inversion and depth velocity model improvement. “We use non-hyperbolic travel-times to build a depth velocity model, including shallow velocity structures. For this we use a layer-based optimization approach” (Blias and Khatchatrian, 2003b). We describe interval velocities and boundaries as the sum of some reference (known) functions and linear combinations of basis functions φk(x,y) with coefficients α and β:
Here φk(x,y) are the basis functions; αm,k and βm,k are the unknown coefficients for the m-th layer; ρm(x,y) and Hm(x,y) are the slowness and boundaries of the zero approximation of the depth velocity model (reference model). Then we can calculate the objective function:
Here K is the number of reflections used; tk(S,R) is the observed time for the k-th wave, where S and R stand for the source and receiver points. Scalars um and wm are regularization factors that stabilize the solution of the inverse problem. To find a minimum of the non-linear function F, we use the Newton method.
4. Velocity anomaly replacement (VAR). We use the depth velocity model to remove the influence of the SVA. For a given shallow velocity model it can be done by forward and reverse prestack wavefield
extrapolation (Wapenaar and Berkhout, 1985). We use variable time shifts to remove SVA effects, which is much less expensive (Blias et al, 1985). For this we run raytracing for the obtained depth
velocity model and calculate prestack reflection time arrivals for all boundaries. Then we replace the shallow inhomogeneous layer with a homogeneous one and calculate time arrivals for this
model. The difference between the first and the second set of times is applied to CDP gathers. This procedure moves events on prestack data to the position where they would be if the shallow
layer were homogeneous.
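The optimization in step 3 can be sketched in miniature. This is not the authors' code: the forward model and Jacobian below are placeholders for raytraced traveltimes and their sensitivities, and a single Tikhonov-style term stands in for the layer-wise regularization factors:

```python
import numpy as np

def newton_step(c, t_obs, predict, jacobian, reg=1e-2):
    """One regularized Gauss-Newton step for the coefficients c of the
    basis-function expansion: minimize ||t_obs - predict(c)||^2
    + reg * ||c||^2 (a stand-in for the paper's regularization)."""
    r = t_obs - predict(c)               # traveltime residuals
    J = jacobian(c)                      # sensitivities dT/dc
    A = J.T @ J + reg * np.eye(len(c))   # regularized normal matrix
    return c + np.linalg.solve(A, J.T @ r - reg * c)

# Toy linear "forward model" with two unknown coefficients:
G = np.array([[1.0, 0.5], [0.3, 2.0], [1.2, 0.7]])
t_obs = G @ np.array([2.0, 1.0])         # synthetic observed times
c = np.zeros(2)
for _ in range(20):
    c = newton_step(c, t_obs, lambda x: G @ x, lambda x: G, reg=1e-6)
# c converges near the true coefficients [2.0, 1.0]
```

For the real nonlinear problem, each iteration re-raytraces the model; the regularization weights play the role of the factors um and wm above.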
Model data test
Let's illustrate the above technique on model data with modest deep structures. We will see that the suggested approach is stable and allows us to restore the depth velocity model when we have complicated shallow velocity anomalies caused by several curvilinear boundaries and laterally inhomogeneous layers. Figure 3a shows the depth velocity model boundaries, and the interval velocities are displayed in Figure 3b.
Figure 3a. Boundaries.
Figure 3b. Interval velocities.
From these figures we see that the shallow part of the subsurface is complicated. The bottom of this shallow part is set at 300 m. Above this boundary there are three curvilinear layers with laterally changing interval velocities; the average velocity in this shallow part is 1600 m/s. For this model, synthetic CDP gathers have been calculated with maximum offset / reflector depth = 1.5 and shot interval = receiver interval = 32 m. All 5 steps have been run on these synthetic data. Figure 4a shows the velocity grid after automatic continuous velocity analysis. We see large lateral stacking-velocity oscillations increasing with depth.
Figure 4. Velocity grid before (a) and after (b) VR.
An initial velocity model was built using the methods from the author's papers (Blias, 2003, 2005). To build the initial depth velocity model, we need to assign an average velocity in the first layer. We put h1 = 240 m and an average velocity in the first layer of 1200 m/s, while we know that h1 is actually 300 m and the average velocity in this layer is 1600 m/s. That is, we have deliberately used wrong a priori parameters for the first layer to show that they should not have much influence on the final result of shallow velocity anomaly replacement.
To improve this model, traveltime optimization inversion was applied using non-hyperbolic traveltimes extracted after residual velocity analysis.
Because we used an incorrect value for the bottom of the first layer with velocity anomalies (240 m instead of 300 m), the recovered velocity in this layer differs from the original average velocity. As was mentioned above, however, the vertical time should still be recovered with reasonable accuracy. Figure 5 shows vertical times for the original model (blue) and for the model after traveltime inversion (brown).
Figure 5. Vertical times in the first layer for initial model (blue) and recovered model (brown).
After determining the interval velocity in the first layer, we use a generalization of Dix's formulas (Blias, 2003) to find interval velocities for the other layers. The results of these calculations give us the initial depth velocity model, which is needed for the optimization traveltime inversion. Figure 6 shows the boundaries of the initial model (brown) and after optimization (blue). Comparing these with the original model boundaries (Fig. 3a), we see acceptable similarity between them. All structures were recovered correctly despite using the wrong thickness for the first layer.
Figure 6. Boundaries of original model (brown) and after time-inversion (blue).
The model after traveltime inversion was used to calculate VR corrections. These corrections were applied to CDP gathers. They transform moveout curves to hyperbolic ones (strictly speaking, the new NMO curves can be better approximated with a hyperbola than the original ones). VR largely removed the non-hyperbolic distortions caused by shallow anomalies.
VR also significantly improved the velocity grid (Fig. 4b) and the structural imaging. Figure 7 shows poststack sections before (a) and after (b) VR. We can see that after VR the poststack data look more similar to the depth velocity model. From this we come to the conclusion that, for the shallow part of the section, using a laterally inhomogeneous layer instead of a complicated velocity model is acceptable. It allows us to restore shallow velocity anomalies using deep reflections and to eliminate their influence on prestack gathers with sufficient accuracy.
Figure 7. Poststack sections before (a) and after (b) VR.
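The VR corrections can be illustrated by a much simplified vertical-incidence version (the actual procedure raytraces prestack times for all boundaries): the shift applied to a trace is the traveltime through the real shallow layer minus the traveltime through its homogeneous replacement. The numbers below are illustrative only.

```python
def var_time_shift(thickness, v_actual, v_replacement):
    """Two-way vertical-incidence correction for replacing a shallow
    layer of the given thickness: time through the actual velocity
    minus time through the homogeneous replacement velocity."""
    return 2.0 * thickness * (1.0 / v_actual - 1.0 / v_replacement)

# A 300 m shallow layer with a slow anomaly (1400 m/s) replaced by the
# background 1600 m/s: events arrive about 54 ms later in the real model.
dt = var_time_shift(300.0, 1400.0, 1600.0)
```

Subtracting such shifts from the CDP gathers moves events to where they would be if the shallow layer were homogeneous, which is exactly the goal of step 4.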
Real data example
The quality of the raw data did not allow us to use first breaks at all, so the only information that we could use was reflection traveltimes (Fig. 8).
Figure 8. Common shot gather.
After automatic high-density non-hyperbolic velocity analysis we determined NMO curves with high accuracy. Figure 9 shows initial CDP gathers (a), NMO gathers after hyperbolic velocity analysis (b)
and after non-hyperbolic residual velocity analysis (c). We see strong non-hyperbolic NMO and flattened arrivals after automatic non-hyperbolic velocity analysis.
Figure 9a. Initial CDP gathers.
Figure 9b. Gathers after hyperbolic NMO.
Figure 9c. Gathers after non-hyperbolic NMO.
Picked traveltimes were used to build a depth velocity model using a generalization of Dix's formulas and an optimization approach. The result is shown in Figure 10. We see a strong shallow velocity anomaly in the interval 12–18 km.
Figure 10a. Depth velocity model. Boundaries. Figure 10b. Depth velocity model. Interval velocities.
After depth velocity model building, we replaced the shallow velocity anomalies. To the improved CDP gathers, the continuous automatic constrained velocity analysis was applied (Blias, 2006b). Because of the shallow velocity anomaly replacement, the stacking velocities no longer have the specific variations caused by shallow inhomogeneity (Fig. 11). Figure 12 shows seismic sections before (a) and after (b) shallow velocity anomaly replacement. We can see significant changes in the interval between 15 and 22 km. Velocity replacement eliminated the horizon fluctuations within this interval and improved the stack.
Figure 11a. Stacking velocities before VAR. Figure 11b. Stacking velocities after VAR.
Figure 12 a. Seismic section before VAR. Figure 12 b. Seismic section after VAR
Acknowledgements
I would like to thank Michael Burianyk (Shell Canada) and Jason Noble (Edge Technology) for helping me with this paper. I would also like to acknowledge Valentina Khatchatrian, who developed most of the software, and Samvel Khatchatrian for useful discussions.
References
Blias, E. A., Levit, A. N., and Ferentsi, V. N., 1985, Method of taking into account shallow anomalies while processing CDP data: Directions and Methodology in Oil and Gas Exploration, Nauka (in Russian).
Blias, E., 2003, Dix's type formulae for a medium with laterally changing velocity: 73rd Meeting, Society of Exploration Geophysicists, Expanded Abstracts, 706-709.
Blias, E., and Khatchatrian, V., 2003, Optimization approach to determine interval velocities in a medium with laterally inhomogeneous curvilinear layers: 73rd Meeting, Society of Exploration Geophysicists, Expanded Abstracts, 670-763.
Blias, E., 2005a, Stacking velocities in the presence of shallow velocity anomalies: 75th Meeting, Society of Exploration Geophysicists, Expanded Abstracts, 2193-2196.
Blias, E., 2005b, Determination of shallow velocity anomalies using deep reflections: Meeting, Society of Exploration Geophysicists, Expanded Abstracts, 2585-2588.
Blias, E., 2006a, Anomalous stacking velocities - critique, analysis, explanations and new insights: RECORDER, V. 31, N5, 42-48.
Blias, E., 2006b, Automatic high-density constrained velocity analysis: RECORDER, V. 31, N6, 25-30.
Cox, M. J. G., 1999, Static corrections for seismic reflection surveys: Society of Exploration Geophysicists.
Hampson, D., and Russell, B., 1984, First break interpretation using generalized linear inversion: 54th Meeting, Society of Exploration Geophysicists, Expanded Abstracts, 532-534.
Yilmaz, Ö., 1987, Seismic data processing: Society of Exploration Geophysicists.
Taner, M. T., Wagner, D. E., Baysal, E., and Lu, L., 1998, A unified method for 2-D and 3-D refraction statics: Geophysics, 63, 260-274.
Ancient Greek Philosophy Part 7 - Aristotle
Aristotle is likely the most far-reaching philosopher to have ever lived, even though only about 30 of the supposed 200 works he wrote survived to the modern era. Aristotle covered
every possible field of philosophy available to him at the time, and is credited as the inventor of the first formal system of logic. It's also worth noting that Aristotle's work had a profound
effect on Medieval Christian Philosophy via the works of Thomas Aquinas. Covering the breadth of Aristotle's works in a short article is impossible, so here I'll focus on Nicomachean Ethics and
Aristotle's views on the nature of human good.
"Every art and every inquiry, and similarly every action and pursuit, is thought to aim at some good; and for this reason the good has rightly been declared to be that at which all things aim." -
Nicomachean Ethics Book 1.1
This is the first sentence of Nicomachean Ethics, and already Aristotle is very much in alignment with my own views on the nature of good. That is, good is subjective and relative to the needs and
wants of living entities. Aristotle is not defining good as some objective reality that exists independently of our ability to reach it, the way Plato would. Aristotle understood the true nature of
the term. The good is simply what we want. With this definition established, Aristotle attempts to identify the highest good.
"If, then, there is some end of the things we do, which we desire for its own sake (everything else being desired for the sake of this)..., clearly this must be the good and the chief good." -
Nicomachean Ethics Book 1.2
To determine the highest human good, we must find the thing which humans desire above all else, which is the ultimate end of all other means. First Aristotle considers political science as a possible
answer, as politics often seeks to inform and direct the purposes of the people. But even political science is itself a means to an end.
"In view of the fact that all knowledge and every pursuit aims at some good, what it is that we say political science aims at and what is the highest of all goods achievable by action. Verbally there
is very general agreement; for both the general run of men and people of superior refinement say that it is happiness, and identify living well and doing well with being happy; but with regard to
what happiness is they differ, and the many do not give the same account as the wise." - Nicomachean Ethics Book 1.4
And so Aristotle arrives at the conclusion that the highest human good is happiness, but that happiness is something people find difficult to define specifically, because point-of-view varies from
person to person. Aristotle does his best to define happiness anyway, but he is unable to shake the subjective nature of the term.
"...happiness is an activity of soul in accordance with perfect virtue." - Nicomachean Ethics Book 1.13
Happiness as described by Aristotle is much more than just pleasure or fulfillment. The happiness of which he speaks, as an activity of soul, is something exclusive to humans, or at least sapient
life, as only creatures who exhibit rational thought can be said to have souls, at least for the purposes of this discussion. But what of virtue?
"Virtue is a state of character concerned with choice...the state of character which makes a man good and which makes him do his own work well." - Nicomachean Ethics Book 2.6
Since we are considering good to mean "what we want," the full definition of happiness (the highest human good) being expressed by Aristotle is something like; "The ongoing state of being in which a
person has made (and continues to make) choices that have made said person into who they want to be, thereby enabling them to effectively achieve their chosen purpose in life." A byproduct of
defining happiness this way is the assertion that no one is truly happy by accident. Only by knowing what we want, and deciding to become who we need to be in order to attain it, can one achieve the
highest form of happiness.
Other blogs on the Ancient Greeks:
Part 2 - Heraclitus of Ephesus
Part 4 - Parmenides and Zeno of Elea
A series is, informally speaking, the sum of the terms of a sequence. Finite sequences and series have defined first and last terms, whereas infinite sequences and series continue indefinitely.
The above text is a snippet from Wikipedia: Series (mathematics)
and as such is available under the Creative Commons Attribution/Share-Alike License.
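The finite/infinite distinction above can be made concrete with partial sums — for instance, the geometric series 1/2 + 1/4 + 1/8 + … has partial sums that approach 1:

```python
def partial_sum(n):
    """Sum of the first n terms of the geometric series 1/2 + 1/4 + 1/8 + ..."""
    return sum(0.5 ** k for k in range(1, n + 1))

# partial_sum(1) = 0.5, partial_sum(2) = 0.75, partial_sum(3) = 0.875;
# the infinite series is the limit of these partial sums, which is 1.
```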
Series+ is a Canadian French-language Category A specialty channel devoted to scripted comedy and dramatic programming. The channel is a joint venture between Astral Media and Shaw Media.
The above text is a snippet from Wikipedia: Series+
and as such is available under the Creative Commons Attribution/Share-Alike License.
1. A number of things that follow on one after the other or are connected one after the other.
2. A television or radio program which consists of several episodes that are broadcast in regular intervals
Friends was one of the most successful television series in recent years.
3. A group of episodes of a television or radio program broadcast in regular intervals with a long break between each group, usually with one year between the beginning of each.
4. The sum of the terms of a sequence.
5. A group of matches between two sides, with the aim being to win more matches than the opposition.
6. An unranked taxon.
1. Connected one after the other in a circuit.
You have to connect the lights in series for them to work properly.
The above text is a snippet from Wiktionary: series
and as such is available under the Creative Commons Attribution/Share-Alike License.
9 marbles and a weight balance – which is the heaviest one?
So, you have 9 marbles and a balance. Or weight scale. Or weight balance. Or whatever you want to call it.
Essentially, this:
My awesome drawing of a weight balance and 9 marbles
Because I draw like a 2 year old, some clarification as to what you see above. A real weight balance (or scale) looks slightly like the apparatus I drew above. You can put things in the little
“buckets” and the scale will tip to the heavier side.
Yeah, I probably shouldn’t have even bothered with the picture. In any case, on to the riddle.
1) You have 9 marbles (as seen).
2) 8 of them are exactly the same weight. The remaining marble is slightly heavier than the others. You don’t know which is which.
3) You are allowed to use/set the scale TWICE in total. In other words, you get to take 2 measurements (or “comparisons”) here.
4) Each “bucket” has enough room for multiple marbles. As long as you have the same number of marbles in each side, the heavy marble has just enough weight to tip the scale to its side.
How do you find the heavy marble?
(scroll down slowly for tips and, eventually, the answer!)
TIP #1: You won’t be putting all the marbles in for your first measurement (that wouldn’t work because we have an odd number!). You will be leaving at least 1 aside.
TIP #2: Clearly, you have to put an even number of marbles in each side for your first measurement. So either 1+1, 2+2, 3+3, or 4+4. Don’t forget, all the marbles are the same weight except for the 1
heavier one. That’s an important tidbit.
TIP #3: How many groups of marbles can you compare at once? If you said 2, think again!
TIP #4 (a big one, and part of the answer): Pretend you only had 3 marbles, but could only use the scale once. Could you figure out which of the 3 was the heavy one? (the answer is yes). How?
TIP #5: below are some colors to help you….
We’ve separated the marbles into 3 groups. If you were stuck, the picture might help you with your first measurement.
Tip #6: Next image (solution to the 1st measurement) is below:
Start by putting 3 marbles into each side. In this case, we leave the black ones out.
One of 3 things will happen:
• The red side may go down (in which case we know the red group contains the heaviest marble), or
• The blue side may go down (in which case we know the blue group contains the heaviest marble), or
• The sides will balance (in which case we know the black group must contain the heavy marble)
Either way we’ve narrowed it down from 9 possibilities to 3 on our first measurement.
Let’s keep going.
For argument’s sake, let’s pretend it was the blue marbles that had the heavy one in the measurement above. Let’s take our 2nd and final measurement.
Similar to the first time around, we split these up evenly into 3 groups (3 groups of ONE this time).
One of three things will happen again:
• If #4 goes down, we know it is the heavy one
• If #5 goes down, we know it is the heavy one
• If the scale balances, we know #6 must be the heavy one!
Further thought (if that was too easy for you!):
Finding the heavy marble took 2 measurements. How many measurements would we need if we had 27 marbles (26 being the same weight and 1 being heavier)?
What if we had 81 marbles (80 being even in weight and 1 being heavier)?
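For the follow-up questions above: each weighing has three possible outcomes (left heavier, right heavier, balanced), so n marbles need ⌈log₃ n⌉ weighings — 3 for 27 marbles and 4 for 81. Here's a small simulation of the split-into-thirds strategy (a sketch of the idea, not code from the post):

```python
def find_heavy(marbles, heavy):
    """Locate the heavy marble by repeatedly splitting into thirds,
    returning (heavy marble, number of weighings used)."""
    weighings = 0
    while len(marbles) > 1:
        third = (len(marbles) + 2) // 3          # group size (round up)
        left = marbles[:third]
        right = marbles[third:2 * third]         # same size as left
        off = marbles[2 * third:]                # left off the scale
        weighings += 1                           # one use of the balance
        if heavy in left:                        # left pan goes down
            marbles = left
        elif heavy in right:                     # right pan goes down
            marbles = right
        else:                                    # pans balance
            marbles = off
    return marbles[0], weighings

# 9 marbles -> 2 weighings, 27 -> 3, 81 -> 4.
print(find_heavy(list(range(27)), heavy=13))     # (13, 3)
```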
Well, this is the first brain teaser I’ve put up since 2006 (7 years… yikes!). But more should be coming soon. If you found this one interesting, you can hit up the “Riddles, Puzzles, Brain Teasers”
section by clicking the category below or finding it in menu at the top of the site.
12 Comments
1. Brian on February 4, 2013
I actually figured this one out in a couple minutes - usually I'm terrible at puzzles/riddles. Looking forward to the next one :D
2. lilly on June 19, 2013
good one
3. Anonymous on November 11, 2018
What's the answer
□ Matt Gadient on November 11, 2018
It's in with the final 2 images, but in case they're not loading:
1. We first put marbles in groups of 3. We want to see which group contains the heavy marble. To do this, put 3 marbles in each side (3 left, 3 right, 3 unused)
2. If the scale tips to a side, we know that group of 3 has the heavy marble. If it does not tip to a side (if it balances), we know the "unused" group of 3 contains the heavy marble. Keep
the heavy group of 3 marbles and discard the rest.
3. Since we now have 3 marbles (1 of which is the heavy one), put 1 in each side (1 left, 1 right, 1 unused)
4. Set the scale again. If it tips to a side, we know that marble is the heavy one! If instead it balances, we know the one "unused" marble is the heavy one!
4. Sarah on June 17, 2020
I have an alternative answer let me know what you think and if it has a higher probability of isolating the heavy one.
Remove one marble and put four and four, one group on each side. If they are balanced, the one you removed is the heaviest.
If they are unbalanced select the four from the heavier side. Remove one and isolate.
Replace that one with the one you originally removed because it has been ruled out. And do your second measure with two and two. If they are balanced the one you isolated is the heaviest.
If they are unbalanced on the side where you substituted the known marble. You can identify the heaviest.
If they are unbalanced on the 2 unknown you will have a 50/50 chance of choosing correctly. This solution provides 3 ways you could be absolutely sure.
□ Craig H. on July 9, 2021
The problem with your solution is that the "worst case" scenario would involve having to use the scale 3 times.
Given: Set A = (1, 2, 3, 4) and Set B = (5, 6, 7, 8), and Set C = (9)
On Measurement #1, we find that Set A is heavier than Set B
As per your solution, we discard Sets B and C and split Set A by isolating 1 of the marbles -- so now: Set A=(1, 2), Set B=(3), and Set C=(4).
Since you can't compare a set of 2 marbles against a set of 1 marble...we're left with just comparing B & C in Measurement #2. If either of those is heavier, great...the problem is solved.
However, if they're balanced, then we then know that the heavy marble is in Set A....and that means making Measurement #3.
5. Anonymous on November 18, 2020
Awesome I solved it first try
6. Bob M. on April 12, 2023
I just came across your post last night. It's similar to a puzzle that was presented to me by a colleague while on a business trip to the Far East about twenty years ago. In this puzzle there are
nine identical marbles and eight are the same weight (same as in your scenario), but in this case one of the marbles is slightly heavier OR slightly lighter, and you are allowed to use the
scale THREE times in total to determine which marble is the odd one, and whether it is heavier or lighter than the other eight.
7. Jay on June 15, 2023
Why can we not do a 4-4 split and find out which one is heavier from that?
□ Matt Gadient on June 15, 2023
The condition that restricts you to a maximum of 2 comparisons is the problematic factor here. In the 4-4 scenario, imagine that one side of the balance went down. You now have 4 marbles, 1
of which is heavier than the others, and can only set the balance once. How do you determine which is the heavy marble?
8. gungsukma on July 2, 2024
I thought of a solution I think is pretty nice.
The coins are arranged like this:
#0 #1 #2
#3 #4 #5
#6 #7 #8
First, we compare the weight of the left coins (#0 #3 #6) vs the right coins (#2 #5 #8)
so we know whether the heaviest coin is in the left, middle, or right.
Then, we compare the weight of the top coins (#0 #1 #2) vs the bottom coins (#6 #7 #8)
so we know whether the heaviest coin is in the top, middle, or bottom.
Then we find the coin by coordinate.
9. oaelle on September 27, 2024
TYSM for the answer
Simple Variable Mass 6DOF (Quaternion)
Implement quaternion representation of six-degrees-of-freedom equations of motion of simple variable mass with respect to body axes
Aerospace Blockset / Equations of Motion / 6DOF
The Simple Variable Mass 6DOF (Quaternion) block implements a quaternion representation of six-degrees-of-freedom equations of motion of simple variable mass with respect to body axes.
For a description of the coordinate system and the translational dynamics, see the description for the Simple Variable Mass 6DOF (Euler Angles) block. Aerospace Blockset™ uses quaternions that are
defined using the scalar-first convention. For more information on the integration of the rate of change of the quaternion vector, see Algorithms.
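As a side illustration (this is not the block's internal code), the scalar-first quaternion kinematics that such a representation integrates are dq/dt = ½ q ⊗ (0, ω), with ω the body angular rate vector. A minimal Euler-integration sketch:

```python
import math

def quat_derivative(q, w):
    """Scalar-first quaternion rate: dq/dt = 0.5 * q ⊗ (0, p, qr, r),
    where q = [q0, q1, q2, q3] and w = [p, qr, r] are body rates (rad/s)."""
    q0, q1, q2, q3 = q
    p, qr, r = w
    return [0.5 * (-q1 * p - q2 * qr - q3 * r),
            0.5 * (q0 * p + q2 * r - q3 * qr),
            0.5 * (q0 * qr - q1 * r + q3 * p),
            0.5 * (q0 * r + q1 * qr - q2 * p)]

def step(q, w, dt):
    """Explicit-Euler step with renormalization to keep |q| = 1."""
    dq = quat_derivative(q, w)
    q = [qi + di * dt for qi, di in zip(q, dq)]
    n = math.sqrt(sum(qi * qi for qi in q))
    return [qi / n for qi in q]

# Constant roll rate p = 0.1 rad/s for 10 s from the identity attitude:
q = [1.0, 0.0, 0.0, 0.0]
for _ in range(1000):
    q = step(q, [0.1, 0.0, 0.0], 0.01)
# q approaches [cos(0.5), sin(0.5), 0, 0], i.e. a 1 rad roll rotation.
```

In the block itself, the integration scheme and any quaternion-norm correction are governed by its Algorithms section; this sketch only shows the kinematic relation.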
The block assumes that the applied forces are acting at the center of gravity of the body.
F[xyz] — Applied forces
three-element vector
Applied forces, specified as a three-element vector.
Data Types: double
M[xyz](N-m) — Applied moments
three-element vector
Applied moments, specified as a three-element vector.
Data Types: double
dm/dt (kg/s) — Rate of change of mass
One or more rates of change of mass (positive if accreted, negative if ablated), specified as a scalar.
Data Types: double
V[re] — Relative velocity
three-element vector
One or more relative velocities, specified as a three-element vector, at which the mass is accreted to or ablated from the body in body-fixed axes.
To enable this port, select Include mass flow relative velocity.
Data Types: double
V[e] — Velocity in flat Earth reference frame
three-element vector
Velocity in the flat Earth reference frame, returned as a three-element vector.
Data Types: double
X[e] — Position in flat Earth reference frame
three-element vector
Position in the flat Earth reference frame, returned as a three-element vector.
Data Types: double
φ θ ψ (rad) — Euler rotation angles
three-element vector
Euler rotation angles [roll, pitch, yaw], returned as three-element vector, in radians.
Data Types: double
DCM[be] — Coordinate transformation
3-by-3 matrix
Coordinate transformation from flat Earth axes to body-fixed axes, returned as a 3-by-3 matrix.
Data Types: double
V[b] — Velocity in body-fixed frame
three-element vector
Velocity in body-fixed frame, returned as a three-element vector.
Data Types: double
ω[b] (rad/s) — Angular rates in body-fixed axes
three-element vector
Angular rates in body-fixed axes, returned as a three-element vector, in radians per second.
Data Types: double
dω[b]/dt — Angular accelerations
three-element vector
Angular accelerations in body-fixed axes, returned as a three-element vector, in radians per second squared.
Data Types: double
A[bb] — Accelerations in body-fixed axes
three-element vector
Accelerations in body-fixed axes with respect to body frame, returned as a three-element vector.
Data Types: double
Fuel — Fuel tank status
Fuel tank status, returned as:
• 1 — Tank is full.
• 0 — Tank is neither full nor empty.
• -1 — Tank is empty.
Data Types: double
A[be] — Accelerations with respect to inertial frame
three-element vector
Accelerations in body-fixed axes with respect to inertial frame (flat Earth), returned as a three-element vector. You typically connect this signal to the accelerometer.
This port appears only when the Include inertial acceleration check box is selected.
Data Types: double
Units — Input and output units
Metric (MKS) (default) | English (Velocity in ft/s) | English (Velocity in kts)
Input and output units, specified as Metric (MKS), English (Velocity in ft/s), or English (Velocity in kts).
Units | Forces | Moment | Acceleration | Velocity | Position | Mass | Inertia
Metric (MKS) | Newton | Newton-meter | Meters per second squared | Meters per second | Meters | Kilogram | Kilogram meter squared
English (Velocity in ft/s) | Pound | Foot-pound | Feet per second squared | Feet per second | Feet | Slug | Slug foot squared
English (Velocity in kts) | Pound | Foot-pound | Feet per second squared | Knots | Feet | Slug | Slug foot squared
Programmatic Use
Block Parameter: units
Type: character vector
Values: Metric (MKS) | English (Velocity in ft/s) | English (Velocity in kts)
Default: Metric (MKS)
Mass Type — Mass type
Simple Variable (default) | Fixed | Custom Variable
Mass type, specified according to the following table.
| Mass Type | Description |
| Fixed | Mass is constant throughout the simulation. |
| Simple Variable | Mass and inertia vary linearly as a function of mass rate. |
| Custom Variable | Mass and inertia variations are customizable. |
The Simple Variable selection conforms to the equations of motion in Algorithms.
Programmatic Use
Block Parameter: mtype
Type: character vector
Values: Fixed | Simple Variable | Custom Variable
Default: Simple Variable
Representation — Equations of motion representation
Quaternion (default) | Euler Angles
Equations of motion representation, specified according to the following table.
| Representation | Description |
| Quaternion | Use quaternions within equations of motion. |
| Euler Angles | Use Euler angles within equations of motion. |
The Quaternion selection conforms to the equations of motion in Algorithms.
Programmatic Use
Block Parameter: rep
Type: character vector
Values: Euler Angles | Quaternion
Default: 'Quaternion'
Initial position in inertial axes [Xe,Ye,Ze] — Position in inertial axes
[0 0 0] (default) | three-element vector
Initial location of the body in the flat Earth reference frame, specified as a three-element vector.
Programmatic Use
Block Parameter: xme_0
Type: character vector
Values: '[0 0 0]' | three-element vector
Default: '[0 0 0]'
Initial velocity in body axes [U,v,w] — Velocity in body axes
[0 0 0] (default) | three-element vector
Initial velocity in body axes, specified as a three-element vector, in the body-fixed coordinate frame.
Programmatic Use
Block Parameter: Vm_0
Type: character vector
Values: '[0 0 0]' | three-element vector
Default: '[0 0 0]'
Initial Euler orientation [roll, pitch, yaw] — Initial Euler orientation
[0 0 0] (default) | three-element vector
Initial Euler orientation angles [roll, pitch, yaw], specified as a three-element vector, in radians. Euler rotation angles are those between the body and north-east-down (NED) coordinate systems.
Programmatic Use
Block Parameter: eul_0
Type: character vector
Values: '[0 0 0]' | three-element vector
Default: '[0 0 0]'
Initial body rotation rates [p,q,r] — Initial body rotation
[0 0 0] (default) | three-element vector
Initial body-fixed angular rates with respect to the NED frame, specified as a three-element vector, in radians per second.
Programmatic Use
Block Parameter: pm_0
Type: character vector
Values: '[0 0 0]' | three-element vector
Default: '[0 0 0]'
Initial mass — Initial mass
1.0 (default) | scalar
Initial mass of the rigid body, specified as a double scalar.
Programmatic Use
Block Parameter: mass_0
Type: character vector
Values: '1.0' | double scalar
Default: '1.0'
Empty mass — Empty mass
0.5 (default) | scalar
Empty mass of the body, specified as a double scalar.
Programmatic Use
Block Parameter: mass_e
Type: character vector
Values: double scalar
Default: '0.5'
Full mass — Full mass of body
2.0 (default) | scalar
Full mass of the body, specified as a double scalar.
Programmatic Use
Block Parameter: mass_f
Type: character vector
Values: double scalar
Default: '2.0'
Empty inertia matrix — Empty inertia matrix
eye(3) (default) | 3-by-3 matrix
Inertia tensor matrix for the empty inertia of the body, specified as 3-by-3 matrix.
Programmatic Use
Block Parameter: inertia_e
Type: character vector
Values: 'eye(3)' | 3-by-3 matrix
Default: 'eye(3)'
Full inertia matrix — Full inertia of body
2*eye(3) (default) | 3-by-3 matrix
Inertia tensor matrix for the full inertia of the body, specified as 3-by-3 matrix.
Programmatic Use
Block Parameter: inertia_f
Type: character vector
Values: '2*eye(3)' | 3-by-3 matrix
Default: '2*eye(3)'
Gain for quaternion normalization — Gain
1.0 (default) | scalar
Gain to maintain the norm of the quaternion vector equal to 1.0, specified as a double scalar.
Programmatic Use
Block Parameter: k_quat
Type: character vector
Values: 1.0 | double scalar
Default: 1.0
Include mass flow relative velocity — Mass flow relative velocity port
off (default) | on
Select this check box to add a mass flow relative velocity port. This is the relative velocity at which the mass is accreted or ablated.
Programmatic Use
Block Parameter: vre_flag
Type: character vector
Values: off | on
Default: off
Include inertial acceleration — Include inertial acceleration port
off (default) | on
Select this check box to add an inertial acceleration port.
To enable the A[be] port, select this parameter.
Programmatic Use
Block Parameter: abi_flag
Type: character vector
Values: 'off' | 'on'
Default: off
State Attributes
Assign a unique name to each state. You can use state names instead of block paths during linearization.
• To assign a name to a single state, enter a unique name between quotes, for example, 'velocity'.
• To assign names to multiple states, enter a comma-separated list surrounded by braces, for example, {'a', 'b', 'c'}. Each name must be unique.
• If a parameter is empty (' '), no name is assigned.
• The state names apply only to the selected block with the name parameter.
• The number of states must divide evenly among the number of state names.
• You can specify fewer names than states, but you cannot specify more names than states.
For example, you can specify two names in a system with four states. The first name applies to the first two states and the second name to the last two states.
• To assign state names with a variable in the MATLAB® workspace, enter the variable without quotes. A variable can be a character vector, cell array, or structure.
Position: e.g., {'Xe', 'Ye', 'Ze'} — Position state name
'' (default) | comma-separated list surrounded by braces
Position state names, specified as a comma-separated list surrounded by braces.
Programmatic Use
Block Parameter: xme_statename
Type: character vector
Values: '' | comma-separated list surrounded by braces
Default: ''
Velocity: e.g., {'U', 'v', 'w'} — Velocity state name
'' (default) | comma-separated list surrounded by braces
Velocity state names, specified as comma-separated list surrounded by braces.
Programmatic Use
Block Parameter: Vm_statename
Type: character vector
Values: '' | comma-separated list surrounded by braces
Default: ''
Quaternion vector: e.g., {'qr', 'qi', 'qj', 'qk'} — Quaternion vector state name
'' (default) | comma-separated list surrounded by braces
Quaternion vector state names, specified as a comma-separated list surrounded by braces.
Programmatic Use
Block Parameter: quat_statename
Type: character vector
Values: '' | comma-separated list surrounded by braces
Default: ''
Body rotation rates: e.g., {'p', 'q', 'r'} — Body rotation state names
'' (default) | comma-separated list surrounded by braces
Body rotation rate state names, specified comma-separated list surrounded by braces.
Programmatic Use
Block Parameter: pm_statename
Type: character vector
Values: '' | comma-separated list surrounded by braces
Default: ''
Mass: e.g., 'mass' — Mass state name
'' (default) | character vector
Mass state name, specified as a character vector.
Programmatic Use
Block Parameter: mass_statename
Type: character vector
Values: '' | character vector
Default: ''
Algorithms
The following equation governs the integration of the rate of change of the quaternion vector. The gain K drives the norm of the quaternion state vector toward 1.0 should ε become nonzero. Choose the value of this gain with care: a large value improves the decay rate of the error in the norm, but it also slows the simulation because it introduces fast dynamics. An error in the magnitude of one element of the quaternion vector is spread equally among all the elements, potentially increasing the error in the state vector.
$$\begin{bmatrix} \dot{q}_0 \\ \dot{q}_1 \\ \dot{q}_2 \\ \dot{q}_3 \end{bmatrix} = \frac{1}{2} \begin{bmatrix} 0 & -p & -q & -r \\ p & 0 & r & -q \\ q & -r & 0 & p \\ r & q & -p & 0 \end{bmatrix} \begin{bmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \end{bmatrix} + K \varepsilon \begin{bmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \end{bmatrix}$$

$$\varepsilon = 1 - \left( q_0^2 + q_1^2 + q_2^2 + q_3^2 \right)$$
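As an illustrative numerical sketch (plain Python rather than Simulink; the body rates, the gain K, and the explicit Euler step below are all assumed values), the normalization term can be seen holding the quaternion norm near 1:

```python
import math

def quat_deriv(quat, rates, K=1.0):
    """Quaternion rate of change for body rates (p, q, r); the gain K
    drives the norm of the quaternion back toward 1."""
    q0, q1, q2, q3 = quat
    p, q, r = rates
    dq = [
        0.5 * (-p * q1 - q * q2 - r * q3),
        0.5 * ( p * q0 + r * q2 - q * q3),
        0.5 * ( q * q0 - r * q1 + p * q3),
        0.5 * ( r * q0 + q * q1 - p * q2),
    ]
    eps = 1.0 - sum(c * c for c in quat)  # error in the squared norm
    return [dq[i] + K * eps * quat[i] for i in range(4)]

# Forward-Euler integration with assumed constant body rates (rad/s)
quat = [1.0, 0.0, 0.0, 0.0]
dt = 0.001
for _ in range(5000):
    d = quat_deriv(quat, (0.1, -0.2, 0.05), K=1.0)
    quat = [quat[i] + dt * d[i] for i in range(4)]

norm = math.sqrt(sum(c * c for c in quat))
print(abs(norm - 1.0) < 1e-3)  # True: the norm is held near 1
```

With K = 0 the same integration would slowly drift away from unit norm; the Kε term is what pulls it back, at the cost of adding dynamics of its own.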
[1] Stevens, Brian, and Frank Lewis. Aircraft Control and Simulation. 2nd ed. Hoboken, NJ: John Wiley & Sons, 2003.
[2] Zipfel, Peter H. Modeling and Simulation of Aerospace Vehicle Dynamics. 2nd ed. Reston, VA: AIAA Education Series, 2007.
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using Simulink® Coder™.
Version History
Introduced in R2006a
235. Lowest Common Ancestor of a Binary Search Tree—LeetCode(Python)
235. Lowest Common Ancestor of a Binary Search Tree — LeetCode(Python)
I got you!
Given a binary search tree (BST), find the lowest common ancestor (LCA) node of two given nodes in the BST.
According to the definition of LCA on Wikipedia: “The lowest common ancestor is defined between two nodes p and q as the lowest node in T that has both p and q as descendants (where we allow a node
to be a descendant of itself).”
Example 1:
Input: root = [6,2,8,0,4,7,9,null,null,3,5], p = 2, q = 8
Output: 6
Explanation: The LCA of nodes 2 and 8 is 6.
Example 2:
Input: root = [6,2,8,0,4,7,9,null,null,3,5], p = 2, q = 4
Output: 2
Explanation: The LCA of nodes 2 and 4 is 2, since a node can be a descendant of itself according to the LCA definition.
Example 3:
Input: root = [2,1], p = 2, q = 1
Output: 2
• The number of nodes in the tree is in the range [2, 10^5].
• -10^9 <= Node.val <= 10^9
• All Node.val are unique.
• p != q
• p and q will exist in the BST.
1. Iterative Approach —
Explanation —
This problem has a very trivial solution since the nodes in a binary search tree are arranged in a given way.
If both p and q nodes have values smaller than the current node, we traverse only the left subtree.
If both p and q nodes have values greater than the current node, we traverse only the right subtree.
If, however, we find a node which has a value that is in between the values of both p and q nodes, we have found the split condition and thus have found the lowest common ancestor.
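A minimal runnable sketch of this iterative walk (the TreeNode class below is a stand-in for LeetCode's provided definition):

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def lca_iterative(root, p, q):
    """Walk down from the root until we hit the split point between p and q."""
    node = root
    while node:
        if p.val < node.val and q.val < node.val:
            node = node.left    # both targets lie in the left subtree
        elif p.val > node.val and q.val > node.val:
            node = node.right   # both targets lie in the right subtree
        else:
            return node         # split point found: this is the LCA
    return None

# Tree from Example 1: root = [6,2,8,0,4,7,9,null,null,3,5]
p = TreeNode(2, TreeNode(0), TreeNode(4, TreeNode(3), TreeNode(5)))
q = TreeNode(8, TreeNode(7), TreeNode(9))
root = TreeNode(6, p, q)
print(lca_iterative(root, p, q).val)  # 6
```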
Time and Space Complexity —
We visit only one node per level of the tree, so the running time is a function of the tree's height: logarithmic for a balanced BST, but linear in the worst case of a completely skewed tree.
Another way to think of it is that at every iteration we discard one of the two subtrees. For a balanced BST this halves the number of candidate nodes, and we keep halving until only one node remains. How many times can we divide n to get to 1? That is log(n).
Also, we require only a constant amount of auxiliary space to solve this problem.
Thus, if n is the number of nodes in a balanced tree,
Time Complexity: O(log(n)) on average, O(n) in the worst case of a skewed tree
Space Complexity: O(1)
2. Recursive Approach —
Explanation —
A similar approach is taken in order to solve this problem recursively.
Instead of shifting our root to the left or right subtree, we recurse on the left or right subtree whenever necessary.
And in the case that we do not need to recurse on either subtree, we return the root itself, which is our lowest common ancestor.
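The recursive variant can be sketched the same way (again with a hypothetical TreeNode in place of LeetCode's):

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def lca_recursive(root, p, q):
    """Recurse into the subtree that still contains both p and q."""
    if p.val < root.val and q.val < root.val:
        return lca_recursive(root.left, p, q)
    if p.val > root.val and q.val > root.val:
        return lca_recursive(root.right, p, q)
    return root  # neither recursion applies, so root is the split point

# Tree from Example 3: root = [2,1]
one = TreeNode(1)
root = TreeNode(2, one)
print(lca_recursive(root, root, one).val)  # 2
```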
Time and Space Complexity —
The time complexity analysis is the same as above, so the runtime of our algorithm is again proportional to the height of the tree: logarithmic when the BST is balanced.
The space complexity, however, is no longer constant because of the recursion stack. The auxiliary space required is also proportional to the height of the tree.
Thus, if n is the number of nodes in a balanced binary search tree,
Time Complexity: O(log(n))
Space Complexity: O(log(n))
Feel free to ask any related questions in the comment section or the links provided down below.
I don’t have friends:
Let’s be friends!
Connect with me on:
Instagram (I know. Very professional)
Jai Shri Ram 🙏
Farm Fencing Cost Calculator – Instant Estimate Tool
This tool helps you calculate the cost of fencing your farm based on the length needed and price per unit.
Farm Fencing Cost Calculator
This calculator helps you estimate the cost of constructing a fence on your farm based on various parameters. Fill in all the fields and click “Calculate” to get the results.
How to Use
• Enter the total length of the fencing in meters.
• Enter the height of the fencing in meters.
• Enter the cost of the fencing material per meter.
• Enter the distance between each post in meters.
• Enter the cost per post.
• Enter the labour cost per meter.
• Enter the cost per gate.
• Enter the number of gates required.
• Click the “Calculate” button to see the estimated costs.
Calculation Details
The total cost is calculated as follows:
1. The number of posts required is determined by dividing the total length by the distance between posts, rounded up to the nearest whole number.
2. Total post cost is calculated by multiplying the number of posts by the cost per post.
3. Total material cost is calculated by multiplying the length by the height and the material cost per meter.
4. Total labour cost is calculated by multiplying the length by the labour cost per meter.
5. Total gates cost is calculated by multiplying the number of gates by the cost per gate.
6. The overall total cost is the sum of total post cost, total material cost, total labour cost, and total gates cost.
Note that this calculator only provides an estimate. Actual costs may vary depending on various factors like local pricing, additional requirements, and specific conditions of the job site.
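The six calculation steps above can be sketched in code (a minimal illustration; all quantities and prices in the example call are assumed, not defaults of the tool):

```python
import math

def fencing_cost(length_m, height_m, material_per_m, post_spacing_m,
                 cost_per_post, labour_per_m, cost_per_gate, num_gates):
    num_posts = math.ceil(length_m / post_spacing_m)      # step 1
    post_cost = num_posts * cost_per_post                 # step 2
    material_cost = length_m * height_m * material_per_m  # step 3
    labour_cost = length_m * labour_per_m                 # step 4
    gates_cost = num_gates * cost_per_gate                # step 5
    return post_cost + material_cost + labour_cost + gates_cost  # step 6

# 100 m of 1.5 m-high fencing, posts every 2 m, two gates (assumed prices)
print(fencing_cost(100, 1.5, 3.0, 2.0, 10.0, 4.0, 150.0, 2))  # 1650.0
```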
Use Cases for This Calculator
Determine Total Fencing Costs
As a farm owner, understanding the total cost of fencing is crucial for budgeting your agricultural projects. By inputting the length of fencing required and the type of materials, you can quickly
ascertain the overall expenditure needed for your fencing setup.
Explore Different Material Options
Choosing the right fencing material impacts both durability and cost. With the calculator, you can toggle between various materials like barbed wire, wood, or vinyl, and immediately see how these
choices affect your total cost.
Estimate Labor Costs
Labor can be a significant part of your overall fencing expenses. By using the calculator, you can factor in the labor rates in your area, helping you to make more informed choices regarding hiring
help versus doing it yourself.
Calculate Costs for Different Fencing Styles
The style of fencing—be it split rail, chain link, or privacy fencing—affects both functionality and aesthetics. Using the calculator, you can compare costs associated with different styles,
guiding you in selecting the most suitable option for your farm needs.
Include Additional Features
Fencing isn’t just about the material and length; additional features like gates and electrification also contribute to the total cost. The calculator allows you to add these extras as you go, giving
you a comprehensive picture of your total expenditure.
Analyze Seasonal Budgeting
Fencing costs can vary with the seasons, so it’s wise to assess when to undertake your project. By adjusting the forecasted costs in the calculator, you can plan your purchases based on seasonal
price fluctuations, ultimately saving money.
Assess Cost per Acre
Understanding the cost of fencing per acre is vital for expansive farms, as it helps in strategic financial planning. With the calculator, you can break down your total fencing costs by the specific
acreage, allowing you to gauge expenses within your operational budget more effectively.
Evaluate Return on Investment
Fencing can significantly affect the profitability of your farming operation, particularly in relation to livestock management. By assessing your costs through the calculator, you can better
understand how investing in quality fencing translates into enhanced productivity and revenue.
Make Adjustments Based on Terrain
Not all terrain is the same, and your fencing costs may vary based on the land’s conditions. The calculator enables you to input specific characteristics of your farm’s terrain, allowing you to
adjust material and labor needs accordingly.
Facilitate Planning for Future Expansion
If you’re considering expanding your farm, planning for additional fencing becomes essential. Using the calculator helps you project future costs based on your growth plans, allowing you to budget
more effectively and avoid surprises down the line.
All Staff and Students
Professor Jinglai Li BSc PhD FIMA
School of Mathematics
Director of Research of Mathematics
Contact details
School of Mathematics
Watson Building
University of Birmingham
B15 2TT
Jinglai Li is a Professor at the School of Mathematics of the University of Birmingham.
For more information, please visit his personal webpage.
• PhD in Mathematics, SUNY Buffalo, 2007
• BSc in Applied Mathematics, Sun Yat-sen University, 2002
Jinglai Li received the BSc degree in Applied Mathematics from Sun Yat-sen University in 2002 and the PhD degree in Mathematics from SUNY Buffalo in 2007. After his PhD degree, Jinglai did
postdoctoral research at Northwestern University (2007-2010) and MIT (2010-2012) respectively.
He subsequently worked at Shanghai Jiao Tong University (Associate Professor, 2012-2017) and the University of Liverpool (Reader, 2017-2020). Jinglai joined the University of Birmingham as a Professor in
Semester 1
LM Computational Statistics
Jinglai is happy to discuss PhD project supervision with potential candidates, so please email him if you are interested.
Jinglai Li’s current research interests are in scientific computing, computational statistics, uncertainty quantification, and data science.
Research Themes
• Bayesian inference and inverse problems
• Reliability analysis and rare events simulation
• Monte Carlo methods
• Gaussian Process regression and their applications
• Data assimilation
Sample publications:
Cheng, C. and Li, J., 2022. ODEs learn to walk: ODE-Net based data-driven modeling for crowd dynamics. arXiv preprint arXiv:2210.09602.
Wang, H., Ao, Z., Yu, T. and Li, J., 2021. Inverse Gaussian Process regression for likelihood-free inference. arXiv preprint arXiv:2102.10583.
Ao, Z. and Li, J., 2023. Entropy estimation via uniformization. Artificial Intelligence, p.103954.
Wen, L. and Li, J., 2022. Affine-mapping based variational ensemble Kalman filter. Statistics and Computing, 32(6), pp.1-15.
Ao, Z. and Li, J., 2021, Entropy estimation via normalizing flow. in Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI 2022).
Yu, T., Wang, H. and Li, J., 2021. Maximizing conditional entropy of Hamiltonian Monte Carlo sampler. SIAM Journal on Scientific Computing, 43(5), pp.A3607–A3626.
Zhou, Q., Yu, T., Zhang, X. and Li, J., 2020. Bayesian Inference and Uncertainty Quantification for Medical Image Reconstruction with Poisson Data. SIAM Journal on Imaging Sciences, 13(1), pp.29-52.
Wang, H. and Li, J., 2018. Adaptive Gaussian process approximation for Bayesian inference with expensive likelihood functions. Neural Computation, 30(11), pp.3072-3094.
Hu, Z., Yao, Z. and Li, J., 2017. On an adaptive preconditioned Crank–Nicolson MCMC algorithm for infinite dimensional Bayesian inference. Journal of Computational Physics, 332, pp.492-503.
Wu, K. and Li, J., 2016. A surrogate accelerated multicanonical Monte Carlo method for uncertainty quantification. Journal of Computational Physics, 321, pp.1098-1109.
Yao, Z., Hu, Z. and Li, J., 2016. A TV-Gaussian prior for infinite-dimensional Bayesian inverse problems and its numerical implementations. Inverse Problems, 32(7), p.075006.
Li, J. and Marzouk, Y.M., 2014. Adaptive construction of surrogates for the Bayesian solution of inverse problems. SIAM Journal on Scientific Computing, 36(3), pp.A1163-A1186.
Li, J., Li, J. and Xiu, D., 2011. An efficient surrogate-based method for computing rare failure probability. Journal of Computational Physics, 230(24), pp.8683-8697.
Matrix Poincaré inequalities and concentration
We show that any probability measure satisfying a Matrix Poincaré inequality with respect to some reversible Markov generator satisfies an exponential matrix concentration inequality depending on the
associated matrix carré du champ operator. This extends to the matrix setting a classical phenomenon in the scalar case. Moreover, the proof gives rise to new matrix trace inequalities which could be
of independent interest. We then apply this general fact by establishing matrix Poincaré inequalities to derive matrix concentration inequalities for Gaussian measures, product measures and for
Strong Rayleigh measures. The latter represents the first instance of matrix concentration for general matrix functions of negatively dependent random variables.
• Functional inequalities
• Matrix Poincaré inequalities
• Matrix concentration inequalities
• Matrix inequalities
• Strong Rayleigh measures
ASJC Scopus subject areas
Dive into the research topics of 'Matrix Poincaré inequalities and concentration'. Together they form a unique fingerprint.
Solution - Livestock Lineup (USACO Bronze 2019 December)
USACO Bronze 2019 December - Livestock Lineup
Authors: Benjamin Qi, Kevin Sheng, Melody Yu, Ryan Chou
Video Solution
By Melody Yu
Time Complexity: $\mathcal{O}(N)$
#include <bits/stdc++.h>
using namespace std;
const int RESTRICT_LEN = 6;
// list of cows, in alphabetical order
const vector<string> COWS = {"Beatrice", "Belinda", "Bella", "Bessie",
"Betsy", "Blue", "Buttercup", "Sue"};
vector<vector<string>> orderings;
from typing import List

# list of cows, in alphabetical order
COWS = ["Beatrice", "Belinda", "Bella", "Bessie", "Betsy", "Blue", "Buttercup", "Sue"]
orderings = []

def build(ordering: List[str]):
    if len(ordering) == len(COWS):  # finished building permutation
        orderings.append(ordering)
        return
    for cow in COWS:
        if cow not in ordering:
            build(ordering + [cow])
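A self-contained brute-force sketch of the whole approach follows; the two adjacency constraints below are hypothetical stand-ins for the judge's input, not part of the official test data:

```python
from itertools import permutations

COWS = ["Beatrice", "Belinda", "Bella", "Bessie", "Betsy", "Blue", "Buttercup", "Sue"]
# each pair must end up standing next to each other (assumed example input)
constraints = [("Buttercup", "Bella"), ("Blue", "Bessie")]

def first_valid_lineup(constraints):
    # COWS is sorted, so permutations() yields lineups in lexicographic
    # order and the first valid one is the alphabetically-minimal answer
    for lineup in permutations(COWS):
        pos = {cow: i for i, cow in enumerate(lineup)}
        if all(abs(pos[a] - pos[b]) == 1 for a, b in constraints):
            return list(lineup)

print(first_valid_lineup(constraints))
```

Checking all 8! = 40320 permutations is easily fast enough here, which is why the O(N) graph solution linked below is an optimization rather than a necessity.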
$\mathcal{O}(N)$ Solution With Graphs
This solution is covered in the Introduction to Graphs module.
Join the USACO Forum!
Stuck on a problem, or don't understand a module? Join the USACO Forum and get help from other competitive programmers!
Problem Model Prompt
P_Combinatorics_19 GPT-4 Turbo Example C w/ H
Ground Truth Answer: n(n+1)*2^(n-2)
Use in Code
import champ_dataset # "pip install champ-dataset" if necessary
dataset = champ_dataset.load('v0')
problem = dataset['P_Combinatorics_19']
messages = problem.conversations['GPT-4 Turbo|Example C w/ H'].messages
for message in messages:
print(f'Role: {message.role}')
print(f'Text: {message.text}')
print(f'Error: {message.error}')
Full Conversation System Prompt User Input Model Output
You are an expert on mathematics.
Please give an example that applies the following concept:
The number of ways to choose a set of (unordered) k items from n items is C(n, k).
Certainly! The concept you're referring to is known as a "combination," which is a way of selecting items from a larger set such that the order of selection does not matter. The notation C(n, k) is often read as "n choose k" and is mathematically defined as:
\[ C(n, k) = \frac{n!}{k!(n-k)!} \]
where "!" denotes factorial, the product of all positive integers up to that number (e.g., 5! = 5 × 4 × 3 × 2 × 1 = 120).
Here's an example to illustrate this concept:
Suppose you have a group of 5 friends, and you want to invite 3 of them to a dinner party. You're interested in how many different groups of 3 friends you can invite without considering the order in which you invite them.
Using the combination formula, you can calculate the number of ways to choose 3 friends out of 5 as follows:
\[ C(5, 3) = \frac{5!}{3!(5-3)!} = \frac{5!}{3!2!} = \frac{5 \times 4 \times 3 \times 2 \times 1}{(3 \times 2 \times 1)(2 \times 1)} = \frac{5 \times 4}{2 \times 1} = 10 \]
So, there are 10 different ways to choose 3 friends from a group of 5.
Please give an example that applies the following concept:
If there are n steps of doing a task, and there are p_i ways to do the i-th step, then there are p_1*p_2*...*p_n ways to do the task.
This concept is an application of the fundamental counting principle, which states that if you have a series of decisions or steps to make, and each step has a certain number of independent choices, then the total number of ways to complete the entire series is the product of the number of choices at each step.
Here's an example:
Imagine you're getting dressed and you have to decide on your outfit, which consists of a shirt, pants, and a pair of shoes. You have the following options:
- Shirts: You have 4 different shirts to choose from.
- Pants: You have 3 different pairs of pants to choose from.
- Shoes: You have 2 different pairs of shoes to choose from.
To find the total number of different outfits you can put together, you multiply the number of choices for each item of clothing:
\[ p_1 \times p_2 \times p_3 = 4 \times 3 \times 2 \]
So, you have:
\[ 4 \text{ (shirts)} \times 3 \text{ (pants)} \times 2 \text{ (shoes)} = 24 \text{ different outfits} \]
Therefore, there are 24 different ways to choose an outfit from your wardrobe options.
Please give an example that applies the following concept:
If there are n actions, with p_i ways to perform the i-th action, and no two actions can be performed at the same time, then there are p_1+p_2+...+p_n ways to perform the action in total.
The concept you're referring to is another application of the fundamental counting principle, specifically when you have mutually exclusive choices for performing an action. This means that you can perform only one action at a time, and the total number of ways to perform the action is the sum of the individual ways to perform each action.
Here's an example:
Suppose you have a free evening and you're trying to decide what to do. You have three options, but you can only choose one because they all take up the entire evening:
1. Go to the movies: There are 5 different movies playing at your local theater that you're interested in seeing.
2. Read a book: You have 3 different books on your shelf that you've been meaning to read.
3. Play a video game: You have 2 new video games that you haven't started yet.
Since you can only choose one activity for the evening, the total number of ways you can spend your evening is the sum of the ways to perform each individual activity:
\[ p_1 + p_2 + p_3 = 5 + 3 + 2 \]
So, you have:
\[ 5 \text{ (movies)} + 3 \text{ (books)} + 2 \text{ (video games)} = 10 \text{ different ways to spend your evening} \]
Therefore, there are 10 different ways you can choose to spend your evening based on these mutually exclusive activities.
Solve the following problem. Make sure to show your work before giving the final answer.
Find a closed-form formula of sum_(k=1)^n C(n, k)*k^2.
Besides the concepts above, you may also find the following information useful:
1. Interpret the expression as the number of ways to choose a committee of any number of people from n people, while at the same time selecting a president and a secretary (who can be the same person) from the chosen committee.
2. Find the number of ways to select such a committee when the president and the secretary are the same person, by first selecting the president, and then the rest of the committee (of any number).
3. Find the number of ways to select such a committee when the president and the secretary are different people, by first selecting the president, then the secretary, and finally the rest of the committee (of any number).
To solve this problem, we will interpret the expression as the number of ways to choose a committee from n people, while also selecting a president and a secretary (who can be the same person) from the chosen committee.
Let's break it down into two cases:
**Case 1: The president and the secretary are the same person.**
In this case, we first choose the president (who will also be the secretary) from the n people, which can be done in n ways. Then, we choose the rest of the committee from the remaining n-1 people. Since the committee can be of any size, including just the president/secretary, we have \(2^{n-1}\) ways to choose the rest of the committee (each person can either be on the committee or not).
So, the total number of ways for this case is \(n \cdot 2^{n-1}\).
**Case 2: The president and the secretary are different people.**
In this case, we first choose the president from the n people, which can be done in n ways. Then, we choose the secretary from the remaining n-1 people, which can be done in n-1 ways. Finally, we choose the rest of the committee from the remaining n-2 people, which can be done in \(2^{n-2}\) ways.
So, the total number of ways for this case is \(n \cdot (n-1) \cdot 2^{n-2}\).
Now, let's sum the two cases to find the total number of ways to form the committee with a president and a secretary:
Total = Case 1 + Case 2
\[ = n \cdot 2^{n-1} + n \cdot (n-1) \cdot 2^{n-2} \]
\[ = n \cdot 2^{n-2} \cdot (2 + (n-1)) \]
\[ = n \cdot 2^{n-2} \cdot (n+1) \]
This is the closed-form formula for the sum of \(C(n, k) \cdot k^2\) from \(k=1\) to \(n\):
\[ \sum_{k=1}^{n} C(n, k) \cdot k^2 = n \cdot 2^{n-2} \cdot (n+1) \]
This formula represents the total number of ways to choose a committee of any number of people from n people, while at the same time selecting a president and a secretary (who can be the same person) from the chosen committee.
Now, summarize the answer above in one sentence, without any intermediate steps or explanations.
The closed-form formula for the sum \(\sum_{k=1}^{n} C(n, k) \cdot k^2\) is \(n \cdot 2^{n-2} \cdot (n+1)\).
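As a quick numerical check of this closed form (a sketch added here, not part of the original CHAMP conversation):

```python
from math import comb

def lhs(n):
    return sum(comb(n, k) * k * k for k in range(1, n + 1))

def rhs(n):
    return n * (n + 1) * 2 ** (n - 2)  # integer-valued for n >= 2

for n in range(2, 11):
    assert lhs(n) == rhs(n)
print("formula verified for n = 2..10")
```

For n = 1 the formula still evaluates to 1 (as 1·2·2^(-1)), matching the single term C(1,1)·1².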
Graphing. Representing numerical information in a picture. Graph shows a picture of a relationship -how two processes relate -what happens when two events.
|
{"url":"http://slideplayer.com/slide/5357891/","timestamp":"2024-11-09T19:46:55Z","content_type":"text/html","content_length":"172666","record_id":"<urn:uuid:1952bff1-af5c-4e65-977f-67bf3d5c2256>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00788.warc.gz"}
|
On the infinitesimal rigidity of homogeneous varieties
• Let \(X \subset \mathbb{P}^N\) be a variety (respectively an open subset of an analytic submanifold) and let \(x \in X\) be a point where all integer-valued differential invariants are locally constant. We show that if the projective second fundamental form of \(X\) at \(x\) is isomorphic to the second fundamental form of a point of a Segre \(\mathbb{P}^n \times \mathbb{P}^m\), \(n, m \geq 2\), a Grassmannian \(G(2, n+2)\), \(n \geq 4\), or the Cayley plane \(\mathbb{OP}^2\), then \(X\) is the corresponding homogeneous variety (resp. an open subset of the corresponding homogeneous variety). The case of the Segre \(\mathbb{P}^2 \times \mathbb{P}^2\) had been conjectured by Griffiths and Harris in [GH]. If the projective second fundamental form of \(X\) at \(x\) is isomorphic to the second fundamental form of a point of a Veronese \(v_2(\mathbb{P}^n)\) and the Fubini cubic form of \(X\) at \(x\) is zero, then \(X = v_2(\mathbb{P}^n)\) (resp. an open subset of \(v_2(\mathbb{P}^n)\)). All these results are valid in the real or complex analytic categories, and locally in the \(C^\infty\) category if one assumes the hypotheses hold in a neighborhood of any point \(x\). As a byproduct, we show that the systems of quadrics \(I_2(\mathbb{P}^{m-1} \sqcup \mathbb{P}^{n-1}) \subset S^2\mathbb{C}^{m+n}\), \(I_2(\mathbb{P}^1 \times \mathbb{P}^{n-1}) \subset S^2\mathbb{C}^{2n}\) and \(I_2(\mathbb{S}^5) \subset S^2\mathbb{C}^{16}\) are stable in the sense that if \(A_t \subset S^2T^*\) is an analytic family such that for \(t \neq 0\), \(A_t \cong A\), then \(A_0 \cong A\). We also make some observations related to the Fulton-Hansen connectedness theorem.
|
{"url":"https://vivo.library.tamu.edu/vivo/display/n37837SE","timestamp":"2024-11-14T02:20:36Z","content_type":"text/html","content_length":"23183","record_id":"<urn:uuid:17e653c4-e3dd-427a-9ab2-be21d037574a>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00539.warc.gz"}
|
Mathematics for Data Scientist
To excel in the field of data science, especially as a data scientist, I would recommend you have good command over the topics mentioned below. These are the topics from mathematics and statistics.
There are many YouTube channels that you can use for this purpose. Since this is 10+2 level mathematics, it is mostly a matter of revision, so I am not offering any course unless there is a
specific need from some group or organization.
Linear Algebra
1. Introduction to Linear Algebra
2. Eigenvalues And Eigenvectors
3. Calculating Eigenvalues and Eigenvectors
4. Eigen decomposition of a Matrix
5. Eigenvectors: What Are They? Intuition behind.
Vectors, Matrices & Linear Transformations
Vector & Vector Spaces
1. Vectors: The Basics
2. Basis Vector
3. Norm of a vector
4. Identity matrix or operator
5. Determinant of a matrix
6. Column and Null Space
7. Rank of a matrix
8. Transpose of a matrix
9. Inverse of a matrix
10. Least Squares Approximation
11. Linear Transformations
12. Matrices: The Basics
13. Matrix Operations
14. Matrix operations and manipulations
15. Dot product of two vectors
16. Linear independence of vectors
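As a small illustration of the eigenvalue topics above, the 2 × 2 case can be solved directly from the characteristic polynomial (a pure-Python sketch; the matrix entries are arbitrary examples):

```python
import math

# Eigenvalues of a 2x2 matrix [[a, b], [c, d]] from the characteristic
# polynomial: lambda^2 - trace*lambda + det = 0.
def eig2x2(a, b, c, d):
    trace = a + d
    det = a * d - b * c
    disc = trace * trace - 4 * det  # assumed >= 0 (real eigenvalues)
    root = math.sqrt(disc)
    return (trace + root) / 2, (trace - root) / 2

l1, l2 = eig2x2(4, 1, 2, 3)  # [[4, 1], [2, 3]] has eigenvalues 5 and 2
```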
Multivariable Calculus
1. Critical Points, Maxima and Minima
2. Differentiation
3. Functions and Derivatives
4. Functions: Primer
5. Multivariable Functions
6. Partial Derivatives
7. Taylor Series and Linearization
8. The Hessian
9. The Jacobian
10. Vector-Valued Functions
Probability
1. Introduction to probability – probability, events, additive & multiplicative rule
2. Basics of probability – random variables, probability distribution, expected value
3. Joint and Conditional Probability
4. Probability Rules
5. Bayes’ Theorem
Statistics
1. Descriptive statistics
2. Inferential Statistics
3. Prescriptive statistics
4. What is sampling, different sampling techniques?
5. Random Variable, Predictor, Predicted variables
6. Data Distribution (continuous, discrete, Normal/Bernoulli, standard, binomial, Poisson, etc.)
7. CDF (Cumulative Distribution Function), PDF (Probability Density Function)
8. Statistical Measures (mean, mode, median, max, min)
9. Measure of dispersion (range, standard deviation, variance, covariance, correlation, error deviation)
10. Central Limit Theorem (CLT)
11. What is Regression? How does it work? OLS (Ordinary Least Squares), multiple linear regression.
12. Standard Error
13. Dimensionality Reduction (PCA)
14. Parameter Properties (Bias, Consistency, Efficiency)
15. Statistical tests t-test, z-test, ANOVA test, Chi-Square test
16. Conditional Probability (Bayesian Theorem)
17. Type I/Type II errors
18. Hypothesis testing
19. Confidence Interval & Significance Level (alpha)
20. p-value and its interpretation
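For the Central Limit Theorem entry above, a tiny simulation makes the idea concrete (an illustrative sketch; the sample sizes and counts are arbitrary):

```python
import random
import statistics

# CLT sketch: means of many samples drawn from Uniform(0, 1) cluster
# tightly around the population mean 0.5, even though single draws do not.
random.seed(0)
sample_means = [
    statistics.mean(random.random() for _ in range(50))
    for _ in range(2000)
]
overall = statistics.mean(sample_means)   # very close to 0.5
spread = statistics.stdev(sample_means)   # roughly 1/sqrt(12*50), about 0.04
```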
|
{"url":"https://dasarpai.com/dsblog/maths-for-ds","timestamp":"2024-11-12T13:34:52Z","content_type":"text/html","content_length":"62713","record_id":"<urn:uuid:24bb0ab6-50db-42ed-91e2-5d0ce17928eb>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00760.warc.gz"}
|
K-12 Math Advantage - Proportionality - Raitclub
STEM-Plus Content offers K-12 math and other STEM content in both English and Japanese.
The content aids in vocabulary acquisition for both English and Japanese learners.
How does it achieve this?
Through the content, learners grasp words and phrases more intuitively because they already know their meanings in their native language.
It is like learning the word “apple” by looking at a picture of one.
STEM-Plus Content facilitates easy learning of abstract yet useful words.
To ensure effective and efficient learning, the content is designed to be concise and illustrative.
Learners can easily understand the material through images and examples.
In addition, the content provides useful tips on math and other STEM subjects, supporting
academic success in these important fields.
How to use:
Carefully selected English words and phrases related to “Proportionality” are highlighted in yellow in the images below. The words are also summarized at the bottom of this page.
We recommend reviewing them regularly during your daily math studies to enhance your vocabulary.
Direct Proportionality
• Two quantities x and y are directly proportional when y=kx, where k is a nonzero constant.
y=kx でkがゼロでない定数のとき、ふたつの量xとyは比例するという。
• The number k is called the constant of proportionality.
• The ratio y/x is constant and equal to k.
比 y/x は一定で比例定数kに等しい。
• The graph of y=kx is a line that passes through the origin.
y=kx のグラフは原点を通る直線である。
Inverse Proportionality
• Two quantities x and y are inversely proportional when y=k/x, where k is a nonzero constant.
y=k/x でkがゼロでない定数のとき、ふたつの量xとyは反比例するという。
• The number k is called the constant of proportionality.
• The product xy is constant and equal to k.
積 xy は一定で比例定数kに等しい。
• The graph of y=k/x is a hyperbola.
y=k/x のグラフは双曲線である。
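The two constant-value properties above (a constant ratio y/x for direct proportionality, a constant product xy for inverse proportionality) can be checked in a few lines of Python (values chosen arbitrarily for illustration):

```python
k = 6.0  # the constant of proportionality

xs = [1.0, 2.0, 3.0, 4.0]
direct = [k * x for x in xs]   # y = k*x: the ratio y/x stays equal to k
inverse = [k / x for x in xs]  # y = k/x: the product x*y stays equal to k

assert all(y / x == k for x, y in zip(xs, direct))
assert all(x * y == k for x, y in zip(xs, inverse))
```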
✅quantity 量 ⇔quality 質
✅constant 一定の
✅increase 増加する;を増やす/増加
✅decrease 減少する;を減らす/減少
Increasing Functions and Positive Slopes
When ‘x increases’ → how ‘y behaves’
Developing the habit of fixing your attention on the first part (‘x increases’) will bring various benefits in the future.
• terms like ‘increasing’ and ‘decreasing’ functions,
• expressions like a function is ‘increasing’ or ‘decreasing’ in a certain interval,
• and determining whether the slope is ‘positive’ or ‘negative’ as another way to say the above,
all refer to the ‘increase’ or ‘decrease’ of y when ‘x increases’.
It’s good to get accustomed to this early on.
About Inverse Proportionality
Inverse proportionality is indeed difficult.
As we all know, x cannot be zero.
But it’s hard to intuitively understand the behavior of y before and after this point.
As x approaches zero, like at -0.1, -0.01, y becomes a large negative number, like -10, -100,
and as x gets even closer to zero, y also becomes a larger negative number (eventually approaching negative infinity).
Then, the moment x exceeds zero by even a little, y jumps to positive infinity.
It’s baffling.
And not to mention, drawing its graph requires skill!
In some English-speaking countries, inverse proportionality is often introduced much later as a rational function.
Specifically, after studying quadratic, cubic, n-th degree functions, and even exponential and logarithmic functions.
However, it’s a relationship we often encounter in daily life.
So Don’t be too averse to it. Instead, let’s just recognize that it’s like a character shrouded in mystery.
|
{"url":"https://www.raitclub.com/en/stem-plus/direct-inverse-proportion/","timestamp":"2024-11-07T23:35:48Z","content_type":"text/html","content_length":"67530","record_id":"<urn:uuid:82b8f88c-ed90-468b-a5db-da59c2a7afb9>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00238.warc.gz"}
|
Quick sort algorithm | Learn How does Quick Sort Algorithm Work?
Updated March 31, 2023
Introduction to Quick sort algorithm
Quick Sort is a sorting technique that takes a range of elements and returns that range in sorted order as output. The algorithm takes an array as input and repeatedly partitions it into
sub-arrays around a pivot element until each sub-array is trivially sorted, at which point the whole array is in order.
Quick Sort follows the divide-and-conquer approach: the pivot element divides the range of numbers into sub-arrays, and once every sub-array has been partitioned down to single elements, the array as a whole is sorted.
How does Quick Sort Algorithm Work?
Before moving on to the algorithm, let’s see how Quick Sort works by taking an example of an array of 7 elements.
The input array is shown in the below figure.
1. The very first step in Quick Sort is selecting a pivot element. A pivot element is an element from the array chosen to partition the range of numbers: if we
select ‘n’ as our pivot, the numbers in the array that are less than ‘n’ settle to the left of n, and numbers that are greater than n go to the right of the pivot element.
The pivot element can be selected in many ways, some of which are listed below:
• Pick the first element as the pivot.
• Pick the last element as the pivot.
• Pick a random element as the pivot.
• Pick median as the pivot.
In our example, let’s select the last element as the pivot element and continue the process.
2. The fundamental operation in Quick Sort is partition(). After selecting the pivot element, we rearrange the elements by moving those that are less than the pivot to the left of the
pivot and those that are greater than the pivot to its right.
The process is shown below:
• A pointer must be fixed at the pivot element.
• Then the pivot element is compared with elements from the beginning.
• If the element is greater than the pivot, a second pointer is set for that element. Now, the pivot element is compared with other elements. If an element smaller than the pivot is found, then it
is swapped with the greater element, which is in the second pointer.
• Again the same process continues to set the next greater element as the second pointer. And it is swapped with the next smaller element.
• The process will continue until the second last element is reached.
• When the second last element is reached, the pivot element will be swapped with the second pointer.
3. Now, the array has been divided into two parts, the first part with elements less than the pivot, second part with elements greater than the pivot.
• Pivot elements are chosen for the left and right sub-arrays separately, and the above process is called recursively for each part until each sub-array is formed into a single element. When this
point occurs, the array is already sorted, and the final array is shown below.
Algorithm of Quick Sort
Before moving on to the actual implementation of the QuickSort, let us look at the algorithm.
Step 1: Consider an element as a pivot element.
Step 2: Assign the lowest and highest index in the array to low and high variables and pass it in the QuickSort function.
Step 3: Increment low until array[low] is greater than the pivot, then stop.
Step 4: Decrement high until array[high] is less than the pivot, then stop.
Step 5: Swap array[low] and array[high] and repeat the process until the second-last element is reached.
Step 6: Swap the pivot and the second-last element; you then have an array that has completed its first partition.
Step 7: Repeat the same process for the two arrays that are obtained until you can no more divide the array.
QuickSort Source Code
# Quick sort in Python

# function to find the partition position
def arraypartition(array, low, high):
    # choose the last element as the pivot
    pivot = array[high]
    # second pointer for the greater element
    i = low - 1
    # traverse through all elements, comparing each element with the pivot
    for j in range(low, high):
        if array[j] <= pivot:
            # if an element smaller than the pivot is found,
            # swap it with the greater element pointed to by i
            i = i + 1
            # swapping element at i with element at j
            (array[i], array[j]) = (array[j], array[i])
    # swap the pivot element with the greater element specified by i
    (array[i + 1], array[high]) = (array[high], array[i + 1])
    # return the position of the partition
    return i + 1

# function to perform quicksort
def quickSort(array, low, high):
    if low < high:
        # find the pivot position such that
        # elements smaller than the pivot are on the left and
        # elements greater than the pivot are on the right
        pi = arraypartition(array, low, high)
        # recursive call on the left of the pivot
        quickSort(array, low, pi - 1)
        # recursive call on the right of the pivot
        quickSort(array, pi + 1, high)

array = [10, 9, 8, 3, 2, 11, 4]
print("The Unsorted Array is:")
print(array)
quickSort(array, 0, len(array) - 1)
print("Sorted Array in Ascending Order:")
print(array)
Time Complexities
Worst-case complexity
Worst Case Complexity O(n^2) occurs when the pivot element is always either the greatest or the smallest among the elements being partitioned, for example when the array is already sorted and the
last element is chosen as the pivot. Every partition then leaves all the remaining elements on one side.
Best Case Complexity
Best Case Complexity O(n*log n) occurs when every pivot lands at or near the middle of its sub-array, splitting it roughly in half.
Average Case Complexity
Average Case Complexity O(n*log n) holds on average over random inputs, even though the partitions are not exactly evenly balanced.
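The complexity behaviour described above can be observed directly by counting comparisons (a rough sketch using the same last-element-pivot, Lomuto-style partition as the source code; exact counts depend on the pivot rule):

```python
import random

# Count the comparisons made by quicksort with a last-element pivot to see
# the O(n^2) worst case (already-sorted input) versus the typical O(n log n) case.
def quicksort_count(a, lo, hi, counter):
    if lo < hi:
        pivot, i = a[hi], lo - 1
        for j in range(lo, hi):
            counter[0] += 1          # one comparison with the pivot
            if a[j] <= pivot:
                i += 1
                a[i], a[j] = a[j], a[i]
        a[i + 1], a[hi] = a[hi], a[i + 1]
        quicksort_count(a, lo, i, counter)
        quicksort_count(a, i + 2, hi, counter)

def comparisons(a):
    counter = [0]
    quicksort_count(a, 0, len(a) - 1, counter)
    return counter[0]

worst = comparisons(list(range(100)))  # sorted input: 99 + 98 + ... + 1 = 4950
random.seed(1)
data = list(range(100))
random.shuffle(data)
typical = comparisons(data)            # far fewer, roughly n log n
```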
• Quick Sort follows the divide-and-conquer approach: it sorts the given range of elements and returns that range in sorted order as output.
• A pivot element is an element from the array selected to partition the range of numbers.
• You can select any element as a pivot element.
Recommended Articles
This is a guide to Quick sort algorithm. Here we discuss How does Quick Sort Algorithm Work along with the codes and outputs. You may also have a look at the following articles to learn more –
|
{"url":"https://www.educba.com/quick-sort-algorithm/","timestamp":"2024-11-13T04:16:10Z","content_type":"text/html","content_length":"324665","record_id":"<urn:uuid:c51a9fa4-eb8b-4328-a777-9a93d2d022dc>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00886.warc.gz"}
|
Hiroshi Hirai
Graduate School of Mathematics,
Nagoya University,
Furocho, Chikusaku, Nagoya, 464-8602, Japan
E-mail: hirai.hiroshi (at) math.nagoya-u.ac.jp
TEL: +81-052-789-2432
CV/research map --- teaching --- papers --- slides --- Japanese writings --- list of talks --- links
Research interests: Algorithm, Optimization, Discrete Mathematics (introduction in Japanese)
Editorial work: SIAM Journal on Applied Algebra and Geometry (SIAGA) (Associate Editor 2021-- ) Preprints and Publications:
• Gradient descent for unbounded convex functions on Hadamard manifolds and its applications to scaling problems, 2024 (with K. Sakabe). [pdf]
• Algebraic combinatorial optimization on the degree of determinants of noncommutative symbolic matrices, Mathematical Programming, Series A, to appear (with Y. Iwamasa, T. Oki, and T. Soma). [pdf]
• Interior-point methods on manifolds: theory and applications, 2023 (with H. Nieuwboer and M. Walter). [pdf]
• Polyhedral clinching auctions for indivisible goods, 2023 (with R. Sato). [pdf]
• On a manifold formulation of self-concordant functions, 2022 [pdf]
• Finding Hall blockers by matrix scaling, Mathematics of Operations Research, to appear (with K. Hayashi and K. Sakabe). [pdf]
• Two flags in a semimodular lattice generate an antimatroid, Order 41 (2024), 463--470 (with K. Hayashi). [pdf]
• Convex analysis on Hadamard spaces and scaling problems, Foundations of Computational Mathematics, to appear [pdf]
• Computing the nc-rank via discrete convex optimization on CAT(0) spaces, SIAM Journal on Applied Algebra and Geometry 5 (2021), 455--478 (with M. Hamada).[pdf]
• A cost-scaling algorithm for computing the degree of determinants, Computational Complexity 31 (2022) Article number: 10 (with M. Ikeda) [pdf]
• Node-connectivity terminal backup, separately-capacitated multiflow, and discrete convexity, SIAM Journal on Discrete Mathematics 37 (2023), 351--378 (with M. Ikeda). [pdf]
• Minimum 0-extension problems on directed metrics, Discrete Optimization 40 (2021), 100642 (with R. Mizutani) [pdf]
• Compression of M$^\natural$-convex Functions -- Flag Matroids and Valuated Permutohedra, Journal of Combinatorial Theory, Series A 185 (2022), 105525 (with S. Fujishige) [pdf]
• A combinatorial algorithm for computing the rank of a generic partitioned matrix with 2x2 submatrices, Mathematical Programming, Series A 195 (2022), 1--37 (with Y. Iwamasa). [pdf]
• Helly groups, Geometry & Topology, to appear (with J. Chalopin, V. Chepoi, A. Genevois, and D. Osajda) [pdf]
• A cost-scaling algorithm for minimum-cost node-capacitated multiflow problem, Mathematical Programming, Series A 195 (2022),149--181 (with M. Ikeda). [pdf]
• On a weighted linear matroid intersection algorithm by deg-det computation, Japan Journal of Industrial and Applied Mathematics 37 (2020), 677--696 (with H. Furue) [pdf]
• A nonpositive curvature property of modular semilattices, Geometriae Dedicata 214 (2021), 427--463. [pdf]
• Reconstructing phylogenetic tree from multipartite quartet system, Algorithmica 84 (2022),1875--1896 (with Y. Iwamasa) [pdf]
• Counting integral points in polytopes via numerical analysis of contour integration, Mathematics of Operations Research 45 (2020) 455--464 (with R. Oshiro and K. Tanaka) [pdf]
• Computing the degree of determinants via discrete convex optimization on Euclidean buildings, SIAM Journal on Applied Algebra and Geometry 3 (2019), 523--557. [pdf]
• Uniform semimodular lattices and valuated matroids, Journal of Combinatorial Theory, Series A 165 (2019), 325-359. [pdf]
• A tractable class of binary VCSPs via M-convex intersection, ACM Transactions on Algorithms 15 (2019), Article 44.(with Y. Iwamasa, K. Murota, and S. Zivny) [pdf]
• Uniform modular lattices and affine buildings, Advances in Geometry 20 (2020), 375--390. [pdf]
• Polyhedral clinching auctions for two-sided markets, Mathematics of Operations Research 47 (2022), 259--285 (with R. Sato). [pdf]
• Discrete Convex Functions on Graphs and Their Algorithmic Applications, In: T. Fukunaga and K. Kawarabayashi (eds.) Combinatorial Optimization and Graph Algorithms, Communications of NII Shonan
Meetings, Springer Nature, Singapore, (2017), pp. 67--101. [pdf]
• On integer network synthesis problem with tree-metric cost, JSIAM Letters 9 (2017), 73--76. (with M. Nitta) [pdf]
• A compact representation for modular semilattices and its applications, Order 37 (2020),479--507 (with S. Nakashima). [pdf]
• Maximum vanishing subspace problem, CAT(0)-space relaxation, and block-triangularization of partitioned matrix (with M. Hamada), 2017 [pdf]
• L-convexity on graph structures, Journal of the Operations Research Society of Japan 61 (2018), 71--109. [pdf]
• A compact representation for minimizers of k-submodular functions, Journal of Combinatorial Optimization, 36 (2018) 709 -- 741 (with T. Oki). [pdf]
• Computing DM-decomposition of a partitioned matrix with rank-1 blocks, Linear Algebra and Its Applications 547 (2018), 105--123. [pdf]
• Shortest (A+B)-path packing via hafnian, Algorithmica 80 (2018), 2478-2491 (with H. Namba). [pdf]
• On uncrossing games for skew-supermodular functions, Journal of the Operations Research Society of Japan 59 (2016), 218--223. [pdf]
• A dual descent algorithm for node-capacitated multiflow problems and its applications, ACM Transactions on Algorithms 15 (2018), 15:1--15:24. [pdf]
• A representation of antimatroids by Horn rules and its application to educational systems, Journal of Mathematical Psychology, 77 (2017), 82--93 (with H. Yoshikawa and K. Makino). [pdf]
• On k-submodular relaxation, SIAM Journal on Discrete Mathematics, 30 (2016), 1726--1736 (with Y. Iwamasa). [pdf]
• L-extendable functions and a proximity scaling algorithm for minimum cost multiflow problem, Discrete Optimization, 18 (2015), 1-37 [pdf]
• Weakly modular graphs and nonpositive curvature, Memoirs of the AMS, 268, no.1309, (2020), (with J. Chalopin, V. Chepoi, and D. Osajda). [pdf]
• A combinatorial formula for principal minors of a matrix with tree-metric exponents and its applications, Journal of Combinatorial Theory, Series A, 133 (2015), 261-279 (with A. Yabe). [pdf]
• Discrete convexity and polynomial solvability in minimum 0-extension problems, Mathematical Programming, Series A, 155 (2016), 1-55. [pdf]
• On half-integrality of network synthesis problem, Journal of the Operations Research Society of Japan, 57 (2014), 63-73 (with T. N. Hau and N. Tsuchimura). [pdf]
• Tree metrics and edge-disjoint S-paths, Mathematical Programming, Series A 147 (2014), 81-123, (with G. Pap). [pdf]
• On duality and fractionality of multicommodity flows in directed networks, Discrete Optimization 8 (2011), 428-445 (with S. Koichi). [pdf]
• On tight spans for directed distances, Annals of Combinatorics 16 (2012), 543-569 (with S. Koichi). [pdf]
• Half-integrality of node-capacitated multiflows and tree-shaped facility locations on trees, Mathematical Programming, Series A 137 (2013), 503-530. [pdf]
• A note on multiflow locking theorem, Journal of the Operations Research Society of Japan 53 (2010), 149-156. [pdf]
• The maximum multiflow problems with bounded fractionality, Mathematics of Operations Research 39 (2014), 60-104. [pdf]
• Folder complexes and multiflow combinatorial dualities, SIAM Journal on Discrete Mathematics 25 (2011), 1119-1143. [pdf]
• Bounded fractionality of the multiflow feasibility problem for demand graph K_3 + K_3 and related maximization problems, Journal of Combinatorial Theory, Series B, 102 (2012), 875-899. [pdf]
• Metric packing for K_3 + K_3, Combinatorica 30 (2010), 295-326. [pdf]
• Tight spans of distances and the dual fractionality of undirected multiflow problems, Journal of Combinatorial Theory, Series B 99 (2009), 843-868. [pdf]
• Electric network classifiers for semi-supervised learning on graphs, Journal of the Operations Research Society of Japan 50 (2007), 219-232 (with K. Murota and M. Rikitoku).
• Characterization of the distance between subtrees of a tree by the associated tight span, Annals of Combinatorics 10 (2006), 111-128. [pdf]
• A geometric study of the split decomposition, Discrete and Computational Geometry 36 (2006), 331-361. [pdf]
• Greedy fans: A geometric approach to dual greedy algorithms, RIMS Preprint-1508, 2005.
• SVM kernel by electric network, Pacific Journal of Optimization. 1 (2005), 509-526 (with K. Murota and M. Rikitoku).
• M-convex functions and tree metrics, Japan Journal of Industrial and Applied Mathematics, 21 (2004), 391-401 (with K. Murota).
• Gradient descent for unbounded convex functions on Hadamard manifolds and its applications to scaling problems. 65th IEEE Symposium on Foundations of Computer Science (FOCS 2024), pp.2387--2402
(with K. Sakabe)
• Polyhedral clinching auctions for indivisible goods. 19th Conference on Web and Internet Economics (WINE 2023), LNCS 14413, 2024, pp. 366--383 (with R. Sato)
• Interior-point methods on manifolds: theory and applications, 64th IEEE Symposium on Foundations of Computer Science (FOCS 2023), pp. 2021--2030.(with H. Nieuwboer and M. Walter)
• Minimum 0-extension problems on directed metrics, 45th International Symposium on Mathematical Foundations of Computer Science (MFCS 2020), LIPIcs 170, 2020, 46:1--46:13.(with R. Mizutani)
• Node-connectivity terminal-backup, separately-capacitated multiflow, and discrete convexity, 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020),LIPIcs 168, 2020,
65:1--65:19. (with M. Ikeda)
• A combinatorial algorithm for computing the rank of a generic partitioned matrix with 2 x 2 submatrices, Integer Programming and Combinatorial Optimization - 21st International Conference (IPCO
2020) LNCS 12125, 2020, 196--208. (with Y. Iwamasa)
• Reconstructing phylogenetic tree from multipartite quartet system, Proceedings of the 29th International Symposium on Algorithms and Computation (ISAAC'18), LIPIcs 123, 2018, 57:1--57:13. (with
Y. Iwamasa)
• Beyond JWP: A tractable class of binary VCSPs via M-convex intersection, Proceedings of the 35th International Symposium on Theoretical Aspects of Computer Science (STACS'18), LIPIcs 96, 2018,
39:1--39:14. (with Y. Iwamasa, K. Murota, and S. Živný)
• A compact representation for minimizers of $k$-submodular functions, Proceedings of 4th International Symposium on Combinatorial Optimization (ISCO'16), LNCS 9849, 2016, pp. 381--392. (with T. Oki)
• Discrete convexity and polynomial solvability in minimum 0-extension problems, Proceedings of the 24th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA'13), 2013. pp.1770-1788.
• The maximum multiflow problems with bounded fractionality, Proceedings of the 42nd ACM International Symposium on Theory of Computing (STOC'10), 2010 pp.115-120.
Other publications:
• A Linear programming formulation for routing asynchronous power systems of the Digital Grid, The European Physical Journal Special Topics 223, 2611-2620 (2014), (with Kyohei Shibano, Reo Kontani,
Mikio Hasegawa, Kazuyuki Aihara, Hisao Taoka, David McQuilkin, and Rikiya Abe).
• Optimization for centralized and decentralized cognitive radio networks, Proceedings of the IEEE 102 (2014), pp. 574--584 (with M. Hasegawa, K. Nagano, H. Harada, and K. Aihara).
• T_x-approaches to multiflows and metrics, In: S. Iwata(ed.), Combinatorial Optimization and Discrete Algorithms, RIMS Kokyuroku Bessatsu, B23 (2010), pp 107-130. [pdf]
|
{"url":"https://www.math.nagoya-u.ac.jp/~hirai.hiroshi/","timestamp":"2024-11-07T12:59:26Z","content_type":"text/html","content_length":"19932","record_id":"<urn:uuid:da80bd55-5de1-47bc-ac27-e0d8738638bd>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00075.warc.gz"}
|
Important Formulas For JEE (Main and Advanced); Download the Pdf
Learn and Download Important Formulas of Maths for JEE (Main and Advanced)
JEE Mathematics is one of the three subjects that are part of the Joint Entrance Exam (JEE). It is an important subject for aspirants who wish to pursue a career in engineering or related fields. The
syllabus for JEE mains Mathematics is vast and requires extensive practice and understanding of various formulas to excel in the exam.
In this article, we will discuss some essential JEE Mathematics formulas that aspirants must know and how to learn them.
👉 After taking the JEE Main exam, you can predict your rank using our JEE Main Rank Predictor 2024.
FAQs on Important Formulas for JEE (Main and Advanced) - Maths
1. Why are JEE Main Maths formulas important?
JEE Main maths formulas are important for aspirants who want to crack the JEE and get admission to top engineering colleges in India. Aspirants need to have a good understanding of these formulas to
solve problems and score well in the exam.
2. How can I memorize JEE Advanced maths formulas?
Regular practice and revision are the best ways to memorize JEE maths formulas. Aspirants must solve various practice problems and revise the formulas regularly to keep them fresh in their minds.
3. What are some of the essential JEE main and Advanced maths formulas?
Some of the essential JEE maths formulas include trigonometric formulas, quadratic equations, coordinate geometry formulas, differentiation formulas, and integration formulas.
4. How can I apply JEE maths formulas to solve problems?
To apply JEE maths formulas to solve problems, aspirants must first understand the problem statement and identify the relevant formula. They can then substitute the given values into the formula and
solve for the unknown variable.
5. How JEE Main and Advanced Maths Formula PDF help to score more in JEE exam?
Learning JEE Main and Advanced Maths formulas requires a combination of understanding, practice, and regular revision. By downloading the PDF of JEE Main and Advanced maths formulas, you will be able
to cover all the formulas needed for the JEE Main and Advanced exams in one go.
|
{"url":"http://eukaryote.org/maths-formulas.html","timestamp":"2024-11-10T14:37:39Z","content_type":"text/html","content_length":"410932","record_id":"<urn:uuid:143d7986-9ceb-40ce-82e9-7bc4632bf35d>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00663.warc.gz"}
|
Matlab Average | Implementation of Matlab Average
Updated March 14, 2023
Introduction to Matlab Average
Average, in statistics and mathematics, is computed by taking the sum of a group of objects or values and dividing the sum by the number of objects or values. Average is also referred to as ‘mean’.
The average of any data gives us an idea of the central tendency, i.e. it helps us get an intuition about a typical value in the data set. In MATLAB we use the ‘mean’ function to find the average.
For example, if the ages of people in a group of 5 are 22, 26, 34, 27, and 45, then the average age is given by (22 + 26 + 34 + 27 + 45) / 5 = 30.8
Below is the syntax of Matlab Average:
A = mean (M)
Explanation: A = mean(M) will return the average of all the elements of the array M. For matrix M, A = mean(M) will return the average of every column in M, in the form of a row vector
Examples to Implement Matlab Average
Let us now understand the code of mean function in MATLAB using different examples:
Example #1
In this example, we will take a 2 x 2 matrix and will find its average using the mean function.
For our first example, we will follow the following steps:
1. Create the 2 x 2 matrix
2. Pass the input matrix as an argument to the mean function
M = [0 5; 3 6;] A = mean (M)
Explanation: First, Creating the 2 x 2 matrix. Passing the matrix ‘M’ as an input to the mean function. The mean function will find the average of elements in each column and will return a 1 x 2-row
vector. Mathematically, the averages of elements of columns 1 and 2 are 1.5 and 5.5 respectively. As we can see in the output, we have obtained the average of column 1 as 1.5 and column 2 as 5.5,
which is the same as expected by us.
Example #2
In this example, we will take a 3 x 3 matrix and will find its average using the mean function.
For this example, we will follow the following steps:
1. Create the 3 x 3 matrix
2. Pass the input matrix as an argument to the mean function
M = [3 5 9; 3 2 1; -4 5 8] A = mean (M)
Explanation: First, we create the 3 x 3 matrix and pass it as an input to the mean function. The mean function finds the average of the elements in each column and returns a 1 x 3 row vector. Mathematically, the averages of columns 1, 2, and 3 are 0.6667, 4, and 6 respectively. As we can see in the output, we obtain the average of column 1 as 0.6667, of column 2 as 4, and of column 3 as 6, as expected.
In the above 2 examples, the average was calculated along each column because by default, this is how the mean function works. Next, we will learn how to find the average along the rows of a matrix
using the mean function.
Example #3
In this example, we will take a 2 x 2 matrix and will find its average along the rows, using the mean function.
For this example, we will follow the following steps:
1. Create the 2 x 2 matrix.
2. Pass the input matrix as the first argument to the mean function.
3. Pass the second argument as ‘2’.
M = [5 2; 4 11;] A = mean (M, 2)
Explanation: First, we create the 2 x 2 matrix and pass it as the first argument to the mean function. The second argument ‘2’ ensures that the mean is calculated along the rows of the matrix. The mean function finds the average of the elements in each row and returns a 2 x 1 column vector. Mathematically, the averages of rows 1 and 2 are 3.5 and 7.5 respectively. As we can see in the output, we obtain the average of row 1 as 3.5 and of row 2 as 7.5, as expected.
Example #4
In this example, we will take a 3 x 3 matrix and will find its average along the rows, using the mean function.
For this example, we will follow the following steps:
1. Create the 3 x 3 matrix
2. Pass the input matrix as the first argument to the mean function
3. Pass the second argument as ‘2’
M = [2 4 6; 4 5 11; 5 6 1] A = mean (M, 2)
Explanation: First, we create the 3 x 3 matrix and pass it as the first argument to the mean function. The second argument ‘2’ ensures that the mean is calculated along the rows of the matrix. The mean function finds the average of the elements in each row and returns a 3 x 1 column vector. Mathematically, the averages of rows 1, 2, and 3 are 4, 6.6667, and 4 respectively. As we can see in the output, we obtain the average of row 1 as 4, of row 2 as 6.6667, and of row 3 as 4, as expected.
The ‘mean’ function is used in MATLAB to find the average of a matrix or an array. By default, the mean function computes the average along the columns of the input matrix. We can pass a second argument of ‘2’ if we need the average along the rows of the matrix.
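The default column-wise behaviour and the row-wise option described above can be sketched in plain Python (an illustrative analogue of MATLAB's mean, not MATLAB itself; the function name matrix_mean is my own):

```python
# Illustrative analogue of MATLAB's mean: averages each column by default
# (dim=1), or each row when dim=2 is requested.
def matrix_mean(m, dim=1):
    if dim == 1:
        cols = zip(*m)  # transpose the list-of-rows to iterate over columns
        return [sum(c) / len(c) for c in cols]
    return [sum(row) / len(row) for row in m]

M = [[0, 5], [3, 6]]
print(matrix_mean(M))         # column averages -> [1.5, 5.5], as in Example #1
print(matrix_mean(M, dim=2))  # row averages    -> [2.5, 4.5]
```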
Exercise 6.2: Mathematical Expectation - Problem Questions with Answer, Solution
Exercise 6.2
1. Find the expected value for the random variable of an unbiased die
2. Let X be a random variable defining number of students getting A grade. Find the expected value of X from the given table
3. The following table is describing about the probability mass function of the random variable X
Find the standard deviation of x.
4. Let X be a continuous random variable with probability density function
Find the expected value of X .
5. Let X be a continuous random variable with probability density function
Find the mean and variance of X .
6. In an investment, a man can make a profit of ₹ 5,000 with a probability of 0.62 or a loss of ₹ 8,000 with a probability of 0.38. Find the expected gain.
7. What are the properties of Mathematical expectation?
8. What do you understand by Mathematical expectation?
9. How do you define variance in terms of Mathematical expectation?
10. Define Mathematical expectation in terms of discrete random variable.
11. State the definition of Mathematical expectation using continuous random variable.
12. In a business venture a man can make a profit of ₹ 2,000 with a probability of 0.4 or have a loss of ₹ 1,000 with a probability of 0.6. What are the expected value, variance and standard deviation of his gain?
13. The number of miles an automobile tire lasts before it reaches a critical point in tread wear can be represented by a p.d.f.
Find the expected number of miles (in thousands) a tire would last until it reaches the critical tread wear point.
14. A person tosses a coin and is to receive ₹ 4 for a head and is to pay ₹ 2 for a tail. Find the expectation and variance of his gains.
15. Let X be a random variable and Y = 2X + 1. What is the variance of Y if variance of X is 5 ?
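As a quick numerical check of question 14 above (receive ₹ 4 for a head, pay ₹ 2 for a tail, each with probability 0.5), the expectation and variance can be computed directly from their definitions. This is an illustrative sketch; the helper names are mine, not from the text:

```python
# A distribution is a list of (value, probability) pairs.
def expectation(dist):
    return sum(p * x for x, p in dist)

def variance(dist):
    mu = expectation(dist)
    return sum(p * (x - mu) ** 2 for x, p in dist)  # Var(X) = E[(X - mu)^2]

gains = [(4, 0.5), (-2, 0.5)]  # +4 for a head, -2 for a tail
print(expectation(gains))  # 1.0
print(variance(gains))     # 9.0
```

The same helpers also confirm question 15: Var(2X + 1) = 4·Var(X), so a variance of 5 for X gives 20 for Y.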
Bending moment and shear force diagrams
Bending Moment and Shear Force diagrams
What is Bending Moment?
The element bends when a moment is applied to it. Every structural element has bending moment. Concept of bending moment is very important in the field of engineering especially Civil engineering and
Mechanical Engineering.
Unit of measurement: newton-metres (N-m) or foot-pounds (ft-lb)
Bending moment is directly proportional to tensile and compressive stresses. Increase in tensile and compressive stresses results in the increase in the bending moment. These stresses also depend on
the second moment of area of the cross section of the element.
What is Shear stress?
Shear stress is defined as the measure of force per unit area. Shear stress occurs in shear plane. There are many planes possible at any point in a structure which can be defined to measure stress.
Stress = Force/Unit area
Example: Bending Moment and Shear Force Calculations
Frame diagrams | Bending moment and shear force calculations
Simply supported bending moment
M[ab] = wl^2/8 = (22×4.14×4.14)/8
= 47.13 KN-m
M[bc] = wl^2/8 = (22×4.14×4.14)/8
= 47.13 KN-m
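The simply supported bending moment above can be reproduced with a short sketch. The function name is mine, and I am assuming w is the load intensity in kN/m and l the span in metres, as the kN-m result suggests:

```python
# Maximum bending moment of a simply supported beam under a
# uniformly distributed load: M = w * l^2 / 8.
def max_bending_moment(w, l):
    return w * l ** 2 / 8

# Reproduces M[ab] = (22 x 4.14 x 4.14) / 8 from the example above.
print(round(max_bending_moment(22, 4.14), 2))  # 47.13
```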
[C/C++ backend development and learning] 1: sorting, KMP, linked lists
1 sort
Comparison of sorting algorithms
Stability of sorting algorithm:
Assuming that there are multiple records with the same keyword in the sequence to be sorted, if the relative order of these records remains unchanged after sorting, the sorting algorithm is said to
be stable. In short, the same records in the sequence will not be exchanged with each other, reducing unnecessary overhead.
1.1 Shell sorting
Shell sort is also called diminishing increment sort. It exploits the fact that insertion sort is fast on nearly sorted data: it first brings the sequence close to order, then completes the final sort with an insertion sort.
Working process: choose a sequence of decreasing increments; for each increment, divide the sequence into groups of elements spaced that increment apart, and insertion-sort each group; then reduce the increment and repeat. When the increment reaches 1, the last pass is an ordinary insertion sort of the whole sequence, but because the sequence is by then basically ordered, this pass is very efficient.
Shell sorting diagram (the picture is reproduced)
Code implementation 1 - straightforward Shell sort:
int shell_sort_orig(int *data, int length) {
    int gap = 0; // increment
    int i = 0, j = 0, k = 0;
    for (gap = length / 2; gap > 0; gap /= 2)          // loop1: keep halving the increment
        for (i = 0; i < gap; i++)                      // loop2: one group per starting offset
            for (j = i + gap; j < length; j += gap) {  // loop3: insertion-sort outer loop of one group
                int temp = data[j];
                for (k = j - gap; k >= 0 && temp < data[k]; k -= gap) // loop4: insertion-sort inner loop of the group
                    data[k + gap] = data[k];
                data[k + gap] = temp;
            }
    return 0;
}
In fact, loop2 and loop3 can be merged together.
Code implementation 2 - concise Shell sort:
int shell_sort(int *data, int length)
{
    int gap = 0; // increment
    int i = 0, j = 0;
    for (gap = length / 2; gap >= 1; gap /= 2) // loop1: incremental grouping
        for (i = gap; i < length; i++)         // loop2+3: traverse every group in one pass
        {
            int temp = data[i];
            for (j = i - gap; j >= 0 && temp < data[j]; j -= gap) // loop4: sort within the group
                data[j + gap] = data[j];
            data[j + gap] = temp;
        }
    return 0;
}
1.2 quick sort
Quick sort adopts a divide-and-conquer strategy. First, select a pivot and partition the sequence into a part less than the pivot and a part greater than the pivot; then select a pivot within each of those two parts and repeat the same operation, until the subsequences can no longer be divided, at which point the sort is complete. It resembles a binary search tree (BST), and it is naturally implemented by recursion.
In essence, each quick-sort pass over a subsequence places that subsequence's pivot in its correct final position (the position it will occupy once sorting is complete), and there are n pivots to place in total. In the best case the sequence is divided only log(n) times (each partition splits it roughly in half, similar to a balanced BST), so the total time complexity is O(n log(n)). If the pivots are chosen badly, however, the number of divisions can reach n, giving a worst-case time complexity of O(n^2).
Generally, the first element of the subsequence is selected as the pivot.
Code implementation:
// Each recursive call puts one pivot value into its correct final position
int sort(int *data, int left, int right)
{
    if (left >= right) return 0; // recursion ends
    int i = left;
    int j = right;
    int key = data[left]; // take the first element as the pivot
    while (i < j)
    {
        while (i < j && key < data[j])
            j--;
        data[i] = data[j]; // find an element smaller than the pivot on the right and move it to the left
        while (i < j && key >= data[i])
            i++;
        data[j] = data[i]; // find an element larger than the pivot on the left and move it to the right
    }
    // until i == j
    data[i] = key; // the pivot's correct position is now fixed
    sort(data, left, i - 1);  // recurse into the left subsequence
    sort(data, i + 1, right); // recurse into the right subsequence
    return 0;
}
2 KMP algorithm
KMP function: used to improve the efficiency of string matching. After using KMP algorithm, the time complexity of matching is O(n+m).
Interview question: write a function to find the first position of the substring pattern (length m) in the string text (length n).
The time complexity of violent matching is O(n*m). In order to speed up the matching speed, we should consider how to reduce the number of comparisons. The idea is: each time the comparison fails,
you will get some information about text. You can use this information to skip some subsequent string comparison processes that are unlikely to succeed as much as possible, so as to achieve the
purpose of acceleration. (Reference: How to better understand and master KMP algorithm?)
For the main string S and pattern string P in the figure below, comparison starts at S[0]. When the match fails between the last character P[5] of P and S[5], the next possible successful match occurs after shifting P right by 3 positions so that P[0] lines up with S[3]. Since it is known that P begins and ends with the same substring "ab", which matches a corresponding pair of "ab" substrings in S, the two invalid comparisons in between can be skipped: move P[0] directly to S[3] and continue comparing from S[5].
Matching example (the picture is modified after Reprint)
So how to find the same substring?
2.1 introduction of some concepts
Prefix: the sequential combination of other consecutive characters except the last character;
Suffix: sequential combination of other consecutive characters except the first character;
Given a string "ABCABCD" of length n = 7, we can take its six leading substrings of length 1 to 6 (every length except the full string). For each such substring we can list its prefixes and suffixes and find the maximum length k of a common prefix and suffix. Once this maximum common length is found, it can be used to speed up the string-matching process described above: it is not hard to see that k is exactly where the pattern string P resumes in the next round of matching, i.e. the comparison can skip ahead and continue from P[k]. The process of finding this k is the key to the KMP algorithm.
(the picture is reproduced)
2.2 how to calculate the maximum number of common elements k (that is, find the next [] array)
The so-called next [] array is used to record where the next matching should continue when a mismatch occurs. When a mismatch occurs to P[j], it indicates that P[j-1] is the last matching position,
so let j=next[j-1], and then start the next round of matching from P[j]. It can be imagined that next[j-1] is the maximum number of common elements k of the pattern string with length j.
(the picture is reproduced)
2.2.1 intuitive method - direct traversal
For each pattern string, find out the maximum length that its prefix and suffix can completely match. This requires two layers of for loops. The outer loop constructs each pattern string, and the
inner loop calculates the maximum common length. It is easy to understand that its time complexity is O(m^2).
int make_next(char* pattern, int next[])
{
    int patLen = strlen(pattern); // length of the pattern
    int tailIdx = 0;              // index of the last character of the current pattern prefix
    int subPatLen = 0;            // candidate common prefix/suffix length
    for (tailIdx = 0; tailIdx < patLen - 1; tailIdx++) // outer loop: each pattern prefix; note patLen-1
    {
        next[tailIdx] = 0; // initialize to 0 first
        for (subPatLen = tailIdx; subPatLen > 0; subPatLen--) // inner loop: shrink the candidate length
        {
            if (memcmp(&pattern[0], &pattern[tailIdx - (subPatLen - 1)], subPatLen) == 0)
            { // the prefix and suffix agree, so this is the largest common length k
                next[tailIdx] = subPatLen;
                break; // stop at the first (largest) match
            }
        }
    }
    return 0;
}
2.2.2 quick method
Core idea: pattern matches itself. In short, the next of the larger pattern string can be calculated quickly according to the calculated next of the smaller pattern string, so as to reduce the number
of times to search the largest common string by backtracking the pattern string.
Take P="abcabdabcd" as an example:
• First, next[0]=0, which is inevitable. Therefore, start from next[1].
•   Let q point to the tail of the current pattern prefix; as shown in the figure, q = 1 points to the tail of the prefix of length q + 1. Initially let k point to the head, k = 0; k is then moved while traversing to compute next. Clearly, for the prefix "ab", P[k] != P[q], so there is no common prefix/suffix: next[1] = 0, k stays unchanged, and q = q + 1 = 2.
•   Likewise, when q = 2, for the prefix "abc", P[k] != P[q]: there is no common prefix/suffix, next[2] = 0, k stays unchanged, q = q + 1 = 3.
• Then, for the pattern string "abca", P[k] == P[q], intuitively, there is next[3]=1. But in fact, q = 3 (pattern string "abca") is matched on the basis of q = 2 (pattern string "abc"), so it
should be considered that next[3]=next[2]+1; At this point, the matching starts. You can add 1 to K and Q at the same time and move back to see if there is a longer matching.
• According to this idea, the subsequent q=4 and k=1 are also matched. Therefore, for the larger pattern string "abcab", the maximum common element is extended based on the pattern string "abca",
so next[4]=next[3]+1.
• Until q=5 and k=2 in the figure below, there is a mismatch for "abcabd"; it can be seen intuitively from the figure that there must be next[5]=0. From the perspective of program, it is enough to
backtrack the prefix and suffix of pattern string "abcabd" according to the method in 2.2.1, but how to quickly calculate next[5] Based on the previously calculated next[0~4] What? The pattern
string at this time can't explain the problem. According to this idea, let's continue to traverse backward.
• Until we get to the following figure. In case of mismatch, we need to backtrack the pattern string "abcabdabc" to find the largest possible common substring that can be matched. It is not
difficult to find that the previously matched substring "abcab" is consistent with the pattern string "abcab" of next[4] (i.e. next[k-1]). Therefore, the problem is converted to the pattern
string "abcab" in next[4] Find a maximum substring that can continue the current matching. Before, when the pattern string "abcab" matches the maximum substring, there is k=next[4], we can
conclude that there should be k=next[k-1] at this time.
•   Of course, after setting k = next[k-1] = 2, a usable substring does not automatically appear; it still requires P[k] == P[q]. If that holds, the current pattern prefix can continue matching on the basis of the substring corresponding to k = 2, so next[q] = k + 1. Otherwise, keep setting k = next[k-1] and testing again; once k = 0 with no match, no substring can satisfy the matching, meaning the prefix has no common prefix/suffix: next[q] = 0, and q is incremented to continue with the next pattern prefix.
Code implementation:
int make_next(char* pattern, int next[])
{
    int patLen = strlen(pattern);
    int prefixTail = 0; // k - one past the tail of the current matched prefix
    int suffixTail = 0; // q - tail of the current pattern prefix
    next[0] = 0;
    for (suffixTail = 1; suffixTail < patLen; suffixTail++)
    {
        while (prefixTail > 0 && pattern[suffixTail] != pattern[prefixTail])
            prefixTail = next[prefixTail - 1]; // on mismatch, fall back to the previous substring: k = next[k-1]
        if (pattern[suffixTail] == pattern[prefixTail])
            prefixTail++;                      // on match, extend the prefix length: k++
        next[suffixTail] = prefixTail;
    }
    return 0;
}
2.3 implementation of KMP algorithm and test program
int kmp(char* text, char* pattern)
{
    int next[20] = {0}; // the actual length of next should match the length of pattern
    int t = 0;          // index into text
    int p = 0;          // index into pattern
    int patLen = strlen(pattern);
    make_next(pattern, next);
    while (text[t])
    {
        if (text[t] != pattern[p])
        {
            if (p == 0)
                t++;             // no partial match: advance in text
            else
                p = next[p - 1]; // reuse the matched prefix: t stays put, retry a shorter pattern prefix
        }
        else
        {
            t++;
            p++;                 // characters match: advance both indices
        }
        if (p == patLen)         // complete match: return the start position
            return t - patLen;
    }
    return -1;
}
/* Test program */
int main()
{
    char *text = "eabcabcabcabcabcdabc";
    char *pattern = "abcabcd";
    int pos = 0;
    int i;
    pos = kmp(text, pattern);
    printf("## KMP result: %d\n", pos);
    printf("## %s\n## ", text);
    for (i = 0; i < pos; i++)
        printf(" ");
    printf("%s\n", pattern);
    return 0;
}
Test result output:
## KMP result: 10
## eabcabcabcabcabcdabc
## abcabcd
3 common problems of linked list
Define linked list nodes:
struct LinkNode {
    int val;
    LinkNode *next;
};
3.1 judge whether the linked list has a ring
Fast and slow pointers: this is still a traversal, but the slow pointer moves one node at a time while the fast pointer moves two nodes at a time. If there is a ring, the two will eventually meet. The time complexity is O(n), but this alone does not identify the node where the ring begins.
/* The fast/slow pointers determine whether the linked list has a ring */
bool myLink::isListLooped(LinkNode* h) const
{
    if (h == nullptr)
        return false;
    LinkNode *slow = h;       // slow pointer, advances 1 node at a time
    LinkNode *fast = h->next; // fast pointer, advances 2 nodes at a time
    while (slow != nullptr && fast != nullptr)
    {
        if (slow == fast)
            return true;
        slow = slow->next;
        fast = (fast->next != nullptr) ? fast->next->next : fast->next;
    }
    return false;
}
3.2 judge whether the two linked lists overlap
Brute-force solution: time complexity O(m*n).
Node-count difference method: linked list A has m nodes, linked list B has n nodes, with m > n; then linked list A starts traversing from its (m-n)-th node while linked list B still starts from node 0, and the two are compared node by node. Complexity O(n).
Related trick: for a list of n nodes, to find the k-th node from the end, use two pointers: one goes ahead by k nodes first, then both advance together; when the leading pointer reaches the last node, the trailing pointer points to the k-th node from the end.
/* Determine in O(n) whether the linked lists intersect; the intersection node is returned */
LinkNode* myLink::isListCrossed(LinkNode* h1, LinkNode* h2) const
{
    int nodeNum1 = countListNodeNum(h1); // count the nodes of list 1
    int nodeNum2 = countListNodeNum(h2); // count the nodes of list 2
    int nodeNumMinus = 0;
    if (nodeNum1 > nodeNum2)
    {
        nodeNumMinus = nodeNum1 - nodeNum2;
        h1 = getNthNode(h1, nodeNumMinus); // the longer list 1 goes ahead by the difference
    }
    else
    {
        nodeNumMinus = nodeNum2 - nodeNum1;
        h2 = getNthNode(h2, nodeNumMinus); // the longer list 2 goes ahead by the difference
    }
    while (h1 != nullptr && h2 != nullptr)
    {
        if (h1 == h2)
            return h1; // found the intersection of the two
        h1 = h1->next;
        h2 = h2->next;
    }
    return nullptr;
}
3.3 linked list inversion
Traverse the linked list, and use three pointers to reverse the directions of the front and rear nodes in place.
LinkNode* myLink::reverseList(LinkNode* h)
{
    LinkNode* prev = nullptr;
    LinkNode* current = h;
    LinkNode* next;
    while (current != nullptr)
    {
        next = current->next; // safe only because current != nullptr is guaranteed here
        current->next = prev;
        prev = current;
        current = next;
    }
    return prev;
}
3.4 how to find the k-th node from the end in a one-way linked list
Recursion is simpler to write, but a loop is more efficient.
Method: use two pointers, front and back. The leading pointer goes ahead by k nodes first, then both advance at the same time until the leading pointer reaches the tail of the linked list; the node pointed to by the trailing pointer is then the k-th node from the end.
LinkNode* myLink::rfindNodeNo(LinkNode* h, const int & k) const
{
    LinkNode* tail;
    if (k <= 0)
        return nullptr;
    tail = getNthNode(h, k - 1); // find the (k-1)-th node; indices start from 0, hence k-1
    if (tail == nullptr)
        return nullptr;
    while (tail->next != nullptr)
    { // both pointers move forward at the same time
        tail = tail->next;
        h = h->next;
    }
    return h;
}
Supplement 3.5: how to free a one-way linked list that contains a ring?
Problem: after the node at the ring's entry point is freed, the next pointer of the nominal last node of the list still points to that node's original address. Since that pointer is not nullptr, it is impossible to tell directly whether the traversal has reached the tail of the list.
This problem may also exist when overlapping linked lists are released.
Variables and Coefficients
In this lesson, we will learn about variables and coefficients in algebraic equations.
Key Terms:
• Equations show that two things are the same or equal using the equal sign "="
• Variables are lower case letters used in algebraic equations to replace an unknown number
• Coefficients are used in equations to multiply variables
Parts of Equations
Solving Algebraic Equations
Let's take a look at the expression above.
3n + 9
In order to evaluate it, we need to know the value of the variable "n". Let's say that n is equal to 17.
So we first multiply: 3 x 17 = 51. Then we add: 51 + 9 = 60.
So when n = 17, the algebraic expression 3n + 9 is equal to 60.
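The substitution above can be sketched as a tiny program (illustrative only; the function name is mine):

```python
# Evaluate coefficient * variable + constant, e.g. 3n + 9 with n = 17.
def evaluate(coefficient, variable, constant):
    return coefficient * variable + constant

print(evaluate(3, 17, 9))  # 3 * 17 + 9 = 60
```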
Article 100 Definitions. PV String Circuit.
Code Change Summary: A new definition of a “PV String Circuit” was added to Article 100 and several previous PV related definitions were deleted.
In the 2023 NEC®, a new definition of a PV String Circuit was added to Article 100. A PV string circuit includes PV source circuit conductors of one or more series-connected PV modules.
For years, common industry terminology for a group of PV modules wired in series has been “string”, “series string”, or “string circuit”. The NEC® has never had a definition of a PV String Circuit until now.
Additionally, several previous PV definitions have been deleted and/or consolidated and are now considered a “PV DC Circuit (PV System DC Circuit)” which applies to “any dc conductor in PV source
circuits, PV string circuits, and PV dc-to-dc converter circuits”.
The deleted PV related terminology is no longer used in Article 690 and therefore the definitions are no longer needed. The following changes occurred as well as the relocation of the definitions
from 690.2 to Article 100:
• DC-to-DC Converter Output Circuit – Deleted but now covered under the definition of a DC-to-DC Converter Circuit in Article 100, and also considered a PV DC Circuit.
• DC-to-DC Converter Source Circuit - Deleted but now covered under the definition of a DC-to-DC Converter Circuit in Article 100, and also considered a PV DC Circuit.
• PV Output Circuit – Deleted but now covered under the definition of a PV Source Circuit in Article 100, and also considered a PV DC Circuit.
In past editions of the NEC® a PV Output Circuit was created when two or more series strings were connected in parallel. Now, this circuit will simply be incorporated into the revised definition of a PV Source Circuit to reduce confusing terminology.
The move to update the PV terminology was based largely in part from a public input from the PV Industry Forum (PVIF) who suggested that the current definitions and usage of the terminology
throughout Article 690 has caused confusion among inspectors, installers, and system designers for years.
Below is a preview of the NEC®. See the actual NEC® text at NFPA.ORG for the complete code section. Once there, click on their link for free access to the 2023 NEC® edition of NFPA 70.
2020 Code Language:
690.2 Definitions.
DC-to-DC Converter Output Circuit. The dc circuit conductors connected to the output of a dc combiner for dc-to-dc converter source circuits.
DC-to-DC Converter Source Circuit. Circuits between dc-to-dc converters and from dc-to-dc converters to the common connection point(s) of the dc system.
PV Output Circuit. The dc circuit conductors from two or more connected PV source circuits to their point of termination.
PV Source Circuit. The dc circuit conductors between modules and from modules to dc combiners, electronic power converters, or a dc PV system disconnecting means.
2023 Code Language:
Article 100 Definitions.
• N PV DC Circuit (PV System DC Circuit). Any dc conductor in PV source circuits, PV string circuits, and PV dc-to-dc converter circuits. (690)
• N PV DC Circuit, Source. (PV Source Circuit). The PV dc circuit conductors between modules in a PV string circuit, and from PV string circuits or dc combiners, to dc combiners, electronic power
converters, or a dc PV system disconnecting means. (690)
• N PV DC Circuit, String. (PV String Circuit). The PV source circuit conductors of one or more series-connected PV modules. (690)
Maintainers: gaboroszkar@protonmail.com (Gábor Oszkár Dénes); sci@gentoo.org (Gentoo Science Project); proxy-maint@gentoo.org (Proxy Maintainers)
The evaluation of a mathematical expression is a standard task required in many applications. It can be solved either by using a standard math expression parser such as muparser or by embedding a scripting language such as Lua. There are, however, some limitations: although muparser is pretty fast, it only works with scalar values, and although Lua is very flexible, it supports neither binary operators for arrays nor complex numbers. So if you need a math expression parser with support for arrays, matrices and strings, muparserX may be able to help you.
Upstream repository: beltoforion/muparserx
Design of Small Groups
Figure 2 shows a graph of the best possible decision making accuracy vs group size. To achieve this level requires that the decision be made unanimously following an extended discussion. The Rules
for Optimizing Small Groups shown at the top of the page must be followed to achieve this performance. These rules are designed to insure that all options are identified and that they are fully
discussed. If the group is composed of qualified individuals with the basic knowledge to make the decision and they behave in a rational manner, a single enlightened group member should be able to
persuade the group to select the right alternative. On this basis the only way the group would fail is if no one in the group selects a good alternative.
Majority Rules Decision Accuracy
There is a tendency in groups to make decisions by voting, in which the majority rules. While this speeds the decision making process, it reduces the accuracy and should be used only for minor decisions.
To study this situation we resort to finding areas under the binomial distribution using the following relationship:
P(correct) = Σ (from x = m to n) [ n! / ( x! (n − x)! ) ] p^x q^(n−x)

where:
n = number of group members
m = the minimum size for a majority
p = the probability of being right, or 0.6
q = the probability of being wrong, or 0.4
note: q = (1 − p)
Figure 3 displays the results of majority-rules decision making for various sized groups. Note the odd-looking saw-tooth appearance, which gives even-numbered groups a lower probability of making the correct decision. The explanation is that these groups can have a tie. For example, a group of four will only have a majority 75% of the time; the other 25% of the time will be a tie vote, which obviously does not result in a correct decision. Odd-numbered groups will have a majority on every vote. Yes, even-numbered groups can often resolve tie votes, but it costs more time and effort to do so.
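The binomial sum above can be evaluated directly. A sketch (the function name is mine) reproducing the even/odd saw-tooth for individual accuracy p = 0.6, where a tie in an even-numbered group counts as a failure:

```python
from math import comb

# Probability that at least a strict majority of n members chooses
# correctly, given individual accuracy p; ties count as failures.
def majority_accuracy(n, p=0.6):
    m = n // 2 + 1  # minimum size of a strict majority
    q = 1 - p
    return sum(comb(n, x) * p**x * q**(n - x) for x in range(m, n + 1))

print(round(majority_accuracy(4), 4))  # 0.4752 -- even group, ties waste votes
print(round(majority_accuracy(5), 4))  # 0.6826 -- an odd group of five does better
```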
Figure 4. shows that a group size of five is optimum. Five takes advantage of the desirability of odd numbers for majority rules decisions. For the unanimous decision making style a group of five
will have a 99% accuracy assuming 60% individual accuracies and that a single person with the right answer can convince the others. Even with only 50% individual accuracies the group accuracy will
average 96.9%. Adding additional members will not greatly improve accuracy. However, additional members will significantly increase group management problems since the number of possible social
interactions increases rapidly.
The data presented does not mean that every group has to contain exactly five members. There are other factors to consider such as the implementation of decisions. This often requires buy-in by the
stakeholders. Placing stakeholders in the group can speed implementation. However, important decision making groups should not be expanded unless there is an overriding reason to do so.
|
{"url":"http://www.intuitor.com/statistics/SmallGroups.html?ref=review.firstround.com","timestamp":"2024-11-11T18:14:39Z","content_type":"text/html","content_length":"25383","record_id":"<urn:uuid:28298a3a-6985-4048-ae7e-9f0ad8fc55b7>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00588.warc.gz"}
|
ZPEnergy.com - Revealing the hidden connection between pi and Bohr's hydrogen model
Revealing the hidden connection between pi and Bohr's hydrogen model
Date: Sunday, November 22, 2015 @ 00:26:38 UTC
Topic: Science
To: Dr. Carl R. Hagen, University of Rochester, Dr. Tamar Friedmann, University of Rochester
Cc: Dr. Drew Milsom, University of Arizona
Dear Dr. Hagen and Dr. Friedmann,
Your work on the hidden connection between pi and Bohr's hydrogen atom is very interesting, because it awakens an old unanswered question of Quantum Mechanics regarding the incompatibility between Bohr's model of the atom and the model of the atom adopted in Quantum Mechanics.
Revealing the hidden connection between pi and Bohr's hydrogen model
Dr. Drew Milsom comments on the work, saying:
“It is ultimately unsurprising that the pi formula emerged from the quantum solution because, as Friedmann herself points out, "mathematical formulae come up in physics all the time". She adds that
finding the link "is a manifestation of the ultimate connection between math and physics", but whether there exists some deeper, fundamental correlation between the two remains unknown.”
However, the link you found between pi and Bohr's hydrogen model has no chance of being considered seriously as a connection between math and physics, because Bohr's atom model is considered wrong by the whole community of physicists, since his model is incompatible with the atom model of Quantum Mechanics, which is the theory considered to be correct. Therefore, as the Bohr model is considered wrong, from this viewpoint his model cannot have any connection with the physical structure of the atom existing in the Nature, and thereby it makes no sense to look for a link between pi and Bohr's hydrogen atom, because the physical structure of the atom existing in the Nature cannot have a connection with pi, as a consequence of the fact that the atom model of Quantum Mechanics cannot have a connection with pi.
The link found by you, dears Dr. Hagen and Dr. Friedmann, makes sense only if the atom Model of Quantum Mechanics can be replaced by a new model of atom, compatible with pi.
In short, the link found by you requires to consider that the atom model of Quantum Mechanics is incomplete.
Happily, there are strong evidences suggesting that the atom of Quantum Mechanics cannot be entirely correct, and I discuss them ahead.
First of all, there is a need to mention the fact that the Schrödinger Equation cannot be applied to the atom model of Quantum Mechanics, as I emphasized during a discussion with the Nobel Laureate in Physics Dr. Brian Josephson at the beginning of 2015. My argument, shown to Dr. Josephson, is published in the Book Description of my book “The Evolution of Physics”, published on Amazon.com:
In summary, the reason why the Schrödinger Equation cannot be applied to the atom model of Quantum Mechanics is obvious: Schrödinger developed his equation by considering a free electron, and therefore his equation cannot be applied to an electron moving within a potential, as happens when the electron is moving in the electrosphere of an atom.
In my book “The Missed U-Turn” it is shown that the Schrödinger Equation must be actually applied to my new model of hydrogen atom:
In the new hydrogen model of atom the electron moves with helical trajectory (zitterbewegung discovered by Schrödinger in the Dirac’s equation of the electron) within the electrosphere of the proton,
while the space of the electrosphere about the proton is non-Euclidian (and this is the reason why the electron moves with constant speed within the electrosphere, and so it justifies why the
Schrödinger Equation can be applied to the electron moving into the atoms). So, when Schrödinger has discovered his equation 90 years ago, he actually had discovered the equation for the electron
moving with helical trajectory within a non-Euclidian space.
But there is other unsolved puzzle proving that the atom model of Quantum Mechanics is incomplete, as explained in my book The Missed U-Turn, in the Chapter 19, entitled “END OF THE MYSTERY OF BOHR’S
SUCCESSES”, regarding the mystery of the successes of the Bohr’s hydrogen model of atom. Because one of the most astonishing mysteries of Physics is the following:
why is the Bohr’s theory able to supply so many spectacular successes, since his model is wrong, and incompatible with the atom model of Quantum Mechanics?
As emphasized by Schrödinger, the successes of the Bohr theory of the atom cannot be accidental. Something must be correct in the Bohr model, concerning the instant when the atom emits a photon.
However the mechanism of photon emission by the atom model considered in Quantum Mechanics can be 100% correct only if the mechanism of photon emission in the Bohr model is 100% incorrect, because
the two mechanisms are totally incompatible, since in the Bohr model the centripetal acceleration on the electron plays a role in the photon emission, and the centripetal force on the electron cannot
exist in the atom model of Quantum Mechanics. As the Bohr model cannot be 100% incorrect, it means that Quantum Mechanics cannot be 100% correct. This is one of the most intriguing paradoxes of Quantum Mechanics. But physicists tend not to mention it, since it implies that the atom model of Quantum Mechanics cannot be correct.
So, the mystery of the successes of Bohr's model has an important virtue: it proves that Quantum Mechanics cannot be correct. While most physicists consider the successes of Bohr's model accidental, it is hard to believe this, as pointed out by Schrödinger in Chapter 4, page 89, of “On a Remarkable Property of the Quantum-Orbits of a Single Electron”, regarding the successes of the Bohr atom, where he said:
“It is difficult to believe that this result is merely an accidental mathematical consequence of the quantum conditions, and has no deeper physical meaning.”
In the Chapter 19 of my book The Missed U-Turn, entitled “END OF THE MYSTERY OF BOHR’S SUCCESSES” it is explained why the atom model of Quantum Mechanics cannot be entirely correct, as follows:
Note that, although the Bohr model is wrong, it contains a modicum of truth: the centripetal acceleration really exists at the instant the photon is emitted. The error of the model is to consider
that this centripetal acceleration is directly tied to the mechanism of the photon’s emission and this is not true because the emission occurs due to a mechanism of resonance, in which the
centripetal acceleration does not participate because the electron, moving in a helical trajectory, is subjected to centripetal acceleration all the time; that is, it is always affecting the
electron’s motion. However, at the moment when the photon is emitted the radius, RHT, of the helical trajectory is equal to the Bohr radius and the force, FP of proton-electron attraction is equal to
the ether’s force, FP on the electron. These coincidences are responsible for the success of his model.
On the other hand, the model of Quantum Mechanics, in spite of being wrong, also contains a modicum of truth. It is correct first because, from the mathematical viewpoint, it makes sense to consider
that the electron has no trajectory, since the electron’s position along the helical trajectory can be viewed as a probability distribution about the axis of the helical trajectory. Second because
the model is correct in considering that the mechanism of emission is due to a process of resonance. But the model is wrong because there is no chance of the electron having centripetal acceleration
and, therefore, there is no way of justifying the coincidences of the Bohr model. That is why most theoreticians support the viewpoint that the successes of Bohr are accidental, mere coincidences, and
his model has not a grain of truth. This is an uncomfortable position because, from the viewpoint of mathematical probability, it is impossible that it be mere coincidence. So, the theoreticians
prefer to bet that the impossible can happen and that the mathematics can be wrong rather than accept that the model of Quantum Mechanics can be wrong. In an article(16) in which the helical
trajectory of the electron for unifying the relativity with the quantum theory is proposed, the physicist Natarajan writes, commenting on the success of Bohr theory in explaining the spectral bands -
“But this significant success along with the other spectacular successes of Bohr’s theory of the hydrogen atom is now considered by physicists as ‘accidental’ after the development of Quantum Mechanics.”
The new model of atom proposed in my theory conciliates the Bohr’s theory of the atom with the Schrödinger Equation, and the reason why Bohr model of atom is so successful is explained in the Figures
46 and 47 of my book The Missed U-Turn, shown ahead:
Fig. 46:
Fig. 47
Dear Dr. Hagen and Dr. Friedmann,
Your work is additional strong evidence showing that the atom model of Quantum Mechanics cannot be correct, since the number pi is connected to a circular trajectory of the electron when an atom
emits a photon (while the atom model of Quantum Mechanics is incompatible with the circular trajectory). As already explained before here, as the successes of the Bohr’s theory cannot be accidental,
and as the centripetal force appears in his calculations, then obviously the centripetal force plays a fundamental role in the mechanism of emission of the atom existing in the Nature. Therefore the
atom model of Quantum Mechanics cannot be hundred percent correct, since it makes no sense to consider the centripetal acceleration on the electron in the atom model of Quantum Mechanics.
So, your work is very interesting, because it reinforces the need of considering a new model of atom, working with new fundamental principles (where the centripetal force plays a fundamental role in
the mechanism of the photon emission, as occurs in the actual atom existing in the Nature), as proposed in my Quantum Ring Theory, published in 2006:
Finally, perhaps my new hydrogen model of atom may help both of you to understand the missing link which "is a manifestation of the ultimate connection between math and physics" regarding the atom,
proving that concerning the atom there exists a deeper strong fundamental correlation between the two, since the math is just a description of the physical reality existing within the actual atom
existing in the Nature, unknown up to now by the community of physicists. As the true physical reality existing in the Nature does not exist in the atom of Quantum Mechanics, it is impossible to find
the link by considering the atom model of Quantum Mechanics. You found the link in the Bohr’s atom because his model is physically closer to the physical reality existing in the Nature than the atom
model of Quantum Mechanics, even though other laws postulated by Bohr are wrong.
We all have the duty of pursuing the truth, no matter whether at the end of the pursuit we find that Quantum Mechanics is not entirely correct.
Wladimir Guglinski
|
{"url":"http://zpenergy.com/modules.php?name=News&file=print&sid=3664","timestamp":"2024-11-06T05:45:13Z","content_type":"text/html","content_length":"15440","record_id":"<urn:uuid:443b0d2c-cc1b-4d9b-ac5c-68520c04f81a>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00098.warc.gz"}
|
Area of a Triangle Calculator
Area of a Triangle Calculator - Find Triangle Area
Select a calculation method and enter the values required.
Our Area of Triangle calculator is a quick and easy way to calculate the area of a given triangle using its Base and Height, Three Sides, Two Sides and Included Angle, or Two Angles and Included Side.
You've probably seen triangles all around you – in the shape of a slice of pizza, a roof on a house, or even the path that a ball takes when you kick it into the air. Triangles are one of the most
common shapes in the world around us. But do you know how to calculate the area inside a triangle?
Knowing how to find the area of a triangle is an important skill that can be useful in many different situations. For example, if you want to cover a triangular garden bed with mulch or sod, you need
to know its area so you can purchase the right amount of materials. If you're making a sail for a boat, the sail will likely be a triangular shape, and you'll need to calculate its area. There are
lots of other cases where this comes in handy too!
How to Calculate the Area of a Triangle
To calculate the area of any triangle, you need to know two key measurements – the base and the height.
The base of a triangle is simply the length of one of its sides. It can be any of the three sides.
The height is the shortest distance from the base to the opposite vertex (corner point) of the triangle. It forms a perpendicular (90 degree angle) with the base.
Here's an illustration to help visualize the base and height:
Once you know the base and height measurements, you're ready to use the formula to calculate the area.
The Formula for Calculating the Area of a Triangle
The most well-known and straightforward formula for calculating the area of a triangle is:
Area = 0.5 × base × height
• 0.5 means one-half
• base is the length of the base side of the triangle
• height is the perpendicular distance from the base to the opposite vertex (also called the altitude)
This formula gives you the area because a triangle's area is equal to one-half the area of a rectangle with the same base and height.
However, sometimes you may not have the height measurement. In those cases, there are other formulas you can use depending on what information you do have about the triangle:
Three Sides (SSS):
If you know the lengths of all three sides (a, b, c), you can use Heron's formula:
Area = 0.25 × √((a + b + c) × (-a + b + c) × (a - b + c) × (a + b - c))
Two Sides and Included Angle (SAS):
When you know two side lengths (a, b) and the angle between them (γ), the area can be calculated using:
Area = 0.5 × a × b × sin(γ)
Two Angles and Included Side (ASA):
If you know two angles (β, γ) and the side between them (a), you can use this formula:
Area = a^2 × sin(β) × sin(γ) / (2 × sin(β + γ))
These alternative formulas allow you to find the area when you don't have the height, by utilizing other given information about the sides and angles.
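The four formulas above can be sketched in Python (function names are mine; angles are taken in degrees for convenience):

```python
from math import sin, sqrt, radians

def area_base_height(base, height):
    return 0.5 * base * height

def area_sss(a, b, c):
    # Heron's formula, in the symmetric form given above
    return 0.25 * sqrt((a + b + c) * (-a + b + c) * (a - b + c) * (a + b - c))

def area_sas(a, b, gamma):
    # gamma is the angle (in degrees) between sides a and b
    return 0.5 * a * b * sin(radians(gamma))

def area_asa(beta, gamma, a):
    # a is the side between angles beta and gamma (in degrees)
    b, g = radians(beta), radians(gamma)
    return a**2 * sin(b) * sin(g) / (2 * sin(b + g))

print(area_base_height(8, 6))   # 24.0
print(area_sss(3, 4, 5))        # 6.0
```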
Let's look at an example using the standard base × height formula:
Suppose a triangle has a base of 8 inches and a height of 6 inches.
Area = 0.5 × 8 in × 6 in
= 24 square inches
So by taking half of the base length (8 in) and multiplying it by the height (6 in), we get the area of 24 square inches.
The base × height formula is the most straightforward, but keep those alternative formulas in mind for cases when you don't have that specific information!
Equilateral Triangles
There's a special formula for finding the area of equilateral triangles. An equilateral triangle has all three sides equal in length.
The formula is:
Area = (√3/4) × side^2
• √3 is the square root of 3 (approximately 1.73)
• side is the length of any of the three equal sides
For example, if each side of the equilateral triangle is 6 inches:
Area = (√3/4) × 6^2
= (1.73/4) × 36
= 0.433 × 36
= 15.6 square inches
This formula works because in an equilateral triangle, the height is always (√3/2) × side length.
Practice Problems
Okay, time to practice what you've learned so far! Try calculating the area for these triangles:
1) A triangle has a base of 8 cm and a height of 6 cm. What is its area?
To solve:
Area = 1/2 × base × height
= 1/2 × 8 cm × 6 cm
= 24 square cm
2) An equilateral triangle has sides of length 9 inches. Find its area.
Using the special equilateral formula:
Area = (√3/4) × side^2
= (√3/4) × 9^2
= (1.73/4) × 81
= 0.433 × 81
≈ 35.1 square inches
3) A right triangle has a base of 5 feet and a height of 12 feet. Calculate its area.
Area = 1/2 × base × height
= 1/2 × 5 ft × 12 ft
= 30 square feet
See, not too hard! The key things to remember are:
• Identify the base and height
• Use the formula Area = 1/2 × base × height
• For equilateral triangles, use the special (√3/4) × side^2 formula
Other Similar Calculators
Check out other calculators that are similar to this one.
Can I use any two sides of the triangle to find the area?
No, you specifically need to know the base and the height corresponding to that base. Just knowing two random side lengths is not enough information.
What if I don't know the height?
Then you'll need to use one of the special formulas based on the other information you do know, like two sides and an angle, or all three sides.
Do I have to use inches/feet/centimeters, or can I use any units?
You can use any units you like – inches, feet, centimeters, meters, etc. Just be sure to use the same units for both the base and height when calculating.
Why is the formula multiplied by 1/2?
Because a triangle is really half of a rectangle with the same base and height. So the formula gives the area of that half rectangle.
|
{"url":"https://thatcalculator.com/calculators/area-of-triangle/","timestamp":"2024-11-08T21:43:56Z","content_type":"text/html","content_length":"28679","record_id":"<urn:uuid:cc896963-f522-4842-8fa6-c1af980a58ba>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00276.warc.gz"}
|
This pattern is an oscillator.
This pattern is periodic with period 30.
This pattern runs in standard life (b3s23).
The population fluctuates between 27 and 40.
This evolutionary sequence works in multiple rules, from b3-ks23-kr through to b34ceqyz5jk6ace7c8s234cekz5ekr6-ac7c8.
Pattern RLE
Glider synthesis
#C [[ GRID MAXGRIDSIZE 14 THEME Catagolue ]]
#CSYNTH xp30_yb3s2k8xcczg8ge13y5301z11 costs 9 gliders (true).
#CLL state-numbering golly
x = 96, y = 32, rule = B3/S23
Sample occurrences
There are 3 sample soups in the Catagolue:
Official symmetries
Symmetry Soups Sample soup links
C1 2 • •
G2_2 1 •
Comments (0)
There are no comments to display.
Please log in to post comments.
|
{"url":"https://gol.hatsya.co.uk/object/xp30_yb3s2k8xcczg8ge13y5301z11/b3s23","timestamp":"2024-11-13T20:01:44Z","content_type":"text/html","content_length":"7633","record_id":"<urn:uuid:cc40c2ce-7e3a-4829-bda8-240095723729>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00390.warc.gz"}
|
Coinciding Lines (Explanation and Everything You Need to Know)
Coinciding Lines – Explanation and Examples
When it comes to lines, 3 kinds of lines are the most significant; parallel, perpendicular, and coinciding. In this section, we will be covering coinciding lines, which are defined as:
“The lines which lie exactly on top of one another such as they appear as one are defined as coinciding lines.”
In this section, we will be covering the following topics:
• What are coincident lines?
• What is the formula of coinciding lines?
• How to check if the lines are coincident or not?
• Examples
• Practice problems
What Are Coincident Lines?
Coinciding lines are basically 2 lines that completely lie on one another. They are neither parallel nor perpendicular but are completely identical. When such lines are graphed, they appear as one,
as shown in the figure below.
Although it may seem that there is only one line, that is not the case. When drawn together, the two lines, one red and one blue, appear as one line since these 2 lines are coinciding in nature.
In the world of mathematics, multiple lines and curves exist. Some are oblique, some are parallel, some are perpendicular, or some may bend into a curve and form shapes like parabolas and ellipses.
Among all these lines and curves enveloping fundamental mathematics concepts, specifically in geometry, coinciding lines hold special importance.
Unlike parallel lines, which never intersect, and perpendicular lines, directed at 90° to one another, coinciding lines are entirely different.
Coinciding lines do not vary in terms of either magnitude or direction. When we term them as ‘identical,’ it implies exactly that.
Parallel and coinciding lines are often confused, since both are directed in the same direction, but they are not the same. Parallel lines, though directed in the same direction, cut the y-axis at different points. Coinciding lines, being identical, cut the y-axis at the same point. We can validate this concept from the figure below:
So, the major difference in parallel and coinciding lines lies in the determination of their intercept. This concept is explained below:
The Intercept of Coinciding Lines
Let’s cover the concept of intercept first before jumping into the intercepts of coincident lines.
An intercept is defined as the point where a line cuts the x- or y-axis. Every line has an intercept, which can be obtained either by extending the particular line or by simply graphing the line.
The intercept can exist on any axis, depending on the coordinate system the lines are graphed in. In the two-dimensional case, we have only 2 axes, namely the x-axis and the y-axis, so only 2 possible intercepts can exist: one on the x-axis and the other on the y-axis.
In the three-dimensional case, a new axis, the z-axis, exists, so 3 possible intercepts can exist: one on the x-axis, one on the y-axis, and one on the z-axis.
Now let’s analyze the concept of the intercept in coinciding lines. We mentioned earlier that the major difference between parallel and coinciding lines lies in their intercepts, so let’s evaluate that.
The coinciding lines are identical lines that fall exactly on top of one another and cut the respective axis on the same points. So, all the coinciding lines have the same intercept, whether on the
x-axis or the y-axis. This means that the difference of the intercept between the said coinciding lines is always zero since the said lines have the same intercept.
So, if you ever get confused between parallel lines and coinciding lines, check for their intercept difference. Parallel lines never intersect one another and hence will always have different
intercepts. In comparison, coinciding lines are entirely identical and lie on top of one another and hence will have the same intercept, resulting in zero intercept difference between the lines.
Formula Of Coinciding Lines
For coinciding lines, we can apply the following more specific formula from the generic equation of a straight line.
ax + by = c
Where ‘a’ and ‘b’ are the constants of the variables x and y, and ‘c’ is the intercept.
To evaluate the formula for coinciding lines, we will first analyze the formula of a straight line. The formula of a straight line is quite simple and is stated below:
y = mx + b
Where ‘m’ is the slope of the respective line, and ‘b’ is the line’s intercept on any particular axis.
This equation can be implied on any straight line, including parallel lines. For parallel lines, the particular lines would have the same slope ‘m’ but different intercepts ‘b.’
Now let’s consider the coinciding lines,
We have already mentioned above that the coinciding lines are identical and hence would have the same slope. We have also discussed that the coinciding lines have the same intercepts on any
particular axis. So if we analyze the above equation for a straight line, we can directly state that the variables ‘m’ and ‘b’ in coinciding lines are identical.
How To Check If The Lines Are Coinciding?
One method for checking whether the lines are coincident is the intercept method, and the other is with the help of the coinciding line equation.
Now that we have covered the concept of what coinciding lines are and how they are different from lines such as parallel lines, let’s evaluate whether the pair of lines coincide.
One method for checking whether the lines coincide has already been discussed above: checking the intercept difference. If the intercept difference between two or more lines is zero, the lines may be coinciding. However, this method is more commonly used to differentiate between parallel and coinciding lines, and it does not by itself tell us how to check whether the lines coincide.
To check for the coinciding lines, we will consider the following formula:
ax + by = c
The above formula of the linear equation for coinciding lines can also be written as below:
ax + by + c = 0
Now, consider that we actually have 2 linear lines. The coinciding line equation for each line can be written as below:
For line 1:
a1x + b1y = c1
For line 2:
a2x + b2y = c2
Since coinciding lines are completely identical, such lines have all their points in common. Now, to check whether 2 lines are coinciding or not, we take the ratios of the corresponding coefficients of the two lines. Evaluating these ratios, we obtain the following condition:
a1/a2 = b1/b2 = c1/c2
If this equality prevails, the lines are said to be coincident.
Hence, this pair of lines are said to be coincident, and they would be having an infinite number of solutions. This concept can be strengthened and proved with the help of examples.
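The ratio test can be turned into a small Python check (the function name is my own); using cross-multiplication rather than division avoids trouble when a coefficient is zero:

```python
def are_coincident(line1, line2):
    """True if a1*x + b1*y = c1 and a2*x + b2*y = c2 describe the same line.
    Cross-multiplying the ratios a1/a2 = b1/b2 = c1/c2 avoids division by zero."""
    a1, b1, c1 = line1
    a2, b2, c2 = line2
    return a1 * b2 == a2 * b1 and b1 * c2 == b2 * c1 and a1 * c2 == a2 * c1

print(are_coincident((1, 1, 3), (2, 2, 6)))   # True  (Example 1 below)
print(are_coincident((2, 3, 1), (2, 7, 1)))   # False (Example 3 below)
```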
Example 1
Check whether the following pair of lines are coincident or not:
x + y = 3
2x + 2y = 6
We will be making use of the following equation to determine whether the said pair of lines are coinciding or not.
a1/a2 = b1/b2 = c1/c2
From equation 1, it can be written:
x + y = 3
a1 = 1 b1 = 1 c1 = 3
Similarly, from equation 2 it can be written:
2x + 2y = 6
a2 = 2 b2 = 2 c2 = 6
Now, let’s apply the formula:
a1/a2 = 1/2
b1/b2 = 1/2
And similarly,
c1/c2 = 3/6
c1/c2 = 1/2
Hence, it is proved:
a1/a2 = b1/b2 = c1/c2
1/2 = 1/2 = 1/2
Since the equation is satisfied, hence the given pair of lines are coinciding lines.
Example 2
Validate whether the following pair of lines are coincident or not:
9x – 2y + 16 = 0
18x – 4y + 32 = 0
We will be making use of the following equation to determine whether the said pair of lines are coinciding or not.
a1/a2 = b1/b2 = c1/c2
From equation 1, it can be written:
9x – 2y + 16 = 0
a1 = 9 b1 = -2 c1 = 16
Similarly, from equation 2 it can be written:
18x – 4y + 32 = 0
a2 = 18 b2 = -4 c2 = 32
Now, let’s apply the formula:
a1/a2 = 9/18
a1/a2 = 1/2
b1/b2 = -2/-4
b1/b2 = 1/2
And similarly,
c1/c2 = 16/32
c1/c2 = 1/2
Hence, it is proved:
a1/a2 = b1/b2 = c1/c2
1/2 = 1/2 = 1/2
Since the equation is satisfied, hence the given pair of lines are coinciding lines.
Example 3
Confirm whether the following pair of lines are coincident or not:
2x + 3y + 1 = 0
2x + 7y + 1 = 0
We will be making use of the following equation to determine whether the said pair of lines are coinciding or not.
a1/a2 = b1/b2 = c1/c2
From equation 1, it can be written:
2x + 3y + 1 = 0
a1 = 2 b1 = 3 c1 = 1
Similarly, from equation 2 it can be written:
2x + 7y + 1 = 0
a2 = 2 b2 = 7 c2 = 1
Now, let’s apply the formula:
a1/a2 = 2/2
a1/a2 = 1
b1/b2 = 3/7
And similarly,
c1/c2 = 1/1
c1/c2 = 1
a1/a2 ≠ b1/b2 ≠ c1/c2
Hence, the given pair of lines are not coinciding lines.
Practice Problems
1. Check whether the pair of lines are coincident or not: x + y = 0 3x + 3y = 0
2. Confirm if the following pair is coincident or not: 12x + 4y + 14 = 0 36x + 12y + 42 = 0
3. Confirm if the following pair is coincident or not: 8x + 15y + 7 = 0 54x + 3y + 2 = 0
1. Yes
2. Yes
3. No
All the images are constructed using GeoGebra.
|
{"url":"https://www.storyofmathematics.com/coinciding-lines/","timestamp":"2024-11-08T07:33:04Z","content_type":"text/html","content_length":"149704","record_id":"<urn:uuid:8ef12bbc-99c0-40fb-b8be-c9831df8ab68>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00697.warc.gz"}
|
The sum of the numerator and denominator of a certain positive fraction is 8. If 2 is added to both the numerator and denominator, the fraction is increased by 4/35. Find the fraction - Ask TrueMaths!
This question has been taken from the book ML Aggarwal (Avichal Publications), Class 10, Chapter 5: Quadratic Equations in One Variable, Exercise 5.5.
This is an important question and is often asked in exams.
The sum of the numerator and denominator of a certain positive fraction is 8.
If 2 is added to both the numerator and denominator,
the fraction is increased by 4/35. Find the fraction
Question no. 10, ML Aggarwal, Chapter 5, Exercise 5.5, Quadratic Equations in One Variable, ICSE board.
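As a quick sanity check of the problem stated above (a brute-force search rather than the textbook's quadratic-equation method), the following Python sketch tries every positive numerator:

```python
from fractions import Fraction

# numerator + denominator = 8; adding 2 to each must increase the fraction by 4/35
solutions = []
for num in range(1, 8):
    den = 8 - num
    f = Fraction(num, den)
    if Fraction(num + 2, den + 2) - f == Fraction(4, 35):
        solutions.append(f)

print(solutions)   # [Fraction(3, 5)], i.e. the fraction is 3/5
```

This agrees with the algebraic route: setting the numerator to x gives x² + 17x − 60 = 0, whose positive root is x = 3.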
|
{"url":"https://ask.truemaths.com/question/the-sum-of-the-numerator-and-denominator-of-a-certain-positive-fraction-is-8-if-2-is-added-to-both-the-numerator-and-denominator-the-fraction-is-increased-by-4-35-find-the-fraction/","timestamp":"2024-11-14T02:13:49Z","content_type":"text/html","content_length":"131591","record_id":"<urn:uuid:bdee553a-0e05-4afd-a427-321f35208dea>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00224.warc.gz"}
|
Math Homework
Foolproof Methods To Get Help With Math Homework For Free
The majority of students don't like doing math homework assignments. If you don't understand a concept clearly or your calculation skills are poor, you're likely to make a lot of mistakes in your solutions. If you're struggling with a math assignment, you should seek help. Unfortunately, competent assistance usually costs money. However, there are several methods for receiving decent aid for free.
Getting Assistance with Math Homework for Free
1. Consult your math teacher.
Many students think that they can get useful information from their teachers only during the classes. However, if you approach your teacher of mathematics after school hours and ask them for an
individual consultation on a particular issue, they will provide you with good and clear explanations.
2. Get help from your math teacher’s assistant.
Many teachers have young assistants who help them during the lessons and after school hours. Teaching assistants are also well-educated specialists. They just don’t have much experience in actual
teaching. You may ask them for explanations if you don’t want to go to your teachers for whatever reason.
3. Cooperate with your classmates.
You may invite a classmate or several classmates to solve home assignments together. While you have problems with math, you’re likely to get good grades in some other subjects. The same applies
to your classmates. Everyone has their own strengths and weaknesses. Working in a group, you’ll be able to help each other in difficult situations.
4. Attend school study groups.
You may decide to do your math tasks in a study group after classes. You’ll be in a room with some other students and a supervising math teacher. This option usually positively affects students
who have problems with their concentration. Moreover, if you cannot solve some tasks, your supervisor will be there to provide you with advice.
5. Get registered on a math student forum.
The online community can also provide you with good help. Find a large forum where math topics are discussed and post your problematic tasks there. You should quickly get feedback from forum
members. They should share correct solutions with you and provide you with detailed explanations.
Improving Your Knowledge of Mathematics
You should learn how to solve math homework on your own. There are many sources that can help you with this. You can either sign up for taking math courses in a local educational center or hire a
math tutor with a rich experience and good reputation. Both options are costly but effective.
|
{"url":"https://www.troyallschoolreunion.org/fail-safe-tips-on-getting-free-math-homework-assistance","timestamp":"2024-11-08T12:14:48Z","content_type":"application/xhtml+xml","content_length":"18201","record_id":"<urn:uuid:d4e7cdd0-5b66-4311-8e5f-2e7b06287af8>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00703.warc.gz"}
|
Polynomial Cascades for Data Mining
Polynomial Cascades for Data Mining - Classification and Regression for large, high dimensional datasets
Polynomial Cascades were originally developed by Greg Grudic et al. for high-dimensional regression problems. An extension for classification, Polynomial MPMC Cascades, was developed by Sander Bohte, Greg Grudic, and myself. On this page I list some of the features of the regression and classification methods.
Features (Regression)
• Fast - can handle large, high-dimensional problems.
• Accurate models - minimizes mean-squared error
• Can fit non-linear functions without the use of kernels.
• No parameter tuning required - can be run by non-experts.
• Interpretable model - the learned hypothesis is one high-degree, multi-variate polynomial. The hypothesis is not an ensemble of classifiers.
Features (Classification)
• Fast - Runtime for using the features only is O(L x N x d), with L being the number of levels (data-dependent, but usually <400), N the number of training examples, and d the dimensionality of the problem. The method scales to very large problems with millions of examples (read: millions of rows in terms of SQL databases).
• No parameter tuning required - You can just run the algorithm and it will give you a good model. This means no cross validation is needed to determine a suitable kernel and kernel parameters.
• Can classify non linearly separable classification problems without fine tuning of parameters
• Competitive Performance on benchmarks with the Minimax Probability Machine for Classification (MPMC) and Support Vector Machines (SVM)
• Incremental building of the model - You can stop the learning at any point in time and have a usable model. This can be useful if you have time constraints and a classifier must be learned in
real time.
• Bounded misclassification error - The PMC generates a worst case bound on the misclassification error for future, unseen data.
• The underlying assumptions are minimal - No specific distributions are assumed.
• Interpretable model - the learned hypothesis is one high-degree, multi-variate polynomial. The hypothesis is not an ensemble of classifiers.
• Feature Selection - One feature per level is used, chosen by a theoretically well founded metric.
• Kernels - the algorithm can be easily extended by kernels.
Future Work - Possible extensions
• Handling multiple classes - currently the PMC only handles binary classification problems.
• Handling missing values by not accounting for them during the computation of mean and covariance.
You can download Matlab code for the classification cascade from here: Download Polynomial MPMC Cascades sourcecode.
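As a rough illustration of the cascade idea, the sketch below builds a toy greedy polynomial cascade for regression. This is a hypothetical simplification, not the published Polynomial MPMC Cascade algorithm: the feature-scoring rule and the per-level polynomial update are my own assumptions. Scoring all d features at each of L levels over N examples gives the O(L x N x d) behaviour mentioned above.

```python
import numpy as np

def fit_cascade(X, y, levels=20):
    """Toy greedy polynomial cascade for regression (illustrative only)."""
    n, d = X.shape
    pred = np.full(n, y.mean())          # level 0: constant model
    for _ in range(levels):
        resid = y - pred
        # score each feature by |correlation| with the current residual: O(N*d)
        scores = [abs(np.corrcoef(X[:, j], resid)[0, 1]) for j in range(d)]
        j = int(np.argmax(scores))
        # refit a degree-2 polynomial in (previous prediction, chosen feature)
        Z = np.column_stack([np.ones(n), pred, X[:, j], pred * X[:, j]])
        w, *_ = np.linalg.lstsq(Z, y, rcond=None)
        pred = Z @ w                     # this level's output feeds the next
    return pred
```

Because each level re-uses the previous level's output, the hypothesis stays a single high-degree multivariate polynomial, and training can be stopped at any level with a usable model, matching the incremental-building property listed above.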
|
{"url":"https://cervisia.org/cascades.php","timestamp":"2024-11-11T21:20:08Z","content_type":"text/html","content_length":"8718","record_id":"<urn:uuid:2e52fbba-e6e8-415e-a35e-ea03bcee43fd>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00026.warc.gz"}
|
Minimal Long-Term Storage Economic Model
A minimal model of the endowment needed to store one terabyte on disk forever under a set of assumptions which can be adjusted. It is an initial, greatly simplified version of earlier work by the
LOCKSS Program and students at the Storage Systems Research Center of UC Santa Cruz. This work was published here in 2012 and in later papers (see here).
The endowment is the money which, deposited with the data and invested at interest, suffices to pay for the storage of (in this case) a terabyte "forever", which in this model is 100 years.
This model's parameters are as follows.
Media Cost Factors
The initial cost per drive, assumed constant in real dollars.
The initial number of TB of useful data per drive (i.e. excluding overhead).
The annual percentage by which DriveTeraByte increases.
Working drives are replaced after this many years.
Percentage of drives that fail each year.
Infrastructure Cost factors
The initial non-media cost of a rack (servers, networking, etc) divided by the number of drive slots.
The annual percentage by which SlotCost decreases in real terms.
Racks are replaced after this many years
Running Cost Factors
The initial running cost per year (labor, power, etc) divided by the number of drive slots.
The annual percentage by which SlotCostPerYear increases in real terms.
The number of copies. This need not be an integer, to account for erasure coding.
Financial Factors
The annual real interest obtained by investing the remaining endowment.
• Unlike earlier published research, this model ignores the cost of ingesting the data in the first place, and accessing it later. Experience suggests the following rule of thumb: ingest is half
the total lifetime cost, storage is one-third the total lifetime cost, and access is one-sixth. Thus a reasonable estimate of the total preservation cost of a terabyte is three times the result
of this model.
• The model assumes that the parameters are constant through time. Historically, interest rates, the Kryder rate, labor costs, etc. have varied, and thus should be modelled using Monte Carlo
techniques and a probability distribution for each such parameter. It is possible for real interest rates to go negative, disk cost per terabyte to spike upwards, as it did after the Thai floods,
and so on. These low-probability events can have a large effect on the endowment needed, but are excluded from this model.
• There are a number of different possible policies for handling the inevitable disk failures, and different ways to model each of them. This model assumes that it is possible to predict at the
time a batch of disks is purchased what proportion of them will fail, and inflates the purchase cost by that factor. This models the policy of buying extra drives so that failures can be replaced
by the same drive model.
• The model assumes that drives are replaced after DriveLife years even though they are working. Continuing to use the drives beyond this can have significant effects on the endowment, see this
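The model's structure can be sketched as a present-value sum of its three cost streams (media, infrastructure, running) over 100 years. This is a hypothetical re-implementation under simplified assumptions: the parameter names and default values below are illustrative, not the model's actual defaults, and failure inflation and replacement policies are folded into the periodic purchase terms.

```python
def endowment(years=100, interest=0.02, copies=1.0,
              media_cost0=40.0, kryder=0.15, drive_life=5,
              infra_cost0=20.0, infra_decline=0.05, rack_life=8,
              run_cost0=10.0, run_increase=0.02):
    """Endowment (present value) needed to store one TB for `years` years."""
    total = 0.0
    for t in range(years):
        discount = (1 + interest) ** t
        # media: re-purchased every drive_life years, cheaper per TB over time
        if t % drive_life == 0:
            total += copies * media_cost0 / (1 + kryder) ** t / discount
        # infrastructure: racks replaced every rack_life years
        if t % rack_life == 0:
            total += copies * infra_cost0 * (1 - infra_decline) ** t / discount
        # running costs: paid every year, rising in real terms
        total += copies * run_cost0 * (1 + run_increase) ** t / discount
    return total
```

Per the rule of thumb above, multiplying the result by three gives a rough total preservation cost including ingest and access.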
|
{"url":"https://economicmodel.dshr.org/","timestamp":"2024-11-10T08:41:30Z","content_type":"text/html","content_length":"14630","record_id":"<urn:uuid:df81d73a-941f-46da-aaea-3bcd8ebe044f>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00866.warc.gz"}
|
Math Professor Tom Hutchcroft Receives Packard Fellowship
Thomas Hutchcroft, a professor of mathematics at Caltech, has been awarded a 2024 Packard Fellowship for Science and Engineering, an honor that comes with a grant of $875,000 over five years to
pursue research.
Since 1988, the Packard Fellowships have "encouraged visionary work by providing maximum flexibility through unrestricted funds that can be used in any way the Fellows choose, including paying for
necessities like child care," according to the David and Lucile Packard Foundation, which bestows the awards. Hutchcroft is among 20 early-career scientists who are receiving the fellowships this year.
Hutchcroft studies probability theory—more specifically percolation theory. This is a study of the complex math that occurs when percolating systems, such as water traveling through ground espresso
beans or diseases spreading through populations, reach phase transitions. During these critical phases, fractal-like mathematical objects emerge.
"One of the things that makes this area so interesting, in my view, is that the same mathematics often describes different physical systems that don't have anything to do with each other," says
Hutchcroft's work has made significant progress in understanding phase transitions in curved geometries, or what mathematicians call non-Euclidean geometries. In 2017, he proved that percolation in
negatively curved spaces, such as those resembling a Pringle potato chip, always undergoes two phase transitions, in which there first emerges infinitely many infinite clusters that only later merge
into a single infinite cluster. In a preprint posted in 2023, with Caltech graduate student Philip Easo, Hutchcroft also solved Schramm's locality conjecture, a celebrated problem about percolation
in non-Euclidean geometries.
More recently, Hutchcroft has been focusing on one of the toughest problems in percolation theory—namely to find the mathematics that describe what happens during phase transitions in three
dimensions. While the problem has been solved for two dimensions and even higher dimensions beyond 11, the cases for three, four, five, and six dimensions have proved particularly hard to crack.
"The most basic question is to figure out if the phase transition occurs with a jump, what we call discontinuous, or more smoothy, what we call continuous," Hutchcroft said in a Caltech Q&A about his
work. "This problem has been open for a really long time and needs to be solved before we can move on to understanding all the cool fractal stuff that should be happening at the phase transition."
One avenue for approaching the problem involves what is called long-range percolation. Percolation problems are often described with grids, in which the edges are given different probabilities of
being open or closed. If, for instance the percolation model is describing liquid traveling through a porous medium, then the critical phase would occur when the edges are open just the right amount
to let liquid meander across the grid.
With long-range percolation, edges are not restricted to neighboring points in the grid but can appear between any two nodes with a probability that decays as the distance between nodes gets larger.
Changing this decay parameter, it turns out, has similar effects to changing dimensions, which allows mathematicians to, in essence, take a back road into possibly solving the percolation problem in
three dimensions.
"The Packard Fellowship will give me a huge amount of flexibility to pursue high-risk speculative projects and engage with the most fundamental problems in the area," Hutchcroft says. He notes that
one of the best things about math is getting totally lost in a problem for hours and hours. "You get these little incremental bits of knowledge and then eventually you build up and actually can solve
it," he says. "It's a really thrilling experience to solve a problem."
Hutchcroft earned his bachelor's degree in mathematics from Cambridge University in 2013 and his PhD in mathematics from the University of British Columbia, Canada, in 2017. He held internships at
Microsoft Research Theory Group during his graduate studies and later completed postdoctoral fellowships at the University of Cambridge from 2017 to 2021. He joined the Caltech faculty in 2021.
Recent Packard Fellows on the Caltech faculty include Zachary Ross, François Tissot, Kimberly See, Matt Thomson, Mansi Kasliwal (PhD '11), Konstantin Batygin (PhD '12), Hosea Nelson (PhD '13),
Mikhail Shapiro, and David Hsieh.
Written by
Whitney Clavin
(626) 395‑1944
|
{"url":"https://www.caltech.edu/about/news/math-professor-tom-hutchroft-receives-packard-fellowship","timestamp":"2024-11-12T21:33:44Z","content_type":"text/html","content_length":"202406","record_id":"<urn:uuid:5ac6a82c-923d-48e9-a11e-4fde6606fba6>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00875.warc.gz"}
|
Statically Determinate and Indeterminate Structures - Structural Engineering | WeTheStudy
When a structure experiences static loads, it can either be statically determinate or indeterminate. The former can be analyzed using static equilibrium equations ONLY, while the latter requires more
than those equations.
How do we know if a structure is such? We first introduce two concepts: the Number of Unknowns \(N_u\) and the Number of Known Conditions \(N_k\) of a structure.
Number of Unknowns
The number of unknowns \(N_u\) refers to the number of reactions and internal forces we need to solve. It is the sum of (1) external reaction components \(r\) and (2) internal force components.
Below is a table of unknowns depending on the type of structure. The variable \(m\) refers to the number of members in the model.
Number of Known Conditions
The number of known conditions \(N_k\) refers to the number of equations we can use to solve the structure. It is the sum of (1) equilibrium equations and (2) conditional equations due to internal
connections \(e_c\).
Below is a table of these conditions depending on the type of structure. The variable \(j\) refers to the number of joints in the model.
How to Classify Determinacy?
To classify whether a structure is determinate or not, we need to compare the number of unknowns \(N_u\) and the number of known conditions \(N_k\):
• If \(N_u = N_k\), the number of unknowns is the same as the number of known equations. The equilibrium principle is sufficient to solve the structure; hence, it is determinate.
• If \(N_u > N_k\), there are more unknowns than known equations. We will need more equations to solve the structure; hence, it is indeterminate.
• If \(N_u < N_k\), there are fewer unknowns than known equations. It violates external stability; hence, the structure is externally unstable.
Degree of Indeterminacy
If a structure is indeterminate, how many more equations do we need to study its behavior? The degree of indeterminacy \(D_i\) tells us exactly that. It's simply the difference between the unknowns
and known equations:
\(D_i = N_u -N_k\)
Again, there are three possible scenarios of \(D_i\). For example,
• If we have a \(D_i\) of two, the number of unknowns is greater than the number of known conditions. We would need two more equations to solve the structure.
• If \(D_i\) is equal to zero, then we won't need additional equations to solve it; hence, we can say that it is determinate.
• Lastly, if \(D_i\) is a negative number, the number of unknowns is less than the number of known conditions. Meaning it would violate the stability of the structure.
We can summarise this example as follows:
• \(D_i > 0\) means the structure is indeterminate; the number tells us how many additional equations we need
• \(D_i = 0\) means the structure is determinate
• \(D_i < 0\) means the structure is externally unstable
This equation can also be an excellent strategy to determine whether a structure is determinate or indeterminate rather than comparing \(N_u\) and \(N_k\).
Statically determinate structures can be analyzed using static equilibrium equations only. Conversely, statically indeterminate structures cannot be analyzed using such equations only.
The number of unknowns \(N_u\) refers to the number of reactions and internal forces of the structure we need to solve.
The number of known conditions \(N_k\) refers to the number of equations we know to solve the structure.
Comparing \(N_u\) and \(N_k\) will give the determinacy \(D\) of the structure.
The degree of indeterminacy \(D_i\) can answer the number of additional equations needed for indeterminate structures and is equal to \(D_i = N_u - N_k\).
The equation \(D_i = N_u - N_k\) can help us tell whether the structure is determinate, indeterminate, or unstable.
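As a worked example of the comparison above, the counting rule for a plane truss (taking \(N_u = m + r\) member and reaction unknowns against \(N_k = 2j\) joint equilibrium equations, a standard special case of the tables mentioned earlier) can be coded directly; the function name is my own:

```python
def classify_truss(m, r, j):
    """Classify a plane truss: m members, r reaction components, j joints."""
    n_u = m + r      # unknowns: one axial force per member + reactions
    n_k = 2 * j      # known conditions: two equilibrium equations per joint
    d_i = n_u - n_k  # degree of indeterminacy
    if d_i > 0:
        return d_i, "indeterminate"
    if d_i == 0:
        return d_i, "determinate"
    return d_i, "unstable"
```

A simply supported triangular truss (m = 3, r = 3, j = 3) gives \(D_i = 0\), so it is determinate; adding a redundant member raises \(D_i\) to 1.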
|
{"url":"https://www.wethestudy.com/tree-posts/statically-determinate-and-indeterminate-structures","timestamp":"2024-11-04T02:38:16Z","content_type":"text/html","content_length":"78664","record_id":"<urn:uuid:e825a101-7cee-4566-b977-1ae75fde6963>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00724.warc.gz"}
|
Multiply - Area Diagrams & Standard Algorithm (examples, solutions, videos, worksheets, lesson plans)
Examples, solutions, videos and lessons to help Grade 5 students learn how to connect area diagrams and the distributive property to partial products of the standard algorithm without renaming.
Common Core Standards: 5.OA.1, 5.OA.2, 5.NBT.5
New York State Common Core Math Module 2, Grade 5, Lesson 6
The following figures give an example to compare multiplication using the area model, partial products, and the standard algorithm. Scroll down the page for more examples and solutions.
Lesson 6 Concept Development
Compare the Area Model to the Standard Multiplication Algorithm
Problem 1
64 × 73
Problems 2–3
814 × 39
624 × 82
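The connection between the area model and the standard algorithm can also be checked numerically: split each factor by place value and multiply every pair of parts, exactly as each cell of the area diagram does. A small sketch follows (the helper names are my own):

```python
def place_parts(n):
    """Split a positive integer by place value, e.g. 64 -> [60, 4]."""
    parts, place = [], 1
    while n:
        n, digit = divmod(n, 10)
        if digit:
            parts.append(digit * place)
        place *= 10
    return parts[::-1]

def partial_products(a, b):
    """One (part_a, part_b, product) triple per cell of the area diagram."""
    return [(pa, pb, pa * pb)
            for pa in place_parts(a) for pb in place_parts(b)]

# 64 x 73: parts (60, 4) and (70, 3) give 4200, 180, 280, 12 -> 4672
```

Summing the partial products always reproduces the standard-algorithm result, which is the distributive property the lesson is built on.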
Lesson 6 Problem Set
1. Draw an area model, and then solve using the standard algorithm. Use arrows to match the partial products from your area model to the partial products in the algorithm.
a. 48 × 35
b. 648 × 35
Lesson 6 Homework
3. Each of the 25 students in Mr. McDonald’s class sold 16 raffle tickets. If each ticket cost $15, how much money did Mr. McDonald’s students raise?
4. Jayson buys a car and pays by installments. Each installment is $567 per month. After 48 months, Jayson owes $1250. What was the total price of the vehicle?
Lesson 6 Homework
This video shows how to use more complex area models to solve double digit multiplication problems.
1. Draw an area model, and then solve using the standard algorithm. Use arrows to match the partial products from your area model to the partial products in the algorithm.
a. 27 × 36 = ___________________
b. 527 x 36 = _________________
Try the free Mathway calculator and problem solver below to practice various math topics. Try the given examples, or type in your own problem and check your answer with the step-by-step explanations.
We welcome your feedback, comments and questions about this site or page. Please submit your feedback or enquiries via our Feedback page.
|
{"url":"https://www.onlinemathlearning.com/area-diagrams-partial-products.html","timestamp":"2024-11-14T17:18:27Z","content_type":"text/html","content_length":"38038","record_id":"<urn:uuid:2394e198-dea2-4718-b98a-30ffb2adb210>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00116.warc.gz"}
|
This package includes the function Apply as its only function. It extends the apply function to applications in which a function needs to be applied simultaneously over multiple input arrays.
Although this can be done manually with for loops and calls to the base apply function, it can often be a challenging task which can easily result in error-prone or memory-inefficient code.
A very simple example follows showing the kind of situation where Apply can be useful: imagine you have two arrays, each containing five 2x2 matrices, and you want to perform the multiplication of
each of the five pairs of matrices. Next, one of the best ways to do this with base R (plus some helper libraries):
A <- array(1:20, c(5, 2, 2))
B <- array(1:20, c(5, 2, 2))
library(plyr)   # provides aaply
library(abind)  # provides abind
D <- aaply(.data = abind(A, B, along = 4),
           .margins = 1,
           .fun = function(x) x[,,1] %*% x[,,2])
Since the chosen use case is very simple, this solution is not excessively complex, but the complexity would increase as the function to apply required additional dimensions or inputs, and the approach would be inapplicable if multiple outputs were to be returned. In addition, the function to apply (matrix multiplication) had to be redefined for this particular case (multiplication of the first matrix along the third dimension by the second along the third dimension).
Next, an example of how to reach the same results using Apply:
A <- array(1:20, c(5, 2, 2))
B <- array(1:20, c(5, 2, 2))
D <- Apply(data = list(A, B),
target_dims = c(2, 3),
fun = "%*%")$output1
This solution takes half the time to complete (as measured with microbenchmark on inputs of different sizes), and is cleaner and extensible to functions receiving any number of inputs with any number of dimensions, or returning any number of outputs. Although the peak RAM usage (as measured with peakRAM) of both solutions in this example is about the same, it is challenging to avoid memory duplication when using custom code in more complex applications, and doing so can require hours of dedication. Apply scales well to large inputs and has been designed to be fast and to avoid memory duplication.
Additionally, multi-core computation can be enabled via the ncores parameter. Although in this minimalist example using multiple cores would make the execution slower, in applications where the inputs are larger the wall-clock time is reduced dramatically.
In contrast to apply and its variants, this package suggests the use of 'target dimensions' as opposed to 'margins' for specifying the dimensions relevant to the function to be applied. Additionally, it supports functions returning multiple vectors or arrays, and it can transparently use multiple cores.
In order to install and load the latest published version of the package on CRAN, you can run the following lines in your R session:
install.packages("multiApply")
library(multiApply)
How to use
This package consists of a single function, Apply, which is used in a similar fashion to the base apply. Full documentation can be found in ?Apply.
A simple example is provided next. In this example, we have two data arrays. The first, with information on the number of items sold in 5 different stores (located in different countries) during the
past 1000 days, for 200 different items. The second, with information on the price point for each item in each store.
The example shows how to compute the total income for each of the stores, straightforwardly combining the input data arrays, by automatically applying repeatedly the ‘atomic’ function that performs
only the essential calculations for a single case.
dims <- c(store = 5, item = 200, day = 1000)
sales_amount <- array(rnorm(prod(dims)), dims)
dims <- c(store = 5, item = 200)
sales_price <- array(rnorm(prod(dims)), dims)
income_function <- function(x, y) {
  # Expected inputs:
  #   x: array with dimensions (item, day)
  #   y: price point vector with dimension (item)
  sum(rowSums(x) * y)
}

income <- Apply(data = list(sales_amount, sales_price),
                target_dims = list(c('item', 'day'), 'item'),
                fun = income_function)$output1

dim(income)
# store
#     5
|
{"url":"https://cran-r.c3sl.ufpr.br/web/packages/multiApply/readme/README.html","timestamp":"2024-11-07T09:16:57Z","content_type":"application/xhtml+xml","content_length":"16472","record_id":"<urn:uuid:c4b5f73d-3104-4d2d-a015-a8301db7b20c>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00245.warc.gz"}
|
Stochastic Approximation of Smooth and Strongly Convex Functions: Beyond the $O(1/T)$ Convergence Rate
Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:3160-3179, 2019.
Stochastic approximation (SA) is a classical approach for stochastic convex optimization. Previous studies have demonstrated that the convergence rate of SA can be improved by introducing either
smoothness or strong convexity condition. In this paper, we make use of smoothness and strong convexity simultaneously to boost the convergence rate. Let $\lambda$ be the modulus of strong convexity,
$\kappa$ be the condition number, $F_*$ be the minimal risk, and $\alpha>1$ be some small constant. First, we demonstrate that, in expectation, an $O(1/[\lambda T^\alpha] + \kappa F_*/T)$ risk bound
is attainable when $T = \Omega(\kappa^\alpha)$. Thus, when $F_*$ is small, the convergence rate could be faster than $O(1/[\lambda T])$ and approaches $O(1/[\lambda T^\alpha])$ in the ideal case.
Second, to further benefit from small risk, we show that, in expectation, an $O(1/2^{T/\kappa}+F_*)$ risk bound is achievable. Thus, the excess risk reduces exponentially until reaching $O(F_*)$, and
if $F_*=0$, we obtain a global linear convergence. Finally, we emphasize that our proof is constructive and each risk bound is equipped with an efficient stochastic algorithm attaining that bound.
Cite this Paper
Related Material
|
{"url":"https://proceedings.mlr.press/v99/zhang19a.html","timestamp":"2024-11-05T09:58:18Z","content_type":"text/html","content_length":"15900","record_id":"<urn:uuid:aaf85c53-c57e-4403-a72b-e142ef060226>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00701.warc.gz"}
|
Forced undamped vibrations
What is vibration?
In mechanical engineering, vibration is the backward-and-forward movement of an object about a point of equilibrium. Generally, there are three types of vibration,
1. Forced vibration
2. Natural vibration
3. Damped vibration
What do you mean by forced undamped vibration?
Forced undamped vibration is the kind of vibration in which a system is driven by an external force, with no damping present to dissipate energy. Some examples of forced undamped
vibration are:
• Movement of laundry machine due to asymmetry
• The vibration of a moving transport due to its engine
• Movement of strings in guitar
The following is the free body diagram of the system, where an additional force is exerted on the block having mass m.
A mass-spring system with an external force
The equation of motion of the above system can be expressed as:

$m\ddot{x}+kx=F$

Here, $m$ is the mass of the block, $x$ is its displacement ($\ddot{x}$ is its acceleration), $k$ is the spring stiffness, and $F$ is the external force.

The steady-state form of the force in this case is

$F=F_0\sin\omega t$

Here, $\omega$ is the angular frequency of the forcing, and $t$ is the time.
Now, the equation of motion for the system can be re-written in standard form:

$\ddot{x}+\frac{k}{m}x=\frac{F_0}{m}\sin\omega t$

The particular solution is not the only solution for this system; initial conditions can be applied to the general solution to obtain various other cases.
Amplitude of forced vibration
In the case of forced vibrations, the steady-state amplitude depends on the ratio of the forced frequency $\omega_0$ to the natural frequency $\omega_n$.

The magnification factor (MF) is the ratio of the amplitude of the steady-state vibration to the static deflection.

The magnification factor in terms of the frequency ratio is:

$MF=\frac{1}{1-\left(\frac{\omega_0}{\omega_n}\right)^2}$

The following graph is drawn between the magnification factor and the ratio of the forced frequency to the natural frequency.
Graph plotted between magnification factor and ratio of forced frequency and natural frequency
As seen from the above graph, the following can be observed for the various cases:
• In the first case, resonance occurs when the forced frequency equals the natural frequency. Large-amplitude vibrations are produced, which are associated with high stress and possible failure of the system.
• In the second case, when the forced frequency is approximately zero, the magnification factor is approximately equal to one. The forcing function is nearly static, and the response is essentially the static deflection, with limited natural vibration.
• In the third case, when the forced frequency is below the natural frequency, the magnification factor is greater than 1. The vibration is in phase with the forcing function, and its amplitude is greater than the static deflection.
• In the fourth case, when the forced frequency is above the natural frequency, the magnitude of the magnification factor is smaller than 1. The vibration is out of phase with the forcing function, and its amplitude is smaller than the static deflection, the opposite of the third case.
• In the fifth case, when the forced frequency is very much greater than the natural frequency, the force alters its direction too quickly for the block to respond, and the amplitude approaches zero.
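The cases above can be checked numerically from the magnification-factor formula; this short sketch simply evaluates MF = 1 / (1 - (w0/wn)^2) at representative frequency ratios:

```python
def magnification_factor(w0, wn):
    """MF = 1 / (1 - (w0/wn)^2) for an undamped, forced spring-mass system."""
    r = w0 / wn
    return 1.0 / (1.0 - r ** 2)

# At w0 == wn the formula diverges: resonance (first case).
# w0 << wn: MF is close to 1, nearly static forcing (second case).
# w0 < wn: MF > 1, in phase, amplitude above static deflection (third case).
# w0 > wn: MF is negative with |MF| < 1, out of phase (fourth case).
# w0 >> wn: |MF| tends to 0, the block barely responds (fifth case).
```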
Rotating unbalanced for forced vibration
In forced vibrations, one of the most common causes in a given system is rotating unbalance. Rotating unbalance occurs when the axis of revolution of a mechanical system does not pass through its center of mass. The force on the axle then varies in direction as the center of mass revolves. In this condition, the angular frequency of the system equals the forced angular frequency. Some common causes of rotating unbalance are:
• Blowholes in casting
• Distortion
• Eccentricity
• Corrosion
• Hydraulic imbalance
Several features of the steady state response of spring mass system to forced vibrations
The free-body diagram of the spring-mass system is illustrated below:
Spring mass system
The following are some of the characteristics of the steady-state response of the spring-mass system to forced vibration.
• The most essential feature of the steady-state response of a spring-mass system to forced vibration is that it is harmonic, with the same frequency as the applied force.
• The vibration's amplitude depends on the frequency of excitation and on the properties of the spring and mass in the mechanical system.
• When the frequency of the force is approximately equal to the natural frequency of the mass-spring system, even a lightly damped system can develop a very large amplitude. This phenomenon is known as resonance.
• In a spring-mass system under forced vibration, the phase lag between the system response and the force depends on the properties of the system and on the frequency of excitation.
Difference between free vibration and forced vibration
• In free vibration, a force is needed only to start the vibration, whereas a continuous periodic force is needed to sustain forced vibration.
• The frequency of free vibration depends on the object itself, so it is known as the natural frequency, whereas the forced vibration frequency depends on the external exciting force.
• Free vibration is self-sustained, whereas forced vibration is externally sustained.
• In forced vibration, the amplitude remains unchanged as long as the periodic force acts. In free vibration, the amplitude decreases with time.
• In forced vibration, the energy of the system is kept constant by the force exerted on it. In free vibration, the energy remains unchanged only in the absence of air drag, friction, and other resistances; with damping, the net energy decreases.
Advantages of vibration
• Vibrations can be utilized for agriculture purposes in harvesting by forced vibration.
• Drilling in geotechnical wells
• In geological investigations, vibrations are used to simulate natural disasters like earthquake.
Disadvantages of vibration
• Vibration generates undesired stresses in most mechanical devices and parts.
• In gears, bearings, and other parts, the likelihood of wear increases rapidly due to vibration.
• Excessive vibration can cause harm to living organisms.
Common Mistakes
• Students often get confused about the reduction in the total energy of a system in free vibration. The total energy decreases because of the damping force.
• A common misconception in forced vibration concerns the forces acting on the system: students mistakenly consider only the fundamental forces, whereas the forces acting on the mechanical system also include inertial and drag forces.
• Free vibration and natural vibration are the same thing, but students often confuse them and make mistakes in problems on natural vibration.
• Always use standard international (SI) units when calculating frequency, angular frequency, vibration amplitude, and other physical quantities. For example, amplitude is measured in the standard unit of meters.
• Do not round off intermediate values after the decimal point, as this may cause variation in your answer. Always carry each quantity at full accuracy and precision.
Context and Application
The topic of forced undamped vibration is significant in several professional exams and in diploma, undergraduate, graduate, and postgraduate courses. For example:
• Bachelor of Science in Physics
• Bachelor of Technology in Mechanical Engineering
• Bachelor of Technology in Civil Engineering
• Masters in Technology in Mechanical Engineering
• Doctor of Philosophy in Mechanical Engineering
• Diploma in Mechanical and Civil engineering
Practice Problems
Q1. What is the factor behind the large sound produced by a tuning fork placed on a block?
1. Forced Vibration
2. Free Vibration
3. All of the above
4. None of these
Correct answer: (1)
Q2. What is the phase lag in steady state forced vibration at resonance?
1. 45 degrees
2. 90 degrees
3. 30 degrees
4. 0 degrees
Correct answer: (2)
Q3. What is one of the essential characteristics of vibration other than amplitude?
1. beats
2. Frequency
3. Wavelength
4. None of these
Correct answer: (2)
Q4. When is the acceleration of the vibration zero?
1. Mean position
2. Actual position
3. Initial position
4. All of the above
Correct answer: (1)
Q5. Which of the following parameters characterize free damped vibrations?
1. Rate of decay of amplitude
2. Natural frequency
3. Both a and b
4. None of the above
Correct answer: (3)
Bell's theorem rehashed
\(\renewcommand{\vec}[1]{{\bf{#1}}}\)Quantum mechanics suggests that a physical property of a particle doesn't exist until it is measured. For instance, an electron has a property called spin that,
unlike the angular momentum of a classical object, can only have two values, "up" and "down"; but the orientation of this "up" and "down" is determined by the measuring apparatus.
The spin of an electron may not exist until it is measured but conservation laws still apply. There are experiments that produce pairs of electrons whose combined spin must be zero. Once the spin of
the electron on one end is measured, the spin of the other electron will become known as long as the measuring devices are aligned in parallel on the two ends. This applies even if the measuring
devices are put in place after the two electrons are generated! And if the two measuring devices aren't aligned, a correlation will still exist between the two electrons that is difficult to explain
in classical terms.
Or is it? No less a scientist than Einstein proposed that it isn't: that the problem is really that our knowledge of the system is incomplete and that there are hidden variables that determine the
system's behavior fully.
Okay, so imagine some kind of an experiment that creates a pair of correlated electrons that fly off towards the $A$ and $B$ ends of the experimental setup. Let's assume that the outcome of the
experiment at the $A$ end depends on two factors: the orientation of the experimental device at $A$, which we represent with $\vec{a}$, and some hidden parameter(s) represented by the Greek letter $\lambda$. In other words, $A(\vec{a},\lambda)=\pm 1$. Similarly at the $B$ end, the outcome of the experiment is a function of $\vec{b}$ (the orientation of the apparatus) and $\lambda$: $B(\vec{b},\lambda)=\pm 1$.
What is really important to notice is that we specifically assume that $A$ does not depend on $\vec{b}$, and $B$ does not depend on $\vec{a}$. In other words, the experiment on one end only depends
on the setup of the experimental device on that end and the properties of the electron, but does not depend on the configuration of the experimental device at the other end.
$\lambda$ can have many values; however, what we do know is that it has a probability distribution function $\rho(\lambda)$ (representing, for each $\lambda$, the probability of that value occurring) and it is normalized: $\int\rho(\lambda)d\lambda=1$.
The expectation value of a quantum mechanical experiment is, essentially, the average of the values measured over several experiments. From quantum mechanics, we can compute the expectation value of
the product of $A$ and $B$: in certain experiments (for instance, when the electron spin is measured with so-called Stern-Gerlach magnets where $\vec{a}$ and $\vec{b}$ is the magnets' orientation)
this expectation value will be the dot product of the two vectors $\vec{a}$ and $\vec{b}$ times $-1$. But we can also compute the expectation value, $P(\vec{a},\vec{b})$ using what we know from
probability calculus:
\[P(\vec{a},\vec{b})=\int\limits_{-\infty}^{+\infty}\rho(\lambda)\cdot A(\vec{a},\lambda)\cdot B(\vec{b},\lambda) d\lambda.\]
What Bell did was to prove that no matter how you choose $A$ and $B$, in general $P(\vec{a},\vec{b})$ cannot be the same as $-\vec{a}\cdot\vec{b}$. Which implies that our assumption, namely that the
outcome at $A$ depends only on $\vec{a}$ and $\lambda$, or the outcome at $B$ depends only on $\vec{b}$ and $\lambda$, must be false; the outcome at one end depends on the experimental setup at the
other and vice versa.
To prove this, let's introduce another symbol: $P^{xy}(\vec{a},\vec{b})$ is to represent the probability that the outcome of the experiment will be $x$ at $A$ and $y$ at $B$. So for instance, $P^{++}(\vec{a},\vec{b})$ will represent the probability that we measure a $+1$ spin at both $A$ and $B$. We can then construct the following correlation:

\[E(\vec{a},\vec{b})=P^{++}(\vec{a},\vec{b})+P^{--}(\vec{a},\vec{b})-P^{+-}(\vec{a},\vec{b})-P^{-+}(\vec{a},\vec{b}).\]
We can prove that $|E(\vec{a},\vec{b})+E(\vec{a}',\vec{b})+E(\vec{a},\vec{b}')-E(\vec{a}',\vec{b}')|\le 2$. But this inequality is not true in the quantum mechanical case. In the case of the Stern-Gerlach magnets, the probabilities can be computed as follows: $P^{++}=P^{--}=\frac{1}{2}\sin^2\frac{\phi}{2}$, and $P^{+-}=P^{-+}=\frac{1}{2}\cos^2\frac{\phi}{2}$ (where $\phi$ is the angle between the orientations of the two magnets), so $E=-\cos\phi$. For instance, if $\vec{a}$, $\vec{b}$, $\vec{a}'$ and $\vec{b}'$ are vectors pointing respectively at $0^\circ$, $45^\circ$, $90^\circ$ and $-45^\circ$, then $E(\vec{a},\vec{b})=E(\vec{a}',\vec{b})=E(\vec{a},\vec{b}')=-\cos 45^\circ$, and $E(\vec{a}',\vec{b}')=-\cos 135^\circ$, so $|E(\vec{a},\vec{b})+E(\vec{a}',\vec{b})+E(\vec{a},\vec{b}')-E(\vec{a}',\vec{b}')|=|-3\cos 45^\circ+\cos 135^\circ|=2\sqrt{2}$, which is decidedly greater than 2.
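The arithmetic of this example is easy to check numerically. The sketch below uses the quantum correlation $E=-\cos\phi$ from the text and the four orientations of the example; nothing else is assumed.

```python
import math

def E(phi_deg):
    # Quantum-mechanical correlation for spin measurements at relative angle phi: E = -cos(phi)
    return -math.cos(math.radians(phi_deg))

# Magnet orientations (degrees) from the example: a, b, a', b'
a, b, ap, bp = 0.0, 45.0, 90.0, -45.0

# The CHSH combination |E(a,b) + E(a',b) + E(a,b') - E(a',b')|
S = abs(E(b - a) + E(b - ap) + E(bp - a) - E(bp - ap))
print(S)  # 2*sqrt(2) ≈ 2.828, exceeding the classical bound of 2
```

Any choice of local hidden-variable functions, by the inequality proved next, could make this combination at most 2.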
To prove that $|E(\vec{a},\vec{b})+E(\vec{a}',\vec{b})+E(\vec{a},\vec{b}')-E(\vec{a}',\vec{b}')|\le 2$, we first spell out $E$:
\[E=\int d\lambda\rho(\lambda)\left\{P_A^+(\vec{a},\lambda)-P_A^-(\vec{a},\lambda)\right\}\left\{P_B^+(\vec{b},\lambda)-P_B^-(\vec{b},\lambda)\right\},\]
where $P_A$ and $P_B$ represent the experimental probabilities at the two ends of the apparatus. Since these are probabilities, it follows that $0\le P_A\le 1$ and $0\le P_B\le 1$; therefore, $|P^+-P^-|\le 1$. As a shorthand, we can write $\bar{A}$ and $\bar{B}$ for the two subexpressions in the curly braces:
\[E=\int d\lambda\rho(\lambda)\bar{A}(\vec{a},\lambda)\bar{B}(\vec{b},\lambda).\]
As before, $|\bar{A}|\le 1$ and $|\bar{B}|\le 1$.
We can then write:
\[E(\vec{a},\vec{b})+E(\vec{a},\vec{b}')=\int d\lambda\rho(\lambda)\bar{A}(\vec{a},\lambda)\left\{\bar{B}(\vec{b},\lambda)+\bar{B}(\vec{b}',\lambda)\right\},\]
from which:
\[|E(\vec{a},\vec{b})+E(\vec{a},\vec{b}')|\le\int d\lambda\rho(\lambda)|\bar{B}(\vec{b},\lambda)+\bar{B}(\vec{b}',\lambda)|.\]
Similarly:

\[|E(\vec{a}',\vec{b})-E(\vec{a}',\vec{b}')|\le\int d\lambda\rho(\lambda)|\bar{B}(\vec{b},\lambda)-\bar{B}(\vec{b}',\lambda)|.\]
But from $|\bar{B}|\le 1$, it follows that:
\[|\bar{B}(\vec{b},\lambda)+\bar{B}(\vec{b}',\lambda)|+|\bar{B}(\vec{b},\lambda)-\bar{B}(\vec{b}',\lambda)|\le 2.\]
And since $\int\rho(\lambda)d\lambda=1$, we can assert that $|E(\vec{a},\vec{b})+E(\vec{a},\vec{b}')|+|E(\vec{a}',\vec{b})-E(\vec{a}',\vec{b}')|\le 2$, from which $|E(\vec{a},\vec{b})+E(\vec{a}',\vec{b})+E(\vec{a},\vec{b}')-E(\vec{a}',\vec{b}')|\le 2$, which is just what we set out to prove.
. . .
What this all means is that the outcome of the experiment at $A$ can only be explained as a function of $\vec{a}$ (the setup of the measuring apparatus at $A$), $\lambda$ (whatever parameters are
"internal" to the electron) and $\vec{b}$! I.e., some information about the setup of the measuring apparatus at $\vec{b}$ will be "known" to the electron at $\vec{a}$ even if there is no conventional
means of transmitting this information from $B$ to $A$; in fact, even if $B$ is an apparatus operated by little green men at Alpha Centauri, who will only perform the measurement some four and a half
years from now, and haven't even built their measuring apparatus yet!
Bell, J. S., Speakable and unspeakable in quantum mechanics, Cambridge University Press, 1989
How do you translate the word phrase into a variable expression: the product of x and 7? | HIX Tutor
How do you translate the word phrase into a variable expression: the product of x and 7?
Answer 1
Answer from HIX Tutor
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
How Long to Attain Unanimity?
(A new question of the week)
Suppose we have a question that can be answered with Yes, No, or Maybe, and that whenever two people with different opinions meet, their discussion convinces each of them that neither can be right,
so they both change to the other opinion. Given initial numbers of people with each view, what is the least number of meetings before they can come to full agreement?
Here is the question as asked by Gilad on April 1:
3 groups argue.
Group A has 42 students that agree.
Group B has 36 students that disagree.
Group C has 43 students that say maybe.
When a student from any group meets a student from another group, they both change their answer to the third group’s opinion.
What is the minimum number of meetings needed in order for everyone to take the same stand?
Clarifying and modeling the problem
I had a couple concerns about the meaning of the problem (mostly due to language), so I responded with questions, along with some thoughts:
Hi, Gilad.
I haven’t solved the problem yet, so I can’t give a specific hint, but I’d like to ask you a couple questions, and tell you how I’ve started.
My first thoughts are to clarify exactly what the problem means; perhaps you can tell me if you think I am wrong.
As I understand it, “agree”, “disagree”, and “maybe” don’t refer to whether each group agrees among themselves, but how they would answer some question (such as, “Do you like this weather?”) to
which each can say Yes, No, or Maybe.
My first impression of students in a group “agreeing” was that they “agree with one another”; but that doesn’t fit the other two groups! Rather, they all agree, or all disagree, or all are
uncertain, about some statement they were asked about.
Also I think the “groups” are not fixed groups of people, but just “all who say yes”, “all who say no”, and “all who say “maybe”. So everyone who says yes (at the moment) is in group A, and so
on. Any two people with different opinions may meet, and then they both change their minds to the third opinion.
We could, in fact, just say that there are 121 people, and each person has a label, A, B, or C; and we can, at any step, change any two different labels to the other label.
My first impression of “Group A has 42 students that agree” was that, among students in one pre-existing group, 42 said yes; but, again, that didn’t fit the rest of the problem, so I reconsidered it.
The groups are defined by the students’ opinions.
And this led me to change my model of the situation from separate groups of students, to one large group of students, each of whom has a nametag that indicates his opinion. If he changes his tag, he
is in a different “group”.
This illustrates how a central part of understanding a problem is to develop a model (that is, a way to think about the problem). But there is not only one possible model; as we’ve seen in
combinatorial problems, we may change models several times as we work through a problem.
Another model is just to list the number of people with each label, and at each step subtract 1 from two of the counts and add 2 to the other. So the process might look like this:
A B C
-1 -1 +2
-1 +2 -1
and so on. The question we are asked is, what is the minimum number of steps in the process that will take us to a state (0, 0, 121), (0, 121, 0), or (121, 0, 0)? Or, perhaps, is that even possible?
In the example, an A and a B meet and become C, then an A and a C meet and become B. We are representing a state of the system as an ordered triple of numbers.
That suggested a visual model: Each ordered triple could be a point in space; or, since the sum is constant, a point on a plane:
I can imagine also representing any given state by a point on a grid, where ordered pair (A, B) implies that C = 121 – A – B. Only three moves are allowed on the grid, and we are to get from (42,
36) to one of the goals (0, 0), (121, 0), or (0, 121).
We’ll see several of these grids below. The key idea is that we only need to show two of the numbers (I chose A and B), because they imply the value of C. The three possible moves are
• decrease B (and C) by 1, increasing A by 2 (moving “east-southeast”, down and right),
• decrease A (and C) by 1, increasing B by 2 (moving “north-northwest”, up and to the left), or
• decrease A and B by 1, increasing C by 2 (moving “southwest”, down and left).
Here are what each of the three moves looks like on the grid, where the horizontal axis is A and the vertical axis is B:
We move from point O to point A when a pair change their position to A, and so on.
While inventing a model, I tend to experiment (play) to see how it works:
I’ve tried playing the game with smaller numbers, starting with (1, 2, 3), and making a diagram of state transitions; it looks like this example can’t reach the goal. That’s why I added the
possibility that the answer may be that it is not possible, so there is no minimum. One way to prove that might be to find an invariant that none of the goals satisfy.
All this talk of states reminded me of something!
(Having said that, I realize that our post Invariants for a State Machine may suggest some useful techniques. In fact, I think it’s all you need.)
At this point, I was convinced I had the tools needed to actually solve the problem, so I sent my answer with a closing:
Now it’s your turn. Please answer my questions, and correct my interpretation if you think I’m wrong. Possibly a specific topic you are learning will suggest a way to approach the problem, using
one of my models, or something else entirely.
And maybe another of us will have a different idea, in case this isn’t an approach you can handle.
Answers and first attempts
Gilad replied:
Dear Dr. Peterson,
Thank you for your elaborated answer.
You understand correctly, the groups are actually a set of members with the same label.
This is a riddle asked in our work group, I have not come to a single answer yet, I suspected it has no fixed solution and this is the reason I posted here. Some members of my group repeated the
steps to move member between groups and got to the number 42 (hint) but did not post the solution yet.
We’ll see that the suggested answer is correct. But we need to convince ourselves. Gilad continued my “playing” by considering all possible “moves” from the given initial state of (42, 36, 43):
Possible transitions from this state:
A student from Group A meets a student from Group B: (A – 1, B – 1, C + 2)
After this meeting: (41, 35, 45)
A student from Group A meets a student from Group C: (A – 1, B + 2, C – 1)
After this meeting: (41, 38, 42)
A student from Group B meets a student from Group C: (A + 2, B – 1, C – 1)
After this meeting: (44, 35, 42)
Analyze the possible transitions from each of these states:
From (41, 35, 45):
A student from Group A meets a student from Group B: (40, 34, 47)
A student from Group A meets a student from Group C: (40, 37, 44)
A student from Group B meets a student from Group C: (43, 34, 44)
From (41, 38, 42):
A student from Group A meets a student from Group B: (40, 37, 44)
A student from Group A meets a student from Group C: (40, 40, 41)
A student from Group B meets a student from Group C: (43, 37, 39)
From (44, 35, 42):
A student from Group A meets a student from Group B: (43, 34, 44)
A student from Group A meets a student from Group C: (43, 37, 39)
A student from Group B meets a student from Group C: (46, 34, 39)
I tried to continue this process iteratively, exploring possible transitions from each state. However, the total number of students remains constant throughout these transitions.
Given this observation, it seems there isn’t a minimum number, however I am not 100% sure of this solution.
In principle, we could continue this list (of possible states after the first move and after the second move) to make a network (that is, a graph, in the sense we discussed in Graph Coloring: Working
Through a Proof) of all possible paths through the “state space”; but this would be far too many to actually do. Nevertheless, such “play” can help one think about what is happening … especially if
you use a more visual model. Gilad continued:
One possible approach to solve this problem is to construct a graph where each node represents a state (combination of the number of students in each group) and each edge represents a possible
transition between states.
We can then explore this graph to identify any cycles or patterns that will help us determine the minimum number of meetings needed for everyone to take the same stand.
Sadly I am not an expert using those tools and could not get further down the solution.
The full answer will be posted in a few days, I will keep you updated!
These, again, are nice ideas to keep in mind (and they reveal how much Gilad knows), but will not be directly fruitful.
Doctor Rick jumped in with a couple ideas; as I had suggested, we like to collaborate on interesting questions!
Hi, Gilad.
I was thinking about this overnight. Thinking globally (rather than step by step), I can see that there must be a minimum of 39 meetings, because at least 36 + 42 minds must be changed, and each
meeting changes two minds.
I have come up with a way to accomplish the goal in 42 meetings (if I didn’t make a mistake somewhere). What I can’t do yet is to prove that this is the fewest meetings possible.
We’ll continue to think about it!
It can’t possibly take less than 39 “moves”, because the two smallest numbers in the initial state are 36 and 42, so at least that many minds must be changed, two per step, if two numbers are to be
reduced to 0.
But the minimum actually possible may be more than that. Since he has found a way to take 42 moves, the minimum number must be between 39 and 42, inclusive.
Gilad responded:
I tried 39, but whatever I do I get to 42 meetings … that’s easy. But as you mentioned I couldn’t prove that can be done for less than 42 meetings.
Making a model on a spreadsheet
Now I had more to say:
I used the ideas in the post I referred to (finding a pair of invariants), to show that of the three possible goals, (0, 0, 121), (0, 121, 0), and (121, 0, 0), only the first is possible. And it
seems clear that this can’t be reached in less than 42 steps.
(If I had shown that none of the goals were possible, it would have been an interesting trick question; the answer would be, sort of, infinity! What did you mean by “it seems there isn’t a
minimum number”?)
To make this visible, I made a spreadsheet that lets me see the results of any sequence of steps; here is an example of the start of a (bad) sequence:
I enter in column B the group to change to (for example, C tells it to add 2 to C and subtract 1 from A and B, moving “southwest”). Then it plots a path of points as shown here:
We started at (42, 36), then went "southwest" to (41, 35), subtracting 1 from A and B and adding 2 to the unseen C; then we went "north-northwest" to (40, 37), subtracting 1 from A (and C) and adding 2
to B; and so on.
This doesn’t look like a good way to reach unanimity, does it? To play the game, you need to aim for one of the three goals, which on this graph will look like (121, 0), or (0, 121), or (0, 0).
My claim is that the first two of these are unreachable, and the last can be attained in 42 steps. We’ll look soon at the invariants I used to make that conclusion, but it’s more fun to play the game
first! You can find the spreadsheet here:
The strategy for getting to unanimity as fast as possible is obvious (at first); just head straight in that direction. There’s a little twist at the end, though.
I also made a graph as I suggested, showing the ordered pairs (A, B) along the path starting at (42, 36). Here is the winner:
We could change where we take the two “NNW” steps, but it seems clear that we can’t do any better. I haven’t tried to make this an actual proof.
So there isn’t only one actual path, but every such path will be a rearrangement of the same 42 steps, which in my spreadsheet consists of 40 C’s and 2 B’s. Here is another of those (the x-axis is A,
the y-axis is B):
Observe that each move subtracts 1 from A, so it has to take 42 steps to get to 0. But what about the other two goals?
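The winning sequence can be replayed programmatically. The small helper below is hypothetical (not from the post), but the move rules and the sequence of two "B" meetings followed by forty "C" meetings come straight from the discussion above.

```python
def meet(state, target):
    """Two students holding the other two opinions meet and both switch to `target`."""
    deltas = {'A': (2, -1, -1), 'B': (-1, 2, -1), 'C': (-1, -1, 2)}
    return tuple(s + d for s, d in zip(state, deltas[target]))

state = (42, 36, 43)                      # (A, B, C) = (yes, no, maybe)
for move in ['B', 'B'] + ['C'] * 40:      # two NNW steps, then forty SW steps
    assert min(state) >= 0                # never borrow from an empty group
    state = meet(state, move)
print(state)  # (0, 0, 121): unanimity in 42 meetings
```

Reordering the two "B" moves among the "C" moves gives the other equally short paths mentioned in the text.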
Here is an example of failure to attain one of the other goals (again, 42 steps):
Each step here (adding 2 to B, subtracting 1 from A and C) moves to the “north-northwest”, reducing A from 42 to 0; but we end up with 120, not 121, for B, because there is still 1 in group C. It
seems clear that we can’t do any better; but my claim is that we can never get to (0, 121, 0) at all.
Gilad replied:
This is a math riddle still on progress that was asked in a Math forum in Israel.
I was unsure about the solution so needed an expert opinion, and you definitely shed some light on the need to prove – which I struggled with.
Since there was no credit riding on it; I could feel free to say more (but still not give a full solution).
Solving the problem
I answered:
I should probably make it explicit that a complete proof is implicit in what I’ve said. It’s obvious that if we are to end with 0 saying yes, then we have to reduce A from 42 to 0, which can take
no less than 42 steps; and I’ve mentioned that I can prove, using a pair of invariants, that the other possible goals (including increasing A to 121) can’t be accomplished at all. I’ve left that
part for you to do, following the post I referred to, or whatever other knowledge you have.
We’ve also seen that we can’t increase B to 121 in less than 42 steps, because that would require adding 2 to B at least \(\frac{121-36}{2}=42.5\) times. But we can show more than that.
For completeness, here’s the best I could do for that goal (again, x is A, y is B):
This reaches 120 in 42 steps; but it isn’t so obvious we can’t reach 121.
This is where the invariants are necessary to convince you it’s impossible.
We didn’t hear back about the solution; but let’s look into this matter of invariants.
Using invariants to make a proof
If you’ve taken the bait and looked at Invariants for a State Machine, then you will have seen how Doctor Jacques found and used invariants to show that a goal was unreachable; if you haven’t, go
read it. I’ll wait …
So how do we find invariants for our problem?
We’ll try by inspection to find an invariant or two: expressions that must remain the same after any move. We’ll stick with my graphs of (A, B), which I’ll now call \((x,y)\), rather than work with
all three variables (since \(x+y+z=121\) is itself an invariant).
First, observe what the three moves look like graphically; each is a vector:
Also, note that \(u+v+w=0\), so they are not independent, and we can use only two of them:
Now consider all sums of these vectors:
The accessible points (relative to whatever point we start at) are the lattice points (integer coordinates) on the lines \(y-x=3m\) and \(2x+y=3n\):
So we can define two invariants: \(\alpha=x-y\) and \(\beta=2x+y\) always remain the same (modulo 3); that is, they always change by a multiple of 3. So a point is reachable if, and only if, \(\alpha\) and \(\beta\) differ from their original values by a multiple of 3.
Now, our starting point is (42, 36), for which \(\alpha=42-36=6\) and \(\beta=2(42)+36=120\) are both multiples of 3; so any reachable point must do the same. We have three possible goals:
• (0, 0): here \(\alpha=0\) and \(\beta=0\), so both invariants have changed by multiples of 3 (by \(-6\) and \(-120\)). This is reachable.
• (121, 0): here \(\alpha=121\) and \(\beta=242\); the changes \(115\) and \(122\) leave remainders \(1\) and \(2\) modulo 3. This is unreachable because the remainders are not both 0.
• (0, 121): here \(\alpha=-121\) and \(\beta=121\); the changes \(-127\) and \(1\) leave remainders \(2\) and \(1\) modulo 3. This is unreachable because the remainders are not both 0.
So only the first goal is reachable, and we have seen that it requires 42 steps, so the answer is confirmed.
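The invariant check above can be sketched in a few lines. The function below is a hypothetical helper using the same \(\alpha\) and \(\beta\) defined in the text; it confirms that of the three goals, only (0, 0) is reachable from (42, 36).

```python
def reachable(start, goal):
    """Both alpha = x - y and beta = 2x + y must change by a multiple of 3,
    since the three allowed moves change (x, y) by (2,-1), (-1,2), or (-1,-1)."""
    (x0, y0), (x1, y1) = start, goal
    same_alpha = ((x1 - y1) - (x0 - y0)) % 3 == 0
    same_beta = ((2 * x1 + y1) - (2 * x0 + y0)) % 3 == 0
    return same_alpha and same_beta

start = (42, 36)
for goal in [(0, 0), (121, 0), (0, 121)]:
    print(goal, reachable(start, goal))  # only (0, 0) passes
```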
Now, you might want to play with the problem itself, by changing the starting point. Can you make one from which we can reach unanimity in more than one way?
Leave a Comment
This site uses Akismet to reduce spam. Learn how your comment data is processed.
Contests with Random Noise and a Shared Prize
Munich Personal RePEc Archive
Contests with Random Noise and a Shared Prize
Sheremeta, Roman and Masters, William and Cason, Timothy
May 2009
Online at
Contests with Random Noise and a Shared Prize
Roman M. Sheremeta, William A. Masters, and Timothy N. Cason
* Department of Economics, Krannert School of Management, Purdue University
**Department of Agricultural Economics, Purdue University
403 W. State St., West Lafayette, IN 47906-2056, U.S.A.
May 2009
This note introduces a model of contests with random noise and a shared prize that combines features of Tullock (1980) and Lazear and Rosen (1981). Similar to results in Lazear and Rosen, as the
level of noise decreases the equilibrium effort rises. As the noise variance approaches zero, the equilibrium effort of the shared-prize contest approaches that of a Tullock lottery contest.
JEL Classifications: C72, D72, D74
Keywords: Contests, All-pay auctions, Tournaments, Random noise, Shared prize
1. Introduction
A wide variety of competitions arise in economic life, and new competitions are regularly
introduced to attract effort and reward achievement. Such competitions are commonly modeled
as contests, in which players compete over a prize by expending costly resources. There are an
enormous variety of possible contests (Konrad, 2009), but the three canonical models are based
on Tullock (1980), Lazear and Rosen (1981), and Hillman and Riley (1989).
Tullock (1980) models a probabilistic contest between two players, in which player $i$'s
probability of winning is $p_i = e_i^r/(e_i^r + e_j^r)$,
where $e_i$ and $e_j$ are the efforts of players $i$ and $j$. The player expending the highest effort has a
higher probability of winning, but the other player still has a chance to win. The most popular
version of the Tullock contest is a simple lottery, in which $r = 1$. In this contest a unique pure
strategy Nash equilibrium exists where players earn positive payoffs.
Lazear and Rosen (1981) examine rank-order tournaments, in which players with greater
achievements always win. They show that when players’ cost of effort is sufficiently convex and
is translated into achievement with random noise, rank-order tournaments can also generate pure
strategy Nash equilibria. Hillman and Riley (1989) study a closely related version of the Tullock
contest, in which . This is known as a first-price all-pay auction, where the winner is
always the player who expends the highest effort. Instead of a pure strategy Nash equilibrium, in
this contest a symmetric mixed strategy Nash equilibrium exists in which players choose efforts
randomly over some interval.
A number of studies have tried to establish common links between these three building
blocks of contest theory. For example, Che and Gale (2000) provide a link between the
rank-order tournament of Lazear and Rosen and the all-pay auction of Hillman and Riley. They
partially characterize the equilibrium and show that with insufficient noise no pure strategy
equilibrium exists in the rank-order tournament. Hirshleifer and Riley (1992) show how an R&D
race between two players that is modeled as a rank-order tournament is equivalent to a lottery
contest for certain assumptions on the noise distribution. Baye and Hoppe (2003) identify
conditions under which a variety of rent-seeking contests, innovation tournaments, and
patent-race games are strategically equivalent to the lottery contest.
This note introduces a model of contests with random noise that combines features of these canonical models:
contestants receive a prize share that is proportional to their achievement (Long and Vousden,
1987). This type of contest imitates some forms of competition between firms, whose marketing
or lobbying effort may be rewarded through a share of industry profit. Shared-prize contests may
also be used within firms to reward workers, or as a type of procurement contract to elicit effort
among suppliers (Zheng and Vukina, 2007).
The analytical results in this paper are restricted to the case of two symmetric players,
and multiplicative random noise with a uniform distribution, although we also consider different
noise distributions using numerical methods. We show that, similar to the rank-order tournament
of Lazear and Rosen, in the shared-prize contest the equilibrium effort increases as the noise
variance becomes smaller. Furthermore, as the noise variance approaches zero, the equilibrium
effort with the shared prize approaches the equilibrium effort in a Tullock lottery contest.
2. The Model
Consider a simple contest in which two risk-neutral players i and j compete for a prize v.
Both players expend individual efforts x_i and x_j. The output of player i is determined by a
production function

y_i = f(x_i, ε_i), (1)

where ε_i is a random variable that is drawn from the distribution G(ε). The random component ε_i
can be thought of as production luck or measurement error, and is not observable to either of the
players. Player i's probability of winning the prize is defined by a contest success function:

p_i = y_i^r / (y_i^r + y_j^r). (2)

Every player who exerts effort x_i has to bear cost c(x_i). The expected payoff for player i is

E[π_i] = v E[p_i] − c(x_i). (3)
The Nash equilibrium depends on the specific conditions of the contest.
2.1. Equilibria in Standard Contests
To obtain the standard lottery contest in Tullock (1980) we set r = 1, c(x_i) = x_i, and
f(x_i, ε_i) = x_i (no noise). In this case the Nash equilibrium is unique and it is given by

x_i* = x_j* = v/4. (4)
For the all-pay auction of Hillman and Riley (1989), we set r → ∞, c(x_i) = x_i, and
f(x_i, ε_i) = x_i. The complete characterization of the equilibrium can be found in Baye et al. (1996). In the
all-pay auction no pure strategy equilibrium exists, unlike in the lottery contest. The mixed
strategy Nash equilibrium is characterized by the cumulative distribution function,

F(x) = x/v for x ∈ [0, v]. (5)
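Under the standard two-player characterization in Baye et al. (1996), the opponent bids uniformly on [0, v] (CDF F(x) = x/v), and any pure bid in [0, v] earns exactly zero. A quick deterministic check of v·F(x) − x on a grid (illustrative values, not from the paper):

```python
# All-pay auction sketch: against an opponent mixing uniformly on [0, v]
# (CDF F(x) = x/v), every pure bid x in [0, v] earns v*F(x) - x = 0.
v = 1.0

def F(x):
    return min(max(x / v, 0.0), 1.0)

payoffs = [v * F(k / 100) - k / 100 for k in range(101)]
print(max(abs(p) for p in payoffs))  # 0.0: players are indifferent over [0, v]
```

Indifference over the whole support is exactly what makes the mixed strategy an equilibrium.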
To obtain a rank-order tournament of Lazear and Rosen (1981), we set f(x_i, ε_i) = x_i + ε_i,
r → ∞, and a strictly convex cost function c(x_i). Noise in the production function and convexity of the cost function are
necessary in order to generate a pure strategy equilibrium. When it exists, the pure strategy Nash
equilibrium effort can be obtained from the following expression:

c'(x*) = v g(0), (6)

where g is the density of the noise difference ε_j − ε_i.
The main difference between a rank-order tournament and the other two contests is the
noise component ε_i. Once a distribution of ε is specified it is easy to analyze the effect of noise
on equilibrium effort. For example, if ε is uniformly distributed on the interval [−a, a], then
g(0) = 1/(2a) and c'(x*) = v/(2a). It follows from (6) that the equilibrium effort decreases in the variance
of the distribution. This major finding of Lazear and Rosen (1981) agrees with economic intuition.
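The step from uniform noise to this comparative static can be made explicit. Assuming (6) takes the standard Lazear-Rosen form c'(x*) = v g(0), with g the density of the noise difference ε_j − ε_i (a sketch of the standard argument in our notation, not the paper's verbatim algebra):

```latex
% The difference of two iid U[-a,a] noises has a triangular density:
g(t) = \frac{2a - \lvert t \rvert}{4a^{2}}, \quad t \in [-2a, 2a],
\qquad g(0) = \frac{1}{2a}.
% Substituting into the first order condition c'(x^*) = v\,g(0):
c'(x^{*}) = \frac{v}{2a},
% which is decreasing in a, and hence in the noise variance a^{2}/3.
```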
2.2. Equilibrium with Random Noise and a Shared Prize
The contest we study closely resembles the Tullock lottery, with the difference that
payoffs are deterministic and proportional to performance, subject to random noise. That is, the success function p_i
in (2) is interpreted as the proportion of the prize value, rather than the probability of winning
the prize. When modeling a conventional Tullock competition with risk neutral agents, adding a
noise component would be redundant since the winner of such a contest is already chosen
probabilistically (Fullerton and McAfee, 1999). However, in many economic competitions
players are rewarded proportionally to some measure of performance which depends on effort
and a random component.
To model this type of competition, we consider a contest where all players receive a
portion of a fixed and known prize. Such a contest arises when r = 1. This is exactly the same
restriction as in the standard Tullock (1980) lottery contest, but now the contest success function
is interpreted as a deterministic share of the prize. Randomness enters only through the
production function (1). The analysis that follows assumes that y_i = x_i ε_i, where ε_i is a random
variable that is drawn from the distribution G(ε) on a strictly positive interval. This multiplicative noise
production function has been used by O’Keefe et al. (1984), Hirshleifer and Riley (1992), and
Gerchak and He (2003). A contest with this production function can also be interpreted as a
contest where players have different, unknown abilities (Rosen, 1986). More importantly,
multiplicative noise implies that the contest success function (2) satisfies the axioms introduced
by Skaperdas (1996).
Multiplicative noise also guarantees that the contest success function is homogeneous of degree zero, i.e.,
p_i(λx_i, λx_j) = p_i(x_i, x_j) for all λ > 0.
Given the restrictions r = 1 and y_i = x_i ε_i, the expected payoff (3) can be rewritten as:

E[π_i] = v E[x_i ε_i / (x_i ε_i + x_j ε_j)] − c(x_i). (7)
Taking the first order condition and assuming a symmetric equilibrium, the optimal effort x*
can be obtained from the following expression:

x* c'(x*) = v E[ε_i ε_j / (ε_i + ε_j)²]. (8)
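One way to derive such an expression (a sketch under our reconstruction of the payoff with r = 1 and multiplicative noise, assuming an interior symmetric equilibrium; not the paper's verbatim algebra):

```latex
% Expected payoff with r = 1 and y_i = x_i \varepsilon_i:
\pi_i = v\,\mathbb{E}\!\left[\frac{x_i \varepsilon_i}{x_i \varepsilon_i + x_j \varepsilon_j}\right] - c(x_i).
% First order condition:
\frac{\partial \pi_i}{\partial x_i}
  = v\,\mathbb{E}\!\left[\frac{\varepsilon_i\, x_j \varepsilon_j}{(x_i \varepsilon_i + x_j \varepsilon_j)^{2}}\right] - c'(x_i) = 0.
% Imposing symmetry x_i = x_j = x^{*}:
x^{*} c'(x^{*}) = v\,\mathbb{E}\!\left[\frac{\varepsilon_i \varepsilon_j}{(\varepsilon_i + \varepsilon_j)^{2}}\right].
```

As a consistency check, when the noise degenerates (ε_i = ε_j = 1) the right-hand side is v/4, which with linear cost reproduces the Tullock lottery equilibrium in (4).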
The equilibrium effort depends on the value of the prize, the convexity of the cost
function, and the distribution of the noise. An increase in the size of the prize increases
individual effort. However, it is not straightforward to evaluate how the equilibrium effort in (8)
is affected by the variance of the noise distribution. If we assume the cost function is linear,
c(x_i) = x_i, and that ε_i and ε_j are independent and uniformly distributed on the interval
[1 − a, 1 + a], where a scales the variance of the distribution, then

x* = v E[ε_i ε_j / (ε_i + ε_j)²]. (9)
The expected payoff at the symmetric equilibrium (9) is positive,

E[π*] = v/2 − x* > 0, (10)

and the second order condition evaluated at the equilibrium is satisfied.
From (9), it is straightforward to show that ∂x*/∂a < 0, i.e., as the level of noise
increases the equilibrium effort decreases, similar to the finding of Lazear and Rosen. The crucial difference in our model is that we model the success
function (a share of the prize) as the Tullock lottery (r = 1). We can solve for the equilibrium as the
variance of noise approaches zero, by evaluating (9) in the limit as a → 0.
With L'Hôpital's rule we can show that x* → v/4 as a → 0. Therefore, as the variance of
noise approaches zero, the equilibrium of this shared-prize contest approaches the equilibrium of
a simple lottery contest without noise (4). A smooth transition exists between this type of contest
with a random noise and a lottery contest. There is no such transition between a rank-order
tournament and an all-pay auction (Che and Gale, 2000).
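The comparative statics can also be checked numerically. Under our reconstruction of the linear-cost equilibrium, x* = v·E[ε_i ε_j / (ε_i + ε_j)²] with ε uniform on [1 − a, 1 + a]; a midpoint-rule integration (an illustrative sketch, not the paper's code) shows the expectation falls in a and tends to 1/4, the Tullock benchmark v/4 with v = 1, as a → 0:

```python
# Midpoint-rule evaluation of I(a) = E[e1*e2/(e1+e2)^2] for e1, e2 iid U[1-a, 1+a].
# Under the linear-cost first-order condition sketched above, x* = v * I(a).
def I(a, n=400):
    step = 2 * a / n
    total = 0.0
    for i in range(n):
        u = 1 - a + (i + 0.5) * step
        for j in range(n):
            w = 1 - a + (j + 0.5) * step
            total += u * w / (u + w) ** 2
    return total / (n * n)

vals = {a: I(a) for a in (0.9, 0.5, 0.1)}
print(vals)  # effort shrinks as noise grows; tends to 1/4 as a -> 0
```

The monotone decline in a, and the convergence to 1/4, mirror the two analytical results stated in the text.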
The assumption that the error term is uniformly distributed permits a closed form solution
for the equilibrium effort. Our main conclusions are also robust to other noise distributions. To
examine robustness we computed numerical solutions for three extreme cases: a (truncated)
normal distribution, a U-shaped quadratic distribution, and the exponential distribution. We only
restricted the mean of the distribution to equal 1. Figure 1 displays the equilibrium effort as a
function of the distribution variance, and the two main conclusions drawn for the equilibrium
with the uniform distribution (9) still hold. First, increases in the noise variance decrease
equilibrium effort. Second, as the noise variance approaches zero the equilibrium efforts
converge to the effort of a simple Tullock lottery contest without noise.
Figure 1 – Equilibrium Effort as a Function of Noise Variance (the mean of ε is normalized to 1)
3. Conclusions
This note presents a contest in which agents are rewarded proportionally to their
achievement, where this achievement depends on both effort and random noise. Our approach
offers a structural model of contests in which a Tullock success function is linked to an explicit
source of random noise. Similar to Lazear and Rosen, the equilibrium effort increases as the
variance of noise decreases. As the noise variance approaches zero, the equilibrium effort approaches the equilibrium
effort of a Tullock lottery contest.
Our restrictions on the production function and the distribution of noise were chosen in
order to obtain a closed form solution for equilibrium effort. Using numerical simulations we
demonstrate that our results are robust to three very different noise distributions, but as shown by
Gerchak and He (2003) even in a standard rank-order tournament, changing the production
function and the distribution of noise can significantly alter results. Therefore, a natural
extension of this study is to examine how the production function and the noise distribution
affect equilibrium behavior in this type of contest. Other extensions could evaluate the effect of
asymmetry, incomplete information, and the number of players. We leave these issues for future research.

References
Amegashie, J. (2006). A contest success function with a tractable noise parameter, Public Choice, 126, 135-144.
Baye, M.R., de Vries, C.G., & Kovenock, D. (1996). The all-pay auction with complete information. Economic Theory, 8, 291-305.
Baye, M.R., & Hoppe, H.C. (2003). The strategic equivalence of rent-seeking, innovation, and patent-race games. Games and Economic Behavior, 44, 217-226.
Che, Y.K., & Gale, I. (2000). Difference-form contests and the robustness of all-pay auctions, Games and Economic Behavior, 30, 22-43.
Dasgupta, A., & Nti, K.O. (1998). Designing an optimal contest. European Journal of Political Economy, 14, 587–603.
Fullerton, R.L., & McAfee, R.P. (1999). Auctioning Entry into Tournaments. Journal of Political Economy, 107, 573-605.
Gerchak, Y., & He, Q.M. (2003). When will the range of prizes in tournaments increase in the noise or in the number of players? International Game Theory Review, 5, 151-166.
Hillman, A., & Riley, J.G., (1989). Politically contestable rents and transfers. Economics and Politics, 1, 17-40.
Hirshleifer, J., & Riley, J. G. (1992). The analytics of uncertainty and information. New York: Cambridge University Press.
Konrad, K.A. (2009). Strategy and Dynamics in Contests. Oxford University Press.
Lazear, E., & Rosen, S. (1981). Rank-order tournaments as optimum labor contracts. Journal of Political Economy, 89, 841-864.
Long, N.V., & Vousden, N. (1987). Risk-averse rent seeking with shared rents. Economic Journal, 97, 971-985.
O’Keefe, M., Viscusi, W.K., & Zeckhauser, R.J. (1984). Economic contests: Comparative reward schemes. Journal of Labor Economics, 2, 27-56.
Rosen, S. (1986). Prizes and incentives in elimination tournaments. American Economic Review, 76, 701-715.
Skaperdas, S. (1996). Contest Success Functions. Economic Theory, 7, 283-290.
Tullock, G. (1980). Efficient Rent Seeking. In James M. Buchanan, Robert D. Tollison, Gordon Tullock, (Eds.), Toward a theory of the rent-seeking society. College Station, TX: Texas A&M University
Press, pp. 97-112.
Thrall graduated in 1935 with a BA from Illinois College and in 1937 with an MA and PhD in mathematics from the University of Illinois. From 1937 to 1969 he was a professor of mathematics at the
University of Michigan in Ann Arbor. In 1969 he became a professor in the newly founded department of Mathematical Sciences (i.e. applied mathematics) at Rice University. He chaired the department
from 1969 to 1974. In 1977 he received a joint appointment in Rice's newly established Graduate School of Business, where he taught decision analysis to MBA Students. He retired from Rice University
as professor emeritus in 1984.^[3]
At the beginning of his career, Thrall's research was in group theory, ring theory, and representation theory.^[3] His research accomplishments during that period include the celebrated hook length
formula for the dimension of an irreducible representation of a symmetric group, or equivalently the number of standard Young tableaux of a given shape (with J. Sutherland Frame and G. de B. Robinson), and the influential Brauer-Thrall conjectures (with Richard Brauer).
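The Frame-Robinson-Thrall hook length formula counts the standard Young tableaux of a shape λ with n cells as n! divided by the product of the hook lengths. A minimal illustrative implementation (0-indexed cells; the shape is a weakly decreasing tuple of row lengths):

```python
from math import factorial

def num_syt(shape):
    """Frame-Robinson-Thrall: #SYT of shape = n! / (product of hook lengths)."""
    # Conjugate partition: length of each column.
    conj = [sum(1 for row in shape if row > j) for j in range(shape[0])]
    # Hook length of cell (i, j) = arm + leg + 1.
    hooks = [shape[i] - j + conj[j] - i - 1
             for i in range(len(shape)) for j in range(shape[i])]
    prod = 1
    for h in hooks:
        prod *= h
    return factorial(sum(shape)) // prod

print(num_syt((2, 1)), num_syt((3, 2)))  # 2 tableaux of shape (2,1); 5 of shape (3,2)
```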
For two years, from 1940 to 1942, he was a visiting scholar at the Institute for Advanced Study.^[4] During WW II he began to study operations research and development of mathematical models for
military applications. From 1957 to 1961 he was the editor-in-chief of Management Science, as successor to C. West Churchman. From 1961 to 1965 Thrall was an associate editor for the journal. He was
the 16th president of The Institute of Management Sciences (TIMS) (now INFORMS) for a one-year term in 1969–1970. He was elected to the 2002 class of Fellows of the Institute for Operations Research
and the Management Sciences.^[5] With William W. Cooper, Rajiv Banker, and other collaborators, he wrote a number of important papers on data envelopment analysis (DEA). Thrall was the author or
co-author of over 100 articles in scholarly journals, as well as several books.^[3]
He married Natalie Hunter in 1936. His wife died in 2004. Upon his death he was survived by a daughter, two sons, three grandchildren, and three great-grandchildren.^[2]
Selected publications
• Artin, Emil; Nesbitt, Cecil J.; Thrall, Robert M. (1944), Rings with Minimum Condition, University of Michigan Publications in Mathematics, vol. 1, Ann Arbor, Mich.: University of Michigan Press,
MR 0010543^[6]
• Spivey, W. Allen; Thrall, Robert M. (1970). Linear optimization. New York: Holt, Rinehart and Winston. ISBN 0030841739. LCCN 70125474; xii+530 p.; illus.
• Thrall, Robert M.; Tornheim, Leonard (1970). Vector Spaces and Matrices. Courier Corporation. ISBN 978-0-486-62667-3. First edition: Wiley, 1957.^[8]
Math Symbols Bulletin Board Set: Subtraction!
The Math Symbols Bulletin Board Set is an educational tool that includes a collection of visual aids featuring various mathematical symbols and their meanings to enhance math literacy among students
in a classroom setting.
Educators use the Math Symbols Bulletin Board Set to teach and reinforce the language of mathematics.
This set typically includes symbols for operations (like addition, subtraction, multiplication, and division), relational symbols (such as equals, greater than, and less than), and other specialized
symbols used in algebra, geometry, and more advanced math courses.
The bulletin board set serves as a reference for students to familiarize themselves with these symbols, assisting them in understanding mathematical concepts and solving problems effectively.
For example, a Math Symbols Bulletin Board Set might include:
Plus (+) and minus (-) signs for addition and subtraction
Multiplication (×) and division (÷) symbols
Inequality symbols (<, >, ≤, ≥)
Pi (π), square root (√), and sigma (Σ) symbols for more advanced concepts
The Math Symbols Bulletin Board Set is an indispensable classroom resource, promoting a clear understanding of essential math symbols crucial for students’ academic success in mathematics.
Key Takeaway
The Math Symbols Bulletin Board Set visually reinforces mathematical concepts and facilitates student understanding through clear symbol representation.
It prominently displays commonly used mathematical symbols and helps students internalize symbols and their meanings.
The set promotes coherence in understanding across different mathematical topics and encourages active participation and discussion among students.
It enhances the classroom environment with colorful decorations and serves as a long-term reference for independent learning.
Importance of Math Symbols Bulletin Board Set
The importance of the Math Symbols Bulletin Board Set lies in its ability to visually reinforce mathematical concepts and facilitate student understanding through clear and consistent symbol representation.
By prominently displaying commonly used mathematical symbols, such as addition, subtraction, multiplication, division, equality, inequality, and others, the bulletin board set serves as a constant
visual reference for students.
This visual reinforcement helps students internalize the symbols and their meanings, leading to improved retention and comprehension of mathematical concepts.
Additionally, the consistent representation of symbols across different mathematical topics promotes coherence in understanding, making it easier for students to connect and apply mathematical
principles across various problem-solving scenarios.
Ultimately, the Math Symbols Bulletin Board Set plays a crucial role in supporting students’ grasp of fundamental mathematical concepts.
Key Features of the Bulletin Board Set
Prominently featured symbols serve as the focal point of the bulletin board set, enhancing visual reinforcement of mathematical concepts for students.
The key features of this bulletin board set include clear and vibrant display of essential math symbols such as addition, subtraction, multiplication, division, equality, inequality, and various
geometric shapes. In addition to these math symbols, this bulletin board set also includes a detailed explanation of the sigma (Σ) symbol so that students can easily understand its significance
in mathematical expressions. The set is designed to be visually appealing and educational, making it a valuable resource for any math classroom. With its engaging visuals and informative content,
this bulletin board set is sure to enhance students’ understanding and appreciation of mathematical concepts.
These symbols are accompanied by concise explanations, making the set a valuable educational tool for students at all levels.
Additionally, the set incorporates real-world applications of these symbols, fostering a deeper understanding of their significance in everyday life.
The inclusion of large, easy-to-read fonts ensures visibility from a distance, making it an effective teaching aid for classrooms of various sizes.
Overall, the bulletin board set offers a visually engaging and informative resource for reinforcing mathematical concepts in an accessible and comprehensive manner.
Benefits of Using Math Symbols Bulletin Board Set
One of the primary benefits of using the Math Symbols Bulletin Board Set is its ability to visually reinforce mathematical concepts for students. The set serves as a constant visual aid, promoting a
deeper understanding of mathematical principles.
Here are some additional benefits of incorporating the Math Symbols Bulletin Board Set into the learning environment:
Benefits and descriptions:
Visual Reinforcement: Helps students to grasp and remember complex mathematical symbols and concepts.
Classroom Engagement: Encourages active participation and discussion among students during math lessons.
Decorative and Informative Addition to the Classroom: Enhances the classroom environment with colorful and informative math-related decorations.
Long-Term Reference: Serves as a reference point for students to revisit and reinforce their learning independently.
Benefits of incorporating the Math Symbols Bulletin Board Set into the learning environment
Ways to Incorporate the Set in Learning Spaces
The Math Symbols Bulletin Board Set can be effectively incorporated into learning spaces through interactive math games, providing students with a hands-on approach to understanding mathematical concepts.
Additionally, using the set as a visual math reference can help reinforce key ideas and facilitate a deeper understanding of mathematical symbols and operations.
Furthermore, collaborative problem-solving activities can be enhanced by the presence of the bulletin board set, fostering teamwork and critical thinking skills in students.
Interactive Math Games
Integrate the Math Symbols Bulletin Board Set into interactive math games to enhance learning experiences in educational environments.
Use the set to create activities such as symbol matching games, where students match mathematical operations with the corresponding symbols.
Another interactive game could involve a scavenger hunt, where students search for specific math symbols around the classroom or school.
Additionally, incorporate the set into a math relay race, where students solve math problems using the symbols before passing the baton to the next team member.
These interactive games not only make learning math fun but also reinforce students’ understanding of mathematical symbols and operations.
By incorporating the Math Symbols Bulletin Board Set into such engaging activities, educators can create dynamic and effective learning spaces for their students.
These interactive games help students develop a deeper understanding of mathematical concepts while having fun.
Collaborative Problem-Solving Activities
When using the Math Symbols Bulletin Board Set in learning spaces, educators can incorporate it into collaborative problem-solving activities to enhance student engagement and mathematical understanding.
By utilizing the symbols and visual aids on the bulletin board, teachers can create interactive problem-solving tasks that encourage teamwork and critical thinking.
For instance, teachers can present real-world mathematical problems and ask students to work together to solve them using the symbols and concepts displayed on the board.
This fosters a collaborative learning environment and helps students grasp the practical applications of mathematical symbols.
Moreover, educators can use the bulletin board set as a backdrop for group discussions where students explain their problem-solving strategies, further reinforcing their comprehension of mathematical concepts.
Incorporating the set into collaborative activities enriches the learning experience and cultivates a deeper understanding of mathematical principles.
Enhancing Math Learning With the Bulletin Board Set
The Math Symbols Bulletin Board Set provides a visually stimulating way to reinforce mathematical concepts, making it an effective tool for enhancing math learning.
By creating an interactive learning space, the bulletin board set encourages student engagement and participation, fostering a more dynamic and effective classroom environment for math instruction.
These points highlight the potential of the bulletin board set to elevate the learning experience and support students in grasping mathematical concepts.
Visual Math Reinforcement
Regularly incorporating visual math reinforcement through the bulletin board set can significantly enhance students’ math learning experience. Visual aids help make abstract mathematical concepts
more tangible and accessible, catering to diverse learning styles.
The bulletin board set serves as a constant visual reminder of key math symbols, formulas, and problem-solving strategies, reinforcing classroom instruction.
Here’s a table highlighting the benefits of visual math reinforcement:
Benefits of Visual Math Reinforcement
1. Enhances understanding of concepts
2. Supports memory retention
3. Encourages active engagement
4. Fosters a positive learning environment
5. Assists in connecting ideas and applications
Interactive Learning Space
Incorporating the math symbols bulletin board set into an interactive learning space can significantly enhance students’ understanding and retention of mathematical concepts.
Here’s how it can be achieved:
1. Hands-On Engagement: The bulletin board set provides a tangible and visually appealing resource that students can interact with, fostering active engagement in the learning process.
2. Visual Reinforcement: By displaying key mathematical symbols, formulas, and concepts, the bulletin board serves as a constant visual reminder, reinforcing the material covered in class.
3. Collaborative Learning: Utilizing the bulletin board as a focal point for group activities and discussions encourages collaborative learning, allowing students to share ideas and problem-solving
strategies in a dynamic, interactive environment.
Engaging Classroom Environment
When creating an engaging classroom environment to enhance math learning with the bulletin board set, it is essential to consider the impact on students’ active participation and comprehension of
mathematical concepts.
A well-designed bulletin board can serve as a visual aid, sparking curiosity and reinforcing key math symbols and concepts.
Here is an example of how the bulletin board set can be utilized:
Column 1 Column 2 Column 3
Math Symbol 1 Math Symbol 2 Math Symbol 3
Math Concept 1 Math Concept 2 Math Concept 3
Example 1 Example 2 Example 3
How the bulletin board set can be utilized
Tips for Maximizing the Set’s Impact
To ensure an effective use of the Math Symbols Bulletin Board Set, educators should consistently reinforce its relevance in classroom activities.
Here are some tips for maximizing the set’s impact:
1. Integration into Lessons: Incorporate the math symbols into daily lessons, encouraging students to refer to the bulletin board when working on math problems.
2. Interactive Activities: Create interactive activities that require students to use the symbols from the bulletin board, such as matching games or problem-solving tasks.
3. Regular Review: Regularly review the symbols with the class, asking students to explain the meaning and usage of each symbol.
By following these tips, educators can ensure that the Math Symbols Bulletin Board Set becomes an integral part of the classroom, promoting a deeper understanding of mathematical concepts.
Creative Display Ideas for the Bulletin Board Set
One effective way to enhance the impact of the Math Symbols Bulletin Board Set is through creative and visually engaging displays. A well-designed bulletin board can serve as a powerful visual aid to
reinforce mathematical concepts and symbols.
Here’s a sample layout idea for the bulletin board set:
Symbol and meaning:
Plus (+): Addition
Minus (-): Subtraction
Multiplication (×): Multiplication
Division (÷): Division
Equals (=): Equality
sample layout idea for the bulletin board set
Maintenance and Care of the Bulletin Board Set
Proper maintenance and care of the Math Symbols Bulletin Board Set are essential to preserve its visual impact and functionality in the learning environment.
To ensure its longevity and effectiveness, follow these maintenance tips:
1. Regular Cleaning: Use a soft cloth or duster to gently remove dust and dirt from the bulletin board and symbols. Avoid using harsh chemicals that may damage the material.
2. Secure Mounting: Check the mounting of the bulletin board to ensure it is secure and stable. Loose fittings or improper installation can lead to accidents and damage.
3. Periodic Inspection: Regularly inspect the symbols, borders, and any additional components for signs of wear, tear, or damage. Replace any worn-out or damaged parts promptly to maintain the set’s
visual appeal and educational value.
The math symbols bulletin board set serves as a superb supplement for math instruction. Its key features, benefits, and creative display ideas contribute to an engaging and effective learning environment.
By incorporating this set in learning spaces and maintaining it with care, educators can enhance math learning and maximize its impact. Overall, the bulletin board set is a valuable tool for
bolstering mathematical mastery.
Grade 12 Calculus and Vectors
Chapter 1: Introduction to Calculus and Vectors
This is an introductory chapter to Calculus and Vectors. It is important for students to be very familiar with the material presented in this chapter since the following chapters will be building
from this knowledge. This chapter will investigate slopes of straight lines, secants and tangents as well as their relationship to limits of functions and some of the main properties of limits.
Chapter will conclude with techniques for identifying continuous functions and non-continuous functions using limits.
1.1 Rationalizing Denominators and Radical Expressions
1.2 Average Rate of Change
1.3 Slope of Tangents
1.4 Limit of a Function
1.5 Properties of Limits
1.6 Continuity
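The link between sections 1.2 through 1.4 can be previewed numerically: the average rate of change of f over [a, a + h] is the slope of a secant, and its limit as h shrinks is the slope of the tangent. An illustrative sketch (not course material) with f(x) = x² at x = 1, whose tangent slope is 2:

```python
# Average rate of change of f over [a, a+h] = slope of the secant through
# (a, f(a)) and (a+h, f(a+h)); its limit as h -> 0 is the tangent slope at a.
def secant_slope(f, a, h):
    return (f(a + h) - f(a)) / h

f = lambda x: x ** 2
slopes = [secant_slope(f, 1.0, h) for h in (0.1, 0.01, 0.001)]
print(slopes)  # approaches 2 as h shrinks
```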
The Courses we offer
• English / French
• Mathematics / Statistics
• Advanced Functions
• Calculus and Vectors
• Algebra and Geometry
• All Science Courses
• Physics / Chemistry / Biology
• Anatomy / Pathophysiology / Kinesiology
• Thermodynamics / Fluids Dynamics
• Pronunciation and Language Help
• Computer Science / Programming
• CAD / Software and Applications Trainings
View all courses and support offered »
Programs and Services provided
• In-Home 1-on-1 tutoring
• Small Group Neighbourhood tutoring
• Mentoring / Course scheduling
• FREE supplementary Video Tutorials
• Extra practice materials
• Distant learning assistance
• Projects / Workshops / Camps
• Science Competitions
• Accelerated/Enriched programs
• Reinforcement programs
See what sets us apart »
Given a table, equation or written description of an exponential function, graph that function and determine its key features.
Clarification 1:
Key features are limited to domain; range; intercepts; intervals where the function is increasing, decreasing, positive or negative; constant percent rate of change; end behavior and asymptotes.
Clarification 2: Instruction includes representing the domain and range with inequality notation, interval notation or set-builder notation.
Clarification 3: Within the Algebra 1 course, notations for domain and range are limited to inequality and set-builder.
Clarification 4: Within the Algebra 1 course, exponential functions are limited to the forms f(x) = ab^x, where b is a whole number greater than 1 or a unit fraction, or f(x) = a(1 ± r)^x.
General Information
Subject Area: Mathematics (B.E.S.T.)
Grade: 9-12
Strand: Algebraic Reasoning
Date Adopted or Revised: 08/20
Status: State Board Approved
Benchmark Instructional Guide
Connecting Benchmarks/Horizontal Alignment
Terms from the K-12 Glossary
• Coordinate plane
• Domain
• Exponential Function
• Function Notation
• Range
• $x$-intercept
• $y$-intercept
Vertical Alignment
Previous Benchmarks
Next Benchmarks
Purpose and Instructional Strategies
In grade 8, students graphed linear equations. In Algebra I, students graph exponential functions and determine their key features, including asymptotes and end behavior. Students are first
introduced to asymptotes in Algebra I. In later courses, asymptotes are important in the study of other types of functions, including rational functions.
• Instruction provides the opportunity for students to explore the meaning of an asymptote graphically and algebraically. Through work in this benchmark, students will discover that asymptotes are useful guides to complete the graph of a function, especially when drawing them by hand. For mastery of this benchmark, asymptotes can be drawn on the graph as a dotted line or not drawn on the graph at all.
• For students to have a full understanding of exponential functions, instruction includes MA.912.AR.5.3 and MA.912.AR.5.4. Growth or decay of a function can be defined as a key feature (constant percent rate of change) of an exponential function and is useful in understanding the relationship between the two.
• Instruction includes the use of $x$-$y$ notation and function notation.
• Instruction includes representing domain and range using words, inequality notation and set-builder notation.
□ Words: If the domain is all real numbers, it can be written as “all real numbers” or “any value of $x$, such that $x$ is a real number.”
□ Inequality notation: If the domain is all values of $x$ greater than 2, it can be represented as $x$ > 2.
□ Set-builder notation: If the domain is all values of $x$ less than or equal to zero, it can be represented as {$x$|$x$ ≤ 0} and is read as “all values of $x$ such that $x$ is less than or equal to zero.”
• Instruction includes the use of appropriately scaled coordinate planes, including the use of breaks in the $x$- or $y$-axis when necessary.
Common Misconceptions or Errors
• Students may not fully understand how to use proper notation when determining the key features of an exponential function.
Strategies to Support Tiered Instruction
• Instruction includes student understanding that growth and decay is not the same as a function increasing or decreasing.
□ For example, the exponential function $y$ = −2(0.5)^$x$ is an exponential decay function because the value of $b$ is between 0 and 1. Note that it is the magnitudes of the $y$-values that
are decaying exponentially, eventually getting closer to zero. However, the value of the function increases as the value of $x$ increases. To help students visualize this, graph the function
using graphing technology.
• Instruction includes using an exponential function formula guide like the one below.
Exponential Growth: $b$ > 1, $y$ = $a$(1 + $r$)^$t$
Exponential Decay: $b$ < 1, $y$ = $a$(1 − $r$)^$t$
Instructional Tasks
Instructional Task 1
• The bracket system for the NCAA Basketball Tournament (also known as March Madness) is an example of an exponential function. At each round of the tournament, teams play against one another with
only the winning teams progressing to the next round. If we start with 64 teams going into round 1, the table of values looks something like this:
□ Part A. Graph this function.
□ Part B. What is the percentage of teams left after each round?
Instructional Task 2
• Ashanti purchased a car for $22,900. The car depreciated at an annual rate of 16%. After 5 years, Ashanti wants to sell her car.
□ Part A. Write an equation that models the value of Ashanti’s car.
□ Part B. What would be the range of the graph of the value of Ashanti’s car?
□ Part C. What would be the $y$-intercept of that graph and what does it represent?
□ Part D. Will her car ever have a value of $0.00 based on your equation?
□ Part E. What would be a sensible domain for this function? Justify your answer.
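Instructional Task 2 is a standard exponential decay setup; as a quick numeric sketch (the function name and the assumed model form, value = initial × (1 − rate)^t, are mine, not part of the benchmark text):

```python
# Exponential decay model for Instructional Task 2 (assumed form):
#   value(t) = initial * (1 - rate) ** t
def car_value(initial, rate, years):
    """Value of the car after `years` of annual depreciation at `rate`."""
    return initial * (1 - rate) ** years

# Ashanti's car: $22,900 at 16% annual depreciation, after 5 years.
value = car_value(22900, 0.16, 5)
print(f"${value:,.2f}")  # roughly $9,577.05
```

Evaluating the same function at t = 0 recovers the $y$-intercept (the purchase price), which is what Part C asks students to interpret.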
Instructional Items
Instructional Item 1
• An exponential function is given by the equation $y$ = −14($\frac{\text{1}}{\text{4}}$)^$x$. What is the asymptote for the graph?
Instructional Item 2
• An exponential function is given by the equation $y$ = 50(1.1)^$t$.
□ Part A. Does this function represent exponential growth or decay?
□ Part B. What is the constant percent rate of change of $y$ with respect to $t$?
*The strategies, tasks and items included in the B1G-M are examples and should not be considered comprehensive.
Related Courses
This benchmark is part of these courses.
Related Access Points
Alternate version of this benchmark for students with significant cognitive disabilities.
Given a table, equation or written description of an exponential function, select the graph that represents the function.
Related Resources
Vetted resources educators can use to teach the concepts and skills in this benchmark.
Formative Assessments
Lesson Plans
Original Student Tutorials
Perspectives Video: Teaching Idea
Problem-Solving Tasks
MFAS Formative Assessments
Comparing Functions - Exponential:
Students are asked to use technology to graph exponential functions and then to describe the effect on the graph of changing the parameters of the function.
Graphing an Exponential Function:
Students are asked to graph an exponential function and to determine if the function is an example of exponential growth or decay, describe any intercepts, and describe the end behavior of the graph.
Loss of Fir Trees:
Students are asked to sketch a graph that depicts the exponential decline in the population of fir trees in a forest.
Original Student Tutorials Mathematics - Grades 9-12
Exponential Functions Part 1:
Learn about exponential functions and how they are different from linear functions by examining real world situations, their graphs and their tables in this interactive tutorial.
Exponential Functions Part 2: Growth:
Learn about exponential growth in the context of interest earned as money is put in a savings account by examining equations, graphs, and tables in this interactive tutorial.
Student Resources
Vetted resources students can use to learn the concepts and skills in this benchmark.
Original Student Tutorials
Exponential Functions Part 3: Decay:
Learn about exponential decay as you calculate the value of used cars by examining equations, graphs, and tables in this interactive tutorial.
Type: Original Student Tutorial
Exponential Functions Part 2: Growth:
Learn about exponential growth in the context of interest earned as money is put in a savings account by examining equations, graphs, and tables in this interactive tutorial.
Type: Original Student Tutorial
Exponential Functions Part 1:
Learn about exponential functions and how they are different from linear functions by examining real world situations, their graphs and their tables in this interactive tutorial.
Type: Original Student Tutorial
Problem-Solving Tasks
Do two points always determine an exponential function?:
This problem complements the problem "Do two points always determine a linear function?'' There are two constraints on a pair of points R1 and R2 if there is an exponential function f(x) = ae^bx
whose graph contains R1 and R2.
Type: Problem-Solving Task
A Saturating Exponential:
This task provides an interesting context to ask students to estimate values in an exponential function using a graph.
Type: Problem-Solving Task
Graphing Exponential Equations:
This tutorial will help you to learn about exponential functions by graphing various equations representing exponential growth and decay.
Type: Tutorial
Parent Resources
Vetted resources caregivers can use to help students learn the concepts and skills in this benchmark.
Problem-Solving Tasks
Do two points always determine an exponential function?:
This problem complements the problem "Do two points always determine a linear function?'' There are two constraints on a pair of points R1 and R2 if there is an exponential function f(x) = ae^bx
whose graph contains R1 and R2.
Type: Problem-Solving Task
A Saturating Exponential:
This task provides an interesting context to ask students to estimate values in an exponential function using a graph.
Type: Problem-Solving Task
|
{"url":"https://www.cpalms.org/PreviewStandard/Preview/15591","timestamp":"2024-11-10T14:57:27Z","content_type":"text/html","content_length":"130181","record_id":"<urn:uuid:70eb1d5b-320b-496e-a28c-280439b92636>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00884.warc.gz"}
|
Excel Formula for Checking Sentences and Categories
In this tutorial, we will learn how to use an Excel formula to check if a sentence in one column matches any sentences in another column and insert the corresponding category. This formula is
particularly useful for German Excel users who want to automate the categorization of sentences based on a predefined list.
To achieve this, we will use the VLOOKUP function in Excel. The VLOOKUP function allows us to search for a value in a table range and retrieve a corresponding value from a specified column.
The formula we will use is as follows:
=IFERROR(VLOOKUP(B1,$L$1:$M$100,2,FALSE),"")
Let's break down the formula step by step:
1. The VLOOKUP function searches for the value in cell B1, which is the first cell in column B.
2. The table range is specified as $L$1:$M$100, where column L contains the sentences and column M contains the corresponding categories. The dollar signs ($) are used to make the range absolute, so
it doesn't change when the formula is copied to other cells.
3. The column index is set to 2, which means the function will return the value from the second column of the table range (column M).
4. The last argument, FALSE, specifies that an exact match is required.
5. The IFERROR function is used to handle cases where no match is found. If the VLOOKUP function returns an error, the IFERROR function returns an empty string ("") instead.
To use this formula, you can simply copy it to cell C1 and drag it down to apply it to the entire column. The formula will check whether each value in column B exactly matches one of the sentences in column L and insert the corresponding category from column M in column C. If no match is found, an empty string will be displayed.
Here's an example to illustrate how the formula works:
| B | L | M |
| 1 | Sentence 1 | Cat1 |
| 2 | Sentence 2 | Cat2 |
| 3 | Sentence 3 | Cat3 |
| 4 | Sentence 4 | Cat4 |
| 5 | Sentence 5 | Cat5 |
The formula =IFERROR(VLOOKUP(B1,$L$1:$M$100,2,FALSE),"") would return Cat1, Cat2, Cat3, Cat4 and Cat5 in the first five rows of column C.
The formula checks whether the values in column B exactly match any of the sentences in column L and inserts the corresponding category from column M in column C. If no match is found, an empty string is displayed.
Now that you know how to use this formula, you can easily categorize sentences based on a predefined list in Excel. This can be helpful for various tasks such as data analysis, content organization,
and more. Happy Excel coding!
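Because the FALSE argument makes VLOOKUP an exact-match lookup, its behavior maps directly onto a dictionary lookup. For readers who want to prototype the same categorization outside Excel, here is a rough Python sketch (the function and variable names are illustrative, not part of the tutorial):

```python
# Rough Python analogue of =IFERROR(VLOOKUP(B1, L:M, 2, FALSE), "").
# Exact match only, just like the FALSE argument in VLOOKUP.
def categorize(sentence, lookup_table):
    """Return the category for `sentence`, or "" if there is no exact match."""
    return lookup_table.get(sentence, "")

# Columns L and M as a mapping (sentence -> category).
lookup = {
    "Sentence 1": "Cat1",
    "Sentence 2": "Cat2",
    "Sentence 3": "Cat3",
}

print(categorize("Sentence 2", lookup))            # Cat2
print(repr(categorize("No such sentence", lookup)))  # ''
```

The `.get(key, "")` default plays the role of IFERROR: a missing key yields an empty string instead of an error.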
|
{"url":"https://codepal.ai/excel-formula-generator/query/06H6JxdZ/excel-formula-check-sentences-category","timestamp":"2024-11-03T01:09:12Z","content_type":"text/html","content_length":"101021","record_id":"<urn:uuid:58380ea2-af6c-4586-9c52-5139c154d2a1>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00281.warc.gz"}
|
Proof That Loss of Fluid Per Unit Volume Equals Divergence of Velocity Field
The x component of the velocity at the centre of the face ABCD above is
The x component of the velocity at the centre of the face EFGH above is
The volume of fluid crossing ABCD per unit time is
The volume of fluid crossing EFGH per unit time is
The rate at which water is lost is the difference of these two and equals
Similarly the net loss of fluid in the y and z directions are
Adding these gives the rate of loss of fluid overall as
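The equations themselves did not survive extraction. For a small box with sides $\delta x$, $\delta y$, $\delta z$ and velocity field $\mathbf{v} = (v_x, v_y, v_z)$, the standard derivation (a reconstruction of the missing steps, not the page's original notation) runs as follows:

```latex
% x-velocity at the centre of face ABCD (at x) and of face EFGH (at x + \delta x):
v_x, \qquad v_x + \frac{\partial v_x}{\partial x}\,\delta x

% Volume of fluid crossing each face per unit time (face area \delta y\,\delta z):
v_x\,\delta y\,\delta z, \qquad
\left(v_x + \frac{\partial v_x}{\partial x}\,\delta x\right)\delta y\,\delta z

% Net rate of loss of fluid in the x direction (difference of the two):
\frac{\partial v_x}{\partial x}\,\delta x\,\delta y\,\delta z

% Similarly for the y and z directions; adding all three gives the overall
% rate of loss, i.e. the divergence times the box volume:
\left(\frac{\partial v_x}{\partial x}
    + \frac{\partial v_y}{\partial y}
    + \frac{\partial v_z}{\partial z}\right)\delta x\,\delta y\,\delta z
  = (\nabla \cdot \mathbf{v})\,\delta x\,\delta y\,\delta z
```

Dividing by the box volume $\delta x\,\delta y\,\delta z$ gives the loss of fluid per unit volume as $\nabla \cdot \mathbf{v}$, which is the claim in the title.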
|
{"url":"https://astarmathsandphysics.com/university-physics-notes/fluid-mechanics/1575-proof-that-loss-of-fluid-per-unit-volume-equals-divergence-of-velocity-field.html","timestamp":"2024-11-11T17:13:02Z","content_type":"text/html","content_length":"68639","record_id":"<urn:uuid:1ad4de88-52ec-41e2-ae78-65b5a47e1e4b>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00217.warc.gz"}
|
Nested first-passages of tracer particles in flows of blood and control suspensions: Symmetry and lorentzian transformations
Theory of molecular Taylor-Aris dispersion (TAD) is an accepted framework describing tracer dispersion in suspension flows and determining effective diffusion coefficients. Our group reported a
pseudo-Lagrangian method to study dispersion in suspension flows at FEDSM-2000. Tracer motions were studied in a steadily moving inertial reference frame (SMIRF) aligned with the flow direction;
increments of change of axial position of individual tracers were collected to demonstrate how the tracer moved as they, individually, interacted with similar collections of other bodies brought to
and from the region. First, individual tracers with no apparent axial velocity component (NAAVC) were identified; they exhibited fixed positions in video recordings of images collected in the SMIRF.
Then, time increments were measured for tracers to pass at least 5, but usually 10 pre-selected, nested distances in the up- or downstream direction laid out with respect to the zero-site in the
SMIRF. Such data were richer than measurements of tracer spread over time because stations along each path were serial first-passages (FP) with probabilistic meaning. Dispersion of various types of
suspension and two transformation rules for combining velocity components are discussed herein. Traditional low-speed continuum theory and particle dynamics use Galilean transforms. Yet, to recognize
the limited speed in laws for channel flows, Lorentzian transformations may be appropriate. In a four-space, deterministic paths would begin at NAAVC sites and continue in time-like conical regions
of four-space. Distances in this space are measured using Minkowski's metric; at the NAAVC site and on the boundary of the space-time cone, this metric has the format of the Fürth, Ornstein, and
Taylor (FOT) equation when only terms to order 2t are used. Data shown at FEDSM-2000 can be reinterpreted as "prospective paths" in time-like regions that were consolidated in normalized cumulative
probability distributions to provide retrospective descriptions. The ad hoc sign alteration of the FOT equation to fit the data of FEDSM-2000 is now taken as a part of measuring lengths using a
Minkowski metric, which signifies a hyperbolic geometry, for which an inherent scaling constant is a negative curvature. The space also has an intrinsic distance of ℓ = δτ, obtained from fitting
parameters (δτ) for the FOT equation. Integrals of the area under the FOT curve have units of volume, which are considered as describing an average volume of dispersion on S3,the 3-sphere. Path
motion through this volume was kinematic dispersion, δτ2t, which was the form for effective diffusivity in continuum theory used in FEDSM-2000. Weiner and Wilmer describe transformations in
four-spaces in terms of commutating rotations on orthogonal planes, a concept readily linked to symmetries in the hyperbolic space typical of Lorentzian transformations; they also describe a second
order ODE like the FOT equation.
Publication Title
American Society of Mechanical Engineers, Fluids Engineering Division (Publication) FEDSM
Recommended Citation
Eckstein, E., Bhal, V., Lavine, J., Ma, B., Leggas, M., & Goldstein, J. (2017). Nested first-passages of tracer particles in flows of blood and control suspensions: Symmetry and lorentzian
transformations. American Society of Mechanical Engineers, Fluids Engineering Division (Publication) FEDSM, 1C-2017 https://doi.org/10.1115/FEDSM2017-69549
|
{"url":"https://digitalcommons.memphis.edu/facpubs/5202/","timestamp":"2024-11-03T15:51:49Z","content_type":"text/html","content_length":"47058","record_id":"<urn:uuid:5aaa342a-2098-4953-a454-169c543c5481>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00001.warc.gz"}
|
Reflex Angle– Definition, Degree, Diagram, Examples
Angles are one of the basic building blocks of geometrical shapes. The reflex angle is one such type of angle that is very common in academic disciplines and real-life examples. It is an angle that lies between a straight angle (180 degrees) and a full rotation, i.e., a complete angle of 360 degrees. In this article, we will study some of the fascinating properties of this special angle and understand its various aspects using a diagram.
Reflex Angle
A Reflex angle is an angle whose values lie between 180 and 360 degrees. In other words, the value of this special angle is always less than 360 degrees and always more than 180 degrees. The value of
this angle always remains between a straight angle (180 degrees) and a complete angle (360 degrees). It is one of the six types of angles found in the geometrical world. The other five angles are
acute angle, right angle, obtuse angle, straight angle, and complete angle. Let us understand this special angle in more detail through its definition and a diagram.
Reflex Angle Definition
Mathematically, any angle whose value is more than 180 degrees and less than 360 degrees is a reflex angle. Simply put, the value of this angle always lies between 180 degrees and 360 degrees, i.e., it is always more than a straight angle and always less than a complete angle. The sum of this angle and its corresponding angle on the other side always forms a complete angle (360 degrees).
Reflex Angle Diagram
The shape of this particular angle can be easily understood by its diagram. The diagrammatic view of this angle will help you understand this angle more clearly. The below figure shows the diagram of
this angle whose value will be between 180 degrees and 360 degrees.
Reflex Angle Degree
For an angle to be qualified as a reflex, it must be more than 180° and less than 360°. Let X be an angle of reflex type; then X should satisfy the condition 180° < X < 360°.
Reflex Angle Formula
As we know, the sum of this angle and its corresponding angle on the other side is always 360°; hence, using this concept, we can easily find out the value of this angle. Let us comprehend this concept with an example.
Let there be an angle whose value is a.
The corresponding angle on the other side will be of reflex type
So using the above concept we can say,
reflex angle = 360° – a
It is interesting to note that this angle can be represented as the sum of a straight angle (180°) and one of the three basic angles, i.e., an acute, right, or obtuse angle.
Y = Straight angle (180°) + X
where Y is an angle that is reflex in nature and X is an acute, right, or obtuse angle.
Reflex Angle Examples
As we know the value of this special angle is always greater than 180 degrees and smaller than 360 degrees. So some of its examples are 181°, 256°, 240°, 227°, 307°, 338°, 197°, 356°, 278°, etc.
Reflex Angle Real-Life Examples
Due to its ubiquitous nature, many examples of this angle can be found in the real world. Some of these examples are listed hereunder:
• The angle between the hour hand and minute hand at 8 O’clock when measured in clockwise direction.
• The spokes that connect a Ferris wheel’s center to its passenger cabins form this angle with the horizontal as the wheel turns.
• The angle formed by pizza slices after the removal of one slice.
• The angle created by the steering wheel and the center position can be this special angle when a car makes a quick turn.
Reflex Angle Triangle
It is not possible to construct a triangle with a reflex angle, because the sum of all the angles of a triangle is exactly 180°, while the measure of this angle alone is more than 180°. So it is not possible to create such a triangle in any known way. But it may be noted that the outside angle at each vertex of a triangle (360° minus the interior angle) is always reflex in nature.
Reflex Angle Properties
Like any other geometrical entity, this special angle too has some unique features that help it stand apart from the crowd. Using these properties, you will easily be able to solve many intricate problems related to this angle. Some of its crucial properties are given below.
• The value of this angle is always more than 180 degrees and less than 360 degrees
• This angle lies between a straight angle and a complete angle
• The outside angle at each vertex of a triangle (360° minus the interior angle) is always reflex in nature
• It is not possible to construct a triangle having any angle of reflex type
• It can be represented as a sum of straight angle and any of the angles from acute, right, and obtuse angle
• It always occurs in pairs with its corresponding angle, which can be an acute, right, or obtuse angle
Reflex Angle Solved Examples
Some of the solved examples related to this topic is given below. By practicing these solved questions, students will have a better grasp on this topic.
Example 1: What will be the value of the angle formed by the hands of a clock at 7 PM when measured in clockwise direction?
Solution: As we know, at 7 PM the hour hand points to 7 and the minute hand points to 12.
Measured clockwise from the minute hand (12) to the hour hand (7), the number of digit positions between them = 7.
One digit traversal corresponds to 30°.
So the angle formed = 30° × 7
Hence, the angle formed will be 210°.
Example 2: What will be the reflex angle of 150°?
Solution: As we know that reflex angle + other angle = 360°
Let the unknown value be X
So, X = 360° – 150°
X = 210°
Example 3: In a triangle XYZ, the value of angle X is 50° and the value of angle Y is 60°. What will be the value of the outside angle at Z.
Solution: As we know, the sum of the three angles of a triangle equals 180°.
Hence, X + Y + Z = 180°
50° + 60° + Z = 180°
110° + Z = 180°
Z = 180° – 110°
Z = 70°
As the outside angle will be reflex in nature, so using its formula we get
outside angle at Z= 360° – interior angle at Z
outside angle = 360° – 70°
outside angle = 290°
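The formulas in the solved examples are easy to check mechanically; here is a small Python sketch (the function names are my own) covering the reflex formula and the clock example:

```python
def reflex(angle):
    """Reflex counterpart of an angle strictly between 0 and 180 degrees."""
    if not 0 < angle < 180:
        raise ValueError("expected an angle between 0 and 180 degrees")
    return 360 - angle

def clock_angle_clockwise(hour):
    """Angle swept clockwise from 12 to the hour hand at `hour` o'clock."""
    return (hour % 12) * 30

print(reflex(150))               # 210  (Example 2)
print(clock_angle_clockwise(7))  # 210  (Example 1)
print(reflex(70))                # 290  (Example 3's outside angle)
```

Note that every return value of `reflex` lands strictly between 180° and 360°, which is exactly the defining condition discussed above.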
|
{"url":"https://www.adda247.com/school/reflex-angle/","timestamp":"2024-11-09T20:18:08Z","content_type":"text/html","content_length":"656133","record_id":"<urn:uuid:ac36c179-0520-492b-ad32-915616d6cd3c>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00480.warc.gz"}
|
The angles of depression of two ships A and B. - WorkSheets Buddy
The angles of depression of two ships A and B.
(i) The angles of depression of two ships A and B as observed from the top of a lighthouse 60 m high are 60° and 45° respectively. If the two ships are on the opposite sides of the lighthouse, find the distance between the two ships. Give your answer correct to the nearest whole number. (2017)
(ii) An airplane at an altitude of 250 m observes the angle of depression of two boats on the opposite banks of a river to be 45° and 60° respectively. Find the width of the river. Write the answer
correct to the nearest whole number. (2014)
(i) Let CD be the height of the lighthouse, CD = 60 m
Let AD = x m, BD = y m
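The worked solution is cut off above; a numeric check of the setup (using the standard tangent relations for angles of depression, so treat this as a sketch rather than the book's working) gives the expected answers:

```python
import math

# (i) Lighthouse of height 60 m; ships on opposite sides at depression
# angles 60° and 45°, so horizontal distances are h/tan(60°) and h/tan(45°).
h = 60
distance = h / math.tan(math.radians(60)) + h / math.tan(math.radians(45))
print(round(distance))  # 95 m

# (ii) Airplane at altitude 250 m; boats on opposite banks at 45° and 60°.
alt = 250
width = alt / math.tan(math.radians(45)) + alt / math.tan(math.radians(60))
print(round(width))  # 394 m
```

Both results are rounded to the nearest whole number, as the questions require.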
|
{"url":"https://www.worksheetsbuddy.com/the-angles-of-depression-of-two-ships-a-and-b/","timestamp":"2024-11-02T21:59:34Z","content_type":"text/html","content_length":"141971","record_id":"<urn:uuid:171ef8e2-e3f0-466c-8944-672aac6ed073>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00592.warc.gz"}
|
Combination regular formula with separate events – Q&A Hub – 365 Data Science
Combination regular formula with separate events
Why can’t we use the regular combination formula with separate events?
1 answers ( 0 marked as helpful)
When we talk about "separate events," we're delving into the realm of probability and combinatorics where different events or processes are independent.
For instance, if you want to find the number of ways to choose 2 books from a set of 5 and 3 pens from a set of 7, you can't simply use the combination formula directly. Each of these is a separate event:
Choosing 2 books from 5: C(5,2)
Choosing 3 pens from 7: C(7,3)
To find the total number of ways to do both events simultaneously, you'd multiply the number of ways for each event:
Total ways=C(5,2)×C(7,3)
This is because for each way to choose 2 books, there are C(7,3) ways to choose 3 pens, so the events combine multiplicatively.
So, to answer your question: The regular combination formula applies to situations where you're selecting items from a single set. When you have separate events or selections from different sets, you
typically need to compute combinations for each event separately and then combine them using the rules of counting (like multiplication for independent events).
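Python's standard library makes the multiplication rule easy to verify; a quick sketch of the book-and-pen example:

```python
from math import comb

# Separate selections: 2 books from 5, and 3 pens from 7.
books = comb(5, 2)  # 10
pens = comb(7, 3)   # 35

# Independent events combine multiplicatively.
total = books * pens
print(total)  # 350
```

Trying to fold both selections into a single `comb` call has no meaningful interpretation here, which is exactly the answer's point.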
|
{"url":"https://365datascience.com/question/combination-regular-formula-with-the-seprate-events/","timestamp":"2024-11-11T11:33:41Z","content_type":"text/html","content_length":"110485","record_id":"<urn:uuid:d0ac8ba5-1086-4108-8a3d-546003c41cc0>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00371.warc.gz"}
|
KSEEB Solutions for Class 6 Maths Chapter 11 Algebra Ex 11.4
Students can Download Chapter 11 Algebra Ex 11.4 Questions and Answers, Notes Pdf, KSEEB Solutions for Class 6 Maths helps you to revise the complete Karnataka State Board Syllabus and score more
marks in your examinations.
Karnataka State Syllabus Class 6 Maths Chapter 11 Algebra Ex 11.4
Question 1.
Answer the following:
a) Take Sarita’s present age to be y years
i) What will be her age 5 years from now?
Saritha’s present age + 5
= y + 5
ii) What was her age 3 years back
3 years ago, Saritha’s age = Saritha’s present age – 3
y – 3
iii) Sarita’s grandfather is 6 times her age. what is the age of her grandfather?
Grand father’s age = 6 × Sarita’s present age = 6y
iv) Grandmother is 2 years younger than grandfather. What is grandmother’s age?
Grandmother’s age = Grandfather’s present age – 2 = 6y – 2
v) Saritha’s father’s age is 5 years more than 3 times Saritha’s age. What is her
father’s age?
Father ’s age = 5 + 3 x saritha’s persent age = 5 + 3y
b) The length of a rectangular hall is 4 meters less than 3 times the breadth of the hall. What is the length, if the breadth is b meters?
Length = 3 × Breadth – 4
l = (3b – 4) metres
c) A rectangular box has height h cm. its length is 5 times the height and breadth is 10 cm less than the length. Express he length and the breadth of the box in terms of the height.
Length = 5 × Height
l = 5h cm
Breadth = 5 × Height – 10
b = (5h – 10)
d) Meena, Beena and Leena are climbing the steps to the hill top. Meena is at step s, Beena is 8 steps ahead and Leena 7 steps behind. The total number of steps to the hill top is 10 less than 4 times what Meena has reached. Express the total number of steps using s.
Step at which Beena is = (step at which Meena is) + 8
= s + 8
Step at which Leena is = (step at which Meena is) – 7
= s – 7
Total steps = 4 × (step at which Meena is) – 10
= 4s – 10
e) A bus travels at v km per hour. It is going from Daspur to Beespur, After the bus travelled 5 hours, Beespur is still 20 km away. What is the distance from Daspur to Beespur? Express it using v.
Speed = vkm/hr
Distance travelled in 5hrs = 5 × v = 5v km
Total distance between Daspur and Beespur = (5v + 20) km
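Part (e) above is just a linear expression in v; as a small sketch (the function name is mine), it can be evaluated for any speed:

```python
def total_distance(v):
    """Distance from Daspur to Beespur: 5 hours at v km/h, plus 20 km left."""
    return 5 * v + 20

# For example, at v = 60 km/h the total distance is:
print(total_distance(60))  # 320 km
```

The symbolic answer (5v + 20) km stays the same; only the numeric value changes with v.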
Question 2.
Change the following statements using expressions into statements in ordinary language.
(For example, Give salim scores r runs in a cricket match, Nalin scores (r +15) runs. In ordinary language – Nalin scores 15 runs more than salim)
a) A notebook costs Rs p. A book costs Rs 3p.
A book costs three times the cost of a note book
b) Tony puts q marbles on the table. He has 8q marbles in his box.
Tony’s box contains 8 times the numbers of marbles on the table
c) Our class has n students. The school has 20n students.
Total number of students in the school is 20 times that of our class.
d) Jaggu is Z years old. His uncle is 4z years old and his aunt is (4z – 3) years old.
Jaggu’s uncle is 4 times older than jaggu and jaggu’s aunt is 3 years younger than his uncle.
e) In an arrangement of dots there are r rows. Each row contains 5 dots.
The total number of dots is 5 times the number of rows.
Question 3.
a) Given Munnu’s age to be x years, can you guess what (x – 2) may show?
(Hint: Think of Munnu’s younger brother.)
Can you guess what (x + 4) may shows? What (3x + 7) may show?
(x – 2) represents that the person whose age is (x – 2) years is 2 years younger than Munnu.
(x + 4) represents that the person whose age is (x + 4) years is 4 years elder to Munnu.
(3x + 7) represents that the person whose age is (3x + 7) years is elder to Munnu, and his age is 7 years more than three times Munnu’s age.
b) Given sara’s age today to be y years. Think of her age in the future or in the past. What will the following expression indicate? y + 7, y – 3, y + 4\(\frac{1}{2}\), y – 2\(\frac{1}{2}\)
In future: after n years from now, Sara’s age will be (y + n) years. In past: n years ago, Sara’s age was (y – n) years.
(y + 7) represents that the person whose age is (y + 7) years is 7 years elder to Sara. (y – 3) represents that the person whose age is (y – 3) years is 3 years younger than Sara.
(y + 4\(\frac{1}{2}\)) represents that the person whose age is (y + 4\(\frac{1}{2}\)) years is 4\(\frac{1}{2}\) years elder to Sara.
(y – 2\(\frac{1}{2}\)) represents that the person whose age is (y – 2\(\frac{1}{2}\)) years is 2\(\frac{1}{2}\) years younger than Sara.
c) Given n students in the class like football, What may 2n show? What may \(\frac{n}{2}\) show?
(Hint: .Think of games other than football)
2n may represent the number of students who like either football or some other game such as cricket, whereas \(\frac{n}{2}\) represents the number of students who like cricket, out of the total number of students who like football.
|
{"url":"https://www.kseebsolutions.com/kseeb-solutions-for-class-6-maths-chapter-11-ex-11-4/","timestamp":"2024-11-13T03:27:34Z","content_type":"text/html","content_length":"69193","record_id":"<urn:uuid:100e4ef9-3e41-4ef7-8bd5-4895f97eef18>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00726.warc.gz"}
|
Betting Market Power Rankings
by Michael Beuoy
The purpose of this post is to use the point spreads from recent weeks of the season to derive an implied power ranking. Basically, the point is to try to figure out what the betting market thinks
are the best and worst teams in the NFL. From a broader perspective, I hope to provide insight into how the betting market “thinks” in general. One result that emerged from this analysis was a
measure of how much the betting market reacts to the result of a particular game.
The challenge in deriving a power ranking from the point spreads is that the point spread only tells you the relative strength of the two teams. For example, Green Bay is favored by 7.0 points on the
road against the NY Giants this week. We know that home teams are favored on average by 2.5 points, so after removing the home team bias, the betting market appears to think that Green Bay is 9.5
points better than the Giants. New England is favored by 21(!) points at home against Indianapolis. So the betting market thinks that New England is 18.5 points better than Indianapolis.
The question is, does the betting market think New England or Green Bay is the better team? It’s impossible to answer just using the spreads from this week (you have 32 unknowns and only 16
equations). My approach below is to look back over the past five weeks of point spreads and results to come up with a best fit ranking, where the ranking is calibrated such that it best predicts the
point spread according to the following formula:
Point Spread = Home Team Rank - Visiting Team Rank + 2.5
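Read as code, the formula is a one-liner. In the usage example the GPF values (GB 9.5, NYG 1.0, IND −10.0) are taken from the rankings table in this post, purely for illustration:

```python
# The post's pricing formula as a function. The sign convention follows the
# post: a negative result means the home team is the underdog.
def predicted_spread(home_gpf, away_gpf, hfa=2.5):
    """Predicted line from the home team's perspective."""
    return home_gpf - away_gpf + hfa

print(predicted_spread(1.0, 9.5))    # NYG hosting GB: home team down by 6
print(predicted_spread(9.5, -10.0))  # GB hosting IND: home team favored by 22
```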
I figured I would cut to the chase and provide the rankings themselves, and save the methodology explanation for the end. I followed the format of the Advanced NFL Stats (ANS) Team Efficiency
rankings, and also provided the actual ANS rankings as a point of comparison.
Here is a glossary of terms:
LSTWK - The betting market rank as of the prior week (using the same methodology). It’s interesting to see who the big movers are.
GPF - Stands for Generic Points Favored. It’s what you would expect a team to be favored by against a league average opponent at a neutral site.
GWP - Stands for Generic Win Probability. I converted the GPF into a generic win probability using the following formula: GWP = 1/(1+exp(-GPF/7)). This gives a more direct comparison to the ANS GWP.
ANS RNK - The Advanced NFL Stats Team Efficiency rankings for the same week (week 12 in this case)
ANS GWP - The Advanced NFL Stats Generic Win Probability for the same week.
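The GPF-to-GWP conversion from the glossary can be checked directly against the table; a minimal sketch:

```python
import math

def gwp(gpf):
    """Generic Win Probability from Generic Points Favored, as defined above."""
    return 1 / (1 + math.exp(-gpf / 7))

print(round(gwp(9.5), 2))    # Green Bay's 9.5 GPF maps to roughly 0.80
print(round(gwp(0.0), 2))    # a league-average team wins half the time
print(round(gwp(-10.0), 2))  # Indianapolis, roughly 0.19
```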
Here are the rankings:
│RANK│TEAM│LSTWK│GPF │GWP │ANS RNK │ANS GWP │
│1 │GB │2 │9.5 │0.80│2 │0.74 │
│2 │NE │1 │9.0 │0.78│4 │0.69 │
│3 │NO │3 │7.0 │0.73│5 │0.69 │
│4 │PIT │4 │5.5 │0.69│3 │0.72 │
│5 │BAL │6 │5.0 │0.66│10 │0.59 │
│6 │ATL │12 │4.5 │0.66│14 │0.53 │
│7 │DAL │7 │4.0 │0.64│6 │0.65 │
│8 │SF │8 │3.5 │0.63│13 │0.53 │
│9 │NYJ │10 │3.5 │0.62│15 │0.52 │
│10 │TEX │5 │3.0 │0.61│1 │0.82 │
│11 │PHI │9 │1.5 │0.55│7 │0.62 │
│12 │NYG │11 │1.0 │0.54│8 │0.62 │
│13 │CHI │13 │0.5 │0.52│11 │0.57 │
│14 │DET │15 │0.5 │0.51│9 │0.61 │
│15 │MIA │19 │0.5 │0.51│18 │0.48 │
│16 │CIN │14 │0.0 │0.51│16 │0.50 │
│17 │RAI │17 │0.0 │0.50│12 │0.55 │
│18 │SD │16 │0.0 │0.50│19 │0.45 │
│19 │TEN │18 │-0.5 │0.47│21 │0.44 │
│20 │DEN │25 │-2.5 │0.42│23 │0.38 │
│21 │BUF │22 │-2.5 │0.42│17 │0.48 │
│22 │TB │20 │-2.5 │0.40│26 │0.36 │
│23 │WAS │28 │-3.0 │0.39│20 │0.44 │
│24 │SEA │23 │-3.5 │0.37│28 │0.34 │
│25 │MIN │26 │-3.5 │0.37│29 │0.34 │
│26 │CAR │27 │-4.0 │0.36│25 │0.38 │
│27 │ARZ │29 │-4.5 │0.34│30 │0.33 │
│28 │JAC │21 │-4.5 │0.34│22 │0.40 │
│29 │CLE │24 │-5.0 │0.33│24 │0.38 │
│30 │KC │31 │-5.5 │0.31│31 │0.27 │
│31 │STL │30 │-6.0 │0.30│27 │0.35 │
│32 │IND │32 │-10.0│0.19│32 │0.23 │
Some observations:
The top team and bottom team shouldn’t come as any surprise. In addition, there is the proverbial “50 feet of crap” (or 4 points) between the Colts and the next-worst team.
Despite San Francisco’s place near the top of the “conventional” power rankings (ESPN, CBS, etc.), the market has them ranked much lower at number 8; not as low as the ANS rank of 13, but in the same direction.
I was surprised to see New England ranked so closely to Green Bay (they were actually a half point ahead of them last week).
The first step was to see how many prior weeks of point spreads I had to feed into the model in order to get an optimized estimate of the point spreads for future games. The drawback of using prior
weeks is that you’re using stale information. The point spread from a few weeks ago will not accurately reflect the market’s latest assessment of their strength. I attempted to address this somewhat
by using a recency weighted average. If I was using 7 weeks of spreads, the most recent week would get a weight of 7, the week before a weight of 6, and so on. This allowed me to arrive at an answer
while still giving preferential treatment to the more recent market estimates. Through trial and error optimization, I found that using the most recent five weeks of point spreads produced the lowest
mean squared estimate error of the point spread for the coming week. The calculation itself is equivalent to a weighted linear regression with 32 dummy variables, 1 for each team.
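The fitting step above can be sketched in a few lines. This is not the post's actual R code; the teams, spreads, and weights below are synthetic, and the solver is a simple damped iteration on the weighted least-squares equations rather than an explicit regression:

```python
# Fit team strengths (GPF) so that spread ~= home GPF - away GPF + HFA,
# with recency weights. All data here is synthetic, for illustration only.
HFA = 2.5  # assumed home-field advantage, as in the post

# (home, away, spread from the home team's perspective, recency weight)
games = [
    ("NYG", "GB", -7.0, 5.0),
    ("NE", "IND", 21.0, 5.0),
    ("GB", "NE", 1.0, 4.0),
    ("NYG", "IND", 9.0, 4.0),
]

teams = {t for g in games for t in g[:2]}
gpf = {t: 0.0 for t in teams}

# Damped Jacobi iteration on the weighted normal equations: each team's GPF
# moves toward the weighted average strength implied by its games.
for _ in range(200):
    new = {}
    for t in teams:
        num = den = 0.0
        for home, away, spread, w in games:
            if t == home:
                num += w * (gpf[away] + spread - HFA)
                den += w
            elif t == away:
                num += w * (gpf[home] - (spread - HFA))
                den += w
        new[t] = 0.5 * gpf[t] + 0.5 * num / den
    gpf = new

mean = sum(gpf.values()) / len(gpf)
gpf = {t: round(v - mean, 1) for t, v in gpf.items()}  # center on league average
print(gpf)
```

The centering step fixes the additive constant: only differences between GPFs are identified by the spreads, so the league average is pinned to zero.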
How the Betting Market Reacts to Game Results
Although the approach above generated a set of rankings, it ignores some potentially useful information that could be used to better match the coming week’s point spreads. For example, the week 13
rankings used the point spreads from weeks 9 through 13. In week 9, New England was favored by 9.5 points over the New York Giants. However, the Giants ended up winning by 4 in that game. So, the
outcome of the game deviated from the market’s expectation by 13.5 points. One would expect that the market would factor that result into future estimates of both New England’s and New York’s
strength. I assumed that the betting market would recalibrate itself according to the following formula:
revised “best estimate” spread = original spread + (credibility coefficient) x (deviation from expected)
I then determined what that credibility coefficient (CC) was by trial and error optimization. I found that a coefficient of 15% generated the most accurate prediction of the coming week’s spreads. In
other words, the betting market appears to treat the outcome of each game with 15% credibility when revising its estimates of each team’s strength. So, in the New England/ New York example above, if
those two teams had been scheduled to play each other at New England again, the new spread would have been revised down from 9.5 points to roughly 7.5 points (= 9.5 + 0.15 × (−4 − 9.5) = 7.475).
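The revision rule is a tiny function; the 9.5 and −4 figures in the usage example are the New England–Giants case from the text:

```python
def revised_spread(original, actual_margin, cc=0.15):
    """Revise a market estimate toward a game result with credibility cc.

    Both arguments are margins from the same team's perspective:
    original = the pre-game spread, actual_margin = the realized margin.
    """
    return original + cc * (actual_margin - original)

# New England was expected to win by 9.5 but lost by 4 (margin -4):
# 9.5 + 0.15 * (-13.5) = 7.475, i.e. the spread drops to roughly 7.5.
print(revised_spread(9.5, -4))
```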
Prediction of This Week’s Point Spreads
See below for a comparison of how well the ranking methodology predicted this week’s point spreads. Note that this uses rankings that factor in the results from last week’s games, but does not factor
in the spreads of this week’s games into the rolling 5 week average (this keeps the estimate independent):
│GAME │PRED LINE │ACT LINE│DIFF│
│ATL @ TEX │6.0 │-2.0 │-8.0│
│BAL @ CLE │-6.0 │-6.5 │-0.5│
│CAR @ TB │5.0 │3.5 │-1.5│
│CIN @ PIT │8.5 │6.5 │-2.0│
│DAL @ ARZ │-6.5 │-4.5 │2.0 │
│DEN @ MIN │2.0 │0.0 │-2.0│
│DET @ NO │10.0 │8.5 │-1.5│
│GB @ NYG │-4.5 │-7.0 │-2.5│
│IND @ NE │21.5 │21.0 │-0.5│
│KC @ CHI │8.5 │8.0 │-0.5│
│NYJ @ WAS │-5.0 │-3.0 │2.0 │
│PHI @ SEA │-4.0 │-3.0 │1.0 │
│RAI @ MIA │2.5 │3.0 │0.5 │
│SD @ JAC │1.0 │-2.5 │-3.5│
│STL @ SF │11.0 │13.5 │2.5 │
│TEN @ BUF │0.5 │1.5 │1.0 │
The biggest miss in the line prediction is on the ATL/TEX game where it appears that the market values Matt Schaub’s talents over his replacement by a significant margin. I think this may also be
inflating ATL’s overall ranking somewhat. Its favorable point spread over Houston is being compared (on a recency weighted basis) against point spreads when Matt Schaub was playing.
If there’s interest, I can produce these weekly. I’ve got this boiled down to a quick piece of R code (which anyone is welcome to if they’re curious about the details of the methodology).
31 comments:
Very fascinating. I definitely wouldn't mind checking these out each week.
Definitely interested in seeing this weekly. The results look fairly similar to the "betting expert" rankings that ESPN insider posts from "Vegas gambling experts" every week
I might be interested in the R code; I'd definitely be interested in where to find easily-extracted historical point spreads.
Very happy to see this becoming part of the mainstream analytics conversation. Quick notes:
*Hope you'll consider also tracking game win probabilities using the no-juice moneylines (splitting the difference between favorite price and dog payback). Talked about this in a comment to BB
last week. Would allow for more direct comparisons to BB's work beyond just looking at the rankings.
*There's a problem here with the methodology I think when backup quarterbacks come into play. The market doesn't really "glide" over five weeks to the new place for the backup (though it can also
glide based on developments with the backup). So, maybe, using the scale from the chart above:
Houston with Schaub: 5.0
Houston with Leinart: 3.0
Houston with Yates: -1.0
Chicago with Cutler:+2.0
Chicago with Hanie:-1.0
Oddsmakers have different power ratings for each starting quarterback...or at least have a mental adjustment ready (partial disclosure: I've been ghostwriting on and off for some oddsmakers over
the last two decades). We're in the midst of a tricky sequence with a few teams in recent weeks. You might consider computing separate ratings for all 32 teams with the 2 QB's most likely to get playing time.
*Atlanta looks to be a bit warped by what's happened with this week's Houston game. Based on their other recent games, and how the opponents rank above for you, Atlanta would be:
3.5 against Minnesota (-10 at home)
0.0 vs. Tennessee (-3.5 at home)
5.0 vs. New Orleans (-1 at home)
-0.5 vs. Indy (-6.5 on the road)
I'm using 3 rather than 2.5 because oddsmakers tell me they generally use a blanket 3...though I do think that some very poor teams may only get 2 to 2.5 in some cases. That would be a composite
2.0 above for Atlanta as a four-game average...which is consistent with where you say they ranked last week. I don't personally believe the market has Atlanta as high as you're representing.
*Just as an FYI thing as you go forward. There are different components within the market that tend to give away their thoughts depending on what stage you are in the week.
Opening Lines: Oddsmakers assessments
First Moves: Professional wagerers assessments (called "sharps" in Vegas lingo, they will bet for value on openers if they believe a number is bad...the public doesn't bet early, tending to wait
until the weekend to really become a factor)
Weekend Moves: Generally those inspired by public money...though it can get messy because the biggest sharps will try to manipulate the line for value...and then will jump in with both fists if
they get their number.
The point is that it can be tricky truly defining the "market" because it's in transition through the week. If you only use opening lines...it's mostly the oddsmakers. If you only use Thursday
lines, the public hasn't cast much of a vote yet. If you only use closers, then they can get warped the wrong way if there's a final hour self-defense mechanism on an extremely one-sided game (in
terms of money).
A common recommendation is to use what's called "widely available" Sunday morning lines a couple of hours before kickoff. Nothing's perfect, but that's a good spot to settle in. When you hear
analytical types in the field talk about the perfected market theory, they're often referring to where a line kind of finally locks in after the opener has been shaped by the smart money. They
would suggest that THAT line can't be beat because the value has been bet out of it. Not everyone believes in the perfected market theory though. The sharps who do attack the openers...and then
attack late if public sentiment has moved a number away from what they believe the right spot is on game day.
Not sure if this helps the project...but wanted to throw it out there for you.
I would add my vote to those who would like to see this study or variations of it posted weekly...
Sorry wasn't clear there in the Atlanta comment. I was using 3.0 for home field rather than 2.5 for home field based on what oddsmakers say they typically use...
May I suggest, further to Jeff's idea of also comparing moneyline implied probabilities, that you do not take said probabilities from a bookie, but take them from Betfair, as then the juice is
not an issue and you can take the (rather more accurate) midpoint between back and lay prices.
For instance, Betfair has Atlanta with a 57% chance of beating the Texans. Betfair is (idealistically) driven by the efficient-market hypothesis, so it avoids the bias that bookies can play with.
And by bias, I mean the fact that the bookies will put the juice where they think makes most sense, skewing the probabilities if you just split the juice evenly.
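Splitting the juice evenly, as discussed in the comments above, can be sketched as follows. The −120/+100 prices are hypothetical, and the even normalization is exactly the naive split this commenter cautions about:

```python
# Convert two-sided American moneylines into "no-vig" win probabilities by
# normalizing the implied probabilities. Prices here are hypothetical.
def implied_prob(american_odds):
    if american_odds < 0:
        return -american_odds / (-american_odds + 100)
    return 100 / (american_odds + 100)

def no_vig_probs(fav_odds, dog_odds):
    p_fav, p_dog = implied_prob(fav_odds), implied_prob(dog_odds)
    total = p_fav + p_dog  # > 1: the excess is the bookmaker's juice
    return p_fav / total, p_dog / total

p_fav, p_dog = no_vig_probs(-120, +100)
print(round(p_fav, 3), round(p_dog, 3))
```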
I've been publishing a similar ranking system based on the point spreads for the last few years (see link on my name). My methodology is just to use iterative SRS (as described on the p-f-r blog)
with point spreads rather than game results. For the recency issue I found that a simple 3-2-1 weighting of the last three games gave the best fit. Any farther back made the fit worse in my testing.
I believe another difference is that I don't factor in last week's game results, but I do include this week's point spreads. At the time I implemented this, I didn't really care about predicting
the point spreads or measuring the market reaction to results as you have. Rather, I only wanted to compare teams that weren't playing each other this week. One problem this presents is dealing
with bye weeks. I didn't want to simply carry over a team's rating through its bye week because, in theory, incorporating updated spreads for non-bye teams should give you additional information
about the strengths of the bye teams via opponent adjustments. In practice, bye week teams' ratings tend to be a little too volatile. Another weakness is that the largest individual game spreads
seem to skew the ratings more than they should and those teams tend to have the greatest error relative to the actual spreads. I never got around to playing with various solutions to these issues.
Anyway, I found this to be a very interesting exercise and am glad to see someone else has too!
I use the same point spread to moneyline conversion as you, ml = exp(ps/7) (or ps = 7·ln(ml)), in my power rankings (Pointshare). However, this means that if team A is 5 points better than B, and B is 5
points better than C, then saying A is 10 points better than C gives a different answer than if you used a GWP or log5 approach and then converted that win probability back to a point spread.
Since the relationship is not linear, a moneyline approach would make A less than a 10-point favorite over C. Could you redo your analysis by first converting to odds, then working out the rankings,
then converting back to point spreads?
To avoid including stale information, it's best to look at future lines, not past ones. Some websites and Vegas sportsbooks offer odds on next week's games and "games of the year"; these odds
will reflect the Cutler and Schaub injuries and also whatever we learned by watching last week's games.
This is a good concept, but you're measuring the wrong target.
j holz, that was my thought, too, but there isn't enough "interconnected" data between teams not playing each other to use only future lines. One idea I had that might help is to incorporate the
odds to win the Super Bowl in addition to the current week's spreads, although you'd have to be careful to recognize that divisional/conference alignments can affect those. The tradeoff is you
can either treat the future games spreads as representing immutable fact regarding the relative strengths of those teams and fit the rest of the teams around that the best you can. Or you can
spread the errors around more evenly, which is the method I chose.
I found that my 3-2-1 weighting of 50% future lines and 50% past lines yielded a reasonably accurate approximation.
I would be +1 for making this a weekly feature :)!
This is all very excellent, and I second the idea that it would be nice to see this ranking posted somewhere every week, making the weekly changes in Vegas's opinion visible.
One minor suggestion: remember when converting the Vegas point spread to win probability, the over/under matters. A 6-point spread in a game with an over/under of 37 points implies a
higher win probability than one with 47 points expected to be scored. So when converting the spread to win probability I don't use the exponential function but instead figure the projected score
(spread applied to the over/under) and then take the Pythagorean win expectation.
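The commenter's idea can be sketched as follows. The exponent 2.37 is an assumption here (a commonly cited NFL Pythagorean value), not something taken from the post or the comment:

```python
# Derive projected scores from the spread and the over/under, then apply a
# Pythagorean win expectation. Exponent 2.37 is an assumed NFL-ish value.
def pythagorean_wp(spread, total, exponent=2.37):
    favorite = (total + spread) / 2  # projected points for the favorite
    underdog = (total - spread) / 2  # projected points for the underdog
    return favorite**exponent / (favorite**exponent + underdog**exponent)

# Same 6-point spread, different totals: the lower-scoring game implies a
# stronger favorite, which is the commenter's point.
print(round(pythagorean_wp(6, 37), 3))
print(round(pythagorean_wp(6, 47), 3))
```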
The difference can realistically be equivalent to a couple points of spread (more at the extremes) in a given game. Over a few weeks it washes out so I wouldn't worry about it. But if one is
using a small sample of only three or four weeks plus over-weighting the last, it might make a visible difference for some teams.
That said, I probably still wouldn't worry about it. A difference significant enough to be visible isn't necessarily significant enough to be significant. False precision is something always to
beware of.
IMHO, the value of objective ranking/rating systems like this isn't their great precision (which is impossible in a season of only 16 games, even less so only part way through the 16) but how
they can make plainly visible to the naked eye something one might have missed otherwise. If "Vegas ratings" (or ANFL Stats ratings) rate a team by a bunch of points differently than I would, that's
interesting, if by 1/2 a point or 1 1/2 points that's not so interesting.
So while I think it probably doesn't make any practical difference, I mention it just for the sake of logical consistency and because one might want to check the scale of the difference it makes,
to be sure. After all, using a home field advantage of 2 points or 3 points doesn't make much difference and gets washed out quickly too, but people put a lot of effort into calculating that.
(But then, they probably gamble a lot more money than I do.)
Thanks to everybody for the feedback.
First off, my source for the spread information is killersports.com. It's also a useful site for game by game statistical information as well.
I'm glad there's interest in this. I will try to get these submitted to the site each week soon after the weekly ANS Efficiency rankings are published.
Here is a link to the R code. Any thoughts or suggestions on the methodology are welcome. Link: https://docs.google.com/document/d/1CFfQcnithQA2MXhCB2cUCCDwSdnAkDnEGPMTK_hJ7zk/edit
Jim A - I did some googling while I was developing this approach to see if something like this existed. I wasn't able to find anything, but I'm not too surprised that I wasn't the first to try it.
I ran into the exact same difficulty when it came to bye weeks. Teams on bye seemed to have their ranking magnified (good teams moved up, bad teams moved down). The approach I eventually settled
on was to normalize each team's weights to 1.0. If a team was missing a week due to a bye, the weights for the other weeks would get magnified to compensate. I wasn't thrilled with the solution,
but it seemed to work well enough.
I took a look at your rankings. Your rankings better match the NO-NYG spread than mine, but mine got closer on the IND-NE spread. Care to put our approaches head to head for upcoming weeks? :)
I will try your weighting approach to see if I get a better fit. Like I said, mine was developed by trial and error and it's very possible I missed a better approach.
Jim Glass - I completely agree. I knew my approach was not perfect, and modelling error of a point or two was unavoidable. Fine tuning each decimal point was not my goal.
However, I am intrigued by the idea of using future weeks spreads to the extent they're available. I may take a deeper look at that.
Outstanding concepts. I would love to see this as a regular feature
Great stuff as always. I would love to see weekly breakdowns as well as any R code.
I'd like to see them posted weekly as well.
I have some suggestions/questions (full disclosure: I am NOT a statistician & not even 100% sure I spelled it correctly).
Wouldn't it be possible to tweak the equation to its highest probability of accuracy by using only historical data (i.e., NFL 2010) then applying it to NFL 2011 to see if it translates? Instead
of tweaking it week to week or including the previous 4 or 5 games, is it possible to use the now-static historical data as something like laboratory conditions to create a better model?
Is there a scientific reason why the reality of static historical data isn't the primary source? Scientifically, is comparing NFL 2009 & 2010 or 2010 & 2011 like comparing apples & oranges
instead of apples to apples?
Just a lay person chiming in...
This should definitely be a weekly feature. Good stuff!
Mike D - View the point spreads as stock prices. If you wanted to know the state of Google right now, you would look at their stock price today, you wouldn't average their stock price over the
past few years.
Basically, all I'm doing here is trying to figure out the "stock price" of each team. But instead of getting direct quotes off of the NYSE, all I have available is the difference in stock prices
between different companies. And those companies change each day.
By necessity, I'm forced to look back to "old" stock prices just so I have enough connections between the various teams in order to get a proper comparison.
Market Price snapshot an hour before kickoff (with win probability estimated based on no juice moneyline in parenthesis..as taken from prominent offshore locale)
Buffalo -1 over Tennessee (Buffalo 52%)
Chicago -8 over Kansas City (Chicago 78%)
Miami -3 (-120) over Oakland (Miami 62%)
Pittsburgh -7 over Cincy (Pittsburgh 75%)
Baltimore -6.5 over Cleveland (Balt 74%)
NY Jets -3 over Washington (NYJ 60%)
Atlanta -1 over Houston (Atlanta 52%)
Carolina -2 over Tampa Bay (Carolina 56%)
***Note that Freeman is out for TB
New Orleans -8.5 over Detroit (NO 78%)
Minnesota -1 over Denver (Minnesota 52%)
San Francisco -14 over St. Louis (SF 89%)
Dallas -4 over Arizona (Dallas 65%)
***Kolb is back for Arizona
Green Bay -6.5 over NYG (Green Bay 71%)
New England -20 over Indy (NE 95%)
The TB/Carolina line was TB -3 at home earlier this week, suggesting equality. A 5-point move would mean TB with Josh Johnson is 5 points worse than Carolina, and wherever anyone had them with Freeman.
The Dallas/AZ line was Dallas -6.5 when it was thought Skelton was still playing. So, Arizona is 2.5 points better with Kolb than Skelton in the market's view...
Looks like Yates got a little respect in the market today, as Houston is now only +1 instead of +2. They should be 4 points worse than Atlanta in a market snapshot at the moment rather than just
one or two I'd think.
Don't have time at the moment to compare differences here to MB's very interesting work up above...or to the win probabilities for the week from BB (life can be busy in the hour before
kickoffs!). Wanted to throw down a live market look in the last hour since so many have posted interest in this kind of material. Might influence future discussions at the very least. Enjoy the games!
I've been doing spread ratings since the mid-80s. The error distribution you'll see over the long haul is precisely the shape you're seeing this week, recency adjustments or not.
I just don't understand the obsession with using ranks. Why convert ranks to points when you can use points to start with?
SportsGuy - That's exactly what I did. The model output is the "Generic Points Favored". The points are converted to a ranking, not the other way around.
Jim Glass, in all my work on NFL stats I have found no evidence that the strength of a spread is total-dependent; this is also true of the NBA and NHL. From my observations this is due to
covariance between team scoring.
"Point Spread = Home Team Rank - Visiting Team Rank + 2.5"
That is kinda where I get the idea you're figuring ranks first then converting to points.
Have you run your algorithm on past data?
SportsGuy - Sorry about that. I was being loose with my terminology. That should read "Point Spread = Home Team GPF - Visiting Team GPF + 2.5".
If you're asking if I backtested the approach, the answer is yes (it's how I arrived at optimized credibility coefficient and number of weeks).
Running the algorithm on past data generates a Mean Absolute Error (MAE) of about 1.7 points in predicting the spread for the upcoming week. Unfortunately, I had no benchmark to compare that to.
I guess what I meant was how far back you tested. I'm interested in your error distribution. Do you have that data handy?
I Would be interested in seeing this weekly
I don't have anything handy on error distribution, but here's the MAE for the past 4 seasons:
2010 - 1.8
2009 - 1.7
2008 - 1.5
2007 - 1.6
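The MAE figures being discussed are just averages of absolute prediction errors. Using the predicted and actual lines from the prediction table earlier in the post:

```python
# Mean absolute error between predicted and actual lines, with the 16 games
# taken from the week's prediction table in the post.
predicted = [6.0, -6.0, 5.0, 8.5, -6.5, 2.0, 10.0, -4.5,
             21.5, 8.5, -5.0, -4.0, 2.5, 1.0, 11.0, 0.5]
actual    = [-2.0, -6.5, 3.5, 6.5, -4.5, 0.0, 8.5, -7.0,
             21.0, 8.0, -3.0, -3.0, 3.0, -2.5, 13.5, 1.5]

mae = sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)
print(round(mae, 2))  # inflated this week by the 8-point ATL/TEX miss
```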
Mike, my ratings ended up on nutshellsports.com after I found a similar betting market system on that site. Some interesting discussions of methodology resulted in that site's owner asking me to
contribute my own system to his site. This was around November 2009, as I recall. So I don't even claim to be the first to publish such a system. It wouldn't surprise me if there are others out
there, too.
I've always thought I or someone else could come up with more accurate results. In particular, I wondered how much using SRS limited the results as opposed to a more complex computer rating
system. Your previous work on opponent adjustments may be useful in this regard. I kind of lost interest in working on this myself after the initial thrill and have since moved on to other
projects. Feel free to use my ratings as a benchmark or in any way that is helpful. Maybe I'll look at this again if I get a chance; as I recall my system's MAE was in the 0.7-0.8 range, but
again my goal was slightly different than yours and using future games is, in a sense, cheating. It would be interesting to see a more detailed analysis of how the systems compare. I definitely
look forward to seeing what you come up with next.
To Jim A, I don't quite follow your reasoning. You say you do not factor in last week's results (but this week's point spreads).
You state "At the time I implemented this, I didn't really care about predicting the point spreads or measuring the market reaction to results as you have. Rather, I only wanted to compare teams
that weren't playing each other this week." In what way are your trying to compare them, then? by talent difference? I guess I miss your point.
and to j holz, you said to look at future spreads. Well, that to me seems to defeat the purpose. The whole purpose is to predict point spreads based on past point spreads, isn’t it? Yeah, a
future one will essentially tell you what the bookies are thinking, but you want to see how you can predict what the bookies are thinking.
For the purpose intended, I would have done exactly what the author did with perhaps only weighting type differences.
My point is that I was mainly interested in estimating how the bookmakers would set the line in a hypothetical game between any two teams. For example, my ratings estimate that if Green Bay and
New England played on a neutral field right now, the Packers would be favored by 2.5 points. So basically, I'm letting the bookmakers do the work for me and using their expertise in setting
lines. Future spreads are more up-to-date than past spreads and, in theory, should be more accurate in terms of predictive power (if for no other reason than they account for recent injuries).
Trying to predict future spreads based on past spreads is a similar but not identical exercise. That's closer to what bookmakers actually do, and such a model would be particularly useful if you
were applying for a job with Las Vegas Sports Consultants (the company that provides initial lines to most books).
FYI, my rankings for week 14 are up. The MAE for the week is 0.41, which is particularly low because the teams are pretty well-connected--no bye weeks this week or previous two weeks.
• Look at the numbers carefully.
• Rule 1: If a number has more digits than another, it is greater of the two.
• Rule 2: If two numbers have the same number of digits, we compare them by their extreme left-most digits. The number with the greater digit is greater.
• If the extreme left-most digits are the same, we compare them by their next digits to the right and so on.
Example: 95,356 ? 9,558
Here the first number 95,356 has more digits than the second number 9,558
Therefore 95,356 > 9,558
Answer: >
Example: 95,356 ? 95,578
1. Start comparing from the ten thousands place: both numbers contain 9.
2. In the thousands place, both contain 5.
3. In the hundreds place, the first number contains 3 and the second contains 5.
4. Hence the first number is less than the second.
Answer: <
Directions: Compare the numbers and use appropriate sign. Also write at least ten examples of your own.
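The two rules translate directly into a short comparison routine. A sketch in Python (comparing digit characters works here because ASCII digits order the same way as their values):

```python
def compare(a: int, b: int) -> str:
    """Compare two whole numbers using Rule 1 and Rule 2 above."""
    sa, sb = str(a), str(b)
    if len(sa) != len(sb):                 # Rule 1: more digits => greater
        return ">" if len(sa) > len(sb) else "<"
    for da, db in zip(sa, sb):             # Rule 2: compare digits from the left
        if da != db:
            return ">" if da > db else "<"
    return "="

print(compare(95356, 9558))   # first example: >
print(compare(95356, 95578))  # second example: <
```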
Assignment problem
Solving assignment problem using min-cost-flow¶
The assignment problem has two equivalent statements:
• Given a square matrix $A[1..N, 1..N]$, you need to select $N$ elements in it so that exactly one element is selected in each row and column, and the sum of the values of these elements is the smallest possible.
• There are $N$ orders and $N$ machines. The cost of manufacturing on each machine is known for each order. Only one order can be performed on each machine. It is required to assign all orders to
the machines so that the total cost is minimized.
Here we will consider the solution of the problem based on the algorithm for finding the minimum cost flow (min-cost-flow), solving the assignment problem in $\mathcal{O}(N^3)$.
Let's build a bipartite network: there is a source $S$ and a sink $T$; in the first part there are $N$ vertices (corresponding to rows of the matrix, or orders), and in the second part there are also $N$
vertices (corresponding to the columns of the matrix, or machines). Between each vertex $i$ of the first part and each vertex $j$ of the second part, we draw an edge with capacity 1 and cost $A_{ij}$.
From the source $S$ we draw edges to all vertices $i$ of the first part with capacity 1 and cost 0. From each vertex $j$ of the second part we draw an edge to the sink $T$ with capacity 1 and cost 0.
We find the maximum flow of minimum cost in the resulting network. Obviously, the value of the flow will be $N$. Further, for each vertex $i$ of the first part there is exactly one vertex $j$
of the second part such that the flow $F_{ij} = 1$. This gives a one-to-one correspondence between the vertices of the first part and the vertices of the second part, which is the
solution to the problem (since the found flow has minimal cost, the sum of the costs of the selected edges is the lowest possible, which is the optimality criterion).
The complexity of this solution of the assignment problem depends on the algorithm used to find the maximum flow of minimum cost. The complexity will be $\mathcal{O}(N^3)$
using Dijkstra or $\mathcal{O}(N^4)$ using Bellman-Ford. This is due to the fact that the flow is of size $O(N)$ and each iteration of Dijkstra's algorithm can be performed in $O(N^2)$, while it is $O(N^3)$ for Bellman-Ford.
The implementation given here is long, it can probably be significantly reduced. It uses the SPFA algorithm for finding shortest paths.
#include <queue>
#include <vector>
using namespace std;

const int INF = 1000 * 1000 * 1000;

vector<int> assignment(vector<vector<int>> a) {
    int n = a.size();
    int m = n * 2 + 2;
    vector<vector<int>> f(m, vector<int>(m));
    int s = m - 2, t = m - 1;
    int cost = 0;
    while (true) {
        vector<int> dist(m, INF);
        vector<int> p(m);
        vector<bool> inq(m, false);
        queue<int> q;
        dist[s] = 0;
        p[s] = -1;
        q.push(s);
        // SPFA (Bellman-Ford with a queue) on the residual network
        while (!q.empty()) {
            int v = q.front();
            q.pop();
            inq[v] = false;
            if (v == s) {
                for (int i = 0; i < n; ++i) {
                    if (f[s][i] == 0) {
                        dist[i] = 0;
                        p[i] = s;
                        inq[i] = true;
                        q.push(i);
                    }
                }
            } else {
                if (v < n) {
                    // forward edges: row vertex v -> column vertices
                    for (int j = n; j < n + n; ++j) {
                        if (f[v][j] < 1 && dist[j] > dist[v] + a[v][j - n]) {
                            dist[j] = dist[v] + a[v][j - n];
                            p[j] = v;
                            if (!inq[j]) {
                                q.push(j);
                                inq[j] = true;
                            }
                        }
                    }
                } else {
                    // backward (residual) edges: column vertex v -> row vertices
                    for (int j = 0; j < n; ++j) {
                        if (f[v][j] < 0 && dist[j] > dist[v] - a[j][v - n]) {
                            dist[j] = dist[v] - a[j][v - n];
                            p[j] = v;
                            if (!inq[j]) {
                                q.push(j);
                                inq[j] = true;
                            }
                        }
                    }
                }
            }
        }

        // pick the cheapest reachable column vertex with a free edge to t
        int curcost = INF;
        for (int i = n; i < n + n; ++i) {
            if (f[i][t] == 0 && dist[i] < curcost) {
                curcost = dist[i];
                p[t] = i;
            }
        }
        if (curcost == INF)
            break;
        cost += curcost;
        // augment one unit of flow along the found path
        for (int cur = t; cur != -1; cur = p[cur]) {
            int prev = p[cur];
            if (prev != -1)
                f[cur][prev] = -(f[prev][cur] = 1);
        }
    }

    vector<int> answer(n);
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < n; ++j) {
            if (f[i][j + n] == 1)
                answer[i] = j;
        }
    }
    return answer;
}
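For a runnable cross-check, here is my own compact Python rendition of the same construction (SPFA-based successive shortest paths on the bipartite network described above); it is a sketch, not the original code, and on small matrices it can be verified against brute force.

```python
from collections import deque

def assignment(a):
    """Assignment problem via min-cost-flow (SPFA-based successive shortest
    paths) on the bipartite network: source s, rows 0..n-1, columns n..2n-1, sink t."""
    n = len(a)
    m = 2 * n + 2
    s, t = m - 2, m - 1
    INF = float("inf")
    f = [[0] * m for _ in range(m)]          # antisymmetric flow matrix
    while True:
        dist = [INF] * m
        par = [-1] * m
        inq = [False] * m
        dist[s] = 0
        q = deque([s])
        while q:
            v = q.popleft()
            inq[v] = False
            # enumerate residual edges out of v
            if v == s:
                edges = [(i, 0) for i in range(n) if f[s][i] == 0]
            elif v < n:                      # row vertex -> column vertices
                edges = [(n + j, a[v][j]) for j in range(n) if f[v][n + j] < 1]
            elif v < 2 * n:                  # column vertex: back edges + edge to sink
                edges = [(j, -a[j][v - n]) for j in range(n) if f[j][v] > 0]
                if f[v][t] == 0:
                    edges.append((t, 0))
            else:
                edges = []
            for u, w in edges:
                if dist[v] + w < dist[u]:
                    dist[u] = dist[v] + w
                    par[u] = v
                    if not inq[u]:
                        inq[u] = True
                        q.append(u)
        if dist[t] == INF:
            break                            # no augmenting path left: flow value is n
        cur = t
        while cur != s:                      # push one unit along the shortest path
            p = par[cur]
            f[p][cur] += 1
            f[cur][p] -= 1
            cur = p
    return [j for i in range(n) for j in range(n) if f[i][n + j] == 1]
```

The returned list maps each row `i` to its assigned column.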
|
{"url":"https://cp-algorithms.com/graph/Assignment-problem-min-flow.html","timestamp":"2024-11-08T05:04:04Z","content_type":"text/html","content_length":"147656","record_id":"<urn:uuid:b8a2e1f0-8272-4e10-82c7-156eea11c1e9>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00659.warc.gz"}
|
LeanSudoku | Reservoir
This is a work-in-progress port of Markus Himmel's [Lean 3 version](https://github.com/TwoFX/sudoku).
Assumes that you have a C++ compiler via g++. On Debian/Ubuntu, sudo apt install build-essential should do the trick.
First, we download the code.
1. leanproject get TwoFX/sudoku
Next, we compile the program that generates levels for us.
2. cd sudoku
3. g++ -std=gnu++11 -O2 scripts/gen.cpp -o gen
Next, we will use it to generate a level for us.
4. Put a sudoku in a file (as 81 numbers separated by white space, 0 is blank). See scripts/easy1 for an example.
5. ./gen < your_sudoku_file > src/play.lean
Next, we have to tell Visual Studio code to not time out our code. This is needed because I wrote very inefficient and slow code. You will notice this while playing.
6. code .
7. Open the settings (Ctrl+Comma), search for Lean Time Limit, and set it to 0.
Finally, we are ready to go.
8. Restart VS Code, and open play.lean.
Please look at the screenshot for an example of how a game of sudoku can look.
Cells are zero-indexed from the top left. You place the number z in cell (x, y) by saying
have cxy : s.f (x, y) = z
but now you have to prove why this is the case. There are four main tactics for that:
• box_logic splits along the statement "there is a z somewhere in the box of (x, y)" and tries to find a conflict in the board for every position other than (x, y)
• row_logic and col_logic are the same as box_logic, but (you guessed it) for rows and columns rather than boxes
• cell_logic is like the others, but splits along the statement "there is a number in (x, y)"
• naked_single is a synonym for cell_logic to conform to usual sudoku terminology
There is also support for two kinds of pencil marks: Snyder notation on the edges of cells and doubles and triples in the center of cells:
If you say
have p0 : s.snyder w x y z a
then you have to prove that there is an a in (w, x) or (y, z). You will usually use row, box or column logic for this.
If you say
have p1 : s.double x y a b
then you have to prove that there is either an a or a b in (x, y).
Doing this will give you a pencil mark. To later use such a pencil mark in a deduction, you can say things like box_logic with p0 p1, which for every possible position in the box will first try to find an unconditional conflict and then split over p0 and p1 in the cases where it gets stuck.
Finally, you can use pencil with p0 p1 to make case distinctions over pencil marks only.
For hard sudokus, you'll want to formalize some actual sudoku theory. I have started doing that in the file Basic.lean, but that work is still in its very early stages.
|
{"url":"https://reservoir.lean-lang.org/@avigad/LeanSudoku","timestamp":"2024-11-03T13:26:21Z","content_type":"text/html","content_length":"59694","record_id":"<urn:uuid:b55e7b4a-4abd-4899-a83a-20d2f6198eb8>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00881.warc.gz"}
|
Expression (mathematics)
In mathematics, an expression or mathematical expression is a finite combination of symbols that is well-formed according to rules that depend on the context. Mathematical symbols can designate
numbers (constants), variables, operations, functions, punctuation, grouping, and other aspects of logical syntax.
The use of expressions ranges from the simple:
to the complex:
Mathematical expressions include arithmetic expressions, polynomials, algebraic expressions, closed-form expressions, and analytical expressions. The table below highlights some similarities and
differences between these different types.
Syntax versus semantics
Being an expression is a syntactic concept.
An expression must be well-formed: the operators must have the correct number of inputs in the correct places, the characters that make up these inputs must be valid, etc. Strings of symbols that
violate the rules of syntax are not well-formed and are not valid mathematical expressions.
For example, in the usual notation of arithmetic, the expression 2 + 3 is well-formed, but the following expression is not:

×4)x+,/y
Semantics is the study of meaning. Formal semantics is about attaching meaning to expressions.
In algebra, an expression may be used to designate a value, which might depend on values assigned to variables occurring in the expression. The determination of this value depends on the semantics
attached to the symbols of the expression. These semantic rules may declare that certain expressions do not designate any value (for instance when they involve division by 0); such expressions are
said to have an undefined value, but they are well-formed expressions nonetheless. In general the meaning of expressions is not limited to designating values; for instance, an expression might
designate a condition, or an equation that is to be solved, or it can be viewed as an object in its own right that can be manipulated according to certain rules. Certain expressions that designate a value simultaneously express a condition that is assumed to hold, for instance those involving the operator ⊕ to designate an internal direct sum.
Formal languages and lambda calculus
Formal languages allow formalizing the concept of well-formed expressions.
In the 1930s, a new type of expressions, called lambda expressions, were introduced by Alonzo Church and Stephen Kleene for formalizing functions and their evaluation. They form the basis for lambda
calculus, a formal system used in mathematical logic and the theory of programming languages.
The equivalence of two lambda expressions is undecidable. This is also the case for the expressions representing real numbers, which are built from the integers by using the arithmetical operations,
the logarithm and the exponential (Richardson's theorem).
Many mathematical expressions include variables. Any variable can be classified as being either a free variable or a bound variable.
For a given combination of values for the free variables, an expression may be evaluated, although for some combinations of values of the free variables, the value of the expression may be undefined.
Thus an expression represents a function whose inputs are the value assigned the free variables and whose output is the resulting value of the expression.
For example, the expression x/y, evaluated for x = 10, y = 5, will give 2; but it is undefined for y = 0.
The evaluation of an expression is dependent on the definition of the mathematical operators and on the system of values that is its context.
Two expressions are said to be equivalent if, for each combination of values for the free variables, they have the same output, i.e., they represent the same function. Example:
The expression ∑_{n=1}^{3} (2nx) has free variable x, bound variable n, constants 1, 2, and 3, two occurrences of an implicit multiplication operator, and a summation operator. The expression is equivalent to the simpler expression 12x. The value for x = 3 is 36.
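The equivalence claim can be checked mechanically. A small Python sketch, taking the summation example to be the sum from n = 1 to 3 of 2nx (reconstructed to be consistent with the stated simplification to 12x):

```python
def expr(x):
    """The expression sum_{n=1}^{3} 2*n*x: n is bound by the summation, x is free."""
    return sum(2 * n * x for n in range(1, 4))

def simplified(x):
    """The equivalent simpler expression 12x."""
    return 12 * x

# The two expressions represent the same function of the free variable x.
assert all(expr(x) == simplified(x) for x in range(-10, 11))
assert expr(3) == 36
```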
See also
This article is issued from Wikipedia — version of the 10/27/2016. The text is available under the Creative Commons Attribution/Share Alike license, but additional terms may apply for the media files.
|
{"url":"https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Mathematical_expression.html","timestamp":"2024-11-06T11:36:48Z","content_type":"text/html","content_length":"30983","record_id":"<urn:uuid:95f6b4d6-d50a-44f8-a283-fd4b0fed6f6a>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00710.warc.gz"}
|
Higher-order composition of short- and long-period effects for satellite analytical ephemeris computation
The construction of an analytic orbit theory in closed form of the eccentricity that takes into account the main effects of the Geopotential is notably simplified when splitting the removal of
periodic effects in several stages. Conversely, this splitting of the closed-form analytical solution into several transformations reduces the evaluation efficiency for dense ephemeris output.
However, the advantage is twofold when the different parts of the mean-to-osculating transformation are composed into a single transformation. To show that, Brouwer's solution is extended to the
second order of the zonal harmonic of the second degree by the sequential elimination of short and long period terms. Then, the generating functions of the different transformations are composed into
a single one, from which a single mean-to-osculating transformation is derived. The new, unique transformation notably speeds up the evaluation process, commonly improving evaluation efficiency by at
least one third with respect to the customary decomposition of the analytical solution into three different parts.
• Artificial satellite theory
• Brouwer's solution
• Hamiltonian simplification
• Lie transforms
• Perturbations
Dive into the research topics of 'Higher-order composition of short- and long-period effects for satellite analytical ephemeris computation'. Together they form a unique fingerprint.
|
{"url":"https://khazna.ku.ac.ae/en/publications/higher-order-composition-of-short-and-long-period-effects-for-sat","timestamp":"2024-11-10T19:20:38Z","content_type":"text/html","content_length":"52778","record_id":"<urn:uuid:4d86c3e4-6692-468d-adaa-7ec7335911c8>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00559.warc.gz"}
|
A dependently typed calculus with pattern matching and erasure inference
Some parts of dependently typed programs constitute evidence of their type-correctness and, once checked, are unnecessary for execution. These parts can easily become asymptotically larger than the remaining runtime-useful computation, which can cause normally linear-time programs to run in exponential time, or worse. We should not make programs run slower by just describing them more precisely.
Current dependently typed systems do not erase such computation satisfactorily. By modelling erasure indirectly through type universes or irrelevance, they impose the limitations of these means to
erasure. Some useless computation then cannot be erased and idiomatic programs remain asymptotically sub-optimal.
In this paper, we explain why we need erasure, that it is different from other concepts like irrelevance, and propose a dependently typed calculus with pattern matching with erasure annotations to
model it. We show that erasure in well-typed programs is sound in that it commutes with reduction. Assuming the Church-Rosser property, erasure furthermore preserves convertibility in general.
We also give an erasure inference algorithm for erasure-unannotated or partially annotated programs and prove it sound, complete, and optimal with respect to the typing rules of the calculus.
Finally, we show that this erasure method is effective in that it can not only recover the expected asymptotic complexity in compiled programs at run time, but it can also shorten compilation times.
Dive into the research topics of 'A dependently typed calculus with pattern matching and erasure inference'. Together they form a unique fingerprint.
|
{"url":"https://research-portal.st-andrews.ac.uk/en/publications/a-dependently-typed-calculus-with-pattern-matching-and-erasure-in","timestamp":"2024-11-11T12:49:44Z","content_type":"text/html","content_length":"55089","record_id":"<urn:uuid:df7f4053-2186-476d-a1fa-f76b325a3ae1>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00547.warc.gz"}
|
Rational Contagion and the Globalization of Securities Markets
\documentclass{beamer}
\usepackage{beamerthemesplit}
\usepackage{amsmath}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{graphicx}
\graphicspath{ {/Dropbox/InternationalMacro/} }

\title{Rational Contagion and the Globalization of Securities Markets}
\author{Calvo and Mendoza}
\date{\today}

\begin{document}

\begin{frame}
\titlepage
\end{frame}

\begin{frame}
\frametitle{Outline}
\tableofcontents
\end{frame}

\section{Overview}

\begin{frame}
\frametitle{Overview}
\begin{itemize}
\item Financial integration can, to some extent, promote contagion (herding) behaviour by reducing the incentives to gather information
\item Constraints on short-selling can also exacerbate this behaviour
\item Simulations show that these frictions have significant implications for capital flows in emerging markets
\end{itemize}
\end{frame}

\section{Model}

\begin{frame}
\frametitle{Model}
The expected indirect utility of an investor is
$$E(\theta)=\mu(\theta)-\frac{\gamma}{2}\sigma(\theta)^{2}-\kappa-\lambda(\mu(\Theta)-\mu(\theta))$$
where $\gamma$ and $\kappa$ are positive and
\begin{itemize}
\item $\mu(\theta)$ is the mean of the portfolio return with $\theta$ wealth in the $J-1$ countries
\item $\sigma(\theta)$ is the standard deviation of the portfolio return
\item $\gamma$ is the coefficient of absolute risk aversion
\item $\lambda(\mu(\Theta)-\mu(\theta))$ is the performance cost (benefit) of obtaining a portfolio return below (above) the market return
\end{itemize}
\end{frame}

\begin{frame}
\frametitle{Model}
Under fixed information costs, the model predicts:
\begin{itemize}
\item Above a threshold, as the number of integrated countries $J$ rises, the incentive to gather information diminishes and the impact of unverified rumors about a single country rises without bound.
\item Global market volatility rises as $J$ increases, resulting in proportionally larger effects on capital flows.
\end{itemize}
\end{frame}

\begin{frame}
\frametitle{Model}
The assumptions:
\begin{itemize}
\item Initially, country $i$ is identical to the rest (returns and standard deviations are the same) and asset returns are uncorrelated. An investor has 1 unit of wealth, which is allocated equally across the $J$ countries. Portfolio mean $=\rho$ and portfolio variance $=\frac{\sigma^2}{J}$.
\item The rumor is that country $i$'s return satisfies $r\le r^*$, while the true return is $r^*=\rho$. The investor can pay $\kappa$ to verify the rumor (informed); if he does not pay the cost, he believes the rumor (uninformed).
\item If the investor is uninformed, he chooses $\theta^U$ to maximize
$$EU^U=\theta^U\rho+(1-\theta^U)r-\frac{\gamma}{2}\left[\frac{(\theta^U)^2}{J-1}+(1-\theta^U)^2\right]\sigma^2$$
\end{itemize}
\end{frame}

\begin{frame}
\frametitle{Model}
\begin{itemize}
\item Assuming an interior solution exists, the optimal portfolio is
$$\theta^U=\left(\frac{J-1}{J}\right)\left[1+\frac{\rho-r}{\gamma\sigma^2}\right]$$
\item Short-selling constraints: $-a\le\theta\le b$, where $a\ge 0$ and $b\ge 1$.
\end{itemize}
Therefore $\theta^U=b$ if $r\le r^{min}$ and $\theta^U=-a$ if $r\ge r^{max}$, where
$r^{min}=\rho-\frac{\gamma\sigma^2[J(b-1)+1]}{J-1}$ and $r^{max}=\rho+\frac{\gamma\sigma^2[J(a+1)-1]}{J-1}$;
as $J$ goes to infinity, the interval that supports interior solutions shrinks.
\end{frame}

\begin{frame}
\frametitle{Model}
\begin{itemize}
\item If the investor chooses to pay the cost and verify the rumor, so that the variance of country $i$'s return is eliminated, he maximizes
$$EU^I=\theta^I\rho+(1-\theta^I)r^I-\frac{\gamma}{2}\left[\frac{(\theta^I)^2}{J-1}\right]\sigma^2-\kappa$$
\item Interior solution: $\theta^I(r^I)=(J-1)\left[\frac{\rho-r^I}{\gamma\sigma^2}\right]$
\item Corner solutions: $\theta^I=-a$ if $r^I\ge r^{I(max)}$ and $\theta^I=b$ if $r^I\le r^{I(min)}$, % double check!!!
where $r^{I(max)}=\rho+\frac{a\gamma\sigma^2}{J-1}$ and $r^{I(min)}=\rho-\frac{b\gamma\sigma^2}{J-1}$
\item The value of information is $S=EU^I-EU^U$
\end{itemize}
\end{frame}

\begin{frame}
\frametitle{Model}
Proposition 1: For any ``pessimistic'' rumor such that (1) the short-selling constraints are non-binding and (2) $r\le\rho$, $S$ is decreasing in $J$ if the number of countries in the global market satisfies $J>\frac{1}{1-[F(\rho)(b^2-a^2)+a^2]^{1/2}}$ (a sufficient condition). Notably, $S$ decreases with $J$ at a declining rate, so that $S$ converges to a constant level as $J$ goes to infinity.
\end{frame}

\begin{frame}
\frametitle{Model}
Performance-based incentives: the utility of a representative manager is
$$\begin{split}
EU(\theta)=\theta\rho+(1-\theta)\rho-\lambda(\mu(\Theta)-\mu(\theta))-\frac{\gamma}{2}\Big[\frac{(\theta\sigma_J)^2}{J-1}\\
+((1-\theta)\sigma_i)^2+2\sigma_J\sigma_i\theta(1-\theta)\eta\Big]
\end{split}$$
In this equation,
\begin{itemize}
\item $\lambda>0$ if $\mu(\Theta)>\mu(\theta)$, which indicates a punishment
\item $\lambda\le 0$ if $\mu(\Theta)<\mu(\theta)$, which indicates a reward
\end{itemize}
\end{frame}

\begin{frame}
\frametitle{Model}
Proposition 2: If, in the neighborhood of the optimal portfolio $\theta^*$ corresponding to an investor free of performance incentives, the marginal cost (gain) of deviating from the mean return of the market portfolio $\mu(\Theta)$ is sufficiently large (small), then there exists a range of global, rational-expectations equilibria of individual portfolio allocations $\theta$, such that $\theta=\Theta$.
\end{frame}

\begin{frame}
\frametitle{Model}
Proposition 3: The range of contagion equilibria, defined by values of $\Theta$ in the interval $\theta^{low}<\Theta<\theta^{up}$ for which Proposition 2 holds, widens as the global market grows (i.e.\ $\theta^{up}-\theta^{low}$ is increasing in $J$).
\end{frame}

\section{Numerical Simulations}

\begin{frame}
\frametitle{Numerical Simulations}
Stylized facts and benchmark calibration:
\begin{itemize}
\item Global portfolios and statistical moments of asset returns. Plugging various estimates of the mean and variance-covariance structure of asset returns, and different sources of global portfolios, into the resulting expression (Equation 17 in the paper, used to prove Proposition 2), $\gamma$ ranges between 0 and 0.5; 0.25 is chosen.
\item Indicators of information and their impact on assessments of asset returns. Use the country credit ratings (CCR) constructed by international banks for lending operations (compiled and published every 6 months). Assuming normal distributions of the variables involved, and standard homogeneity assumptions across country elements in the panel, the moments that describe these distributions are as in Erb et al.\ (1996):
\end{itemize}
\end{frame}

\begin{frame}
$E[r^I_h]=\alpha^\mu+\beta^{\mu}E[\ln(CCR_h)]$\\
$E[\sigma^I_h]=\alpha^{sd}+\beta^{sd}E[\ln(CCR_h)]$\\
$\sigma^I_{rh}=(\beta^\mu)^2 VAR[\ln(CCR_h)]+(\sigma^\mu_u)^2$\\
$\sigma^I_\sigma=(\beta^{sd})^2 VAR[\ln(CCR_h)]+(\sigma^{sd}_u)^2$\\
The above are used to calculate the means and variances of countries' returns.
\end{frame}

\begin{frame}
Disincentive for information gathering.
Case 1: truth-revealing information (costly information reveals the true asset return of country $i$).
Other assumptions: (1) asset returns are uncorrelated; (2) ex ante all countries are identical.
Parameter values (in percent): $\rho=15.31$, $\sigma_J=22.44$, $\sigma_i^I=6.46$, and $J\le 50$. Plot $\hat{S}$ against $J$.
Findings: (1) when the rumor is that country $i$'s return is less than or equal to $\rho$, $\hat{S}$ is a decreasing function of $r$ (decreasing at a declining rate) and converges to a constant; (2) when the rumor is that country $i$'s return is high, $\hat{S}$ is an increasing function of $r$; (3) the gain from costly information is lowest for the rumor $r=\rho$.
\end{frame}

\begin{frame}
Case 2: OECD information updates (which cannot reveal true asset returns).
In this case, investors only learn updates of the mean and variance of returns when they pay the cost. The calibration matches ``stable'' OECD markets: $E(r^I)=15.18$, $E(\sigma_i^I)=21.81$, $\sigma_r^I=6.46$ and $\sigma_\sigma^I=1.84$, and ex ante all countries are identical.
Findings: (1) for a neutral rumor $r=r^*=\rho$, $\hat{S}$ falls to 1\% for $J=2$ and 0.15\% for $J\ge 20$: investors are significantly more reluctant to pay information costs; (2) only when the rumor is very pessimistic and the integrated market is not large are investors willing to pay a larger information cost: when $r=r^{min}$ and $J=2$, $\hat{S}=32\%$. Allowing for correlation $\eta=0.35$ (the $J-1$ countries remain uncorrelated) gives smaller gains from information gathering: $\hat{S}=22\%$. Intuition: the assets in the world fund provide better diversification opportunities (since returns in these countries are uncorrelated), which undermines the incentive to verify the rumor.
\end{frame}

\begin{frame}
Case 3: segmented emerging markets. Most of the $J-1$ countries are also volatile emerging markets: $E(r^I)=33.12$, $E(\sigma_i^I)=34.57$, $\sigma_r^I=49.31$, $\sigma_\sigma^I=14.04$, $r^*=\rho=31.21$, $\sigma_i=\sigma_J=50.03$.
Findings: (1) $\hat{S}$ does not converge to a constant; in fact, it increases slightly for $J\ge 200$; (2) when $r=r^{min}$ and $r=\rho$, $\hat{S}$ still drops sharply as $J$ increases and reaches a minimum at $J=58$ (for $r=\rho$).
Capital flows: a rumor that reduces the expected return on Mexican equity from the equity-market forecast of 22.4\% to 15.3\% (the OECD return) lowers the share invested in Mexico from 1.7\% to 0.7\% (a reduction of 40\%) and leads to an outflow of \$20 billion.
\end{frame}

\begin{frame}
Performance costs
\begin{itemize}
\item The lower and upper bounds of the contagion region are delimited by the intersections of $EU'(1-\theta)$ and the marginal cost/gain lines
\item As the number of countries increases, the contagion region widens; the contagion region is maximized when there is no marginal gain
\item The contagion range is a decreasing function of the variances of countries' returns, but that effect dissipates for $J\ge 10$
\end{itemize}
\end{frame}

\begin{frame}
\begin{figure}[p]
\includegraphics[scale=0.3]{FIG1}
\end{figure}
\end{frame}

\begin{frame}
\begin{figure}[p]
\includegraphics[scale=0.3]{FIG2}
\end{figure}
\end{frame}

\begin{frame}
\begin{figure}[p]
\includegraphics[scale=0.3]{FIG3}
\end{figure}
\end{frame}

\begin{frame}
\begin{figure}[p]
\includegraphics[scale=0.3]{FIG4}
\end{figure}
\end{frame}

\begin{frame}
\begin{figure}[p]
\includegraphics[scale=0.3]{FIG5}
\end{figure}
\end{frame}

\begin{frame}
\begin{figure}[p]
\includegraphics[scale=0.3]{FIG6}
\end{figure}
\end{frame}

\begin{frame}
\begin{figure}[p]
\includegraphics[scale=0.3]{FIG7}
\end{figure}
\end{frame}

\begin{frame}
\begin{figure}[p]
\includegraphics[scale=0.3]{FIG8}
\end{figure}
\end{frame}

\end{document}
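To illustrate the claim on the slides that the interval supporting interior solutions shrinks as $J$ grows, here is a quick numerical sketch of the $r^{min}$ and $r^{max}$ formulas. The parameter values are illustrative (loosely based on the calibration section), and the function name is mine:

```python
def interior_interval(J, rho=15.31, sigma=22.44, gamma=0.25, a=0.0, b=1.0):
    """Bounds r_min, r_max between which the uninformed portfolio theta^U
    is interior, following the formulas on the slides."""
    r_min = rho - gamma * sigma**2 * (J * (b - 1) + 1) / (J - 1)
    r_max = rho + gamma * sigma**2 * (J * (a + 1) - 1) / (J - 1)
    return r_min, r_max

# Width r_max - r_min = gamma*sigma^2*J*(a+b)/(J-1): strictly decreasing in J,
# approaching gamma*sigma^2*(a+b) as J grows without bound.
widths = [r_max - r_min
          for r_max, r_min in ((interior_interval(J)[1], interior_interval(J)[0])
                               for J in (2, 5, 10, 50, 500))]
assert all(w1 > w2 for w1, w2 in zip(widths, widths[1:]))
```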
|
{"url":"https://tr.overleaf.com/articles/rational-contagion-and-the-globalization-of-securities-markets/rnydzdyyvmcp","timestamp":"2024-11-05T13:15:49Z","content_type":"text/html","content_length":"49037","record_id":"<urn:uuid:b72c2916-8928-44ec-86d2-87ca86aa4140>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00308.warc.gz"}
|
A man went to his office on cycle at the rate of 10 km/hr and reached late by 6 minutes. When he increased the speed by 2 km/hr, he reached 6 minutes before time. What is the distance between his
office and his departure point ?
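As a worked check of the first question: with distance d km, being 6 minutes late at 10 km/hr and 6 minutes early at 12 km/hr means d/10 − d/12 equals the 12-minute swing. A sketch using exact arithmetic:

```python
from fractions import Fraction

# d/10 - d/12 = 12 minutes = 1/5 hour  =>  d * (1/10 - 1/12) = d/60 = 1/5
swing = Fraction(12, 60)                         # 6 min late + 6 min early, in hours
d = swing / (Fraction(1, 10) - Fraction(1, 12))  # distance in km
assert d == 12                                   # the office is 12 km away
```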
There is an article of Rs. 100. Its price is raised initially by 10% and then again by 10%. How many rupees have been increased ?
There are 40% women workers in an office. 40% women and 60% men of that office voted for in my favour. What is the percentage of total votes in my favour ?
A seller sold $$ \frac {3}{4} $$ th of his goods at 24% profit. He sold the rest of the goods at cost price. What is the percentage of his profit ?
Nita sold an article for Rs. 220 and earned a profit of 10%. At what cost should she sell to earn a profit of 30% ?
Vinod purchased a Maruti van for Rs. 1,96,000. Rate of fall of price per year of this van is $$14 \frac {2}{7} $$ %. What will be its price after two years ?
A train ‘A’ of 180 metres is running at the rate of 72 km/hr. Another train ‘B’ of 120 metres, coming from the opposite direction, is running at the rate of 108 km/hr. How long will they take to cross one another ?
In a school 10% of boys is equal to $$ \frac {1}{4} $$ th of the number of girls. What is the ratio of the boys and girls in that school ?
100 km is the distance between the stations A and B. One train departed from A towards B at the rate of 50 km/hr, and another departed from B towards A at the rate of 75 km/hr. Both the trains departed simultaneously. At what distance from station A will the two trains cross one another ?
10 persons can build a wall in 8 days. How many persons can build this wall in half day ?
|
{"url":"https://cracku.in/2012-rrb-ahmedabad-question-paper-solved?page=4","timestamp":"2024-11-13T21:47:14Z","content_type":"text/html","content_length":"168264","record_id":"<urn:uuid:364ca701-88be-434c-bebc-ac337084c6bf>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00762.warc.gz"}
|
Construction And Analysis Of Scalar Multiplication Algorithm On Elliptic Curve
Posted on:2011-05-25 Degree:Master Type:Thesis
Country:China Candidate:H Y Chen Full Text:PDF
GTID:2178330332478660 Subject:Cryptography
As scalar multiplication is the basic operation of ECC, and the overall computational performance of ECC depends heavily on the efficiency of the scalar multiplication algorithm, the study of scalar multiplication algorithms is not only of great theoretical significance but also has broad application prospects. The main contributions of this dissertation are:

1. Based on factorial expansions, we develop a novel fast multiple scalar multiplication algorithm, namely the signed-integer factorial expansion multi-scalar multiplication algorithm. Borrowing the idea of the fixed-base windowing method, the new algorithm evaluates multi-scalar multiplication via the technique of scalar multiplication, so that point multiplication is no longer required, which relieves the computational burden considerably compared with traditional algorithms. The experimental data show that when m equals 2, the computational efficiency of multiple scalar multiplication is improved by about 47.8%~56.5% on average over other existing methods.

2. Based on the double-base chain representation of the scalar using bases 2 and 3, an improved Tate-pairing algorithm combining the {2,3}-double-base chain with Miller's algorithm is presented. The basic idea of this fast Tate-pairing algorithm is that, by taking advantage of a pseudo-multiplication algorithm and a polynomial expansion algorithm, the complexity of the computation in each iteration can be reduced efficiently, and by changing the parameter settings of the lines on the elliptic curve, the performance of the bilinear pairing can be improved considerably. The experimental results show that the computational efficiency of the new method is improved by 10.6%~20.3% on average over other existing methods.

3. On the basis of (2), we extend the {2,3}-double-base-chain fast Tate-pairing algorithm to the multi-base chain representation of the scalar using bases 2, 3 and 5. The experimental data show that this new algorithm is faster than the other existing algorithms.

4. A new fault-attack method against the elliptic curve double-base scalar multiplication algorithm is presented. The method builds on side-channel attacks and uses fault injection to obtain faulty outputs. From a monomial and the faulty output, some components of the expression can be deduced, and the whole key can then be recovered similarly. This new attack method provides valuable information for security analysis. Countermeasures are also given, and the method remains applicable to other elliptic curve scalar multiplication algorithms such as the binary method, the NAF method, etc.
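For background, the scalar multiplication that these algorithms accelerate is usually introduced via the binary double-and-add method mentioned in contribution 4. The following is a toy Python sketch over a small prime field; the curve, base point, and function names are illustrative only and do not come from the thesis:

```python
# Toy curve y^2 = x^3 + 7 over GF(17); points are (x, y) tuples, None is the identity.
P_MOD, A = 17, 0

def ec_add(p, q):
    """Affine point addition on the toy curve."""
    if p is None:
        return q
    if q is None:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                       # p + (-p) = identity (covers y = 0 doubling)
    if p == q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def scalar_mult(k, p):
    """Binary double-and-add: O(log k) group operations instead of k additions."""
    result, addend = None, p
    while k:
        if k & 1:
            result = ec_add(result, addend)
        addend = ec_add(addend, addend)
        k >>= 1
    return result

P = (1, 5)   # 5^2 = 8 = 1^3 + 7 (mod 17), so P lies on the curve
```

Double-base and multi-base chains improve on this by representing the scalar with mixed powers (e.g. 2^a 3^b terms), trading some additions for cheaper combined doublings/triplings.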
Keywords/Search Tags: Elliptic Curve Cryptosystem, Scalar Multiplications, Factorial Expansions, Fixed-base Windowing Method, Double-base Chains, Bilinear Pairing, Fault Attack, Side Channel Attack
|
{"url":"https://globethesis.com/?t=2178330332478660","timestamp":"2024-11-11T20:54:52Z","content_type":"application/xhtml+xml","content_length":"8919","record_id":"<urn:uuid:e6119333-a954-4a96-8f40-3892da7e3a3a>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00070.warc.gz"}
|
DPPTRF - Linux Manuals (3)
DPPTRF (3) - Linux Manuals
dpptrf.f -
subroutine dpptrf (UPLO, N, AP, INFO)
Function/Subroutine Documentation
subroutine dpptrf (character UPLO, integer N, double precision, dimension( * ) AP, integer INFO)
DPPTRF computes the Cholesky factorization of a real symmetric
positive definite matrix A stored in packed format.
The factorization has the form
A = U**T * U, if UPLO = 'U', or
A = L * L**T, if UPLO = 'L',
where U is an upper triangular matrix and L is lower triangular.
UPLO is CHARACTER*1
= 'U': Upper triangle of A is stored;
= 'L': Lower triangle of A is stored.
N is INTEGER
The order of the matrix A. N >= 0.
AP is DOUBLE PRECISION array, dimension (N*(N+1)/2)
On entry, the upper or lower triangle of the symmetric matrix
A, packed columnwise in a linear array. The j-th column of A
is stored in the array AP as follows:
if UPLO = 'U', AP(i + (j-1)*j/2) = A(i,j) for 1<=i<=j;
if UPLO = 'L', AP(i + (j-1)*(2n-j)/2) = A(i,j) for j<=i<=n.
See below for further details.
On exit, if INFO = 0, the triangular factor U or L from the
Cholesky factorization A = U**T*U or A = L*L**T, in the same
storage format as A.
INFO is INTEGER
= 0: successful exit
< 0: if INFO = -i, the i-th argument had an illegal value
> 0: if INFO = i, the leading minor of order i is not
positive definite, and the factorization could not be
completed.
Univ. of California Berkeley
Univ. of Colorado Denver
NAG Ltd.
November 2011
Further Details:
The packed storage scheme is illustrated by the following example
when N = 4, UPLO = 'U':
Two-dimensional storage of the symmetric matrix A:
a11 a12 a13 a14
    a22 a23 a24
        a33 a34     (aij = aji)
            a44
Packed storage of the upper triangle of A:
AP = [ a11, a12, a22, a13, a23, a33, a14, a24, a34, a44 ]
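To make the packed indexing concrete, here is a small Python sketch (not the Fortran source) that packs the upper triangle column-wise using the 0-based analogue of the rule above, ap[i + j*(j+1)/2] = A(i,j) for i <= j, and performs the Cholesky factorization A = U**T * U directly on the packed array:

```python
import math

def idx(i, j):
    """0-based packed index for the upper triangle, column-wise (requires i <= j)."""
    return i + j * (j + 1) // 2

def pptrf_upper(n, ap):
    """Packed Cholesky A = U**T * U, returning U in the same packed layout
    (mirrors DPPTRF with UPLO = 'U', but in plain Python)."""
    ap = list(ap)
    for j in range(n):
        for i in range(j + 1):
            s = ap[idx(i, j)] - sum(ap[idx(k, i)] * ap[idx(k, j)] for k in range(i))
            if i == j:
                if s <= 0:
                    raise ValueError(f"leading minor of order {j + 1} is not positive definite")
                ap[idx(j, j)] = math.sqrt(s)
            else:
                ap[idx(i, j)] = s / ap[idx(i, i)]
    return ap

# Pack a small symmetric positive definite matrix and factor it.
A = [[4.0, 2.0, 0.0],
     [2.0, 5.0, 1.0],
     [0.0, 1.0, 3.0]]
n = 3
ap = [A[i][j] for j in range(n) for i in range(j + 1)]  # column-wise upper triangle
u = pptrf_upper(n, ap)
```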
Definition at line 120 of file dpptrf.f.
Generated automatically by Doxygen for LAPACK from the source code.
|
{"url":"https://www.systutorials.com/docs/linux/man/3-DPPTRF/","timestamp":"2024-11-07T19:50:07Z","content_type":"text/html","content_length":"8896","record_id":"<urn:uuid:b31e7d04-53ec-44dd-abbb-dfe5541eb9e4>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00650.warc.gz"}
|
waveform = wlanWaveformGenerator(bits,cfg) generates a waveform for bits, the specified information bits, and cfg, the physical layer (PHY) format configuration. For more information, see IEEE 802.11
PPDU Format.
waveform = wlanWaveformGenerator(bits,cfg,Name,Value) specifies additional options using one or more name-value pair arguments.
Generate EHT TB Waveform
Create a configuration object for a WLAN EHT TB transmission.
cfgEHTTB = wlanEHTTBConfig;
Get the PSDU length, in bytes, from the configuration object by using the psduLength object function.
length = psduLength(cfgEHTTB);
Generate a PSDU of the relevant length, converting bytes to bits by multiplying by eight.
psdu = randi([0 1],8*length,1);
Generate a time-domain waveform for the bits and configuration, specifying an oversampling factor of 3. Plot the waveform.
waveform = wlanWaveformGenerator(psdu,cfgEHTTB,OversamplingFactor=3);
title("EHT TB Waveform");
xlabel("Time (Nanoseconds)");
Generate EHT MU Waveform
Create a configuration object for a non-OFDMA EHT MU packet. Set the channel bandwidth to 160 MHz, the number of users to two, and the number of transmit antennas to two.
cfgEHTMU = wlanEHTMUConfig("CBW160",NumUsers=2,NumTransmitAntennas=2);
Obtain the PSDU length for both users, in bytes, from the configuration object by using the psduLength object function.
length = psduLength(cfgEHTMU);
Create a two-element cell array containing random PSDUs of the relevant length.
psdu = {randi([0 1],8*length(1),1);randi([0 1],8*length(2),1)};
Generate and plot the waveform.
waveform = wlanWaveformGenerator(psdu,cfgEHTMU);
title('EHT MU Waveform');
xlabel('Time (nanoseconds)');
legend('First transmit antenna','Second transmit antenna')
Generate HE TB Waveform
Configure and generate a WLAN waveform containing an HE TB uplink packet.
Create a configuration object for a WLAN HE TB uplink transmission.
cfgHETB = wlanHETBConfig;
Obtain the PSDU length, in bytes, from the configuration object by using the getPSDULength object function.
psduLength = getPSDULength(cfgHETB);
Generate a PSDU of the relevant length.
psdu = randi([0 1],8*psduLength,1);
Generate and plot the waveform.
waveform = wlanWaveformGenerator(psdu,cfgHETB);
title('HE TB Waveform');
xlabel('Time (nanoseconds)');
Generate VHT Waveform
Generate a time-domain signal for an 802.11ac VHT transmission with one packet.
Create a VHT configuration object. Assign two transmit antennas and two spatial streams, and disable space-time block coding (STBC). Set the modulation and coding scheme to 1, which assigns QPSK
modulation and a 1/2 rate coding scheme per the 802.11 standard. Set the number of bytes in the A-MPDU pre-EOF padding, APEPLength, to 1024.
cfg = wlanVHTConfig('NumTransmitAntennas',2,'NumSpaceTimeStreams',2,'STBC',0,'MCS',1,'APEPLength',1024);
Generate the transmit waveform.
bits = [1;0;0;1];
txWaveform = wlanWaveformGenerator(bits,cfg);
Demonstrate SIGB Compression in HE MU Waveforms
HE MU-MIMO Configuration With SIGB Compression
Generate a full bandwidth HE MU-MIMO configuration at 20 MHz bandwidth with SIGB compression. All three users are on a single content channel, which includes only the user field bits.
cfgHE = wlanHEMUConfig(194);
cfgHE.NumTransmitAntennas = 3;
Create PSDU data for all users.
psdu = cell(1,numel(cfgHE.User));
psduLength = getPSDULength(cfgHE);
for j = 1:numel(cfgHE.User)
    psdu{j} = randi([0 1],psduLength(j)*8,1,'int8');
end
Generate and plot the waveform.
y = wlanWaveformGenerator(psdu,cfgHE);
Generate a full bandwidth HE MU-MIMO waveform at 80 MHz bandwidth with SIGB compression. HE-SIG-B content channel 1 has four users. HE-SIG-B content channel 2 has three users.
cfgHE = wlanHEMUConfig(214);
cfgHE.NumTransmitAntennas = 7;
Create PSDU data for all users.
psdu = cell(1,numel(cfgHE.User));
psduLength = getPSDULength(cfgHE);
for j = 1:numel(cfgHE.User)
    psdu{j} = randi([0 1],psduLength(j)*8,1,'int8');
end
Generate and plot the waveform.
y = wlanWaveformGenerator(psdu,cfgHE);
HE MU-MIMO Configuration Without SIGB Compression
Generate a full bandwidth HE MU-MIMO configuration at 20 MHz bandwidth without SIGB compression. All three users are on a single content channel, which includes both common and user field bits.
cfgHE = wlanHEMUConfig(194);
cfgHE.SIGBCompression = false;
cfgHE.NumTransmitAntennas = 3;
Create PSDU data for all users.
psdu = cell(1,numel(cfgHE.User));
psduLength = getPSDULength(cfgHE);
for j = 1:numel(cfgHE.User)
    psdu{j} = randi([0 1],psduLength(j)*8,1,'int8');
end
Generate and plot the waveform.
y = wlanWaveformGenerator(psdu,cfgHE);
Generate an 80 MHz HE MU waveform for six users without SIGB compression. HE-SIG-B content channel 1 has four users. HE-SIG-B content channel 2 has two users.
cfgHE = wlanHEMUConfig([202 114 192 193]);
cfgHE.NumTransmitAntennas = 6;
for i = 1:numel(cfgHE.RU)
    cfgHE.RU{i}.SpatialMapping = 'Fourier';
end
Create PSDU data for all users.
psdu = cell(1,numel(cfgHE.User));
psduLength = getPSDULength(cfgHE);
for j = 1:numel(cfgHE.User)
    psdu{j} = randi([0 1],psduLength(j)*8,1,'int8');
end
Generate and plot the waveform.
y = wlanWaveformGenerator(psdu,cfgHE);
Generate a full bandwidth HE MU-MIMO waveform at 80 MHz bandwidth without SIGB compression. HE-SIG-B content channel 1 has seven users. HE-SIG-B content channel 2 has zero users.
cfgHE = wlanHEMUConfig([214 115 115 115]);
cfgHE.NumTransmitAntennas = 7;
Create PSDU data for all users.
psdu = cell(1,numel(cfgHE.User));
psduLength = getPSDULength(cfgHE);
for j = 1:numel(cfgHE.User)
    psdu{j} = randi([0 1],psduLength(j)*8,1,'int8');
end
Generate and plot the waveform.
y = wlanWaveformGenerator(psdu,cfgHE);
Generate VHT Waveform with Random Scrambler State
Generate a time-domain signal for an 802.11ac VHT transmission with five packets and a 30-microsecond idle period between packets. Use a random scrambler initial state for each packet.
Create a VHT configuration object and confirm the channel bandwidth for scaling the x-axis of the plot.
cfg = wlanVHTConfig;
Generate and plot the waveform. Display the time in microseconds on the x-axis.
numPkts = 5;
bits = [1;0;0;1];
scramInit = randi([1 127],numPkts,1);
txWaveform = wlanWaveformGenerator(bits,cfg,'NumPackets',numPkts,'IdleTime',30e-6,'ScramblerInitialization',scramInit);
time = ((0:length(txWaveform)-1)/80e6)*1e6; % sample rate is 80 MHz for the default 'CBW80' bandwidth
title('Five Packets Separated by 30-Microsecond Idle Periods');
xlabel ('Time (microseconds)');
Input Arguments
bits — Information bits
0 | 1 | binary-valued vector | cell array | vector cell array
Information bits, including any MAC padding representing multiple concatenated PSDUs, specified as one of these values.
• 0 or 1. The specified bit applies to all users.
• A binary-valued vector. The specified bits apply to all users.
• A one-by-one cell array containing a binary-valued scalar or vector. The specified bits apply to all users.
• A vector cell array of binary-valued scalars or vectors. The kth element of the cell array applies to the kth user. The length of the cell array must be equal to the number of users. For each
user, if the number of bits required across all packets of the generation exceeds the length of the vector provided, the function loops the applied bit vector. Looping the bits enables you to
define a short pattern, for example, [1; 0; 0; 1], that the function repeatedly uses as the input to the PSDU coding across packets and users. In each packet generation, for the kth user, the kth
element of the PSDULength property of the cfg input indicates the number of data bytes in its stream. To compute the number of bits, multiply PSDULength by 8.
Internally, the function loops this input to generate the specified number of packets. The PSDULength property of the cfg input specifies the number of data bits taken from the bit stream for each
transmission packet generated. The 'NumPackets' input specifies the number of packets to generate.
Example: [1 1 0 1 0 1 1]
Data Types: double | int8
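The looping behavior described above can be sketched in Python (a hedged illustration of the concept, not the MATLAB implementation): a short pattern such as [1 0 0 1] is cycled until each packet's PSDU, of length 8 × PSDULength bits, is filled.

```python
from itertools import cycle, islice

def packet_bits(pattern, psdu_length_bytes, num_packets):
    """Cycle a short bit pattern to fill num_packets PSDUs of
    8 * psdu_length_bytes bits each, mimicking the looping described above."""
    bits_per_packet = 8 * psdu_length_bytes
    src = cycle(pattern)
    return [list(islice(src, bits_per_packet)) for _ in range(num_packets)]

pkts = packet_bits([1, 0, 0, 1], psdu_length_bytes=2, num_packets=2)
print(pkts[0])  # [1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1]
```

The pattern repeats across packet boundaries as well, which matches the statement that the function loops the applied bit vector across packets and users.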
cfg — Packet format configuration
wlanHEMUConfig object | wlanHESUConfig object | wlanHETBConfig object | wlanEHTMUConfig object | wlanEHTTBConfig object | wlanWURConfig object | wlanDMGConfig object | wlanS1GConfig object |
wlanVHTConfig object | wlanHTConfig object | wlanNonHTConfig object
Packet format configuration, specified as one of these objects: wlanHEMUConfig, wlanHESUConfig, wlanHETBConfig, wlanEHTMUConfig, wlanEHTTBConfig, wlanWURConfig, wlanDMGConfig, wlanS1GConfig,
wlanVHTConfig, wlanHTConfig, or wlanNonHTConfig. The type of object you specify determines the IEEE® 802.11™ format of the generated waveform.
The properties of the packet format configuration object determine the data rate and PSDU length of generated PPDUs.
Name-Value Arguments
Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but
the order of the pairs does not matter.
Before R2021a, use commas to separate each name and value, and enclose Name in quotes.
Example: 'NumPackets',21,'ScramblerInitialization',[52,17]
NumPackets — Number of packets
1 (default) | positive integer
Number of packets to generate in a single function call, specified as a positive integer.
Data Types: double
IdleTime — Idle time added after each packet
0 (default) | nonnegative scalar
Idle time, in seconds, added after each packet, specified as a nonnegative scalar. Except for the default value, this input must be greater than or equal to:
• 1e-6 for DMG format
• 2e-6 for all other formats
Example: 2e-5
Data Types: double
OversamplingFactor — Oversampling factor
1 (default) | scalar greater than or equal to 1
Oversampling factor, specified as a scalar greater than or equal to 1. The oversampled cyclic prefix length must be an integer number of samples. For more information about oversampling, see
FFT-Based Oversampling.
This argument applies only for EHT, HE, WUR, VHT, HT, S1G, and non-HT OFDM formats.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
ScramblerInitialization — Initial scrambler state or initial pseudorandom scrambler sequence
93 (default) | integer in the interval [1, 2047] | matrix of integers in the interval [1, 2047]
Initial scrambler state or initial pseudorandom scrambler sequence for each generated packet and each user, specified as one of these values.
• An integer in the interval [1, 127] — This input represents the initial scrambler state for all packets and users in HE, S1G, VHT, and HT waveforms, and non-HT OFDM waveforms with bandwidth
signaling disabled. For multi-user and multipacket waveforms, the function uses the value you specify for all packets and users. The default value, 93, is the example state in Section I.1.5.2 of
[1]. For more information, see Scrambler Initialization.
• An integer in the interval [1, 2047]. This input represents the initial scrambler state for all packets and users in EHT waveforms. For multi-user and multipacket waveforms, the function uses the
value you specify for all packets and users.
• An integer in the interval [min, max] — This input represents the initial pseudorandom scrambler sequence of a non-HT transmission with bandwidth signaling enabled, described in Table 17-7 of [1]
. If you do not specify this input, the function uses the N[B] most significant bits of the default value, 93. The values of min, max, and N[B] depend on the values of the BandwidthOperation and
ChannelBandwidth properties of the cfg input according to this table.
Value of cfg.BandwidthOperation Value of cfg.ChannelBandwidth Value of min Value of max Value of N[B]
'Absent' 'CBW20' 1 31 5
'Absent' 'CBW5', 'CBW10', 'CBW40', 'CBW80', or 'CBW160' 0 31 5
'Static' or 'Dynamic' 'CBW20' 1 15 4
'Static' or 'Dynamic' 'CBW5', 'CBW10', 'CBW40', 'CBW80', or 'CBW160' 0 15 4
• A matrix of integers in the interval [1, 127], of size N[P]-by-N[Users] — Each element represents an initial state of the scrambler for each packet and for each user in VHT, S1G, and HE
multi-user (MU) waveforms comprising multiple packets. Each column specifies the initial states for a single user. You can specify up to eight columns for HE MU waveforms, or up to four columns
for VHT and S1G. If you specify a single column, the function uses the same initial states for all users. Each row represents the initial state of each packet to generate. A matrix with multiple
rows enables you to use a different initial state per packet, where the first row contains the initial state of the first packet. If the number of packets to generate exceeds the number of rows
of the matrix provided, the function loops the rows internally.
□ N[P] is the number of packets.
□ N[Users] is the number of users.
• A matrix of integers in the interval [1, 2047], of size N[P]-by-N[Users] — Each element represents an initial state of the scrambler for each packet and for each user in EHT multi-user (MU)
waveforms comprising multiple packets. Each column specifies the initial states for a single user. You can specify up to 144 columns for EHT MU waveforms. If you specify a single column, the
function uses the same initial states for all users. Each row represents the initial state of each packet to generate. A matrix with multiple rows enables you to use a different initial state per
packet, where the first row contains the initial state of the first packet. If the number of packets to generate exceeds the number of rows of the matrix provided, the function loops the rows
internally.
□ N[P] is the number of packets.
□ N[Users] is the number of users.
For DMG transmissions, specifying this argument overrides the value of the ScramblerInitialization property of the wlanDMGConfig configuration object.
Example: [3 56 120]
This argument is not valid for WUR and DSSS non-HT formats.
Data Types: double | int8
WindowTransitionTime — Duration of window transition
nonnegative scalar
Duration, in seconds, of the window transition applied to each OFDM symbol, specified as a nonnegative scalar. The function does not apply windowing if you specify this input as 0. This table shows
the default and maximum values permitted for each format, the type of guard interval, and the channel bandwidth.
Permitted WindowTransitionTime (seconds)
Maximum Permitted Value Based on Guard Interval Duration
Format Bandwidth 0.8 µs 0.4 µs
Default Value Maximum Value 3.2 µs 1.6 µs
(Long) (Short)
EHT MU and EHT TB 20, 40, 80, 160, or 320 MHz 1.0e-07 Not applicable 6.4e-06 3.2e-06 1.6e-06 Not applicable
HE SU, HE MU, and HE TB 20, 40, 80, or 160 MHz 1.0e-07 Not applicable 6.4e-06 3.2e-06 1.6e-06 Not applicable
VHT 20, 40, 80, or 160 MHz 1.0e-07 Not applicable Not applicable Not applicable 1.6e-06 8.0e-07
HT-mixed 20 or 40 MHz 1.0e-07 Not applicable Not applicable Not applicable 1.6e-06 8.0e-07
20, 40, 80, or 160 MHz 1.0e-07 Not applicable Not applicable Not applicable 1.6e-06 Not applicable
non-HT 10 MHz 1.0e-07 Not applicable Not applicable Not applicable 3.2e-06 Not applicable
5 MHz 1.0e-07 Not applicable Not applicable Not applicable 6.4e-06 Not applicable
WUR 20, 40, 80 MHz 1.0e-07 Not applicable Not applicable Not applicable Not applicable Not applicable
DMG 2640 MHz 6.0606e-09 (= 16/2640e6) 9.6969e-08 (= 256/2640e6) Not applicable Not applicable Not applicable Not applicable
S1G 1, 2, 4, 8, or 16 MHz 1.0e-07 Not applicable Not applicable Not applicable 1.6e-05 8.0e-06
Data Types: double
Output Arguments
waveform — Time-domain waveform
Time-domain waveform, returned as an N[S]-by-N[T] matrix. N[S] is the number of time-domain samples and N[T] is the number of transmit antennas. waveform contains one or more packets of the same PPDU
format. Each packet can contain different information bits. Enable waveform packet windowing by setting the WindowTransitionTime input to a positive value. Windowing is enabled by default.
For more information, see Waveform Sampling Rate, OFDM Symbol Windowing, and Waveform Looping.
Data Types: double
Complex Number Support: Yes
More About
IEEE 802.11 PPDU Format
Supported IEEE 802.11 PPDU formats defined for transmission include EHT, HE, WUR, VHT, HT, non-HT, S1G, and DMG. For all formats, the PPDU field structure includes preamble and data portions. For a
detailed description of the packet structures for the various formats supported, see WLAN PPDU Structure.
Waveform Sampling Rate
At the output of this function, the generated waveform has a sampling rate equal to the channel bandwidth.
For all EHT, HE, VHT, HT, and non-HT format OFDM modulation, the channel bandwidth is configured via the ChannelBandwidth property of the format configuration object.
For the DMG format modulation schemes, the channel bandwidth is always 2640 MHz and the channel spacing is always 2160 MHz, as specified in sections 20.3.4 and E.1 of [1], respectively.
For the non-HT format DSSS modulation scheme, the chipping rate is always 11 MHz, as specified in section 16.1.1 of [1].
This table indicates the waveform sampling rates associated with standard channel spacing for each configuration format prior to filtering.
Sampling Rate (MHz)
Configuration Object Modulation Type ChannelBandwidth Property Value Channel Spacing (MHz)
(F[S], F[C])
'CBW20' 20 F[S] = 20
'CBW40' 40 F[S] = 40
wlanEHTMUConfig and wlanEHTTBConfig OFDMA 'CBW80' 80 F[S] = 80
'CBW160' 160 F[S] = 160
'CBW320' 320 F[S] = 320
'CBW20' 20 F[S] = 20
'CBW40' 40 F[S] = 40
wlanHEMUConfig, wlanHESUConfig, and wlanHETBConfig OFDMA
'CBW80' 80 F[S] = 80
'CBW160' 160 F[S] = 160
'CBW20' 20 F[S] = 20
'CBW40' 40 F[S] = 40
wlanVHTConfig OFDM
'CBW80' 80 F[S] = 80
'CBW160' 160 F[S] = 160
'CBW20' 20 F[S] = 20
wlanHTConfig OFDM
'CBW40' 40 F[S] = 40
DSSS/CCK Not applicable 11 F[C] = 11
'CBW5' 5 F[S] = 5
wlanNonHTConfig 'CBW10' 10 F[S] = 10
'CBW20' 20 F[S] = 20
'CBW40' 40 F[S] = 40
'CBW80' 80 F[S] = 80
'CBW160' 160 F[S] = 160
'CBW20' 20 F[S] = 20
wlanWURConfig OFDM 'CBW40' 40 F[S] = 40
'CBW80' 80 F[S] = 80
Control PHY
F[C] = ⅔ F[S] = 1760
wlanDMGConfig SC For DMG, the channel bandwidth is fixed at 2640 MHz. 2160
OFDM F[S] = 2640
'CBW1' 1 F[S] = 1
'CBW2' 2 F[S] = 2
wlanS1GConfig OFDM 'CBW4' 4 F[S] = 4
'CBW8' 8 F[S] = 8
'CBW16' 16 F[S] = 16
F[S] is the OFDM sampling rate.
F[C] is the chip rate for single-carrier, control PHY, DSSS, and CCK modulations.
OFDM Symbol Windowing
OFDM naturally lends itself to processing with Fourier transforms. A negative side effect of using an IFFT to process OFDM symbols is the resulting symbol-edge discontinuities. These discontinuities
cause out-of-band emissions in the transition region between consecutive OFDM symbols. To smooth the discontinuity between symbols and reduce the intersymbol out-of-band emissions, you can use the
wlanWaveformGenerator function to apply OFDM symbol windowing. To apply windowing, set the WindowTransitionTime input to a positive value.
When windowing is applied, the function adds transition regions to the leading and trailing edge of the OFDM symbol. Windowing extends the length of the OFDM symbol by WindowTransitionTime (T[TR]).
The extended waveform is windowed by pointwise multiplication in the time domain, using this windowing function specified in section 17.3.2.5 of [1]:
w_T(t) = \begin{cases}
\sin^2\left[\dfrac{\pi}{2}\left(\dfrac{1}{2}+\dfrac{t}{T_{\mathrm{TR}}}\right)\right] & \text{if } t \in \left[-\dfrac{T_{\mathrm{TR}}}{2}, \dfrac{T_{\mathrm{TR}}}{2}\right], \\
1 & \text{if } t \in \left[\dfrac{T_{\mathrm{TR}}}{2}, T-\dfrac{T_{\mathrm{TR}}}{2}\right], \\
\sin^2\left[\dfrac{\pi}{2}\left(\dfrac{1}{2}-\dfrac{t-T}{T_{\mathrm{TR}}}\right)\right] & \text{if } t \in \left[T-\dfrac{T_{\mathrm{TR}}}{2}, T+\dfrac{T_{\mathrm{TR}}}{2}\right].
\end{cases}
The windowing function applies over the leading and trailing portion of the OFDM symbol:
• –T[TR]/2 to T[TR]/2
• T – T[TR]/2 to T + T[TR]/2
After windowing is applied to each symbol, pointwise addition is used to combine the overlapped regions between consecutive OFDM symbols. Specifically, the trailing shoulder samples at the end of
OFDM symbol 1 (T – T[TR]/2 to T + T[TR]/2) are added to the leading shoulder samples at the beginning of OFDM symbol 2 (–T[TR]/2 to T[TR]/2).
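One useful property of this sine-squared window (a sketch for intuition, assuming the equation from the standard): in the overlap region, the trailing shoulder of one symbol and the leading shoulder of the next sum to exactly 1, so the overlap-add leaves no amplitude ripple.

```python
import math

T, T_tr = 3.2e-6, 1.0e-7  # symbol length and transition time (seconds)

def w(t):
    """Sine-squared transition window from IEEE 802.11, Section 17.3.2.5."""
    if -T_tr/2 <= t <= T_tr/2:
        return math.sin(math.pi/2 * (0.5 + t/T_tr)) ** 2
    if T_tr/2 < t < T - T_tr/2:
        return 1.0
    if T - T_tr/2 <= t <= T + T_tr/2:
        return math.sin(math.pi/2 * (0.5 - (t - T)/T_tr)) ** 2
    return 0.0

# Trailing shoulder of symbol 1 overlaps the leading shoulder of symbol 2:
for k in range(11):
    u = -T_tr/2 + k * T_tr/10
    total = w(T + u) + w(u)      # overlap-add of the two shoulders
    assert abs(total - 1.0) < 1e-12
print("overlapped shoulders sum to 1")
```

This follows from sin²(a) + cos²(a) = 1, since the two shoulder arguments always sum to π/2.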
Smoothing the overlap between consecutive OFDM symbols in this manner reduces the out-of-band emissions. The function applies OFDM symbol windowing between:
• Each OFDM symbol within a packet
• Consecutive packets within the waveform, considering the idle time IdleTime between packets specified by the 'IdleTime' input
• The last and the first packet of the generated waveform
Windowing DMG Format Packets
For the DMG format, windowing applies only to packets transmitted using the OFDM PHY and is applied only to the OFDM modulated symbols. For OFDM PHY, only the header and data symbols are OFDM
modulated. The preamble (STF and CEF) and the training fields are single carrier modulated and are not windowed. Similar to the out-of-band emissions experienced by consecutive OFDM symbols, as shown
here, the CEF and the first training subfield are subject to a nominal amount of out-of-band emissions from the adjacent windowed OFDM symbol.
For more information on how the function handles windowing for the consecutive packet idle time and for the last waveform packet, see Waveform Looping.
Waveform Looping
To produce a continuous input stream, you can have your code loop on a waveform from the last packet back to the first packet.
Applying windowing to the last and first OFDM symbols of the generated waveform smooths the transition between the last and first packet of the waveform. When the 'WindowTransitionTime' input is
positive, the wlanWaveformGenerator function applies OFDM symbol windowing.
When looping a waveform, the last symbol of packet_N is followed by the first OFDM symbol of packet_1. If the waveform has only one packet, the waveform loops from the last OFDM symbol of the packet
to the first OFDM symbol of the same packet.
When windowing is applied to the last OFDM symbol of a packet and the first OFDM of the next packet, the idle time between the packets factors into the windowing applied. Specify the idle time by
using the 'IdleTime' input to the wlanWaveformGenerator function.
• If 'IdleTime' is 0, the function applies windowing as it would be for consecutive OFDM symbols within a packet.
• Otherwise, the extended windowed portion of the first OFDM symbol in packet_1 (from –T[TR]/2 to 0–T[S]), is included at the end of the waveform. This extended windowed portion is applied for
looping when computing the windowing between the last OFDM symbol of packet_N and the first OFDM symbol of packet_1. T[S] is the sample time.
Looping DMG Waveforms
DMG waveforms have these three looping scenarios.
• The looping behavior for a waveform composed of DMG OFDM-PHY packets with no training subfields is similar to the general case outlined in Waveform Looping, but the first symbol of the waveform
(and each packet) is not windowed.
□ If 'IdleTime' is 0 for the waveform, the windowed portion (from T to T + T[TR]/2) of the last data symbol is added to the start of the STF field.
□ Otherwise, the idle time is appended at the end of the windowed portion (after T + T[TR]/2) of the last OFDM symbol.
• When a waveform composed of DMG OFDM PHY packets includes training subfields, no windowing is applied to the single-carrier modulated symbols at the end of the waveform. The last sample of the last
training subfield is followed by the first STF sample of the first packet in the waveform.
□ If 'IdleTime' is 0 for the waveform, there is no overlap.
□ Otherwise, the value of 'IdleTime' specifies the delay between the last sample of packet_N and the first sample of packet_1.
• When a waveform is composed of DMG-SC or DMG-Control PHY packets, the end of the waveform is single carrier modulated, so no windowing is applied to the last waveform symbol. The last sample of
the last training subfield is followed by the first STF sample of the first packet in the waveform.
□ If 'IdleTime' is 0 for the waveform, there is no overlap.
□ Otherwise, the value of 'IdleTime' specifies the delay between the last sample of packet_N and the first sample of packet_1.
The same looping behavior applies for a waveform composed of DMG OFDM-PHY packets with training subfields, DMG-SC PHY packets, or DMG-Control PHY packets.
FFT-Based Oversampling
An oversampled signal is a signal sampled at a frequency that is higher than the Nyquist rate. WLAN signals maximize occupied bandwidth by using small guardbands, which can pose problems for
anti-imaging and anti-aliasing filters. Oversampling increases the guardband width relative to the total signal bandwidth, which increases the number of samples in the signal.
This function performs oversampling by using a larger IFFT and zero padding when generating an OFDM waveform. This diagram shows the oversampling process for an OFDM waveform with N[FFT] subcarriers
made up of N[g] guardband subcarriers on either side of N[st] occupied-bandwidth subcarriers.
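The idea can be sketched with a toy DFT in pure Python (an illustration of frequency-domain zero padding in general, not of this function's internals): padding the spectrum with zeros in the middle and taking a larger inverse transform interpolates the time-domain signal, and the original samples reappear at every osf-th output sample (scaled by the oversampling factor).

```python
import cmath

def idft(X):
    """Naive inverse DFT, enough for a small demonstration."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

# A length-4 spectrum with one positive- and one negative-frequency bin
# (no energy in the Nyquist bin, which would need special handling).
X = [0, 1, 0, 1]
x = idft(X)                      # original time-domain samples

# 2x oversampling: keep low bins at both ends, insert zeros in the middle.
Y = [0, 1, 0, 0, 0, 0, 0, 1]
y = idft(Y)                      # interpolated time-domain samples

# Every 2nd sample of the oversampled signal matches the original (x osf).
for n in range(4):
    assert abs(2 * y[2 * n] - x[n]) < 1e-12
print("decimating the oversampled signal recovers the original")
```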
Scrambler Initialization
The scrambler initialization used on the transmission data follows the process described in IEEE Std 802.11-2012, Section 18.3.5.5 and IEEE Std 802.11ad™-2012, Section 21.3.9. The header and data
fields that follow the scrambler initialization field (including data padding bits) are scrambled by XORing each bit with a length-127 periodic sequence generated by the polynomial S(x) = x^7+x^4+1.
The octets of the PSDU are placed into a bit stream and, within each octet, bit 0 (LSB) is first and bit 7 (MSB) is last. This figure shows the generation of the sequence and the XOR operation.
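The generator can be sketched as a 7-bit LFSR in Python (an illustration of the polynomial S(x) = x^7 + x^4 + 1, not MathWorks code). With the all-ones initial state often used in textbook examples, the sequence begins 00001110... and repeats with period 127:

```python
def scrambler_sequence(state_bits, n):
    """Generate n bits of the 802.11 scrambler sequence S(x) = x^7 + x^4 + 1.
    state_bits is [x1, ..., x7] with x7 the oldest register."""
    s = list(state_bits)
    out = []
    for _ in range(n):
        bit = s[3] ^ s[6]        # feedback: x4 XOR x7
        out.append(bit)
        s = [bit] + s[:6]        # shift the new bit into the register
    return out

seq = scrambler_sequence([1] * 7, 254)
print(seq[:8])                   # [0, 0, 0, 0, 1, 1, 1, 0]
assert seq[:127] == seq[127:]    # length-127 periodic, as stated above
```

Scrambling the data is then a bitwise XOR of this sequence with the bit stream.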
Conversion from integer to bits uses left-MSB orientation. For example, initializing the scrambler with decimal 1, the bits map to these elements.
Element X^7 X^6 X^5 X^4 X^3 X^2 X^1
Bit Value 0 0 0 0 0 0 1
To generate the bit stream equivalent to a decimal, use the int2bit function. For example, for decimal 1, int2bit(1,7).' returns
ans =
  0  0  0  0  0  0  1
[1] IEEE Std 802.11-2020 (Revision of IEEE Std 802.11-2016). “Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications.” IEEE Standard for Information Technology —
Telecommunications and Information Exchange between Systems — Local and Metropolitan Area Networks — Specific Requirements.
[2] IEEE Std 802.11ax™-2021 (Amendment to IEEE Std 802.11-2020). “Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications. Amendment 1: Enhancements for High
Efficiency WLAN.” IEEE Standard for Information Technology — Telecommunications and Information Exchange between Systems. Local and Metropolitan Area Networks — Specific Requirements.
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.
Version History
Introduced in R2015b
R2023a: EHT TB waveform generation
You can specify the input cfg as an object of type wlanEHTTBConfig.
R2022b: EHT MU waveform generation
You can specify the input cfg as an object of type wlanEHTMUConfig.
See Also
|
{"url":"https://de.mathworks.com/help/wlan/ref/wlanwaveformgenerator.html","timestamp":"2024-11-11T20:57:26Z","content_type":"text/html","content_length":"185617","record_id":"<urn:uuid:eccdc637-b7c3-410d-9533-7ebe2b182c8c>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00759.warc.gz"}
|
3 Mathematical Laws to know as a Data Scientist
The Tech Platform
Some interesting laws that help you as a Data Scientist
Although data scientists work with data as their main activity, that doesn't mean mathematical knowledge is something we can do without. Data scientists need to learn and understand the mathematical
theory behind machine learning to solve business problems efficiently.
The mathematics behind machine learning is not just random notation thrown here and there; it consists of many theories and ideas. This thinking has produced many mathematical laws that contributed
to the machine learning we use today. Although you can apply mathematics in any way you want to solve a problem, mathematical laws are not limited to machine learning, after all.
In this article, I want to outline some of the interesting mathematical laws that could help you as a Data Scientist. Let’s get into it.
Benford’s Law
Benford’s law also called the Newcomb–Benford law, the law of anomalous numbers, or the first-digit law, is a mathematical law about the leading digit number in a real-world dataset.
Intuitively, the first digit of a randomly chosen number should be uniformly distributed: a leading digit of 1 should be just as probable as a leading digit of 9, each occurring ~11.1% of the time.
Surprisingly, this is not what happens.
Benford’s law states that the leading digit is likely to be small in many naturally occurring collections of numbers. Leading digit 1 happens more often than 2, leading digit 2 occurs more often than
3, and so on. Let’s try using a real-world dataset to see how this law is applicable. For this article, I would use the data from Kaggle regarding Spotify Track song from 1921–2020. From the data, I
would take the leading digit of the song durations.
From the image above, we can see that the leading digit 1 occurs most often, with frequency decreasing as the digit increases. This is exactly what Benford’s law states.
In its proper form, Benford’s law states that a set of numbers satisfies the law if the leading digit d (d ∈ {1, …, 9}) occurs with probability P(d) = log10(1 + 1/d).
From this equation, we obtain the following distribution for the leading digits.
With this distribution, we can predict that 1 appears as the leading digit about 30% of the time, more often than any other leading digit.
This law has many applications, for example fraud detection on tax forms, election results, economic numbers, and accounting figures.
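A quick sketch in Python (using hypothetical values, and assuming only the P(d) = log10(1 + 1/d) formula) computes the theoretical distribution and shows how to extract leading digits from a list of values:

```python
import math

def benford_prob(d):
    """Theoretical Benford probability for leading digit d in 1..9."""
    return math.log10(1 + 1 / d)

def leading_digit(x):
    """First significant digit of a positive number."""
    x = abs(x)
    while x >= 10:
        x /= 10
    while 0 < x < 1:
        x *= 10
    return int(x)

probs = {d: benford_prob(d) for d in range(1, 10)}
print(round(probs[1], 3))   # 0.301 -- digit 1 leads ~30% of the time

# Hypothetical track durations in milliseconds (not the Kaggle data):
durations = [201000, 185000, 97000, 143500, 320000, 1210]
print([leading_digit(v) for v in durations])   # [2, 1, 9, 1, 3, 1]
```

Note that the nine probabilities sum to 1, since the product of the ratios (d+1)/d telescopes to 10.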
Law of Large Numbers (LLN)
The Law of Large Numbers states that as the number of trials of a random process increases, the average of the results gets closer to the expected (theoretical) value.
For example, consider rolling a die. The possible outcomes of a 6-sided die are 1, 2, 3, 4, 5, and 6, so the expected value is 3.5. Each roll gives a random number from 1 to 6, but as we keep
rolling, the running average gets closer to the expected value of 3.5. This is what the Law of Large Numbers describes.
While the law is useful, the tricky part is that you need many trials or occurrences. The upside of requiring a large number is that the law is good at predicting long-term stability.
The Law of Large Numbers is different from the Law of Averages, which expresses the mistaken belief that outcomes of a random event will “even out” within a small sample. Expecting the expected
value to show up in a small sample is known as the “Gambler’s Fallacy.”
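The dice example translates directly into a short simulation (a sketch; the seed and trial counts are arbitrary choices): as the number of rolls grows, the running average approaches 3.5.

```python
import random

random.seed(42)  # arbitrary seed so the run is reproducible

def average_of_rolls(n):
    """Average of n fair six-sided die rolls."""
    return sum(random.randint(1, 6) for _ in range(n)) / n

for n in (10, 1000, 100000):
    print(n, round(average_of_rolls(n), 3))

# The gap to the expected value 3.5 shrinks as n grows; with 100000
# rolls the average lands within a few hundredths of 3.5.
assert abs(average_of_rolls(100_000) - 3.5) < 0.05
```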
Zipf’s Law
Zipf’s law originated in quantitative linguistics. It states that, given a natural-language corpus, any word’s frequency is inversely proportional to its rank in the frequency table. Thus the
most frequent word occurs approximately twice as often as the second most frequent word, three times as often as the third most frequent word, and so on.
For example, in the previous Spotify dataset, I would try to split all the words and punctuation to count them. Below is the top 12 of the most common words and their frequency.
Summing all the words in the Spotify corpus gives a total of 759,389 tokens. We can check whether Zipf’s law applies to this dataset by computing the probability of each token occurring. The most
frequent token is ‘-’, with 32,258 occurrences (a probability of ~4%), followed by ‘the’ with a probability of ~2%.
Faithful to the law, the probability keeps going down as the rank increases. Of course, there is a little deviation, but the probability falls most of the time as the frequency rank increases.
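The ideal Zipf frequencies can be sketched in Python (an idealized illustration on a made-up toy corpus, not the Spotify data): the rank-r frequency is the top frequency divided by r, and a word-count check follows the same recipe as the analysis above.

```python
from collections import Counter

def ideal_zipf(top_count, ranks):
    """Idealized Zipf frequency for each rank: f(r) = top_count / r."""
    return [top_count / r for r in ranks]

print(ideal_zipf(30000, [1, 2, 3]))   # [30000.0, 15000.0, 10000.0]

# The same word-frequency ranking on a toy corpus (hypothetical text):
corpus = "the sun and the moon and the stars".split()
ranking = Counter(corpus).most_common()
print(ranking)   # [('the', 3), ('and', 2), ('sun', 1), ('moon', 1), ('stars', 1)]
```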
Conclusion
These are some interesting mathematical laws to know as a Data Scientist, and they can definitely help you in your data science work. The laws are:
• Benford’s Law
• Law of Large Numbers
• Zipf’s Law
|
{"url":"https://www.thetechplatform.com/post/3-mathematical-laws-to-know-as-a-data-scientist","timestamp":"2024-11-04T21:32:21Z","content_type":"text/html","content_length":"1050421","record_id":"<urn:uuid:0a62fddc-7554-45c4-a82c-2f57d4b7d6c8>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00281.warc.gz"}
|
UI moving around a circle?
Hi, I just wanted to know if it was possible to make a UI move in a circular motion like the sun/moon cycle in Terraria. I want an imagelabel to go around the circumference of the circle. I have no
clue how this can be done.
2 Likes
Please take a look at this post: How would i rotate a Frame in a circle? - #3 by TopBagon
You can do that with a bezier curve. You would make 3 points just off to the left, top and right of the screen, and then just lerp the sun:
local run = game:GetService("RunService")
local sun = script.Parent.Frame

--create the 3 points
local p1 = UDim2.fromScale(-0.2, 0.4)
local p2 = UDim2.fromScale(0.5, -0.2)
local p3 = UDim2.fromScale(1.2, 0.4)

local function lerp(t)
	local cumulative = 0
	while cumulative < t do
		cumulative += run.Heartbeat:Wait()
		--make sure we use time scale from 0-1
		local d = cumulative / t
		--get current positions from p1 to p2 and from p2 to p3
		local x = p1:Lerp(p2, d)
		local y = p2:Lerp(p3, d)
		--set the sun's position to current position from x to y
		sun.Position = x:Lerp(y, d)
	end
end

--you can send the length of the animation or simply have it as a variable somewhere above
lerp(60) --for example, a 60-second pass across the sky
this is too inefficient, you’re better off using sin & cos
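As a rough, language-agnostic sketch of the sin & cos idea (shown here in Python rather than Roblox Luau; the center, radius, and period values are illustrative, not from the thread):

```python
import math

def sun_position(t, period=60.0, center=(0.5, 0.6), radius=0.5):
    """Point on a circle at time t seconds, one revolution per period.

    Coordinates are in normalized screen scale, mirroring UDim2.fromScale.
    center, radius, and period are illustrative values, not from the thread.
    """
    angle = 2 * math.pi * (t / period)
    x = center[0] + radius * math.cos(angle)
    y = center[1] - radius * math.sin(angle)  # minus: screen y grows downward
    return x, y

# t = 0 puts the sun at the right of its circle; a quarter period later, at the top.
print(sun_position(0))
print(sun_position(15))
```

In a real script you would compute this each frame and assign the result to the ImageLabel's position.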
You could just rotate with:
or if it rotates the other direction then:
1 Like
|
{"url":"https://devforum.roblox.com/t/ui-moving-around-a-circle/2836710/2","timestamp":"2024-11-06T01:36:19Z","content_type":"text/html","content_length":"29971","record_id":"<urn:uuid:b89e2e1a-bec1-4f0a-8c06-5cd5e2bb3e14>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00022.warc.gz"}
|
How Many Millimeters Is 1562 Meters?
1562 meters in millimeters
How many millimeters in 1562 meters?
1562 meters equals 1562000 millimeters
Unit Converter
Conversion formula
The conversion factor from meters to millimeters is 1000, which means that 1 meter is equal to 1000 millimeters:
1 m = 1000 mm
To convert 1562 meters into millimeters we have to multiply 1562 by the conversion factor in order to get the length amount from meters to millimeters. We can also form a simple proportion to
calculate the result:
1 m → 1000 mm
1562 m → L[(mm)]
Solve the above proportion to obtain the length L in millimeters:
L[(mm)] = 1562 m × 1000 mm/m
L[(mm)] = 1562000 mm
The final result is:
1562 m → 1562000 mm
We conclude that 1562 meters is equivalent to 1562000 millimeters:
1562 meters = 1562000 millimeters
Alternative conversion
We can also convert by utilizing the inverse value of the conversion factor. In this case 1 millimeter is equal to 6.4020486555698E-7 × 1562 meters.
Another way is saying that 1562 meters is equal to 1 ÷ 6.4020486555698E-7 millimeters.
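The forward and inverse conversions can be expressed as a tiny sketch (illustrative code, mirroring the factors quoted above):

```python
def meters_to_millimeters(meters):
    """Apply the conversion factor: 1 m = 1000 mm."""
    return meters * 1000.0

def millimeters_to_meters(mm):
    """Inverse conversion via the reciprocal factor."""
    return mm / 1000.0

assert meters_to_millimeters(1562) == 1562000.0
# The inverse factor quoted above: 1 mm expressed in units of 1562 m.
print(1 / meters_to_millimeters(1562))  # approximately 6.402e-7
```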
Approximate result
For practical purposes we can round our final result to an approximate numerical value. We can say that one thousand five hundred sixty-two meters is approximately one million five hundred sixty-two
thousand millimeters:
1562 m ≅ 1562000 mm
An alternative is also that one millimeter is approximately 6.402 × 10^-7 times one thousand five hundred sixty-two meters.
Conversion table
meters to millimeters chart
For quick reference purposes, below is the conversion table you can use to convert from meters to millimeters
|
{"url":"https://convertoctopus.com/1562-meters-to-millimeters","timestamp":"2024-11-04T23:52:27Z","content_type":"text/html","content_length":"33325","record_id":"<urn:uuid:9e5e3d12-c128-4246-ad55-8956453a27a7>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00539.warc.gz"}
|
Using R to work through Sokal and Rohlf's Biometry: Chapter 5 (Descriptive Statistics), section 5.3
Previously in this series:
Chapter 5 (Introduction to Probability Distributions: Binomial and Poisson), sections 5.1-5.2.
Section 5.3: the Poisson distribution
?Distributions #Let's look in help again to find the Poisson distribution.
?dpois #We can see that R does similar calculations for Poisson in addition to the binomial distribution.
#To get column 3 (expected absolute frequencies) in table 5.5,
#there is the recursion formula given in equations 5.11 and 5.12.
#5.12 is for calculations with sample means to get the relative expected frequency,
#and in the text below it notes what term to add to get the absolute ones given in column 3 of table 5.5
#I also want to do equation 5.12 manually for relative expected frequencies.
#This is not normally needed because R does it so nicely with the base stats function dpois().
#Some reading:
#this one has a simple one for factorials that makes it clearest to me:
rel.exp.freq.pois <- function(samplemean, i) {
  if (i == 0) return(exp(-samplemean))
  else return(rel.exp.freq.pois(samplemean, i - 1) * samplemean / i)
}
#To get absolute frequencies, multiply by 400.
#For one example:
400*rel.exp.freq.pois(1.8, 1)
#To get all the frequencies at once, use lapply.
#On page 82, they show how to calculate the coefficient of dispersion. Here is a function that will do it.
coefficient.of.dispersion <- function(data) {
  #You can input any set of data here and get the coefficient of dispersion,
  #the sample variance divided by the sample mean.
  var(data) / mean(data)
}
#Figure 5.3 shows Poisson distributions with different means.
#We can make this with the dpois() function generating y data.
#Oddly, to add the extra lines, it is easiest to use points() with
#type="l" (for lines).
ylab="Relative expected frequency",
xlab="Number of rare events per sample")
#The remaining tables and examples in this section do not add anything new to code.
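As a cross-check on the recursion of equation 5.12, the same computation can be sketched in Python (an illustration added here, not part of the original R session); the recursive route and the direct Poisson formula agree:

```python
import math

def poisson_pmf(mean, i):
    """Direct Poisson probability: e^{-mean} * mean^i / i!."""
    return math.exp(-mean) * mean**i / math.factorial(i)

def poisson_recursive(mean, i):
    """Recursion of equation 5.12: P(0) = e^{-mean}; P(i) = P(i-1) * mean / i."""
    if i == 0:
        return math.exp(-mean)
    return poisson_recursive(mean, i - 1) * mean / i

# Both routes agree; absolute expected frequencies scale by the n = 400 samples.
for i in range(5):
    assert math.isclose(poisson_recursive(1.8, i), poisson_pmf(1.8, i))
print(400 * poisson_recursive(1.8, 1))  # expected count of samples with one event
```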
|
{"url":"http://www.cmcurry.com/2016/11/using-r-to-work-through-sokal-and_15.html","timestamp":"2024-11-09T10:37:46Z","content_type":"text/html","content_length":"63798","record_id":"<urn:uuid:9f3e0b93-086b-44bf-b85b-2c5ac1ff8ff9>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00010.warc.gz"}
|
Squeeze Strategy With Confirming Indicators For ThinkOrSwim - useThinkScript Community
This Squeeze Strategy incorporates RSI for momentum confirmation, volume analysis, and a signal line to identify potential breakout opportunities.
Squeeze condition occurs when Bollinger Bands are inside Keltner Channels
Bollinger Bands: Measure volatility and identify overbought/oversold conditions
Keltner Channels: Provide a trend-following envelope
• RSI: Confirms momentum direction
• Volume Analysis: Helps confirm potential breakouts
• Linear Regression Slope: Indicates overall trend direction
• Squeeze Intensity: Measures how tight the squeeze is
• Signal Line: 9-period moving average for additional trend confirmation
Look for Squeeze Condition
: When the "Bollinger Band Squeeze" label appears, it indicates a potential buildup of volatility
Monitor Squeeze Intensity
: Lower intensity suggests a stronger potential breakout
Confirm Direction
- Use the "Direction" label to identify the overall trend
- Strong trends are indicated when slope and RSI agree (e.g., "Strong Upward Trend")
Watch for Breakouts
- When the squeeze ends (Bollinger Bands move outside Keltner Channels)
- Confirm with the "Potential Breakout" label, which also considers volume
Use the Signal Line
- Price crossing above the signal line may indicate a bullish move
- Price crossing below may indicate a bearish move
Consider Volume
: High volume (1.5x average) during a breakout suggests stronger conviction
- Squeeze + High Intensity + Strong Directional Bias + High Volume Breakout = High Probability Trade
• Always use in conjunction with other analysis and risk management techniques
• Adjust input parameters to fit your specific trading timeframe and style
# Note: This indicator is for informational purposes only. Always conduct your own analysis and manage your risk appropriately.
# Bollinger Band Squeeze Strategy using Keltner Channels
# Define input series
input length = 20;
input numDevDn = 2.0;
input numDevUp = 2.0;
input atrLength = 10;
input cloudOpacity = 50; # Adjust this value to control cloud opacity (0 to 100)
input rsiLength = 14; # RSI length for momentum confirmation
# Calculate Bollinger Bands
def middleBB = Average(close, length);
def stDev = StDev(close, length);
def lowerBB = middleBB - numDevDn * stDev;
def upperBB = middleBB + numDevUp * stDev;
# Calculate Keltner Channels (using EMA and ATR)
def emaLength = 20;
def keltnerMiddle = ExpAverage(close, emaLength);
def keltnerATR = MovingAverage(AverageType.EXPONENTIAL, TrueRange(high, close, low), atrLength);
def keltnerUpper = keltnerMiddle + numDevUp * keltnerATR;
def keltnerLower = keltnerMiddle - numDevDn * keltnerATR;
# Check for Bollinger Band squeeze condition
def bollingerSqueeze = lowerBB > keltnerLower and upperBB < keltnerUpper;
# Calculate the slope of the linear regression line of closing prices
def regressionLength = 20; # You can adjust this period for the regression calculation
def priceChange = close - close[regressionLength];
def sumX = Sum(1, regressionLength);
def sumXY = Sum(priceChange * BarNumber(), regressionLength);
def sumX2 = Sum(Sqr(BarNumber()), regressionLength);
def slope = (regressionLength * sumXY - sumX * Sum(priceChange, regressionLength)) / (regressionLength * sumX2 - Sqr(sumX));
# Calculate RSI for momentum confirmation
def rsi = RSI(length = rsiLength);
# Calculate squeeze intensity
def squeezeIntensity = AbsValue((upperBB - lowerBB) / (keltnerUpper - keltnerLower));
# Create plot for Bollinger Bands and Keltner Channels
plot bollingerUpper = if bollingerSqueeze then upperBB else Double.NaN;
plot bollingerLower = if bollingerSqueeze then lowerBB else Double.NaN;
plot keltnerUpperPlot = keltnerUpper;
plot keltnerLowerPlot = keltnerLower;
# Color customization
# Add label for Bollinger Band squeeze condition
AddLabel(bollingerSqueeze, "Bollinger Band Squeeze", Color.CYAN);
# Create colored rectangles for Bollinger Bands and Keltner Channels with adjustable opacity
AddCloud(bollingerUpper, bollingerLower, Color.YELLOW, Color.YELLOW, cloudOpacity);
# Alert indicating the direction of the stock with RSI confirmation
AddLabel(yes, "Direction: " +
if slope > 0 and rsi > 50 then "Strong Upward Trend BB KC"
else if slope > 0 and rsi <= 50 then "Weak Upward Trend BB KC"
else if slope < 0 and rsi < 50 then "Strong Downward Trend BB KC"
else if slope < 0 and rsi >= 50 then "Weak Downward Trend BB KC"
else "Sideways Trend",
if slope > 0 and rsi > 50 then Color.DARK_GREEN
else if slope > 0 and rsi <= 50 then Color.GREEN
else if slope < 0 and rsi < 50 then Color.DARK_RED
else if slope < 0 and rsi >= 50 then Color.RED
else Color.GRAY);
# Add squeeze intensity label
AddLabel(yes, "Squeeze Intensity: " + AsPercent(squeezeIntensity), Color.MAGENTA);
# Add volume analysis
def volumeAvg = Average(volume, 20);
def highVolume = volume > 1.5 * volumeAvg;
# Signal line (simple moving average of close price)
def signalLine = Average(close, 9);
plot SignalLinePlot = signalLine;
# Potential breakout signal
def potentialBreakout = bollingerSqueeze[1] and !bollingerSqueeze and highVolume;
AddLabel(potentialBreakout, "Potential Breakout", Color.YELLOW);
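For readers outside thinkorswim, the core squeeze test (both Bollinger Bands sitting inside the Keltner Channels) can be sketched in plain Python. This is a simplified stand-in, with an SMA midline and a plain high-minus-low average in place of the script's EMA and ATR, not the script's exact math:

```python
from statistics import mean, pstdev

def is_squeeze(closes, highs, lows, length=20, num_dev=2.0):
    """Return True when both Bollinger Bands sit inside the Keltner Channels.

    Simplified sketch: SMA midlines and a plain average bar range
    (high - low) stand in for thinkscript's EMA and Wilder-style ATR.
    """
    c, h, l = closes[-length:], highs[-length:], lows[-length:]
    mid = mean(c)
    dev = pstdev(c)
    upper_bb, lower_bb = mid + num_dev * dev, mid - num_dev * dev
    atr = mean(hi - lo for hi, lo in zip(h, l))
    upper_kc, lower_kc = mid + num_dev * atr, mid - num_dev * atr
    return lower_bb > lower_kc and upper_bb < upper_kc

# Flat closes with wide-ranging bars: tiny deviation, big bar ranges -> squeeze.
print(is_squeeze([100.0] * 20, [101.0] * 20, [99.0] * 20))  # True
```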
Last edited:
I like what you are doing here. I'm unable to duplicate your chart. Any help would be appreciated. Thanks
This looks great and I'm planning to use it next week. You mentioned the following line: "Monitor Squeeze Intensity: Lower intensity suggests a stronger potential breakout."
1. I wanted to confirm: how does lower intensity suggest a stronger potential breakout?
2. Also, what is the range for Squeeze Intensity % - is it 0% to 100% or different?
3. Is it possible to include Buy/Sell signals in the script?
4. Is it possible to create a scan with Buy/Sell conditions?
5. I do not see up/down arrows when I added your study to the chart. Am I missing something?
Thanks in advance.
Last edited:
Join useThinkScript to post your question to a community of 21,000+ developers and traders.
What is useThinkScript?
useThinkScript is the #1 community of stock market investors using indicators and other tools to power their trading strategies. Traders of all skill levels use our forums to learn about scripting
and indicators, help each other, and discover new ways to gain an edge in the markets.
How do I get started?
We get it. Our forum can be intimidating, if not overwhelming. With thousands of topics, tens of thousands of posts, our community has created an incredibly deep knowledge base for stock traders. No
one can ever exhaust every resource provided on our site.
If you are new, or just looking for guidance, here are some helpful links to get you started.
• The most viewed thread:
• Our most popular indicator:
• Answers to frequently asked questions:
What are the benefits of VIP Membership?
VIP members get exclusive access to these proven and tested premium indicators: Buy the Dip, Advanced Market Moves 2.0, Take Profit, and Volatility Trading Range. In addition, VIP members get access
to over 50 VIP-only custom indicators, add-ons, and strategies, private VIP-only forums, private Discord channel to discuss trades and strategies in real-time, customer support, trade alerts, and
much more. Learn all about VIP membership here.
How can I access the premium indicators?
To access the premium indicators, which are plug and play ready, sign up for VIP membership here.
|
{"url":"https://usethinkscript.com/threads/squeeze-strategy-with-confirming-indicators-for-thinkorswim.19194/","timestamp":"2024-11-14T11:06:17Z","content_type":"text/html","content_length":"102172","record_id":"<urn:uuid:2bf3936f-e089-4b6b-a48e-39c5f8cd48c3>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00398.warc.gz"}
|
Function List
Computation of wave propagation prediction for a specified scenario. All relevant information has to be passed to the function.
Computation of wave propagation prediction for a specified scenario and the specified individual points. All relevant information has to be passed to the function.
Computation of wave propagation prediction for a specified scenario and the specified trajectories. All relevant information has to be passed to the function.
This function returns the default maximum pathloss value in an outdoor scenario.
This function frees allocated memory.
This function frees allocated memory.
This function checks whether a point lies inside polygon
This function computes the intersection between trajectories (lines).
This function reads in the specified parameters of the prediction area
Function Details
int OutdoorPlugIn_ComputePrediction(const WinProp_Antenna * ParameterAntenna, WinProp_ParaMain * ParameterUrban, const WinProp_Receiver * receivingPoints, const int nrReceivingPoints, const void *
ParameterModelUrban, WinProp_Measurement * ParameterMeasurements, const WinProp_ParaRural * ParameterRural, const WinProp_ParaHybrid * ParameterCNP, const WinProp_Callback * Callback,
WinProp_Result * Resultmatrix, WinProp_RayMatrix * DataRaysOut, WinProp_Result * LOSmatrix, WinProp_ResultPlaneList * ResultPlanes, WinProp_RayMatrixList * ResultRayMatrix)
Computation of wave propagation prediction for a specified scenario. All relevant information has to be passed to the function.
const WinProp_Antenna * ParameterAntenna
Configuration of antenna (see WinProp_Antenna).
WinProp_ParaMain * ParameterUrban
Configuration of scenario (see WinProp_ParaMain).
const WinProp_Receiver * receivingPoints
Non-null, array of receiving points, of length equal to nrReceivingPoints.
const int nrReceivingPoints
The number of receiving points in receivingPoints.
const void * ParameterModelUrban
Configuration of the urban prediction model. This parameter is optional and depends on the selected prediction model:
☆ Urban Dominant Path Model (see Model_DPM).
☆ 3D Intelligent Ray Tracing (see Model_UrbanIRT).
WinProp_Measurement * ParameterMeasurements
Measurement data for calibration of prediction model (see WinProp_Measurement). This parameter is optional.
const WinProp_ParaRural * ParameterRural
Configuration of rural prediction model. Only relevant if a rural prediction is done (see WinProp_ParaRural).
const WinProp_ParaHybrid * ParameterCNP
Configuration of CNP mode. This is not yet supported.
const WinProp_Callback * Callback
Configuration of callback functions (see WinProp_Callback). This parameter is optional.
WinProp_Result * Resultmatrix
If non-null, the result matrix, gets allocated during computation, it is not needed to call WinProp_AllocateResult prior to the computation.
WinProp_RayMatrix * DataRaysOut
If non-null, the ray matrix, gets allocated during computation.
WinProp_Result * LOSmatrix
If non-null, the LOS matrix, gets allocated during computation, it is not needed to call WinProp_AllocateResult prior to the computation.
WinProp_ResultPlaneList * ResultPlanes
If non-null, the ResultPlanes, gets allocated during computation, it is not needed to call WinProp_Structure_Init_ResultPlaneList prior to the computation.
WinProp_RayMatrixList * ResultRayMatrix
If non-null, the result ResultRayMatrix gets allocated during computation, it is not needed to call WinProp_Structure_Init_RayMatrixList prior to the computation.
Returns An integer: 0 = success, failure otherwise.
int OutdoorPlugIn_ComputePoints(const WinProp_Antenna * ParameterAntenna, WinProp_ParaMain * ParameterUrban, const void * ParameterModelUrban, const WinProp_ParaRural * ParameterRural, const
WinProp_Receiver * receivingPoints, const int nrReceivingPoints, const WinProp_Callback * Callback, WinProp_ResultPointsList * resultPoints)
Computation of wave propagation prediction for a specified scenario and the specified individual points. All relevant information has to be passed to the function.
const WinProp_Antenna * ParameterAntenna
Configuration of antenna (see WinProp_Antenna).
WinProp_ParaMain * ParameterUrban
Configuration of scenario (see WinProp_ParaMain).
const void * ParameterModelUrban
Configuration of the urban prediction model. This parameter is optional and depends on the selected prediction model:
☆ Urban Dominant Path Model (see Model_DPM).
☆ 3D Intelligent Ray Tracing (see Model_UrbanIRT).
const WinProp_ParaRural * ParameterRural
Configuration of rural prediction model. Only relevant if a rural prediction is done (see WinProp_ParaRural).
const WinProp_Receiver * receivingPoints
Non-null, array of receiving points, of length equal to nrReceivingPoints.
const int nrReceivingPoints
The number of receiving points in receivingPoints.
const WinProp_Callback * Callback
Configuration of callback functions (see WinProp_Callback). This parameter is optional.
WinProp_ResultPointsList * resultPoints
Non-null, structure containing the points results.
Returns An integer: 0 = success, otherwise an error.
int OutdoorPlugIn_ComputeTrajectories(const WinProp_Antenna * ParameterAntenna, WinProp_ParaMain * ParameterUrban, const void * ParameterModelUrban, const WinProp_ParaRural * ParameterRural, const
WinProp_Trajectory * trajectories, const int nrTrajectories, const WinProp_Callback * Callback, WinProp_ResultTrajectoryList * resultTrajectories)
Computation of wave propagation prediction for a specified scenario and the specified trajectories. All relevant information has to be passed to the function.
const WinProp_Antenna * ParameterAntenna
Configuration of antenna (see WinProp_Antenna).
WinProp_ParaMain * ParameterUrban
Configuration of scenario (see WinProp_ParaMain).
const void * ParameterModelUrban
Configuration of the urban prediction model. This parameter is optional and depends on the selected prediction model:
☆ Urban Dominant Path Model (see Model_DPM).
☆ 3D Intelligent Ray Tracing (see Model_UrbanIRT).
const WinProp_ParaRural * ParameterRural
Configuration of rural prediction model. Only relevant if a rural prediction is done (see WinProp_ParaRural).
const WinProp_Trajectory * trajectories
The trajectories, an array of size nrTrajectories.
const int nrTrajectories
The number of trajectories.
const WinProp_Callback * Callback
Configuration of callback functions (see WinProp_Callback). This parameter is optional.
WinProp_ResultTrajectoryList * resultTrajectories
Non-null, structure containing the trajectory results.
Returns An int.
int OutdoorPlugIn_GetDefaultValue(int Parameter, double * ReturnValue)
This function returns the default maximum pathloss value in an outdoor scenario.
int Parameter
This parameter is defined by the macro INTERMEDIATE_DEFAULT_VALUE_MAXPATHLOSS.
double * ReturnValue
The maximum pathloss value.
Returns An integer: 0 = success, failure otherwise.
int OutdoorPlugIn_FreePredictionArea(COORDPOINT ** Polygon)
This function frees allocated memory.
COORDPOINT ** Polygon
Matrix with coordinate points.
Returns An integer: 0 = success, failure otherwise.
int OutdoorPlugIn_FreePredictionHeights(double ** Heights)
This function frees allocated memory.
double ** Heights
Matrix with prediction heights.
Returns An integer: 0 = success, failure otherwise.
int OutdoorPlugIn_PointInsidePolygon(int NrCorners, const COORDPOINT * Corners, COORDPOINT Point, int Projection, int * Success)
This function checks whether a point lies inside polygon
int NrCorners
Number of corners of the polygon.
const COORDPOINT * Corners
Coordinates of the polygon's corner points.
COORDPOINT Point
Coordinates of the test point.
int Projection
Projection plane:
☆ 1 = Projection in the Y-Z plane.
☆ 2 = Projection in the X-Z plane.
☆ 3 = Projection in the X-Y plane.
int * Success
Indicates success (1) or failure (0) of the inner computations.
Returns An integer: 1 = the point lies in the polygon, 0 otherwise.
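For reference, a common way to implement such a test is the ray-casting algorithm: count how many polygon edges a horizontal ray from the point crosses, and an odd count means the point is inside. The sketch below is a generic 2D (X-Y projection) version written for illustration; it is an assumption on my part, not WinProp's actual implementation:

```python
def point_in_polygon(corners, point):
    """Ray-casting test: an odd number of edge crossings means inside.

    corners: list of (x, y) vertices; point: (x, y). Generic sketch only;
    boundary points and degenerate polygons are not specially handled.
    """
    x, y = point
    inside = False
    n = len(corners)
    for i in range(n):
        x1, y1 = corners[i]
        x2, y2 = corners[(i + 1) % n]
        # Does edge (x1,y1)-(x2,y2) straddle the horizontal line through y?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses that horizontal line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon(square, (2, 2)))   # True
print(point_in_polygon(square, (5, 2)))   # False
```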
int OutdoorPlugIn_LineIntersection(COORDPOINT Line1Point1, COORDPOINT Line1Point2, COORDPOINT Line2Point1, COORDPOINT Line2Point2, COORDPOINT * IntersectionPoint)
This function computes the intersection between trajectories (lines).
COORDPOINT Line1Point1
Starting point of Line 1.
COORDPOINT Line1Point2
End point of Line 1.
COORDPOINT Line2Point1
Starting point of Line 2.
COORDPOINT Line2Point2
End point of Line 2.
COORDPOINT * IntersectionPoint
The intersection point between the two lines.
Returns An integer:
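As background, a 2D line-line intersection is commonly computed with the determinant formulation below. This is a generic illustration of the technique, not WinProp's COORDPOINT handling (which works with 3D trajectories):

```python
def line_intersection(p1, p2, p3, p4):
    """Intersection of the infinite lines through p1-p2 and p3-p4.

    Returns (x, y), or None when the lines are parallel. Generic 2D
    determinant formulation; an illustrative sketch only.
    """
    x1, y1 = p1
    x2, y2 = p2
    x3, y3 = p3
    x4, y4 = p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if denom == 0:
        return None  # parallel or coincident lines
    det12 = x1 * y2 - y1 * x2
    det34 = x3 * y4 - y3 * x4
    x = (det12 * (x3 - x4) - (x1 - x2) * det34) / denom
    y = (det12 * (y3 - y4) - (y1 - y2) * det34) / denom
    return (x, y)

print(line_intersection((0, 0), (2, 2), (0, 2), (2, 0)))  # (1.0, 1.0)
```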
int OutdoorPlugIn_ReadPredictionArea(const char * FileName, int * NumberCorners, COORDPOINT ** Polygon, int DatabaseFileType, double * Resolution, int * NumberHeights, double ** Heights, int * PolygonUsed)
This function reads in the specified parameters of the prediction area.
const char * FileName
Name of database.
int * NumberCorners
Number of corners of the polygon.
COORDPOINT ** Polygon
Coordinates of the polygon's corner points.
int DatabaseFileType
Type of the database file:
☆ DATABASE_TYPE_MAPINFO = .mif database
☆ DATABASE_TYPE_WINPROP_ODB = .odb database
☆ DATABASE_TYPE_WINPROP_OPB = .opb database
☆ DATABASE_TYPE_WINPROP_OCB = .ocb database
☆ DATABASE_TYPE_WINPROP_OIB = .oib database.
double * Resolution
int * NumberHeights
Number of prediction heights.
double ** Heights
Prediction heights.
int * PolygonUsed
Pointer to an integer that specifies if a polygon is used (1) or not (0).
Returns An int.
The documentation was generated from the following file:
• source.eng/Interface/OutdoorPlugIn.h
|
{"url":"https://help.altair.com/winprop/topics/winprop/user_guide/appendix/api_auto_generated/group__outdoor__prop__funcs.htm","timestamp":"2024-11-05T12:58:51Z","content_type":"application/xhtml+xml","content_length":"87728","record_id":"<urn:uuid:18ecf57b-d99c-4bcc-9fbf-9e6ed2e5fe29>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00623.warc.gz"}
|
Big Omega
We define big-oh notation by saying f(n)=O(g(n)) if there exists some constant c such that for all large enough n, f(n)≤ c g(n). If the same holds for all c>0, then f(n)=o(g(n)), the little-oh
notation. Big-oh and little-oh notation come in very handy in analyzing algorithms because we can ignore implementation issues that could cost a constant factor.
To describe lower bounds we use the big-omega notation f(n)=Ω(g(n)) usually defined by saying for some constant c>0 and all large enough n, f(n)≥c g(n). This has a nice symmetry property, f(n)=O(g
(n)) iff g(n)=Ω(f(n)). Unfortunately it does not correspond to how we actually prove lower bounds.
For example consider the following algorithm to solve perfect matching: If the number of vertices is odd then output "No Perfect Matching" otherwise try all possible matchings.
We would like to say the algorithm requires exponential time, but in fact you cannot prove an Ω(n^2) lower bound using the usual definition of Ω, since the algorithm runs in linear time for odd n. We
should instead define f(n)=Ω(g(n)) by saying that for some constant c>0, f(n)≥ c g(n) for infinitely many n. This gives a nice correspondence between upper and lower bounds: f(n)=Ω(g(n)) iff f(n) is not o(g(n)).
On a related note some researchers like to say f(n)∈O(g(n)), viewing O(g(n)) as a set of functions. This trades off a nice clear unambiguous notation with something ugly for the sake of formality.
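To make the perfect-matching example concrete, here is a small illustration (a hypothetical step-count model added here, not from the post) of a function that is Ω(n^2) under the "infinitely often" definition but not under the "for all large n" definition:

```python
def runtime(n):
    """Toy model of the matching algorithm's step count (illustrative only):
    linear for odd n (immediate rejection), exponential for even n."""
    return n if n % 2 == 1 else 2 ** n

# Under the "for all large n" definition, runtime(n) is NOT Omega(n^2):
# odd n keep the ratio runtime(n)/n^2 heading to zero.
odd_ratios = [runtime(n) / n**2 for n in range(1, 40, 2)]
# Under the "infinitely often" definition it IS Omega(n^2):
# even n make the ratio unbounded.
even_ratios = [runtime(n) / n**2 for n in range(2, 40, 2)]
print(max(odd_ratios), max(even_ratios))
```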
25 comments:
1. How is f(n) = O(g(n)) nice clear unambiguous?
If f(n) = O(g(n)) and h(n) = O(g(n)), do we have f(n) = h(n)?
2. I use the "f(n)=O(g(n))" notation myself, but I must admit that the "f(n) \in O(g(n))" option has merit. First, it is useful to think of this as a set of functions when doing "O arithmetic", as
in "(1+o(1))g(n)" or "n^{(O(1))}". Also, it would help the lower bound issues by writing "f(n) \not\in O(g(n))", and saving the Omega notation for the few cases where we need it as it is now.
On the other hand most of us are already used to "f(n)=O(g(n))", and it is not as if as we are anywhere near the level of notational abuse of, say, Quantum Field Theory...
- Eldar.
3. I like the set definition myself, but in writing I might say "T(n) is O(n)" where the "is" is substituting a "=". (Equality is two directional, while "is" is one directional, depending on what
the definition of "is" is.)
I'm confused by the little-oh definition: isn't f(n)<c*g(n) what it means? (I.e., the same as the big-O definition, but with a < instead of a >=, and c is always positive?)
4. Usually "f(n)=o(g(n))" means that f(n)/g(n)-->0, or alternatively that for every c>0, |f(n)|<c|g(n)| for n large enough.
- Eldar.
5. I don't understand the statement: "f(n)=?(g(n)) iff g(n) is not o(f(n))"
Consider the case where g(n) represents constant time, and f(n) is say exponential time. Clearly g(n) is a lower bound for f(n), so the condition f(n)=?(g(n)) holds. Also clearly, f(n) is a loose
upper bounds for g(n), so g(n) is o(f(n)), contradicting the statement above.
6. I'll try again. (Whatever happened to Unicode?)
I don't understand the statement: "f(n)=Omega(g(n)) iff g(n) is not o(f(n))"
Consider the case where g(n) represents constant time, and f(n) is say exponential time. Clearly g(n) is a lower bound for f(n), so the condition f(n)=Omega(g(n)) holds. Also clearly, f(n) is a
loose upper bounds for g(n), so g(n) is o(f(n)), contradicting the statement above.
7. I got that backwards. Should have been "f(n) is Ω(g(n)) iff f(n) is not o(g(n))". I'll fix it in the post.
8. Thanks, makes perfect sense now. -Jon
9. Count one more vote for the "?" crowd. :-) Of course, in keeping with tradition I do use "= O(g(n))" in writing, but I wish everyone would move to the more technically correct "?". The first
anonymous commenter has already pointed out my main problem with "=".
-- Amit Chakrabarti
10. Hmm, when I posted the above, I thought I was typing a "belongs to" character (LaTeX $\in$), but it has appeared as "?" on this Firefox window on this Mac.
11. Use the HTML entity &isin; for ∈ and &Omega; for Ω. You can see a list of special characters here.
12. f(n) is O(g(n)) reads better than writing out set membership and is more accurate than using =.
Given this notation there is no need to define the weak Ω at all: why not simply say that f(n) is not o(g(n)) instead?
The real temptation to use ∈ instead of 'is' comes when doing extensive arithmetic with O(g(n)) quantities.
It is awkward to keep connecting lines with 'which is'
instead of set equality.
13. It is natural that everybody gets confused by the representation f(n)=O(g(n)). One point I want to clarify is that this representation is an assertion. Some authors use it (an abuse of notation) with the intended meaning "f(n) is O(g(n))", i.e., f(n) belongs to O(g(n)).
14. Would log(n!) be big omega of n log n?
15. If f(x)<=c1O(a(x)) and g(x)<=c2O(b(x))
from comment 1 above
we know that
f(x)*g(x) <= maxof(c1orc2) O(a(x)) * O(b(x))
17. I have learned discrete math at university, so please show me: how do I choose k when proving a Big-O bound? What's the principle?
20. Is there any positive function f(n) which is neither O(n) nor big-omega(n)??
1. n0pe
2. n0pe
sara khan
21. nooooooooooooooooooooooooooooooooooooooooooooooooooo
22. in which case omega is required?
23. Yes: Θ(n), which lies between O(n) and big-omega(n).
Puzzle: Cycle Containing Two Nodes
The following simple puzzle was circulating among Romanian olympiad participants around 1998. It was supposed to be a quick way to tell apart algorithmists from mere programmers.
Given an undirected graph G, and two vertices u and v, find a cycle containing both u and v, or report that none exists. Running time O(m).
Update: Simple ideas based on a DFS (like: find one path, find another) do not work. Think of the following graph:
If you first find the path s → a → b → t, you will not find a second.
The one-line answer is: try to route two units of flow from s to t in the unit-capacity graph (with unit node capacities if you want simple cycles). This is not the same as two DFS searches, because the second DFS is in the residual graph (it can go back on the edges of the first DFS).
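The flow answer can be sketched in a few lines (an editor's illustration, not the original author's code; `cycle_through` is an assumed name). Each undirected edge becomes two opposite unit-capacity arcs, and the second search runs in the residual graph, so it may undo arcs used by the first. This finds two edge-disjoint u–v paths; for simple cycles one would additionally split each node to enforce unit node capacities, as the post notes.

```python
from collections import defaultdict, deque

def cycle_through(edges, u, v):
    """True iff two edge-disjoint u-v paths exist, i.e. some cycle
    (closed walk) of the undirected graph contains both u and v."""
    cap = defaultdict(int)
    adj = defaultdict(set)
    for a, b in edges:
        cap[(a, b)] += 1   # each undirected edge -> two unit arcs
        cap[(b, a)] += 1
        adj[a].add(b)
        adj[b].add(a)

    def augment():
        # One BFS in the residual graph; True if one more unit fits.
        parent = {u: None}
        queue = deque([u])
        while queue:
            x = queue.popleft()
            if x == v:
                while parent[x] is not None:  # push flow along path
                    p = parent[x]
                    cap[(p, x)] -= 1
                    cap[(x, p)] += 1          # residual (back) arc
                    x = p
                return True
            for y in adj[x]:
                if y not in parent and cap[(x, y)] > 0:
                    parent[y] = x
                    queue.append(y)
        return False

    return augment() and augment()
```

On a 4-cycle, `cycle_through([(0,1),(1,2),(2,3),(3,0)], 0, 2)` is True; on the path 0–1–2 it is False, since the second augmenting search finds no residual capacity out of the source.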
About the previous puzzle:
As many people noticed, Alice can guarantee a win or a draw. She computes the sum of the elements on odd positions and the sum on the even positions. Depending on which is higher, she only plays odd
positions or only even positions. (Bob has no choice, since the subarrays he's left with always have the ends of the same parity.)
But how do you compute the optimal value for Alice? If the sum of even and odd is equal, how can Alice determine whether she can win, or only draw? A simple dynamic program running in O(n^2) time
works. Can you solve it faster?
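Assuming the previous puzzle is the standard take-from-either-end game (which matches the odd/even-position strategy described above), the O(n^2) dynamic program can be sketched as follows (editor's illustration; `alice_best_margin` is an assumed name). A positive margin means Alice can win; zero means she can only draw.

```python
def alice_best_margin(a):
    """O(n^2) DP: by how much does the first player (Alice) beat the
    second player with optimal play, taking from either end?
    best[i][j] = (current player's total) - (opponent's total)
    on the subarray a[i..j]."""
    n = len(a)
    best = [[0] * n for _ in range(n)]
    for i in range(n):
        best[i][i] = a[i]                 # one element left: take it
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            # take a[i] or a[j]; the opponent then plays optimally
            best[i][j] = max(a[i] - best[i + 1][j],
                             a[j] - best[i][j - 1])
    return best[0][n - 1]
```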
16 comments:
single DFS from u. there is a cycle iff there is a whitepath to u from a node in the subtree of v.
yeah just backtrack through parents in DFS tree if v hit; if v not hit, no cycle. DFS times O(m), backtrack time O(m).
It seems to be straightforward flow with value 2. Am I right?
i was anon 2, who MISREAD.
corrected solution--DFS from u, backtrack after hitting v (through parents in tree) to find a u--v path. cut this path's edges from the graph, and DFS to find another path. works since undirected
graph has cycle iff has pair of paths.
the 2-flow idea works too, in fact it'll do what i wrote (find a path, mark it as used, find another).
Do you want edge-disjoint/node-disjoint cycles?
Doesn't matter, both have an O(m) solution.
An interesting variant would be to find such a cycle in a directed graph.
But that is simply DFS, no?
Mihai, how do you define the residual graph for a flow problem with node capacities in an *undirected* graph?
Just trying to see if you are an mathematician, or a mere algorithmist. :)
This comment has been removed by the author.
What's wrong with the residual network for an undirected node-capacitated graph? It seems like a perfectly healthy creature to me. It's of course a directed graph.
Just define it. Let's say I have G=(V_G,E_G) undirected, with capacities u_G, and a flow x on it. What exactly is R=(V_R, E_R), and what are the capacities u_R?
Dude, you shouldn't run flow on the undirected graph if you want to add node capacities. Every node v is split into v1 and v2; all edges have a version coming into v1 and one going out of v2.
There is a single edge between v1 and v2 with the capacity of the node. The residual network is just what the residual network is.
Ok, so let's look at your directed network (assume no node capacities for the moment).
Say you find a flow given by two paths v, v_1, v_2, ..., v_k, u
and v, w_1, w_2, ..., v_7, v_6, ..., v_3, v_2, ..., v_10, v_9, ..., w_l, u.
What next?
Ok, I think I understand your problem: you may get a flow plus a circulation, instead of just a flow.
0) Recap your lectures on flow :) These things have fairly standard fixes.
1) You can avoid the extra circulation by implementing your flow carefully. More precisely, the back edges in the residual graph should have priority in your 2nd run of DFS (this way, if there is
a path going back on some edges, you will go back in a contiguous way, not hop on and off the back path).
2) Otherwise, you can easily get rid of the extra circulation. In the node-capacitated case, just keep the edges with nonzero flow that are in the same connected component with the source and the sink.
Without node capacities, do two DFS from the source to the sink, staying only on edges with nonzero flow. The 2nd scan cannot get stuck, by flow conservation (what comes in must go out).
PS: You should really implement this (I did it in 98 or so). You will understand these technical details much better when you need to think of them carefully.
Charge - Fridge Physics
The size of the current is the rate of flow of charge. Electrons are negatively charged particles which transfer energy through wires as electricity.
What is Charge?
The size of the current is the rate of flow of charge. Electrons are negatively charged particles which transfer energy through wires as electricity. Charge is measured in coulombs (C). Electrons are really small, and the effect of one electron would be very difficult to measure; it is easier to measure the effect of a large number of electrons. One coulomb of charge contains about 6 × 10^18 electrons.
Charge equation
To calculate Charge we use this equation.
$Q = I \, t$
Charge demo
In this tutorial you will learn how to calculate the charge flowing in an electrical circuit.
Chilled practice question
Calculate the charge when a current of 16 A flows for 2 minutes.
Frozen practice question
How long must a current of 26 A flow to transfer 936 kC?
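Both practice questions can be checked with a short script (an editor's sketch; the function names are illustrative). Remember to convert minutes to seconds and kilocoulombs to coulombs before applying Q = I t:

```python
def charge(current_amps, time_seconds):
    # Q = I * t : coulombs = amperes * seconds
    return current_amps * time_seconds

def time_for_charge(charge_coulombs, current_amps):
    # rearranged: t = Q / I
    return charge_coulombs / current_amps

# Chilled question: 16 A for 2 minutes = 120 s
print(charge(16, 2 * 60))           # 1920 (coulombs)

# Frozen question: 936 kC = 936 000 C at 26 A
print(time_for_charge(936_000, 26)) # 36000.0 (seconds, i.e. 10 hours)
```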
Science in context
The size of the current is the rate of flow of charge.
Science:Math Exam Resources/Courses/MATH102/December 2010/Question 01 (c)
MATH102 December 2010
Question 01 (c)
For this short-answer question, only the answers (placed in the boxes) will be marked.
Find the derivative of
${\displaystyle \displaystyle f(x)=\left(\ln(x^{2}+1)\right)^{3}}$
Make sure you understand the problem fully: What is the question asking you to do? Are there specific conditions or constraints that you should take note of? How will you know if your answer is
correct from your work only? Can you rephrase the question in your own words in a way that makes sense to you?
If you are stuck, check the hint below. Consider it for a while. Does it give you a new idea on how to approach the problem? If so, try it!
You will need the chain rule, twice.
Checking a solution serves two purposes: helping you if, after having used the hint, you still are stuck on the problem; or if you have solved the problem and would like to check your work.
• If you are stuck on a problem: Read the solution slowly and as soon as you feel you could finish the problem on your own, hide it and work on the problem. Come back later to the solution if you
are stuck or if you want to check your work.
• If you want to check your work: Don't only focus on the answer, problems are mostly marked for the work you do, make sure you understand all the steps that were required to complete the problem
and see if you made mistakes or forgot some aspects. Your goal is to check that your mental process was correct, not only the result.
We want to use the chain rule. So let the outer function ƒ(x) = x^3 and the inner function g(x) = ln(x^2+1). Then
{\displaystyle {\begin{aligned}\left[\left(\ln(x^{2}+1)\right)^{3}\right]'&=\left[f(g(x))\right]'\\&=f'(g(x))g'(x)\\&=3(g(x))^{2}\left[\ln(x^{2}+1)\right]'\\&=3(\ln(x^{2}+1))^{2}\left[\ln(x^{2}+1)\right]'\end{aligned}}}
In order to find the derivative of ln(x^2+1) we use the chain rule again. So let's set h(x) = ln(x) and k(x) = x^2+1. Then
{\displaystyle {\begin{aligned}\left[\ln(x^{2}+1)\right]'&=\left[h(k(x))\right]'\\&=h'(k(x))k'(x)\\&={\frac {1}{k(x)}}2x\\&={\frac {2x}{x^{2}+1}}\end{aligned}}}
Putting these results together we get our final answer:
${\displaystyle \left[\left(\ln(x^{2}+1)\right)^{3}\right]'=3(\ln(x^{2}+1))^{2}{\frac {2x}{x^{2}+1}}}$
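As a quick self-check of the final answer, it can be compared against a numerical derivative (an editor's sketch using a central finite difference; not part of the original solution):

```python
import math

def f(x):
    return math.log(x**2 + 1) ** 3

def f_prime(x):
    # the answer derived above: 3 ln(x^2+1)^2 * 2x / (x^2+1)
    return 3 * math.log(x**2 + 1)**2 * 2 * x / (x**2 + 1)

# central finite difference agrees with the closed form
h = 1e-6
for x0 in (0.5, 1.0, 2.0):
    numeric = (f(x0 + h) - f(x0 - h)) / (2 * h)
    assert abs(numeric - f_prime(x0)) < 1e-5
```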
millimetre to cm
A millimetre (mm) is one thousandth of a metre, the base unit of length in the SI (metric) system; a centimetre (cm) is one hundredth of a metre. There are therefore 10 millimetres in a centimetre, 1000 millimetres in a metre, and 100 centimetres in a metre. The symbol for millimeter is mm and the symbol for centimeter is cm (the international spellings are millimetre and centimetre).

To convert millimetres to centimetres, divide by 10 (equivalently, multiply by 0.1); to convert centimetres to millimetres, multiply by 10:

centimeter = millimeter × 0.1 (or millimeter ÷ 10)
millimeter = centimeter × 10

Calculation examples:
8 mm ÷ 10 = 0.8 cm
35 mm ÷ 10 = 3.5 cm
880 mm × 0.1 = 88.0 cm
1.2 cm × 10 = 12 mm
9 cm × 10 = 90 mm

Quick conversion chart of millimetre to cm:
1 mm = 0.1 cm
10 mm = 1 cm
20 mm = 2 cm
30 mm = 3 cm
40 mm = 4 cm
50 mm = 5 cm
100 mm = 10 cm
200 mm = 20 cm

To use the converter, enter a value in the mm field and click the "Calculate cm" button; your answer will appear in the cm field. If the result of your conversion is 0, try increasing the "Decimals" setting. Note that rounding errors may occur, so always check the results. You can also do the reverse unit conversion, and conversion tables are available for SI units as well as English units, currency, and other data.

Related units and facts:
- One millimetre equals 1000 micrometres or 1 000 000 nanometres. Since an inch is officially defined as exactly 25.4 millimetres, a millimetre is equal to exactly 5/127 (≈ 0.03937) of an inch; the distance d in millimetres equals the distance d in inches times 25.4, so 20 inches = 20 × 25.4 = 508 mm.
- The millimetre markings are the smallest lines on a ruler: the distance between 0 and the first mark is one millimetre. Millimetres are used to measure very small but visible-scale distances, such as the thickness of the rubber sheet of a table-tennis bat.
- A centimetre is approximately the width of the fingernail of an adult person; the diagonal of a television screen is measured in centimetres. The centimetre was the base unit of length in the now-deprecated centimetre–gram–second (CGS) system, and is a non-standard factor today since factors of 10^3 are often preferred, but it remains a practical unit for many everyday measurements.
- The corresponding units of area are the square millimetre and square centimetre; the corresponding units of volume are the cubic millimetre and cubic centimetre.
- The most common metric units of length are the kilometre (km), the metre (m), the centimetre (cm), and the millimetre (mm).
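The conversion rules above reduce to one-line functions (an editor's illustrative sketch):

```python
def mm_to_cm(mm):
    # 10 mm = 1 cm, so divide by 10 (equivalently, multiply by 0.1)
    return mm / 10

def cm_to_mm(cm):
    return cm * 10

def inches_to_mm(inches):
    # 1 inch is defined as exactly 25.4 mm
    return inches * 25.4

print(mm_to_cm(880))  # 88.0
print(mm_to_cm(35))   # 3.5
print(cm_to_mm(9))    # 90
```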
10 millimeters or 1/100 th ( 10 )... [ cm ] = 880 * 0.1 = 0.8 cm ( s *. Names for units of length used in SI area is the cubic millimetre millimeter / 10 millimetre to cm. Read and accepted our Terms
of Service and Privacy Policy of millimetre to cm person!
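The conversion above amounts to a pair of one-line helper functions (a minimal sketch; the function names are illustrative):

```python
def mm_to_cm(mm: float) -> float:
    """Convert millimetres to centimetres: divide by 10."""
    return mm / 10

def cm_to_mm(cm: float) -> float:
    """Convert centimetres to millimetres (the reverse): multiply by 10."""
    return cm * 10

print(mm_to_cm(35))   # 3.5
print(mm_to_cm(880))  # 88.0
print(cm_to_mm(9))    # 90
```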
|
{"url":"http://cwlinux.com/carte-grise-tlymyym/6776cb-millimetre-to-cm","timestamp":"2024-11-05T23:10:22Z","content_type":"text/html","content_length":"37342","record_id":"<urn:uuid:78c7977b-b336-4d93-9975-e6f928c79c66>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00491.warc.gz"}
|
Reducer K value - EnggCyclopedia
Reducer K value
What is K factor of K value for piping fittings?
The frictional pressure drop across a length of pipe depends on the type of fluid (its density, viscosity, etc.), the inner surface of the pipe, and the fluid flow rate.
It is calculated using Darcy's equation.
This equation represents the frictional pressure drop for the straight length of a pipe. But when different piping fittings and valves are also included in the pipe run, they modify the flow itself
and that also contributes to the overall pressure loss.
K factor or K value for different piping fittings accounts for the additional frictional losses contributed by these fittings and valves. The K value is then used to calculate the 'equivalent length' of straight pipe that would cause the same frictional loss.
Using K value to calculate frictional losses in a piping system
Equivalent length, L[eq] = K × (D/4f)
where, K is the K value for a fitting (or for all fittings combined)
D is the pipe diameter
f is the Fanning friction factor
This equivalent length is then used in the above Darcy's equation to calculate the frictional pressure losses across the pipe run, including all the fittings and valves in the path.
Note that the equivalent length calculated from the above equation is added to the actual length of the pipe run, i.e. the equivalent length represents the additional pressure loss caused by the shape
of the fittings.
If the K factor is directly multiplied by (ρv²/2), that gives the additional pressure drop across the corresponding fittings (equivalently, K·v²/2g gives the additional head loss).
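The two relations above can be sketched as short helper functions (the numeric values below are purely illustrative, not from the article):

```python
def equivalent_length(K: float, D: float, f: float) -> float:
    """Equivalent length of straight pipe for a fitting: L_eq = K * D / (4 f),
    where f is the Fanning friction factor and D the pipe diameter."""
    return K * D / (4 * f)

def fitting_pressure_drop(K: float, rho: float, v: float) -> float:
    """Pressure drop (Pa) across a fitting: dP = K * rho * v**2 / 2."""
    return K * rho * v**2 / 2

# Example: a reducer with K = 0.5 in a 0.1 m pipe, Fanning f = 0.005,
# water (rho = 1000 kg/m3) flowing at 2 m/s.
print(equivalent_length(0.5, 0.1, 0.005))        # 2.5 m of extra pipe
print(fitting_pressure_drop(0.5, 1000.0, 2.0))   # 1000.0 Pa
```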
K value for piping reducer / expander
K factor calculator
For quick calculation of equivalent length and frictional losses across a pipe run, approximate K value for reducers and expander joints can be considered to be 0.5.
You can also use this K factor calculator for different piping fittings for a quick estimation of the equivalent length.
Manually calculate K factor for reducer (or expander)
Alternatively, you can also use the following equations to calculate the K factor for reducer / expander joints.
K value for sudden contraction
K value for gradual contraction
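The article's own equations for these two cases were given as images and are not reproduced here. As a stand-in, the standard textbook correlations for abrupt area changes can be sketched as follows (an assumption, not necessarily the exact forms the article used; β = d/D is the small-to-large diameter ratio, and both K values are referenced to the velocity in the smaller pipe):

```python
def k_sudden_contraction(beta: float) -> float:
    """Approximate loss coefficient for a sudden contraction: K = 0.5*(1 - beta^2)."""
    return 0.5 * (1 - beta**2)

def k_sudden_expansion(beta: float) -> float:
    """Borda-Carnot loss coefficient for a sudden expansion: K = (1 - beta^2)^2."""
    return (1 - beta**2) ** 2

print(k_sudden_contraction(0.5))  # 0.375
print(k_sudden_expansion(0.5))    # 0.5625
```

Note that for β → 1 (no area change) both coefficients go to zero, as expected.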
|
{"url":"https://enggcyclopedia.com/2019/04/reducer-k-value/","timestamp":"2024-11-08T02:05:27Z","content_type":"text/html","content_length":"195779","record_id":"<urn:uuid:a294fbb0-ad8d-47ce-9e73-498ca8471adc>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00519.warc.gz"}
|
Selina Concise Mathematics Class 6 ICSE Solutions Chapter 27 Quadrilateral - CBSE Tuts
Selina Publishers Concise Mathematics Class 6 ICSE Solutions Chapter 27 Quadrilateral
Quadrilateral Exercise 27A – Selina Concise Mathematics Class 6 ICSE Solutions
Question 1.
Two angles of a quadrilateral are 89° and 113°. If the other two angles are equal; find the equal angles.
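As a quick numeric check of Question 1, using the fact that the four angles of a quadrilateral sum to 360° (a sketch, not part of the published solutions):

```python
# Angle sum of a quadrilateral is 360 degrees:
# 89 + 113 + 2x = 360  =>  x = (360 - 89 - 113) / 2
known = [89, 113]
x = (360 - sum(known)) / 2
print(x)  # 79.0 -> each of the two equal angles is 79 degrees
```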
Question 2.
Two angles of a quadrilateral are 68° and 76°. If the other two angles are in the ratio 5 : 7; find the measure of each of them.
Question 3.
Angles of a quadrilateral are (4x)°, 5(x+2)°, (7x-20)° and 6(x+3)°. Find
(i) the value of x.
(ii) each angle of the quadrilateral.
Question 4.
Use the information given in the following figure to find :
(i) x
(ii) ∠B and ∠C
Question 5.
In quadrilateral ABCD, side AB is parallel to side DC. If ∠A : ∠D = 1 : 2 and ∠C : ∠B = 4:5
(i) Calculate each angle of the quadrilateral.
(ii) Assign a special name to quadrilateral ABCD.
Question 6.
From the following figure find ;
(i) x,
(ii) ∠ABC,
(iii) ∠ACD.
Question 7.
Given : In quadrilateral ABCD ; ∠C = 64°, ∠D = ∠C – 8° ;
∠A = 5(a+2)° and ∠B = 2(2a+7)°.
Calculate ∠A.
Question 8.
In the given figure :
∠b = 2a + 15
and ∠c = 3a+5; find the values of b and c.
Question 9.
Three angles of a quadrilateral are equal. If the fourth angle is 69°; find the measure of equal angles.
Question 10.
In quadrilateral PQRS, ∠P : ∠Q : ∠R : ∠S = 3 : 4 : 6 : 7.
Calculate each angle of the quadrilateral and then prove that PQ and SR are parallel to each other. Is PS also parallel to QR ?
Question 11.
Use the information given in the following figure to find the value of x.
Question 12.
The following figure shows a quadrilateral in which sides AB and DC are parallel.
If ∠A : ∠D = 4 : 5, ∠B = (3x – 15)° and ∠C = (4x + 20)°, find each angle of the quadrilateral ABCD.
Quadrilateral Exercise 27B – Selina Concise Mathematics Class 6 ICSE Solutions
Question 1.
In a trapezium ABCD, side AB is parallel to side DC. If ∠A = 78° and ∠C = 120°, find angles B and D.
Question 2.
In a trapezium ABCD, side AB is parallel to side DC. If ∠A = x° and ∠D = (3x – 20)°; find the value of x.
Question 3.
The angles A, B, C and D of a trapezium ABCD are in the ratio 3 : 4 : 5 : 6.
i.e. ∠A : ∠B : ∠C : ∠D = 3 : 4 : 5 : 6. Find all the angles of the trapezium. Also, name the two sides of this trapezium which are parallel to each other. Give a reason for your answer.
Question 4.
In an isosceles trapezium one pair of opposite sides are ….. to each other and the other pair of opposite sides are ….. to each other.
Question 5.
Two diagonals of an isosceles trapezium are x cm and (3x – 8) cm. Find the value of x.
Question 6.
Angle A of an isosceles trapezium is 115° ; find the angles B, C and D.
Question 7.
Two opposite angles of a parallelogram are 100° each. Find each of the other two opposite angles.
Question 8.
Two adjacent angles of a parallelogram are 70° and 110° respectively. Find the other two angles of it.
Question 9.
The angles A, B, C and D of a quadrilateral are in the ratio 2 : 3 : 2 : 3. Show that this quadrilateral is a parallelogram.
Question 10.
In a parallelogram ABCD, its diagonals AC and BD intersect each other at point O.
If AC = 12 cm and BD = 9 cm, find the lengths of OA and OD.
Question 11.
In parallelogram ABCD, its diagonals intersect at point O. If OA = 6 cm and OB = 7.5 cm, find the length of AC and BD.
Question 12.
In parallelogram ABCD, ∠A = 90°
(i) What is the measure of angle B.
(ii) Write the special name of the parallelogram.
Question 13.
One diagonal of a rectangle is 18 cm. What is the length of its other diagonal?
Question 14.
Each angle of a quadrilateral is (x + 5)°. Find :
(i) the value of x
(ii) each angle of the quadrilateral.
Give the special name of the quadrilateral taken.
Question 15.
If three angles of a quadrilateral are 90° each, show that the given quadrilateral is a rectangle.
Question 16.
The diagonals of a rhombus are 6 cm and 8 cm. State the angle at which these diagonals intersect.
Question 17.
Write, giving reason, the name of the figure drawn alongside. Under what condition will this figure be a square.
Question 18.
Write two conditions that will make the adjoining figure a square.
|
{"url":"https://www.cbsetuts.com/selina-concise-mathematics-class-6-icse-solutions-chapter-27/","timestamp":"2024-11-09T22:07:38Z","content_type":"text/html","content_length":"92515","record_id":"<urn:uuid:ef5db82c-304d-4d2b-87b7-368489162e40>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00583.warc.gz"}
|
Basic definitions
The Theory of Causal Fermion Systems
Definition. Given a separable complex Hilbert space $\mathscr{H}$ with scalar product $\la .|. \ra_\H$ and a parameter $n \in \N$ (the spin dimension), we let $\F \subset \Lin(\H)$ be the set of
all self-adjoint operators on $\H$ of finite rank, which (counting multiplicities) have at most $n$ positive and at most $n$ negative eigenvalues.
On $\F$ we are given a positive measure $\rho$ (defined on a $\sigma$-algebra of subsets of $\F$), the so-called universal measure. We refer to $(\H, \F, \rho)$ as a causal fermion system.
In order to single out the physically admissible causal fermion systems, one must formulate physical equations. To this end, we impose that the universal measure should be a minimizer of the causal
action principle, which we now introduce. For any $x, y \in \F$, the product $x y$ is an operator of rank at most $2n$. We denote its non-trivial eigenvalues counting algebraic multiplicities by $\lambda^{xy}_1, \ldots, \lambda^{xy}_{2n} \in \C$ (more specifically, denoting the rank of $xy$ by $k \leq 2n$, we choose $\lambda^{xy}_1, \ldots, \lambda^{xy}_{k}$ as all the non-zero eigenvalues and
set $\lambda^{xy}_{k+1}, \ldots, \lambda^{xy}_{2n}=0$). We introduce the spectral weight $| \,.\, |$ of an operator as the sum of the absolute values of its eigenvalues. In particular, the spectral
weights of the operator products $xy$ and $(xy)^2$ are defined by
\[ |xy| = \sum_{i=1}^{2n} \big| \lambda^{xy}_i \big| \qquad \text{and} \qquad \big| (xy)^2 \big| = \sum_{i=1}^{2n} \big| \lambda^{xy}_i \big|^2 \:. \]
We introduce the Lagrangian $\L$ and the causal action $\Sact$ by
\begin{align*}
\L(x,y) &= \big| (xy)^2 \big| - \frac{1}{2n}\: |xy|^2 \\
\Sact(\rho) &= \iint_{\F \times \F} \L(x,y)\: d\rho(x)\, d\rho(y) \:.
\end{align*}
The causal action principle is to minimize $\Sact$ by varying the universal measure under the following constraints,
volume constraint: $\rho(\F) = \text{const}$
trace constraint: $\displaystyle \int_\F \tr(x)\: d\rho(x) = \text{const}$
boundedness constraint: $\displaystyle \T(\rho) := \iint_{\F \times \F} |xy|^2\: d\rho(x)\, d\rho(y) \leq C$,
where $C$ is a given parameter (and $\tr$ denotes the trace of a linear operator on $\H$).
→ generalizations and special cases
→ existence theory
→ Euler-Lagrange equations
→ Example: Describing Minkowski space as a causal fermion system
We define spacetime as the support of the universal measure,
spacetime $M:= \text{supp} \,\rho$
The fact that the eigenvalues of the above operator products are in general complex gives rise to the following “spectral” definition of the causal structure.
Definition (causal structure). Two points $x, y \in M$ are called spacelike separated if all the $\lambda^{xy}_j$ have the same absolute value. They are said to be timelike separated if the $\lambda^{xy}_j$ are all real and do not all have the same absolute value. In all other cases (i.e. if the $\lambda^{xy}_j$ are not all real and do not all have the same absolute value), the points $x$ and $y$ are said to be lightlike separated.
According to the above definitions, points with spacelike separation drop out of the causal Lagrangian. This can be seen in analogy to the usual notion of causality where points with spacelike
separation cannot influence each other. This analogy is the reason for the notion causal in “causal fermion system” and “causal action principle.”
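The spectral classification above is easy to experiment with numerically. The following sketch (toy 2×2 matrices standing in for operators in $\F$, NumPy assumed available) applies the definition directly:

```python
import numpy as np

def causal_relation(x, y, tol=1e-9):
    """Classify x, y per the spectral definition: spacelike if all eigenvalues
    of xy share one absolute value; timelike if they are all real but with
    differing absolute values; lightlike otherwise."""
    lam = np.linalg.eigvals(x @ y)
    same_abs = np.allclose(np.abs(lam), np.abs(lam[0]), atol=tol)
    all_real = np.allclose(lam.imag, 0.0, atol=tol)
    if same_abs:
        return "spacelike"
    if all_real:
        return "timelike"
    return "lightlike"

x = np.diag([2.0, -1.0])                     # self-adjoint, one positive and one negative eigenvalue
y = np.diag([1.0, -1.0])
print(causal_relation(x, y))                 # timelike: eigenvalues of xy are 2 and 1

y_flip = np.array([[0.0, 1.0], [1.0, 0.0]])  # eigenvalues of diag(1,-1) @ y_flip are +i and -i
print(causal_relation(np.diag([1.0, -1.0]), y_flip))  # spacelike
```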
|
{"url":"https://causal-fermion-system.com/theory/math/basic-definitions/","timestamp":"2024-11-05T15:25:10Z","content_type":"text/html","content_length":"197644","record_id":"<urn:uuid:fe4770a6-75ff-4902-ab72-b53db71a129f>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00135.warc.gz"}
|
A Tour of Machine Learning Algorithms(FW)
In this post we take a tour of the most popular machine learning algorithms. It is useful to tour the main algorithms in the field to get a feeling of what methods are available.
There are so many algorithms available and it can feel overwhelming when algorithm names are thrown around and you are expected to just know what they are and where they fit.
In this post I want to give you two ways to think about and categorize the algorithms you may come across in the field.
• The first is a grouping of algorithms by the learning style.
• The second is a grouping of algorithms by similarity in form or function (like grouping similar animals together).
Both approaches are useful, but we will focus in on the grouping of algorithms by similarity and go on a tour of a variety of different algorithm types.
After reading this post, you will have a much better understanding of the most popular machine learning algorithms for supervised learning and how they are related.
A cool example of an ensemble of lines of best fit. Weak members are grey, the combined prediction is red.
Plot from Wikipedia, licensed under public domain.
Algorithms Grouped by Learning Style
There are different ways an algorithm can model a problem based on its interaction with the experience or environment or whatever we want to call the input data.
It is popular in machine learning and artificial intelligence textbooks to first consider the learning styles that an algorithm can adopt.
There are only a few main learning styles or learning models that an algorithm can have and we’ll go through them here with a few examples of algorithms and problem types that they suit.
This taxonomy or way of organizing machine learning algorithms is useful because it forces you to think about the roles of the input data and the model preparation process, and to select the one that is
the most appropriate for your problem in order to get the best result.
Let’s take a look at three different learning styles in machine learning algorithms:
Supervised Learning
Input data is called training data and has a known label or result such as spam/not-spam or a stock price at a time.
A model is prepared through a training process where it is required to make predictions and is corrected when those predictions are wrong. The training process continues until the model achieves a
desired level of accuracy on the training data.
Example problems are classification and regression.
Example algorithms include Logistic Regression and the Back Propagation Neural Network.
Unsupervised Learning
Input data is not labelled and does not have a known result.
A model is prepared by deducing structures present in the input data. This may be to extract general rules, it may be a mathematical process to systematically reduce redundancy, or it may be to
organize data by similarity.
Example problems are clustering, dimensionality reduction and association rule learning.
Example algorithms include: the Apriori algorithm and k-Means.
Semi-Supervised Learning
Input data is a mixture of labelled and unlabelled examples.
There is a desired prediction problem but the model must learn the structures to organize the data as well as make predictions.
Example problems are classification and regression.
Example algorithms are extensions to other flexible methods that make assumptions about how to model the unlabelled data.
When crunching data to model business decisions, you are most typically using supervised and unsupervised learning methods.
A hot topic at the moment is semi-supervised learning methods in areas such as image classification where there are large datasets with very few labelled examples.
Algorithms Grouped By Similarity
Algorithms are often grouped by similarity in terms of their function (how they work). For example, tree-based methods, and neural network inspired methods.
I think this is the most useful way to group algorithms and it is the approach we will use here.
This is a useful grouping method, but it is not perfect. There are still algorithms that could just as easily fit into multiple categories like Learning Vector Quantization that is both a neural
network inspired method and an instance-based method. There are also categories that have the same name that describes the problem and the class of algorithm such as Regression and Clustering.
We could handle these cases by listing algorithms twice or by selecting the group that subjectively is the “best” fit. I like this latter approach of not duplicating algorithms to keep things simple.
In this section I list many of the popular machine learning algorithms grouped in the way I think is the most intuitive. It is not exhaustive in either the groups or the algorithms, but I think it is
representative and will be useful to you to get an idea of the lay of the land.
Please Note: There is a strong bias towards algorithms used for classification and regression, the two most prevalent supervised machine learning problems you will encounter.
If you know of an algorithm or a group of algorithms not listed, put it in the comments and share it with us. Let’s dive in.
Regression Algorithms
Regression is concerned with modelling the relationship between variables, which is iteratively refined using a measure of error in the predictions made by the model.
Regression methods are a workhorse of statistics and have been co-opted into statistical machine learning. This may be confusing because we can use regression to refer to the class of problem and the
class of algorithm. Really, regression is a process.
The most popular regression algorithms are:
• Ordinary Least Squares Regression (OLSR)
• Linear Regression
• Logistic Regression
• Stepwise Regression
• Multivariate Adaptive Regression Splines (MARS)
• Locally Estimated Scatterplot Smoothing (LOESS)
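As a small, self-contained illustration of the first method in this list, ordinary least squares can be computed directly with NumPy (assumed available; the data here are made up):

```python
import numpy as np

# Ordinary least squares on made-up data: fit y = w0 + w1*x.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])           # exactly y = 1 + 2x

X = np.column_stack([np.ones_like(x), x])    # design matrix with intercept column
w, *_ = np.linalg.lstsq(X, y, rcond=None)    # least-squares coefficients
print(w)  # approximately [1. 2.]
```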
Instance-based Algorithms
Instance-based learning models a decision problem with instances or examples of training data that are deemed important or required by the model.
Such methods typically build up a database of example data and compare new data to the database using a similarity measure in order to find the best match and make a prediction. For this reason,
instance-based methods are also called winner-take-all methods and memory-based learning. Focus is put on representation of the stored instances and similarity measures used between instances.
The most popular instance-based algorithms are:
• k-Nearest Neighbour (kNN)
• Learning Vector Quantization (LVQ)
• Self-Organizing Map (SOM)
• Locally Weighted Learning (LWL)
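The instance-based idea above fits in a few lines of plain Python; the sketch below is a 1-nearest-neighbour classifier over a made-up training set:

```python
import math

# Store labelled examples; predict with the label of the nearest stored example.
train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"),
         ((5.0, 5.0), "B"), ((5.2, 4.9), "B")]

def predict(point):
    # Euclidean distance as the similarity measure
    nearest = min(train, key=lambda ex: math.dist(ex[0], point))
    return nearest[1]

print(predict((0.9, 1.1)))  # A
print(predict((4.8, 5.1)))  # B
```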
Regularization Algorithms
An extension made to another method (typically regression methods) that penalizes models based on their complexity, favoring simpler models that are also better at generalizing.
I have listed regularization algorithms separately here because they are popular, powerful and generally simple modifications made to other methods.
The most popular regularization algorithms are:
• Ridge Regression
• Least Absolute Shrinkage and Selection Operator (LASSO)
• Elastic Net
• Least-Angle Regression (LARS)
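As an illustration of how a penalty term modifies regression, here is a closed-form ridge regression sketch (NumPy assumed; data made up). Setting alpha = 0 recovers ordinary least squares, while larger alpha shrinks the coefficients:

```python
import numpy as np

# Closed-form ridge regression: w = (X^T X + alpha*I)^(-1) X^T y.
def ridge(X, y, alpha):
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
print(ridge(X, y, 0.0))   # approximately [1. 2.] (exact fit)
print(ridge(X, y, 1.0))   # shrunk toward zero
```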
Decision Tree Algorithms
Decision tree methods construct a model of decisions made based on actual values of attributes in the data.
Decisions fork in tree structures until a prediction decision is made for a given record. Decision trees are trained on data for classification and regression problems. Decision trees are often fast
and accurate and a big favorite in machine learning.
The most popular decision tree algorithms are:
• Classification and Regression Tree (CART)
• Iterative Dichotomiser 3 (ID3)
• C4.5 and C5.0 (different versions of a powerful approach)
• Chi-squared Automatic Interaction Detection (CHAID)
• Decision Stump
• M5
• Conditional Decision Trees
Bayesian Algorithms
Bayesian methods are those that explicitly apply Bayes’ Theorem for problems such as classification and regression.
The most popular Bayesian algorithms are:
• Naive Bayes
• Gaussian Naive Bayes
• Multinomial Naive Bayes
• Averaged One-Dependence Estimators (AODE)
• Bayesian Belief Network (BBN)
• Bayesian Network (BN)
Clustering Algorithms
Clustering, like regression, describes the class of problem and the class of methods.
Clustering methods are typically organized by the modelling approach, such as centroid-based and hierarchical. All methods are concerned with using the inherent structures in the data to best organize
the data into groups of maximum commonality.
The most popular clustering algorithms are:
• k-Means
• k-Medians
• Expectation Maximisation (EM)
• Hierarchical Clustering
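The centroid-based idea can be sketched in plain Python; the points and starting centroids below are made up:

```python
import math

# Alternate two steps: assign each point to its nearest centroid, then
# recompute each centroid as the mean of its assigned points.
def kmeans(points, centroids, iters=10):
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            i = min(range(len(centroids)), key=lambda j: math.dist(p, centroids[j]))
            clusters[i].append(p)
        centroids = [
            tuple(sum(coord) / len(cl) for coord in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids

pts = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
print(kmeans(pts, centroids=[(0.0, 0.0), (10.0, 10.0)]))  # centroids near (0, 0.5) and (10, 10.5)
```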
Association Rule Learning Algorithms
Association rule learning methods extract rules that best explain observed relationships between variables in data.
These rules can discover important and commercially useful associations in large multidimensional datasets that can be exploited by an organisation.
The most popular association rule learning algorithms are:
• Apriori algorithm
• Eclat algorithm
Artificial Neural Network Algorithms
Artificial Neural Networks are models that are inspired by the structure and/or function of biological neural networks.
They are a class of pattern matching that are commonly used for regression and classification problems but are really an enormous subfield comprised of hundreds of algorithms and variations for all
manner of problem types.
Note that I have separated out Deep Learning from neural networks because of the massive growth and popularity in the field. Here we are concerned with the more classical methods.
The most popular artificial neural network algorithms are:
• Perceptron
• Back-Propagation
• Hopfield Network
• Radial Basis Function Network (RBFN)
Deep Learning Algorithms
Deep Learning methods are a modern update to Artificial Neural Networks that exploit abundant cheap computation.
They are concerned with building much larger and more complex neural networks, and as commented above, many methods are concerned with semi-supervised learning problems where large datasets contain
very little labelled data.
The most popular deep learning algorithms are:
• Deep Boltzmann Machine (DBM)
• Deep Belief Networks (DBN)
• Convolutional Neural Network (CNN)
• Stacked Auto-Encoders
Dimensionality Reduction Algorithms
Like clustering methods, dimensionality reduction methods seek and exploit the inherent structure in the data, but in this case in an unsupervised manner, in order to summarise or describe data using less information.
This can be useful to visualize dimensional data or to simplify data which can then be used in a supervised learning method. Many of these methods can be adapted for use in classification and regression. The most popular dimensionality reduction algorithms are:
• Principal Component Analysis (PCA)
• Principal Component Regression (PCR)
• Partial Least Squares Regression (PLSR)
• Sammon Mapping
• Multidimensional Scaling (MDS)
• Projection Pursuit
• Linear Discriminant Analysis (LDA)
• Mixture Discriminant Analysis (MDA)
• Quadratic Discriminant Analysis (QDA)
• Flexible Discriminant Analysis (FDA)
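The first entry, PCA, reduces to an SVD of the centred data matrix; a sketch (NumPy assumed, synthetic data):

```python
import numpy as np

# The principal components are the top right singular vectors of the
# centred data; projecting onto them summarises the data in fewer dimensions.
def pca(X, n_components):
    Xc = X - X.mean(axis=0)                        # centre each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T                # projected data

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1)) @ np.array([[2.0, 1.0]])  # 2D data lying on a line
Z = pca(X, n_components=1)
print(Z.shape)  # (100, 1)
```

Because the synthetic data is rank one, the single component here captures all of the variance.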
Ensemble Algorithms
Ensemble methods are models composed of multiple weaker models that are independently trained and whose predictions are combined in some way to make the overall prediction.
Much effort is put into what types of weak learners to combine and the ways in which to combine them. This is a very powerful class of techniques and as such is very popular. The most popular ensemble algorithms are:
• Boosting
• Bootstrapped Aggregation (Bagging)
• AdaBoost
• Stacked Generalization (blending)
• Gradient Boosting Machines (GBM)
• Gradient Boosted Regression Trees (GBRT)
• Random Forest
Other Algorithms
Many algorithms were not covered.
For example, what group would Support Vector Machines go into? Its own?
I did not cover algorithms from speciality tasks in the process of machine learning, such as:
• Feature selection algorithms
• Algorithm accuracy evaluation
• Performance measures
I also did not cover algorithms from speciality sub-fields of machine learning, such as:
• Computational intelligence (evolutionary algorithms, etc.)
• Computer Vision (CV)
• Natural Language Processing (NLP)
• Recommender Systems
• Reinforcement Learning
• Graphical Models
• And more…
These may feature in future posts.
Get your FREE Algorithms Mind Map
Sample of the handy machine learning algorithms mind map.
I’ve created a handy mind map of 60+ algorithms organized by type.
Download it, print it and use it.
Also get exclusive access to the machine learning algorithms email mini-course.
Further Reading
This tour of machine learning algorithms was intended to give you an overview of what is out there and some ideas on how to relate algorithms to each other.
I’ve collected together some resources for you to continue your reading on algorithms. If you have a specific question, please leave a comment.
Other Lists of Algorithms
There are other great lists of algorithms out there if you’re interested. Below are few hand selected examples.
How to Study Machine Learning Algorithms
Algorithms are a big part of machine learning. It’s a topic I am passionate about and write about a lot on this blog. Below are few hand selected posts that might interest you for further reading.
How to Run Machine Learning Algorithms
Sometimes you just want to dive into code. Below are some links you can use to run machine learning algorithms, code them up using standard libraries or implement them from scratch.
|
{"url":"http://www.aprilzephyr.com/blog/09142016/A-Tour-of-Machine-Learning-Algorithms-FW/","timestamp":"2024-11-03T02:57:29Z","content_type":"text/html","content_length":"34633","record_id":"<urn:uuid:f3a5ce87-659f-4798-82a6-711449f28f96>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00192.warc.gz"}
|
Understanding the Debye Model for Solids: Masses and Springs
• Thread starter Wminus
• Start date
In summary: In principle, one could solve the full microscopic Maxwell equations and obtain an accurate description of the physical system. However, this is an enormous and complex task
and is not done routinely.
Hi! I feel like I've understood none of this stuff!
A 1D chain of springs and masses modeling a chain of atoms has a dispersion relation of the form ##\omega \sim |\sin(k a /2)|##, where ##k## is the wave vector and ##a## the distance between atoms. As far
as I have understood, the debye model (in 1D) approximates this dispersion relation as simply a straight line, and from that calculates the heat capacity. But why bother doing that?? Wouldn't it be
more accurate to use the proper dispersion relation to calculate the internal energy of the chain of atoms as a function of temperature? And this would just carry over to 3D, right?
Why not just treat the atoms in the solid as a bunch of masses connected to each other with springs vibrating at various modes? Surely it isn't too difficult for a physicist with some grit to solve
such a system? And how accurate is this mass-and-spring model anyways?
Well, I must disagree with you on this one. It is by no means trivial to solve the equations of motion for a 3D-system of masses connected with springs. I have a background in computational physics
and mathematical modeling. Without having looked to much into the details, I would guess there wouldn't even be possible to find an analytical solution, even if all masses, and all spring constants
were equal.
Just to enlighten the difficulity, consider a simple cubic lattice of lattice constant [itex]a[/itex], and let's try 2D first. We can label each particle by [itex]i,j[/itex], which at rest will be
positioned at [itex](x_i,y_j) = (i a,j a)[/itex] respectively. There will be four neighbouring particles which are connected with springs such that the total force on our particle will be
\sum \vec{F} = \vec{F}_{north} + \vec{F}_{south} + \vec{F}_{west} + \vec{F}_{east} = - k\left( 4\vec{r}_{i,j} - \vec{r}_n - \vec{r}_s - \vec{r}_w - \vec{r}_e \right)
note that [itex]\vec{r}_e[/itex] is the position of particle [itex](i+1,j)[/itex], [itex]\vec{r}_w[/itex] of [itex](i-1,j)[/itex], and so on for the north and south particles.
To make this the simplest possible problem, we assume there is a finite number of [itex]N^2[/itex] particles, that is, [itex]N[/itex] particles in both the [itex]i[/itex] and [itex]j[/itex]
directions. We can then treat this as a Boundary Value Problem (BVP) and demand that the end of the crystal is kept at rest: these particles are held fast.
This, actually really simple, system will result in a large set of [itex]N^2[/itex] equations of the form
[itex]m \frac{d^2}{dt^2}\vec{r}_{i,j} = \sum \vec{F}[/itex]
which will, written out, be a matrix equation. Moreover, the number of unknowns (rows & columns of the matrix) will be doubled as the 2D-vectors will need to be decomposed into separate directions.
I might be wrong, but I am pretty sure this will have to be solved computationally. Even though 1D models are boring, they tend to catch the qualitative behaviour of many physical systems, and are
better suited as teaching material as they may be solved as exam problems with pen and paper.
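For what it's worth, the 1D version of this "diagonalize the masses-and-springs system" idea is small enough to check numerically (a sketch; NumPy assumed, with unit mass and spring constant):

```python
import numpy as np

# N equal masses m coupled by springs k, with fixed ends. Diagonalizing the
# tridiagonal dynamical matrix gives normal-mode frequencies that match the
# analytic dispersion w_n = 2*sqrt(k/m)*|sin(q_n a/2)| with q_n a = n*pi/(N+1).
N, k, m = 8, 1.0, 1.0
D = (k / m) * (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1))
omega_numeric = np.sort(np.sqrt(np.linalg.eigvalsh(D)))

n = np.arange(1, N + 1)
omega_analytic = 2 * np.sqrt(k / m) * np.abs(np.sin(n * np.pi / (2 * (N + 1))))
print(np.allclose(omega_numeric, np.sort(omega_analytic)))  # True
```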
The Debye model stems from a time long before the advent of computers. Of course nowadays you can calculate better dispersion relations for the phonons and calculate heat capacities etc. from it.
As a sidenote to mhsd91: Usually you would impose periodic boundary conditions and make use of the periodicity of the system using Bloch's theorem. Then at least the model of coupled springs becomes
quite tractable.
Thanks for the replies. I guess I can appreciate the difficulty of the spring-and-mass model, but the Debye model still seems completely pointless for the 1D example. So in 1D it just linearizes the
dispersion relation. What does it do in 2D? Does it turn the dispersion relation from a parabola into a cone? I read on Wikipedia that Debye's approximation models the system as "phonons in a box".
I can't seem to understand much more of it, though; anyone care to help?
As for the computers: Surely you could still find an accurate dispersion relation back in the old days numerically? Did they really not have anything better than Debye's approximation 100 years ago?
The Debye model is not as bad as it may seem. Its goal is (among others) to calculate the heat capacity at temperatures much lower than the Debye temperature. Hence practically only the phonons of
lowest energy (and k values) contribute to the heat capacity, and in this range the dispersion relation is linear to an excellent approximation. Only phonons whose wavelength is much larger than
the atomic spacing contribute, and thus the detailed molecular structure of the solid isn't important.
You would also not argue that one shouldn't use the index of refraction in optics to calculate a lens system but instead use a full-fledged solution of the microscopic Maxwell equations?
In fact, the situation is not very different, say, in quantum electrodynamics (QED). QED is only an effective field theory whose range is limited to rather low energies (e.g. as compared to the
Planck scale). Nevertheless, it makes very precise predictions for, say, the fine structure of atoms.
DrDu said:
The Debye model is not as bad as it may seem. Its goal is (among others) to calculate the heat capacity at temperatures much lower than the Debye temperature. Hence practically only the phonons
of lowest energy (and k values) contribute to the heat capacity, and in this range the dispersion relation is linear to an excellent approximation. Only phonons whose wavelength is much
larger than the atomic spacing contribute, and thus the detailed molecular structure of the solid isn't important.
You would also not argue that one shouldn't use the index of refraction in optics to calculate a lens system but instead use a full-fledged solution of the microscopic Maxwell equations?
In fact, the situation is not very different, say, in quantum electrodynamics (QED). QED is only an effective field theory whose range is limited to rather low energies (e.g. as compared to the
Planck scale). Nevertheless, it makes very precise predictions for, say, the fine structure of atoms.
OK good points. After I slept on it everything is more clear. Thanks for the help!
FAQ: Understanding the Debye Model for Solids: Masses and Springs
What is the Debye model for solids?
The Debye model for solids is a theoretical model that explains the behavior of atoms in a solid. It treats the solid as a collection of atoms connected by springs, and it assumes that the atoms can
only vibrate in certain allowed modes, called phonons.
How does the Debye model account for the masses of atoms in a solid?
The Debye model takes into account the masses of atoms by considering the vibrational modes of the solid as a whole. It treats the solid as a continuous medium rather than individual atoms, and the
masses of atoms are taken into account through the characteristic frequency of the phonons.
What are the limitations of the Debye model for solids?
One limitation of the Debye model is that it assumes all atoms in a solid are connected by springs, which is not always the case. It also does not take into account the effects of anharmonicity,
which can become significant at higher temperatures. Additionally, the Debye model only applies to solids at low temperatures.
How does the Debye model explain the heat capacity of solids?
The Debye model explains the heat capacity of solids by considering the vibrational modes of the solid. As the temperature increases, more phonon modes become excited, leading to an increase in heat
capacity. The Debye model can accurately predict the heat capacity of solids at low temperatures, but it deviates at high temperatures due to anharmonicity.
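As a quick numerical illustration (not part of the original FAQ), the Debye heat-capacity integral can be evaluated directly. In units where N·k_B = 1, the result should approach the Dulong-Petit value 3 at high temperature and the T³ law at low temperature:

```python
import numpy as np

def debye_heat_capacity(T, theta_D):
    """Debye heat capacity C_V in units of N*kB.

    C_V = 9 (T/theta_D)^3 * integral_0^{theta_D/T} x^4 e^x / (e^x - 1)^2 dx,
    with the integral evaluated by a simple trapezoidal rule.
    """
    xD = theta_D / T
    x = np.linspace(1e-8, xD, 20_000)
    integrand = x**4 * np.exp(x) / np.expm1(x)**2
    # trapezoidal rule, written out explicitly
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))
    return 9 * (T / theta_D)**3 * integral
```

At T = 100·Θ_D this returns approximately 3 (the Dulong-Petit limit), while at T = 0.01·Θ_D it matches the low-temperature law (12π⁴/5)(T/Θ_D)³ closely, which is exactly the regime the Debye model was designed for.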
How does the Debye model account for the thermal conductivity of solids?
The Debye model accounts for the thermal conductivity of solids by considering heat transfer through phonon collisions. Higher-frequency phonons have shorter mean free paths and therefore
contribute less to thermal conductivity. The Debye model can accurately predict thermal conductivity at low temperatures, but it deviates at high temperatures due to anharmonicity.
We offer resources for topics covered in KS3-KS4 (years 7-11).
Benefits for schools
All lessons are interactive, with step-by-step mathematical calculations explained using mathematical language.
The questions assigned to each lesson offer instant feedback, allowing students to self-assess. In more complicated calculations, step-by-step answers are shown to help with self or peer assessment.
This cuts down on marking for the classroom teacher, allowing quality time to be spent writing constructive feedback to pupils to highlight their strengths and weaknesses.
Allows the teacher more time during the lesson to circulate among pupils: to push the more able, offer support to the less able, and keep students on track.
You can differentiate every lesson for your students, teaching the same lesson to many levels but using different number values.
Cut down on planning and preparation time. Lessons are already prepared for you, with an associated question resource allowing pupils to be assessed on what they have just been taught and the
classroom teacher to give instant feedback and target the pupils who have not understood the concepts of the lesson.
All lessons are standardised throughout the department, ensuring consistency across mathematics teaching and easing the pressure on students moving sets.
As all the lessons have a standard layout, students will know where to find the level or grade they are working at in each lesson.
What your membership will offer you
You will have access to all our interactive lessons for topics covered by the national curriculum.
Benefits for pupils
Access to lessons on mathematical topics covered by the national curriculum.
You will have access to mathematical questions based on the topic you have just learnt.
You will be given instant feedback on all the questions you answer, with the more complicated questions having a step-by-step solution.
Learn game development w/ Unity | Courses & tutorials in game design, VR, AR, & Real-time 3D | Unity Learn
Pathways: Build skills in Unity with guided learning pathways designed to help anyone interested in pursuing a career in gaming and the real-time 3D industry.
Courses: Explore a topic in-depth through a combination of step-by-step tutorials and projects.
Projects: Create a Unity application, with opportunities to mod and experiment.
Tutorials: Find what you're looking for with short, bite-sized tutorials.
Produce Multiple Outputs in an IF Statement in Excel
The Excel IF function checks whether a condition is met, and returns one value if TRUE, and another value if FALSE.
The syntax of the function is:
IF(logical_test, [value_if_true], [value_if_false])
By default, the function has only two outcomes but there are some situations when we may want multiple or more than two outcomes.
We can achieve multiple outputs with the IF statement by nesting IF functions and by using the IF statement together with other Excel functions such as the AND and OR logical functions.
In this tutorial, we will use five examples to explain how to produce multiple outputs in an IF statement.
Example 1: Use the multiple nested IF functions
In this example, we will use two datasets. The first dataset is a school’s grading system.
The second dataset is a student’s scores in a certain subject. Letter grades need to be assigned to the scores according to the school’s grading system.
To assign letter grades to the scores, we do the following:
1. Select cell C2 and type in the formula:
=IF(B2<=79,"D",IF(B2<=87,"C",IF(B2<=93,"B","A")))
2. Press the Enter key and double-click or drag down the fill handle to copy the formula down the column.
The letter grades are assigned to the various scores.
Explanation of the formula
=IF(B2<=79,"D",IF(B2<=87,"C",IF(B2<=93,"B","A")))
• The IF statement checks if the score in cell B2 is less than or equal to 79. If that is true it returns grade letter D and stops checking.
• If cell B2 has a score that is greater than 79 but equal to or less than 87, the grade letter C is returned.
• If cell B2 has a score that is greater than 87 but equal to or less than 93, the grade letter B is returned.
• If the score is greater than 93, the grade letter A is returned.
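The same first-match-wins cascade can be written in any language with chained conditionals. For instance, here is a Python sketch of the formula's logic (the thresholds are the ones from this dataset):

```python
def letter_grade(score):
    """Mirror of =IF(B2<=79,"D",IF(B2<=87,"C",IF(B2<=93,"B","A"))).

    Conditions are checked in order and the first true branch wins,
    exactly as Excel evaluates a nested IF from the outside in.
    """
    if score <= 79:
        return "D"
    elif score <= 87:
        return "C"
    elif score <= 93:
        return "B"
    else:
        return "A"
```

For example, `letter_grade(87)` returns `"C"` and `letter_grade(94)` returns `"A"`, matching what the Excel formula produces for the same scores.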
Example 2: Use the IF statement and the AND function
In this example, we use the combination of the IF and AND functions. The AND function checks whether all arguments are TRUE and returns TRUE if they all are; it returns FALSE if any argument is
FALSE.
In this example, a candidate is only promoted if he/she scores at least 60 on the theory and practical tests.
To check if a candidate qualifies for promotion or not we do the following:
1. Select cell D3 and type in the formula:
=IF(AND(B3>=60, C3>=60), "Promoted", "Not Promoted")
2. Press the Enter key and double-click or drag down the fill handle to copy the formula down the column.
The formula returns Promoted if both conditions are true and Not Promoted otherwise.
Example 3: Use the IF and OR functions
In this example, we use both the IF function and the OR function to produce multiple outputs.
The OR function is a logical function that checks whether any of the arguments are TRUE; it returns FALSE only if all arguments are FALSE.
In the following dataset, we will determine if a candidate qualifies for promotion or not based on whether the candidate got a score of at least 60 in either of the tests:
We use the following steps:
1. Select cell D3 and type in the formula:
=IF(OR(B3>=60, C3>=60), "Promoted", "Not Promoted")
2. Press the Enter key and double-click or drag down the fill handle to copy the formula down the column.
The formula returns Promoted if either of the conditions is true and returns Not Promoted if both conditions are false.
Example 4: Apply IF, OR, and AND functions
We will use the dataset we have used in the previous example to determine the promotion of candidates using the IF, OR, and AND functions.
We use the following steps:
1. Select cell D3 and type in the formula:
=IF(OR(AND(B3>=50, C3>=60), AND(B3>=40, C3>=45)), "Promoted", "Not promoted")
2. Press the Enter key and double-click or drag down the fill handle to copy the formula down the column.
Explanation of the formula
=IF(OR(AND(B3>=50, C3>=60), AND(B3>=40, C3>=45)), "Promoted", "Not promoted")
• The formula returns Promoted if the value in cell B3 is >=50 and the value in cell C3 is >=60.
• The formula returns Promoted if the value in cell B3 is >=40 and the value in cell C3 is >=45.
• The formula returns Not Promoted if neither of the above conditions is true.
Example 5: Use the IF and AVERAGE function
In this example, we will use both the IF and AVERAGE functions to determine if the performance of the student is Excellent, Good, Poor, or Satisfactory based on their average scores.
We will use the following dataset to show how this is done:
We use the following steps:
1. Select cell B7 and type in the formula:
=IF(AVERAGE(B3:B6)>=95,"Excellent",IF(AVERAGE(B3:B6)>=90,"Good",IF(AVERAGE(B3:B6)>=70,"Satisfactory","Poor")))
2. Press the Enter key and drag the fill handle to cell D7 to copy the formula to those cells.
Explanation of the formula
• The formula returns Excellent if the average score of the student is 95 and above.
• The formula returns Good if the average score of the student is 90 and above but less than 95.
• The formula returns Satisfactory if the average score of the student is 70 and above but less than 90.
• The formula returns Poor if the average score of the student is below 70.
The IF function by default returns only two outcomes but there are some situations when we may want multiple or more than two outcomes.
This can be achieved by using nested IF functions and using the function together with other Excel functions.
In this tutorial we have looked at five ways to use the IF statement to produce multiple outputs: nested IF functions, IF together with the AND logical function, IF together with the OR logical
function, IF combined with both OR and AND, and IF together with the AVERAGE function.
IB Numerical Analysis
3 Approximation of linear functionals
3.1 Linear functionals
In this chapter, we are going to study approximations of linear functionals. Before we start, it is helpful to define what a linear functional is, and to look at some examples.
Definition (Linear functional). A linear functional is a linear mapping $L\colon V \to \mathbb{R}$, where $V$ is a real vector space of functions.
In general, a linear functional is a linear mapping from a vector space to its underlying field of scalars, but for the purposes of this course, we will restrict to this special case.
We usually don't put so much emphasis on the actual vector space $V$. Instead, we provide a formula for $L$, and take $V$ to be the vector space of functions for which the formula makes sense.
(i) We can choose some fixed $\xi \in \mathbb{R}$, and define a linear functional by $L(f) = f(\xi)$.
(ii) Alternatively, for fixed $\eta \in \mathbb{R}$ we can define our functional by $L(f) = f'(\eta)$. In this case, we need to pick a vector space in which this makes sense, e.g. the space of continuously differentiable functions.
(iii) We can define $L(f) = \int_a^b f(x)\,\mathrm{d}x$. The set of continuous (or even just integrable) functions defined on $[a, b]$ will be a sensible domain for this linear functional.
Any linear combination of these linear functionals is also a linear functional. For example, we can pick some fixed $\alpha, \beta \in \mathbb{R}$, and define
$L(f) = f(\beta) - f(\alpha) - \frac{\beta - \alpha}{2}\left(f'(\beta) + f'(\alpha)\right).$
The objective of this chapter is to construct approximations to more complicated linear functionals (usually integrals, possibly derivatives or point values) in terms of simpler linear functionals (usually point values of $f$ itself).
For example, we might produce an approximation of the form
$L(f) \approx \sum_{i=0}^{N} a_i f(x_i),$
where $V = C^p[a, b]$, $p \geq 0$, and $\{x_i\}_{i=0}^{N} \subseteq [a, b]$ are distinct points.
How can we choose the coefficients $a_i$ and the points $x_i$ so that our approximation is "good"?
We notice that most of our functionals can be easily evaluated exactly when $f$ is a polynomial. So we might approximate our function $f$ by a polynomial, and then do it exactly for polynomials.
More precisely, we let $x_0, \ldots, x_N \in [a, b]$ be arbitrary points. Then using the Lagrange cardinal polynomials $\ell_i$, we have
$f(x) \approx \sum_{i=0}^{N} f(x_i)\,\ell_i(x).$
Then using linearity, we can approximate
$L(f) \approx L\left(\sum_{i=0}^{N} f(x_i)\,\ell_i\right) = \sum_{i=0}^{N} f(x_i)\,L(\ell_i).$
So we can pick $a_i = L(\ell_i)$.
Similar to polynomial interpolation, this formula is exact for $f \in P_N[x]$. But we could do better. If we can freely choose $a_0, \ldots, a_N$ and $x_0, \ldots, x_N$, then since we now
have $2N + 2$ free parameters, we might expect to find an approximation that is exact for $f \in P_{2N+1}[x]$. This is not always possible, but there are cases when we can. The most famous example is Gaussian quadrature.
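To make the recipe $a_i = L(\ell_i)$ concrete, here is a small Python sketch (my own, not from the notes) that computes the weights for $L(f) = \int_a^b f(x)\,dx$ by moment matching, which is equivalent to integrating each Lagrange cardinal polynomial:

```python
import numpy as np

def quadrature_weights(nodes, a=0.0, b=1.0):
    """Weights a_i = L(l_i) for L(f) = integral of f over [a, b].

    Rather than constructing each Lagrange cardinal polynomial l_i
    explicitly, we solve the equivalent moment-matching system
        sum_i w_i * x_i^k = integral_a^b x^k dx   for k = 0..N-1,
    which forces the rule to be exact for all polynomials of degree < N.
    """
    nodes = np.asarray(nodes, dtype=float)
    N = len(nodes)
    V = np.vander(nodes, N, increasing=True).T  # V[k, i] = x_i^k
    moments = np.array([(b**(k + 1) - a**(k + 1)) / (k + 1) for k in range(N)])
    return np.linalg.solve(V, moments)
```

With nodes {0, 1/2, 1} on [0, 1] this reproduces the Simpson's rule weights (1/6, 4/6, 1/6), a rule that is exact on $P_2[x]$ (and, by symmetry, even on $P_3[x]$).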
Restriction of Injection is Injection
Let $f: S \to T$ be an injection.
Let $X \subseteq S$ be a subset of $S$.
Let $f \sqbrk X$ denote the image of $X$ under $f$.
Let $Y \subseteq T$ be a subset of $T$ such that $f \sqbrk X \subseteq Y$.
The restriction $f \restriction_{X \times Y}$ of $f$ to $X \times Y$ is an injection from $X$ to $Y$.
First we show that $f \restriction_{X \times Y}$ is a mapping from $X$ to $Y$.
By Restriction of Mapping is Mapping, $f \restriction_{X \times T}$ is a mapping from $X$ to $T$.
If $x \in X$, then by the definition of image:
$\map f x \in f \sqbrk X$
Since $f \sqbrk X \subseteq Y$, $f \restriction_{X \times Y}$ is a mapping from $X$ to $Y$.
By definition of an injection:
$\forall s_1, s_2 \in S: \map f {s_1} = \map f {s_2} \implies s_1 = s_2$.
Aiming for a contradiction, suppose $f \restriction_{X \times Y}: X \to Y$ were not an injection. Then:
$\exists x_1, x_2 \in X: x_1 \ne x_2, \map f {x_1} = \map f {x_2}$
But then:
$\exists x_1, x_2 \in S: x_1 \ne x_2, \map f {x_1} = \map f {x_2}$
So $f: S \to T$ would not be an injection, contradicting the premise that it is.
Hence the result.
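A simpler version of this statement, where only the domain is restricted, can be sketched in Lean 4 (the names `Function.Injective` and `Subtype.ext` come from Mathlib, which is assumed to be available):

```lean
-- If f : S → T is injective, then its restriction to any subset X ⊆ S,
-- viewed as a map out of the subtype X, is injective as well.
theorem restrict_injective {S T : Type} (f : S → T)
    (hf : Function.Injective f) (X : Set S) :
    Function.Injective (fun x : X => f x.val) := by
  intro a b hab            -- hab : f a.val = f b.val
  exact Subtype.ext (hf hab)
```

The proof mirrors the prose argument above: equality of images forces equality of the underlying elements by injectivity of $f$, and `Subtype.ext` lifts that back to equality in the subtype.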
Comparative Analysis of Option Pricing Methods: FDM, Monte Carlo Simulation, and Variance Reduction | HackerNoon
4. RESULT ANALYSIS
We use the parameters listed in (3) to generate data for options and verify our results for the European call option against the analytical solution given in (3), in order to confirm their accuracy.
This comparative analysis provides a thorough assessment of the data we generate against the existing solution. When comparing results for call options obtained by the finite difference method
(FDM) and analytically, there are many things to take into account: both approaches approximate the price of a call option, but by different methods. Let us consider the two methods in detail:
Finite Difference Method (FDM):
The partial differential equation that governs the pricing of options is discretized on a grid by FDM and solved numerically [31]. Its accuracy depends on the grid sizing and on whether an explicit,
implicit, or Crank-Nicolson numerical scheme is used. Ensuring that FDM solutions converge is very important; convergence may fail if the grid is too coarse or an unstable numerical scheme is
adopted, leading to wrong answers. FDM can consume a lot of computational resources, particularly for complicated option structures or high-dimensional problems: time complexity grows with the grid
size and with the number of time steps required for convergence. On the other hand, FDM allows early-exercise features to be included, as well as dividends or changes in volatility over time.
Nonetheless, implementing such functionality within FDM requires care and further increases the computational complexity [32].
Analytical Method:
The Black-Scholes model is an example of an analytical method that provides a closed-form solution for pricing options under certain assumptions (for instance, constant volatility and no
dividends). Such a solution is simpler and computationally faster than numerical ones like FDM. It is important to note that all analytical methods make assumptions which may not hold in reality;
the Black-Scholes model, for example, assumes that the volatility remains constant and that the risk-free rate is continuously compounded. If any of these assumptions is violated, the price given
by the formula will deviate from the observed market price.
Comparative Analysis:
FDM gives greater precision and accuracy, particularly for involved option structures or payoffs that aren't linear. However, the added computational complexity and resource utilization required to
achieve this level of precision make it expensive.
Computational efficiency and simplicity are the main strengths of analytical methods. They lose precision when the model assumptions do not hold, though outside of special cases this is often
acceptable. FDM is more robust than the analytical approach because it can handle different types of options under various market situations: changes in parameters and boundary conditions can be
treated dynamically, much as a time-stepping solver such as the Runge-Kutta method treats an evolving system. Analytical techniques lack this flexibility; while efficient at processing large
amounts of data quickly, they need calibration wherever deviations from the model assumptions occur, which limits them to near-ideal scenarios. The choice between FDM and analytical methods is
therefore driven by the specifics of the problem, the available computational capacity, and the trade-off between accuracy and speed.
Monte Carlo Technique:
Now, we describe some experimental results based on the Monte Carlo method. The fundamental Monte Carlo technique will be covered first; then we move on to more advanced methods. In the plot below,
take note of the sawtooth pattern. This is a result of our raising the option's strike price from 100 to 200: far fewer simulations then result in a profit, leading to extended periods during
which the Monte Carlo price falls; conversely, when the option is profitable, the Monte Carlo price rises significantly [33].
The fundamental concept is to compute the payoff of the derivative after simulating the price of the underlying asset at the derivative's maturity. The average of the payoffs discounted to the
present is the derivative's price [34]. Comparing simulated option prices with those derived from traditional pricing models assesses the efficacy of Monte Carlo simulation in capturing the
complexities of market dynamics and volatility. Results highlight the flexibility and adaptability of Monte Carlo simulation in modeling various scenarios, shedding light on its potential as a robust
tool for options pricing. The analysis underscores the importance of considering Monte Carlo simulation as a complementary approach to traditional models, offering insights into risk management and
investment strategies in dynamic financial markets [35].
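The basic recipe just described — simulate terminal prices, average the discounted payoffs — can be sketched in a few lines (the sketch is my own illustration; the terminal-price formula is the standard exact solution of geometric Brownian motion under Black-Scholes):

```python
import numpy as np

def mc_call_price(S0, K, r, sigma, T, n_paths=200_000, seed=0):
    """Plain Monte Carlo price of a European call under Black-Scholes.

    Simulates terminal prices with the exact GBM solution and averages
    the discounted payoffs. Returns (price, standard_error).
    """
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal(n_paths)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    payoff = np.exp(-r * T) * np.maximum(ST - K, 0.0)
    return payoff.mean(), payoff.std(ddof=1) / np.sqrt(n_paths)
```

For an at-the-money call (S0 = K = 100, r = 5%, σ = 20%, T = 1) the estimate agrees with the Black-Scholes closed-form price (≈ 10.45) to within a few standard errors, and the standard error shrinks like 1/√n, which is precisely why variance reduction is worth the effort.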
A large variety of derivatives can be priced using the highly general Monte Carlo approach. However, it is also an extremely slow method that uses a lot of processing resources. Thus, in the next
section, we will also examine variance reduction strategies that can be applied to accelerate the Monte Carlo approach.
The goal of importance sampling is to find a different measure under which the estimator's variance is lower. Since the estimator's mean is fixed, reducing the variance is equivalent to reducing the second moment.
Result analysis in Monte Carlo simulation involves assessing the importance of sampling techniques and understanding the significance of confidence intervals. Now, I will increase the strike price
value from 100 to 200 and then simulate it again.
We can see the changes between Figure (10) and Figure (12), chiefly in the optimal drift curve: in the second figure, the curve becomes an almost flat line after the strike price is changed from 100
to 200.
We will proceed with importance sampling, but we will now need to sample over several timesteps. As a result, we must deal with a vector of 𝜏 that is made up of random variables. Let's first try a
constant 𝜏 for all timesteps.
Now let's try to find a better 𝜏 for each timestep. We will start by finding the optimal constant 𝜏.
After fitting a quadratic function as optimal drift, we can see that there is no difference between degrees 1 and 2. So, it performs well. Let's examine those drift vectors' appearance.
Looking closely at how well the optimal drift vector performs in Monte Carlo simulations within the Black-Scholes model gives useful insight into the accuracy and efficiency of the simulation
method. We assess the convergence properties of the simulations with the optimal drift vector, evaluating how quickly they converge to stable estimates of option prices and other output metrics.
Let's compare in/out of the money and put/call using linear interpolation.
We can observe that drift vectors are often decreasing for calls and increasing for puts. This makes basic sense, since a reduced variance requires increasing the MC estimator: we should add more
for out-of-the-money options than for in-the-money options, and a negative number for calls versus a positive number for puts. By using the Laplace method, which provides a recursive formula for
the ideal 𝜏, this can also be demonstrated mathematically.
Antithetic variates exploit negative correlations between pairs of random variables to reduce variance in estimates. In the context of option pricing with the Black-Scholes model, this involves
generating two correlated sets of random numbers. The effectiveness of antithetic variates in reducing variance and improving the efficiency of Monte Carlo simulations within the Black-Scholes model
can be evaluated through convergence analysis, and comparison with standard Monte Carlo results.
This seems very effective for OTM options and is very easy to implement. Note that it is also about twice as expensive computationally as the original method, so for ITM options [36], it is not worth
it in this example.
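A minimal sketch of the antithetic estimator for a European call (my own illustration; pairing each draw Z with −Z is the standard construction):

```python
import numpy as np

def mc_call_antithetic(S0, K, r, sigma, T, n_pairs=100_000, seed=0):
    """Antithetic-variates Monte Carlo price of a European call.

    Each draw Z is paired with -Z. Because the call payoff is monotone
    in Z, the paired payoffs are negatively correlated, so averaging
    each pair lowers the variance of the overall estimator.
    Returns (price, standard_error).
    """
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal(n_pairs)
    drift = (r - 0.5 * sigma**2) * T
    vol = sigma * np.sqrt(T)
    disc = np.exp(-r * T)
    pay_plus = disc * np.maximum(S0 * np.exp(drift + vol * Z) - K, 0.0)
    pay_minus = disc * np.maximum(S0 * np.exp(drift - vol * Z) - K, 0.0)
    pair_avg = 0.5 * (pay_plus + pay_minus)
    return pair_avg.mean(), pair_avg.std(ddof=1) / np.sqrt(n_pairs)
```

For the same total number of normal draws, the antithetic estimator's standard error comes out noticeably below that of the plain estimator for an at-the-money call, which matches the variance argument above.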
Applying control variates in Monte Carlo simulations within the Black-Scholes model can be a powerful technique for reducing the variance of option price estimates, particularly for options that are
consistently over or undervalued by the model. Here's how you can apply control variates and analyze the results for in-the-money (ITM) and out-of-the-money (OTM) options:
For the in-the-money (ITM) case we have taken the spot price as 1 and the strike price as 0.7, and for the out-of-the-money (OTM) case the strike price as 1.4 with the same spot price. We then ran
our experiment and assessed the effectiveness of using control variates to improve the accuracy and efficiency of option pricing in the Black-Scholes model.
(1) Agni Rakshit, Department of Mathematics, National Institute of Technology, Durgapur, Durgapur, India ([email protected]);
(2) Gautam Bandyopadhyay, Department of Management Studies, National Institute of Technology, Durgapur, Durgapur, India ([email protected]);
(3) Tanujit Chakraborty, Department of Science and Engineering & Sorbonne Center for AI, Sorbonne University, Abu Dhabi, United Arab Emirates ([email protected]).
This paper is available on arxiv under CC by 4.0 Deed (Attribution 4.0 International) license.
Format Function
Converts a numeric expression to a string, and then formats it according to the format that you specify.
Format(expression [, format As String]) As String
expression: Numeric expression that you want to convert to a formatted string.
format: String that specifies the format code for the number. If format is omitted, the Format function works like the LibreOffice Basic Str() function.
Text string.
Format codes
The following list describes the codes that you can use for formatting a numeric expression:
0: If expression has a digit at the position of the 0 in the format code, the digit is displayed, otherwise a zero is displayed.
If expression has fewer digits than the number of zeros in the format code, (on either side of the decimal), leading or trailing zeros are displayed. If the expression has more digits to the left of
the decimal separator than the amount of zeros in the format code, the additional digits are displayed without formatting.
Decimal places in the expression are rounded according to the number of zeros that appear after the decimal separator in the format code.
#: If expression contains a digit at the position of the # placeholder in the format code, the digit is displayed, otherwise nothing is displayed at this position.
This symbol works like the 0, except that leading or trailing zeroes are not displayed if there are more # characters in the format code than digits in the expression. Only the relevant digits of the
expression are displayed.
.: The decimal placeholder determines the number of decimal places to the left and right of the decimal separator.
If the format code contains only # placeholders to the left of this symbol, numbers less than 1 begin with a decimal separator. To always display a leading zero with fractional numbers, use 0 as a
placeholder for the first digit to the left of the decimal separator.
%: Multiplies the expression by 100 and inserts the percent sign (%) where the expression appears in the format code.
E- E+ e- e+ : If the format code contains at least one digit placeholder (0 or #) to the right of the symbol E-, E+, e-, or e+, the expression is formatted in the scientific or exponential format.
The letter E or e is inserted between the number and the exponent. The number of placeholders for digits to the right of the symbol determines the number of digits in the exponent.
If the exponent is negative, a minus sign is displayed directly before an exponent with E-, E+, e-, e+. If the exponent is positive, a plus sign is only displayed before exponents with E+ or e+.
The thousands delimiter is displayed if the format code contains the delimiter enclosed by digit placeholders (0 or #).
The use of a period as a thousands and decimal separator is dependent on the regional setting. When you enter a number directly in Basic source code, always use a period as decimal delimiter. The
actual character displayed as a decimal separator depends on the number format in your system settings.
- + $ ( ) space: A plus (+), minus (-), dollar ($), space, or brackets entered directly in the format code are displayed as literal characters.
To display characters other than the ones listed here, you must precede them with a backslash (\), or enclose them in quotation marks (" ").
\ : The backslash displays the next character in the format code.
Characters in the format code that have a special meaning can only be displayed as literal characters if they are preceded by a backslash. The backslash itself is not displayed, unless you enter a
double backslash (\\) in the format code.
Characters that must be preceded by a backslash in the format code in order to be displayed as literal characters are date- and time-formatting characters (a, c, d, h, m, n, p, q, s, t, w, y, /, :),
numeric-formatting characters (#, 0, %, E, e, comma, period), and string-formatting characters (@, &, <, >, !).
You can also use the following predefined number formats. Except for "General Number", all of the predefined format codes return the number as a decimal number with two decimal places.
If you use predefined formats, the name of the format must be enclosed in quotation marks.
Predefined format
General Number: Numbers are displayed as entered.
Currency: Inserts a dollar sign in front of the number and encloses negative numbers in brackets.
Fixed: Displays at least one digit in front of the decimal separator.
Standard: Displays numbers with a thousands separator.
Percent: Multiplies the number by 100 and appends a percent sign to the number.
Scientific: Displays numbers in scientific format (for example, 1.00E+03 for 1000).
A format code can be divided into three sections that are separated by semicolons. The first part defines the format for positive values, the second part for negative values, and the third part for
zero. If you only specify one format code, it applies to all numbers.
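As a sketch of the three-section form (the format string and values are illustrative; the commented results assume an English locale):

```basic
Sub ExampleThreePartFormat
    ' Sections: positive ; negative ; zero
    MsgBox Format(6328.2,  "#,##0.00;(#,##0.00);""zero""") ' 6,328.20
    MsgBox Format(-6328.2, "#,##0.00;(#,##0.00);""zero""") ' (6,328.20)
    MsgBox Format(0,       "#,##0.00;(#,##0.00);""zero""") ' zero
End Sub
```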
Sub ExampleFormat
MsgBox Format(6328.2, "##,##0.00")
REM always use a period as decimal delimiter when you enter numbers in Basic source code.
REM displays for example 6,328.20 in English locale, 6.328,20 in German locale.
End Sub
|
{"url":"https://help.libreoffice.org/latest/mk/text/sbasic/shared/03120301.html","timestamp":"2024-11-07T12:55:30Z","content_type":"text/html","content_length":"18598","record_id":"<urn:uuid:ceb3cb38-b9fd-4e9d-ad11-4fba2f7c69b6>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00768.warc.gz"}
|
Square Root of 6 | Methods Find the Square Root of 6 - Wiingy
The square root of 6 is approximately 2.449. The square root of a number is the value that, when multiplied by itself, gives back that number.
The number 6 is a positive real number, and its square root is written as √6.
Looking to Learn Math? Book a Free Trial Lesson and match with top Math Tutors for concepts, homework help, and test prep.
Method to find the square root of 6
To find the square root of a number without using a calculator, first check whether the number is a perfect square. For a perfect square, the square root can be found using the prime factorization
method, or by repeatedly subtracting consecutive odd numbers (1, 3, 5, …) until reaching zero.
Since 6 is not a perfect square number we follow long division to find the square root of 6.
Prime factorization method
The prime factors of 6 = 2×3
Since neither factor is a perfect square, take the square root of both sides:
√6 = √2 × √3
√6 ≈ 1.414 × 1.732
√6 ≈ 2.449
Long division method
• Step 1: Find the largest integer whose square is less than or equal to the number.
□ For 6, that integer is 2, since 2 × 2 = 4 is the closest square below 6.
• Step 2: Subtract that square, bring down pairs of zeros, and keep following the long division using the divisor and dividend.
• Step 3: When the desired number of decimal places is reached, the quotient is the square root of the number.
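By hand this is tedious; as an alternative sketch (not one of the methods listed here), Newton's method converges to the same value:

```python
def newton_sqrt(n, tolerance=1e-10):
    """Approximate the square root of n by Newton's method."""
    x = n  # initial guess
    while abs(x * x - n) > tolerance:
        x = (x + n / x) / 2  # average x with n/x to improve the guess
    return x

print(round(newton_sqrt(6), 3))  # → 2.449
```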
Solved Example
Example 1: Find the square root of 6 using the prime factorization method
The prime factors of 6 = 2×3
Taking the square root of both sides:
√6 = √2*√3
√6 ≈ 1.414 × 1.732
√6 ≈ 2.449
Example 2: Sonu has to find the square root of 6 using the long division method, help him
Using the long division method as outlined above, √6 ≈ 2.449.
Example 3: Find the square root of 2.
The square root of 2 is √2 = 1.4142
Example 4: Find the square root of 100.
The square root of 100 is √100 = 10
Example 5: what is the square root of 169?
169 = 13 × 13
169 = 13^2
Hence, the square root of 169 is √169 = 13
Frequently asked questions
What is the √6?
√6 ≈ 2.449
Which method is used to find the √6?
The standard method that is used to find the square root of any number is the long division method.
How to write the square root of 6 in exponential form?
6^(1/2) or 6^0.5 is the exponential way to write the square root of 6.
How to write the square root of 6 in radical form?
The radical form of the square root of 6 is √6.
Can I find the square root of 6 using other methods?
The exact square root of 6 cannot be found using the prime factorization method or repeated subtraction, because 6 is not a perfect square; the long division method (or an approximation) must be used instead.
Is √6 an irrational number?
Yes, √6 is an irrational number, since it cannot be written as a ratio of two integers and its decimal expansion neither terminates nor repeats.
What are the methods to find the square root of a number?
There are three methods to find the square root of a number
1. Prime factorization method
2. Long division method
3. Repeated subtraction
|
{"url":"https://wiingy.com/learn/math/square-root-of-6/","timestamp":"2024-11-14T20:43:12Z","content_type":"text/html","content_length":"213167","record_id":"<urn:uuid:0ac8cd42-9bc8-4264-860c-4d1b69438613>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00767.warc.gz"}
|
Linear Regression - NBAstuffer
Linear regression is the most basic statistical method for identifying the relationship between two or more variables: it examines the ability of an independent variable to influence the
dependent variable. It is common practice in sports analytics to determine how metrics correlate with outcomes.
2-Variable regression
The formula y = β[1]x + C tells how the dependent variable (y) changes as the predictor variable (x) changes.
Y is the dependent variable a.k.a. criterion a.k.a. response variable
β[1] is the regression coefficient, which is the slope of the regression line
X is the independent variable, a.k.a. predictor, a.k.a. explanatory
C is the constant term, a.k.a. the intercept (the random error is a separate term, usually written ε)
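A minimal sketch of fitting the 2-variable model by ordinary least squares (the data points are made up for illustration):

```python
def fit_line(xs, ys):
    """Ordinary least squares fit for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x); the line passes through the means
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
        (x - mean_x) ** 2 for x in xs
    )
    intercept = mean_y - slope * mean_x
    return slope, intercept

# e.g. a team metric (x) against a game outcome measure (y)
slope, intercept = fit_line([1, 2, 3, 4], [3.1, 4.9, 7.2, 8.8])
```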
Multiple variable regression
Multivariable linear regression can be used to evaluate the ranking metrics where each predictor’s coefficient controls for the effect of the other metrics, and its statistical significance in the
overall model is evaluated.
The formula y = β[1]x[1] + β[2]x[2] + … + β[n]x[n] + C tells how the dependent variable (y) changes as the predictor variables (x[1]), (x[2]), …, (x[n]) change.
What happens if the independent variables have linear relationships with each other?
This is called multicollinearity. Perfect multicollinearity makes the matrix computation behind least squares break down (a division by zero), while near-multicollinearity leaves the least
squares estimates unbiased but inflates their variances. So, determine whether multicollinearity exists before doing regression analysis. In detected cases of multicollinearity, ridge regression
comes in: a technique that adds a degree of bias to the regression estimates in exchange for reduced standard errors.
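For a single centered predictor the ridge estimate has a simple closed form; a sketch (the penalty parameter name lam is illustrative, and lam = 0 recovers ordinary least squares):

```python
def fit_ridge_slope(xs, ys, lam=1.0):
    """Ridge estimate of the slope for centered data: y ≈ slope * x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    # The penalty lam shrinks the slope toward zero: a little bias, less variance.
    return sxy / (sxx + lam)
```

Increasing lam shrinks the estimate toward zero, which is exactly the bias-for-variance trade described above.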
|
{"url":"https://www.nbastuffer.com/analytics101/linear-regression/","timestamp":"2024-11-12T18:49:44Z","content_type":"text/html","content_length":"179217","record_id":"<urn:uuid:d74f7efd-274a-4195-9ee2-80273c0d03ce>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00868.warc.gz"}
|
The Fixed Point
Getting To The (Fixed) Point
Haskell offers ample opportunities for ah-ha! moments, where figuring out just how some function or feature works can unlock a whole new way of thinking about how you write programs. One great
example of an ah-ha! moment comes when you first start to understand fixed points: why you might want to use them, and how exactly they work in haskell. In this post, you'll work through the
fixed point function in haskell, building several examples along the way. At the end of the post you'll come away with a deeper understanding of recursion and of how haskell's lazy evaluation changes
the way you can think about writing programs.
If you already have some experience with haskell, you may want to skip the first section and jump directly into learning about fix.
Update: 2021-06-13: A reader has submitted a PR to fix a few typos.
A Quick Look at Recursion in Haskell
A recursive function is a function that refers back to itself. There are different ways that you can accomplish recursion, and throughout this article we’ll look at several of them. We’ll start by
defining a couple of new terms to help us differentiate some particular aspects of recursion that will matter as we’re exploring fixed points: manual recursion, automatic recursion, and recursive
bindings. Throughout this section of the article we’ll spend some time with each of these types of recursion, building up some examples and working our way towards a better understanding of how they
let us think about the general nature of recursion and how it relates to fixed points.
Manual Recursion
We’ll start our investigation of recursion by thinking about manual recursion. When you are first learning haskell, manual recursion is probably the thing you think about when you think about the
word recursion. We call it manual recursion because it occurs when you, the programmer, directly make a recursive call back to the function you are currently writing.
Let’s look at a classic example of recursion written in a directly recursive style. We’ll start by writing a factorial function. If you’re not familiar with factorial, it’s a function that when given
0, returns 1. When given a number greater than 0, n, it gives you the result of multiplying all of the numbers from 1 to n. For example:
factorial 5 = 1 * 2 * 3 * 4 * 5 = 120
To make things a bit easier on ourselves in the next step, let’s think about the factorial function as counting down from our input number, instead of counting up toward it, so we can say:
factorial 5 = 5 * 4 * 3 * 2 * 1 = 120
The first thing we need to do is to think about how we can take our factorial function and break it down into increasingly smaller pieces that have the same shape as our overall function. Whenever
we’re writing a recursive function, it helps to start by looking at how we can reframe the problem in terms of something getting smaller.
One way to see how we can break factorial down into smaller pieces is to notice that:
factorial 5 = 5 * 4 * 3 * 2 * 1
factorial 4 = 4 * 3 * 2 * 1
So we can rewrite factorial 5 to say:
factorial 5 = 5 * (4 * 3 * 2 * 1)
= 5 * (factorial 4)
If we go one more step, we can see that:
factorial 4 = 4 * (3 * 2 * 1)
factorial 3 = (3 * 2 * 1)
factorial 4 = 4 * (factorial 3)
From these examples you can start to see the shape of the recursive function that we’ll be writing, and how the sub-problem that we are solving at each step gets a little bit smaller.
The next thing we want to think about before we write a recursive function is when we should stop. This is called the base case. For our factorial function, the base case is the smallest number that
we can calculate a factorial for, which is 0 and that’s given to us by the definition of factorial.
With that information in hand, we can write our factorial function using direct recursion:
factorial :: Integer -> Integer
factorial n =
  case n of
    0 -> 1
    n -> n * factorial (n - 1)
Automatic Recursion
Unlike manual recursion, where we can see the recursive structure of our function by looking for the place in our code where a function calls itself, a function that is using automatic recursion does
so indirectly, using a function that manages the recursion for us automatically.
One example of automatic recursion that you’re likely familiar with are the fold functions: foldl and foldr. These two functions, and others like them, allow you to work on data that can naturally be
traversed recursively while only having to implement the code to deal with the current element and any state that you want to carry over across calls.
We can use a function like foldr to write a factorial by letting it do the recursion for us:
factorial :: Integer -> Integer
factorial n =
  let
    handleStep currentNum currentFact =
      currentNum * currentFact
  in foldr handleStep 1 [1..n]
Even if you’ve used foldr before, it will be helpful as we’re framing the problem to build a version of it ourselves, so that we can think through how these sorts of recursive functions work.
foldr :: (a -> b -> b) -> b -> [a] -> b
foldr f accum items =
  case items of
    [] ->
      accum
    (next:rest) ->
      f next (foldr f accum rest)
Looking at this function, you can see the same shape: the case statement, the base case with an empty list, and the recursive call that each time moves one element forward in the
list. Our implementation of foldr is more generic to be sure- we've replaced the knowledge that factorial 0 is 1 with a more general statement that the value of our fold at our base case is the
initial accumulator value that was provided, and now instead of doing multiplication directly in our recursive call we hand off the details to a function that is passed in- but if you squint a bit
you can see how similar the two functions really are.
Using functions like folds that deal with the shape of the data and handle the recursion for us has a number of benefits. First, it removes some unnecessary duplication of code. Traversing data
structures and doing something on all of the elements is quite common in functional programming, and if we were to implement it from scratch each time it would take us much longer to write programs,
and there are many more opportunities for errors. Second, it makes our code a bit more readable by letting us center the “business logic” of our function. In most cases, the fact that our data is
represented as a list, a binary tree, etc. is incidental to the problem at hand. By separating out the logic for dealing with individual elements from the logic for traversing data structures, we
center the relevant bits of our code. Finally, and perhaps most importantly, functions like folds give us a common language for talking about the structure of our programs. For someone who has been
programming for some time, saying that something is “simply a fold over some data” can convey a good deal of information about the general idea of how a program is implemented without the need to bog
them down in too many extraneous details.
Recursive Let Bindings
The final type of recursion we’ll look at in this first section is not so much a specific technique to do recursion as it is a feature of haskell that allows you to use manual recursion more easily:
recursive let bindings.
Haskell’s recursive let bindings mean that you can use recursion inside of a let expression in your function. A simple example of this would be, continuing with our factorial example, a function that
computers the double factorial, that is to say, the factorial of the factorial of an input number:
-- Note: This function grows very quickly.
-- doubleFactorial 5 is a 199-digit number
-- doubleFactorial 8 is a 168187-digit number
doubleFactorial :: Integer -> Integer
doubleFactorial n =
  let
    factorial a =
      case a of
        0 -> 1
        a -> a * factorial (a - 1)
  in factorial (factorial n)
The fix function, defined in Data.Function in base, gives us another way to approach recursion in haskell. Let's start by taking a look at the documentation for fix:
Fix By The Docs
For ease of readability, the documentation for fix is reproduced below:
fix f is the least fixed point of the function f, i.e. the least defined x such that f x = x.
For example, we can write the factorial function using direct recursion as
>>> let fac n = if n <= 1 then 1 else n * fac (n-1) in fac 5
120
This uses the fact that Haskell's let introduces recursive bindings. We can rewrite this definition using 'fix',
>>> fix (\rec n -> if n <= 1 then 1 else n * rec (n-1)) 5
120
Instead of making a recursive call, we introduce a dummy parameter rec; when used within fix, this parameter then refers to fix's argument, hence the recursion is reintroduced.
fix :: (a -> a) -> a
Untangling the Type of fix
Whenever we want to understand something new in haskell, a good first instinct is to start by looking at the types, as this tells us quite a bit about what a function can, and often more importantly
can’t do.
The type of fix :: (a -> a) -> a tells us that it's going to take a function, and return a value. For the sake of discussion, let's give the function that we pass into fix the name g. So, g :: a -> a
and fix g :: a.
At first look, this might not look all that difficult at all. fix just needs to call g with a value to get a value back out that it can return. We can imagine any number of similar functions that
would work for some specific type, an Int:
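A sketch of one such specialized function; this reconstructs the applyZero referenced below, under the assumption that it simply applies its argument to 0:

```haskell
applyZero :: (Int -> Int) -> Int
applyZero g = g 0  -- with a concrete type, we can just pick a value to pass in
```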
Similarly, we can think of any number of candidates for g that we could pass in and get a good result back out:
Unfortunately, this relies on the fact that applyZero can pick some number to pass in. It can do that because we know that it’s working with Int values, so we can pick an Int value to pass into it.
fix doesn’t have things so easy- since a could be anything there’s no value we can pick to pass into g to get back a value.
We can see this play out if we try to pass some function, like (+1), into fix: it will never give us back a value, because it can't. You can try it yourself in ghci. When you are satisfied that you
won't get back a value, you can press control+c to cancel the computation.
The trick to fix is that, sometimes, it can give back a result. It can do that when the final value that you get back out doesn’t depend on any particular input value. For example, if we use the
const function, which ignores any argument passed into it and just returns a value, then we can get a result from fix:
Ignoring the question of how this could possibly work, it makes sense. The definition of a fixed point of a function is that it’s a value that, when passed into a function, causes the function to
return that same value. This is exactly what const does- ignores its input and returns some value:
λ :t const
const :: a -> b -> a
λ :t const "foo"
const "foo" :: b -> [Char]
λ f = const "foo"
λ f 1
"foo"
This means that whatever value we pass into const will be the fixed point of the function that it returns.
Outside of the mathematical definition of a fixed point, the behavior of fix also makes sense if we think about it in terms of laziness and computability. We've already noted that because fix is
polymorphic, fix itself can't ever get a value to pass into the function it's trying to find the fixed point of. In a strictly evaluated language, that would be a problem, but thanks to haskell's
laziness, "a value we can't ever actually compute" is still something that we can work with.
In the case of fix, the parameter that it passes into its function might be a value that we can't ever actually compute, but it turns out that that's perfectly okay so long as we never try to
compute it. In other words, if the function we pass in is lazy in its argument, then we never try to run the impossible calculation of creating a value, and so everything works out.
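Another way to see laziness doing the work: a function like (1:) is lazy in its argument, so fix can tie it into an infinite list that we only ever partially evaluate:

```haskell
import Data.Function (fix)

ones :: [Int]
ones = fix (1:)  -- equivalent to the recursive binding: ones = 1 : ones

firstFive :: [Int]
firstFive = take 5 ones  -- forcing only a finite prefix terminates
```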
The Two-Argument Conundrum
Now that you understand how fix can take advantage of laziness to work at all, there's another aspect to fix that might trip you up when reading through the documentation. Recall that the type of fix is
fix :: (a -> a) -> a, but the documentation passes in a factorial function that takes two arguments:
We can factor the function out from this example and give it a name, and confirm that it does, in fact, take two arguments: rec, a function with type (p -> p), and n, a value of type p.
λ factorial = \rec n -> if n <= 1 then 1 else n * rec (n-1)
λ :t factorial
factorial :: (Ord p, Num p) => (p -> p) -> p -> p
So that it’s a bit easier for us to talk about, let’s pick some specific type to use as we’re thinking about this. For no particular reason, let’s use Int, so we can let factorial have the type:
There are two things that we need to remember to be able to put together how this works. The first is that it can sometimes be quite helpful for us to stop and remember that haskell functions are
curried, and to think through what our type signatures really mean when we look at them.
We might naturally read the type (Int -> Int) -> Int -> Int as a function that takes a function from an Int to an Int, and an Int, returning an Int. Most of the time we can get by just fine when we
read our function types this way, but every once in a while it can throw us for a loop.
Since haskell functions are curried, we can rewrite a function like:
factorial :: (Int -> Int) -> Int -> Int
factorial rec n = if n <= 1 then 1 else n * rec (n-1)
Into one that takes a single argument and returns a new function:
factorial :: (Int -> Int) -> Int -> Int
factorial = \rec -> \n -> if n <= 1 then 1 else n * rec (n-1)
When we rewrite it that way, we might naturally want to describe the function as: a function that takes a function from an Int to an Int, and returns a function from an Int to an Int. We can rewrite
our type signature to reflect this restatement of our function so that it reads:
factorial :: (Int -> Int) -> (Int -> Int)
factorial = \rec -> \n -> if n <= 1 then 1 else n * rec (n-1)
Looking at this rewritten type signature now, we can start to see the second important thing that we need to keep in mind. When we're dealing with polymorphic functions that take an a, the a could be
anything, including a function. If we replace the a type parameters with (Int -> Int) then the type of fix would become:
fix :: ((Int -> Int) -> (Int -> Int)) -> (Int -> Int)
Or, if we let ghci render the type for us without any unnecessary parentheses:
fix :: ((Int -> Int) -> Int -> Int) -> Int -> Int
In the next section we’ll take a look at how fix is actually implemented. Once you’ve had a chance to see the implementation, we’ll come back to both the type of fix and how it works with laziness
and put all of that knowledge together into a more cohesive understanding of how it actually works.
Implementing fix
For all of the discussion about how fix works, its implementation is remarkably short. Whenever we find ourselves facing something completely unknown in haskell, we can start by looking at the types,
and the next step is often to read the source code. The source code for fix is available on hackage, and it's quite short:
fix :: (a -> a) -> a
fix f = let x = f x in x
Let’s walk through what’s happening here and see if we can get a handle on it. We start with a parameter, f, which is whatever function we want to find the fixed point for.
Next, we create a recursive let binding where we define x to be the result of applying f to x. This recursive let binding is the magic behind how the fixed point calculation works.
When we first call fix and create the let binding where we define x, we know that it has to have the type a, and a value that, when it’s needed, will be computed by the expression f x.
The x in that computation, likewise, isn't a value yet. It's a thunk that, if it is evaluated, will be computed by calling f x. In other words, we start with:
x = f x
If whoever calls this function decides they need the value of x, then they'll get:
x = f (f x)
If f is a function like const that always returns a value without ever looking at its input value, then x will get set to that value and can be evaluated without any issues at all.
On the other hand, if f does need to evaluate x, like when we tried to pass in (+1), we’ll end up with a computation that can never complete, because each time we try to look at x we’ll get back
another layer of some unevaluated thunk. On the surface, this might seem to be a bit limited. After all, if we need to pass in a function that always returns a value and never looks at its input,
we’re limited to permutations of const and not much else, unless we can get some data to work with from somewhere else…
Tying The Knot
The fix function doesn’t require a function that never evaluates its argument in order to eventually give us back a value. Instead, we need to give it a function that eventually doesn’t evaluate its
argument. The one-word difference here between never and eventually is the difference between a computation that terminates and is well-defined, and one that is undefined. This is where passing a
function of two parameters into fix comes into play. When we have a function like (Int -> Int) there’s no option except for the input value that we’re given to decide when to terminate, so we always
have to evaluate it. On the other hand, a function with the type (Int -> Int) -> (Int -> Int) has much more flexibility. To see how, let’s go back to our definition of factorial:
In this factorial function, we’re taking a parameter, rec :: Int -> Int, but we only ever evaluate it if n is greater than 1. Since n decreases with each step, we know that it will eventually reach 1
(assuming we started with a positive number), and so we know that rec will eventually not be evaluated, and we can return a good value.
When we look at this deeply we can see that this is actually a really interesting approach- we’re taking advantage of laziness so that we can return a function that only causes a value in its closure
to be evaluated when the input to the returned function is sufficiently high. It’s almost like we’re passing information backwards through time, but in fact we’re simply making use of the behavior of
lazy evaluation and the call stack to propagate information back and eventually resolve some thunks that have been hanging out patiently waiting around for us to allow them to be computed.
As a final exercise, let’s walk through the example step by step to get a much better idea of what’s happening when we make use of fix.
Fixing The Factorial
We’ll start our manual evaluation by defining two functions:
factorial' :: (Int -> Int) -> (Int -> Int)
factorial' rec = \n -> if n <= 1 then 1 else n * rec (n-1)
factorial :: Int -> Int
factorial = fix factorial'
In ghci we’ll start by calling factorial with 5:
We can expand this to:
fix factorial' $ 5
And that, in turn, becomes:
let x = factorial' x in x $ 5
If we apply this function to 5, and replace n with 5 we end up with:
Following the pattern until we get to our base case, we have:
let x = (\rec 5 ->
    if 5 <= 1 then 1 else 5 * rec (5 - 1)
    ) $ (\rec' 4 ->
    if 4 <= 1 then 1 else 4 * rec' (4 - 1)
    ) $ (\rec'' 3 ->
    if 3 <= 1 then 1 else 3 * rec'' (3 - 1)
    ) $ (\rec''' 2 ->
    if 2 <= 1 then 1 else 2 * rec''' (2 - 1)
    ) $ (\_rec 1 ->
    if 1 <= 1 then 1 else {- never evaluated -}
    )
in x $ 5
Once we finally hit the case where n == 1 and we stop evaluating rec we can start to resolve the stack of calls in reverse order, so rec''' becomes 1 and we get:
let x = (\rec 5 ->
    if 5 <= 1 then 1 else 5 * rec (5 - 1)
    ) $ (\rec' 4 ->
    if 4 <= 1 then 1 else 4 * rec' (4 - 1)
    ) $ (\rec'' 3 ->
    if 3 <= 1 then 1 else 3 * rec'' (3 - 1)
    ) $ (\rec''' 2 ->
    if 2 <= 1 then 1 else 2 * 1
    )
in x $ 5
Which becomes:
let x = (\rec 5 ->
    if 5 <= 1 then 1 else 5 * rec (5 - 1)
    ) $ (\rec' 4 ->
    if 4 <= 1 then 1 else 4 * rec' (4 - 1)
    ) $ (\rec'' 3 ->
    if 3 <= 1 then 1 else 3 * 2
    )
in x $ 5
And so on, until we finally get to our answer: 120.
In this post you’ve learned how the fix function from Data.Function relies on important features of haskell, like laziness and recursive let bindings, to provide us with a way of doing automatic
recursion without having to ever directly make a recursive call. By understanding how haskell’s type system, currying, and lazy evaluation work together, and taking time to sympathize with the
compiler and better understand how expressions are evaluated, you can start to see precisely how some of the more interesting, and at first more intimidating, areas of haskell work.
|
{"url":"https://rebeccaskinner.net/posts/2021-06-09-getting-to-the-fixed-point.html","timestamp":"2024-11-15T00:55:46Z","content_type":"text/html","content_length":"60430","record_id":"<urn:uuid:bf8afabe-47b4-4af3-9d26-4cc1562b46c6>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00427.warc.gz"}
|
Modern Quantum Mechanics - SILO.PUB
Modern Quantum Mechanics J. J. Sakurai
Revised Edition
Modern Quantum Mechanics Revised Edition J. J. Sakurai Late, University of California, Los Angeles
San Fu Tuan, Editor University of Hawaii, Manoa
Addison-Wesley Publishing Company Reading, Massachusetts • Menlo Park, California • New York Don Mills, Ontario • Wokingham, England • Amsterdam • Bonn Sydney • Singapore • Tokyo • Madrid • San Juan
• Milan • Paris
Sponsoring Editor: Stuart W. Johnson Assistant Editor: Jennifer Duggan Senior Production Coordinator: Amy Willcutt Manufacturing Manager: Roy Logan
Library of Congress Cataloging-in-Publication Data Sakurai, J. J. (Jun John), 1933-1982. Modern quantum mechanics / J. J. Sakurai ; San Fu Tuan, editor. — Rev. ed. p. cm. Includes bibliographical
references and index. ISBN 0-201-53929-2 1. Quantum theory. I. Tuan, San Fu, 1932- . II. Title. QC174.12.S25 1994 530.1'2—dc20 93-17803 CIP
Copyright © 1994 by Addison-Wesley Publishing Company, Inc. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any
means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the publisher. Printed in the United States of America.
Foreword J. J. Sakurai was always a very welcome guest here at CERN, for he was one of those rare theorists to whom the experimental facts are even more interesting than the theoretical game itself.
Nevertheless, he delighted in theoretical physics and in its teaching, a subject on which he held strong opinions. He thought that much theoretical physics teaching was both too narrow and too remote
from application: "...we see a number of sophisticated, yet uneducated, theoreticians who are conversant in the LSZ formalism of the Heisenberg field operators, but do not know why an excited atom
radiates, or are ignorant of the quantum theoretic derivation of Rayleigh's law that accounts for the blueness of the sky." And he insisted that the student must be able to use what has been taught:
"The reader who has read the book but cannot do the exercises has learned nothing." He put these principles to work in his fine book Advanced Quantum Mechanics (1967) and in Invariance Principles and
Elementary Particles (1964), both of which have been very much used in the CERN library. This new book, Modern Quantum Mechanics, should be used even more, by a larger and less specialized group. The
book combines breadth of interest with a thorough practicality. Its readers will find here what they need to know, with a sustained and successful effort to make it intelligible. J. J. Sakurai's
sudden death on November 1, 1982 left this book unfinished. Reinhold Bertlmann and I helped Mrs. Sakurai sort out her husband's papers at CERN. Among them we found a rough, handwritten version of
most of the book and a large collection of exercises. Though only three chapters had been completely finished, it was clear that the bulk of the creative work had been done. It was also clear that
much work remained to fill in gaps, polish the writing, and put the manuscript in order. That the book is now finished is due to the determination of Noriko Sakurai and the dedication of San Fu Tuan.
Upon her husband's death, Mrs. Sakurai resolved immediately that his last effort should not go to waste. With great courage and dignity she became the driving force behind the project, overcoming all
obstacles and setting the high standards to be maintained. San Fu Tuan willingly gave his time and energy to the editing and completion of Sakurai's work. Perhaps only others close to the hectic
field of high-energy theoretical physics can fully appreciate the sacrifice involved. For me personally, J. J. had long been far more than just a particularly distinguished colleague. It saddens me
that we will never again laugh together at physics and physicists and life in general, and that he will not see the success of his last work. But I am happy that it has been brought to fruition.

John S. Bell
CERN, Geneva
Preface to the Revised Edition Since 1989 the Editor has enthusiastically pursued a revised edition of Modern Quantum Mechanics by his late great friend J. J. Sakurai, in order to extend this
text's usefulness into the twenty-first century. Much consultation took place with the panel of Sakurai friends who helped with the original edition, but in particular with Professor Yasuo Hara of
Tsukuba University and Professor Akio Sakurai of Kyoto Sangyo University in Japan. The major motivation for this project is to revise the main text. There are three important additions and/or changes
to the revised edition, which otherwise preserves the original version unchanged. These include a reworking of certain portions of Section 5.2 on time-independent perturbation theory for the
degenerate case by Professor Kenneth Johnson of M.I.T., taking into account a subtle point that has not been properly treated by a number of texts on quantum mechanics in this country. Professor
Roger Newton of Indiana University contributed refinements on lifetime broadening in Stark effect, additional explanations of phase shifts at resonances, the optical theorem, and on non-normalizable
states. These appear as "remarks by the editor" or "editor's note" in the revised edition. Professor Thomas Fulton of the Johns Hopkins University reworked his Coulomb Scattering contribution (Section
7.13) so that it now appears as a shorter text portion emphasizing the physics, with the mathematical details relegated to Appendix C. Though not a major part of the text, some additions were deemed
necessary to take into account developments in quantum mechanics that have become prominent since November 1, 1982. To this end, two supplements are included at the end of the text. Supplement I is
on adiabatic change and geometrical phase (popularized by M. V. Berry since 1983) and is actually an English translation of the supplement on this subject written by Professor Akio Sakurai for the
Japanese version of Modern Quantum Mechanics (copyright © Yoshioka-Shoten Publishing of Kyoto). Supplement II is on non-exponential decays written by my colleague here, Professor Xerxes Tata, and
read over by Professor E. C. G. Sudarshan of the University of Texas at Austin. Though non-exponential decays have a long history theoretically, experimental work on transition rates that tests
indirectly such decays was done only in 1990. Introduction of additional material is of course a subjective matter on the part of the Editor; the readers will evaluate for themselves its
appropriateness. Thanks to Professor Akio Sakurai, the revised edition has been "finely toothcombed" for misprint errors of the first ten printings of the original edition. My colleague, Professor
Sandip Pakvasa, provided overall guidance and encouragement to me throughout this process of revision.
In addition to the acknowledgments above, my former students Li Ping, Shi Xiaohong, and Yasunaga Suzuki provided the sounding board for ideas on the revised edition when taking my graduate quantum
mechanics course at the University of Hawaii during the spring of 1992. Suzuki provided the initial translation from Japanese of Supplement I as a course term paper. Dr. Andy Acker provided me with
computer graphic assistance. The Department of Physics and Astronomy and particularly the High Energy Physics Group of the University of Hawaii at Manoa provided again both the facilities and a
conducive atmosphere for me to carry out my editorial task. Finally I wish to express my gratitude to Physics (and sponsoring) Senior Editor, Stuart Johnson, and his Editorial Assistant, Jennifer
Duggan, as well as Senior Production Coordinator Amy Willcutt, of Addison-Wesley for their encouragement and optimism that the revised edition will indeed materialize.

San Fu Tuan
Honolulu, Hawaii
J. J. Sakurai 1933-1982
In Memoriam Jun John Sakurai was born in 1933 in Tokyo and came to the United States as a high school student in 1949. He studied at Harvard and at Cornell, where he received his Ph.D. in 1958. He
was then appointed assistant professor of Physics at the University of Chicago, and became a full professor in 1964. He stayed at Chicago until 1970 when he moved to the University of California at
Los Angeles, where he remained until his death. During his lifetime he wrote 119 articles in theoretical physics of elementary particles as well as several books and monographs on both quantum and
particle theory. The discipline of theoretical physics has as its principal aim the formulation of theoretical descriptions of the physical world that are at once concise and comprehensive. Because
nature is subtle and complex, the pursuit of theoretical physics requires bold and enthusiastic ventures to the frontiers of newly discovered phenomena. This is an area in which Sakurai reigned
supreme with his uncanny physical insight and intuition and also his ability to explain these phenomena in illuminating physical terms to the unsophisticated. One has but to read his very lucid
textbooks on Invariance Principles and Elementary Particles and Advanced Quantum Mechanics as well as his reviews and summer school lectures to appreciate this. Without exaggeration I could say that
much of what I did understand in particle physics came from these and from his articles and private tutoring. When Sakurai was still a graduate student, he proposed what is now known as the V-A
theory of weak interactions, independently of (and simultaneously with) Richard Feynman, Murray Gell-Mann, Robert Marshak, and George Sudarshan. In 1960 he published in Annals of Physics a prophetic
paper, probably his single most important one. It was concerned with the first serious attempt to construct a theory of strong interactions based on Abelian and non-Abelian (Yang-Mills) gauge
invariance. This seminal work induced theorists to attempt an understanding of the mechanisms of mass generation for gauge (vector) fields, now realized as the Higgs mechanism. Above all it
stimulated the search for a realistic unification of forces under the gauge principle, now crowned with success in the celebrated Glashow-Weinberg-Salam unification of weak and electromagnetic
forces. On the phenomenological side, Sakurai pursued and vigorously advocated the vector-meson dominance model of hadron dynamics. He was the first to discuss the mixing of ω and φ meson states.
Indeed, he made numerous important contributions to particle physics phenomenology in a
much more general sense, as his heart was always close to experimental activities. I knew Jun John for more than 25 years, and I had the greatest admiration not only for his immense powers as a
theoretical physicist but also for the warmth and generosity of his spirit. Though a graduate student himself at Cornell during 1957-1958, he took time from his own pioneering research in K-nucleon
dispersion relations to help me (via extensive correspondence) with my Ph.D. thesis on the same subject at Berkeley. Both Sandip Pakvasa and I were privileged to be associated with one of his last
papers on weak couplings of heavy quarks, which displayed once more his infectious and intuitive style of doing physics. It is of course gratifying to us in retrospect that Jun John counted this
paper among the score of his published works that he particularly enjoyed. The physics community suffered a great loss at Jun John Sakurai's death. The personal sense of loss is a severe one for me.
Hence I am profoundly thankful for the opportunity to edit and complete his manuscript on Modern Quantum Mechanics for publication. In my faith no greater gift can be given me than an opportunity to
show my respect and love for Jun John through meaningful service. San Fu Tuan
Contents

Foreword
Preface
In Memoriam

1 FUNDAMENTAL CONCEPTS
1.1 The Stern-Gerlach Experiment
1.2 Kets, Bras, and Operators
1.3 Base Kets and Matrix Representations
1.4 Measurements, Observables, and the Uncertainty Relations
1.5 Change of Basis
1.6 Position, Momentum, and Translation
1.7 Wave Functions in Position and Momentum Space
Problems

2 QUANTUM DYNAMICS
2.1 Time Evolution and the Schrödinger Equation
2.2 The Schrödinger Versus the Heisenberg Picture
2.3 Simple Harmonic Oscillator
2.4 Schrödinger's Wave Equation
2.5 Propagators and Feynman Path Integrals
2.6 Potentials and Gauge Transformations
Problems

3 THEORY OF ANGULAR MOMENTUM
3.1 Rotations and Angular Momentum Commutation Relations
3.2 Spin 1/2 Systems and Finite Rotations
3.3 SO(3), SU(2), and Euler Rotations
3.4 Density Operators and Pure Versus Mixed Ensembles
3.5 Eigenvalues and Eigenstates of Angular Momentum
3.6 Orbital Angular Momentum
3.7 Addition of Angular Momenta
3.8 Schwinger's Oscillator Model of Angular Momentum
3.9 Spin Correlation Measurements and Bell's Inequality
3.10 Tensor Operators
Problems

4 SYMMETRY IN QUANTUM MECHANICS
4.1 Symmetries, Conservation Laws, and Degeneracies
4.2 Discrete Symmetries, Parity, or Space Inversion
4.3 Lattice Translation as a Discrete Symmetry
4.4 The Time-Reversal Discrete Symmetry
Problems

5 APPROXIMATION METHODS
5.1 Time-Independent Perturbation Theory: Nondegenerate Case
5.2 Time-Independent Perturbation Theory: The Degenerate Case
5.3 Hydrogenlike Atoms: Fine Structure and the Zeeman Effect
5.4 Variational Methods
5.5 Time-Dependent Potentials: The Interaction Picture
5.6 Time-Dependent Perturbation Theory
5.7 Applications to Interactions with the Classical Radiation Field
5.8 Energy Shift and Decay Width
Problems

6 IDENTICAL PARTICLES
6.1 Permutation Symmetry
6.2 Symmetrization Postulate
6.3 Two-Electron System
6.4 The Helium Atom
6.5 Permutation Symmetry and Young Tableaux
Problems

7 SCATTERING THEORY
7.1 The Lippmann-Schwinger Equation
7.2 The Born Approximation
7.3 Optical Theorem
7.4 Eikonal Approximation
7.5 Free-Particle States: Plane Waves Versus Spherical Waves
7.6 Method of Partial Waves
7.7 Low-Energy Scattering and Bound States
7.8 Resonance Scattering
7.9 Identical Particles and Scattering
7.10 Symmetry Considerations in Scattering
7.11 Time-Dependent Formulation of Scattering
7.12 Inelastic Electron-Atom Scattering
7.13 Coulomb Scattering
Problems

Appendix A
Appendix B
Appendix C
Supplement I Adiabatic Change and Geometrical Phase
Supplement II Non-Exponential Decays
Bibliography
Index
Modern Quantum Mechanics
CHAPTER 1
Fundamental Concepts
The revolutionary change in our understanding of microscopic phenomena that took place during the first 27 years of the twentieth century is unprecedented in the history of natural sciences. Not only
did we witness severe limitations in the validity of classical physics, but we found the alternative theory that replaced the classical physical theories to be far richer in scope and far richer in
its range of applicability. The most traditional way to begin a study of quantum mechanics is to follow the historical developments—Planck's radiation law, the Einstein-Debye theory of specific heats, the Bohr atom, de Broglie's matter waves, and so forth—together with careful analyses of some key experiments such as the Compton effect, the Franck-Hertz experiment, and the Davisson-Germer-Thomson
experiment. In that way we may come to appreciate how the physicists in the first quarter of the twentieth century were forced to abandon, little by little, the cherished concepts of classical
physics and how, despite earlier false starts and wrong turns, the great masters—Heisenberg, Schrodinger, and Dirac, among others—finally succeeded in formulating quantum mechanics as we know it
today. However, we do not follow the historical approach in this book. Instead, we start with an example that illustrates, perhaps more than any other example, the inadequacy of classical concepts in
a fundamental way. We hope that by exposing the reader to a "shock treatment" at the onset, he
or she may be attuned to what we might call the "quantum-mechanical way of thinking" at a very early stage.
1.1. THE STERN-GERLACH EXPERIMENT The example we concentrate on in this section is the Stern-Gerlach experiment, originally conceived by O. Stern in 1921 and carried out in Frankfurt by him in
collaboration with W. Gerlach in 1922. This experiment illustrates in a dramatic manner the necessity for a radical departure from the concepts of classical mechanics. In the subsequent sections the
basic formalism of quantum mechanics is presented in a somewhat axiomatic manner but always with the example of the Stern-Gerlach experiment in the back of our minds. In a certain sense, a two-state
system of the Stern-Gerlach type is the least classical, most quantum-mechanical system. A solid understanding of problems involving two-state systems will turn out to be rewarding to any serious
student of quantum mechanics. It is for this reason that we refer repeatedly to two-state problems throughout this book. Description of the Experiment We now present a brief discussion of the
Stern-Gerlach experiment, which is discussed in almost any book on modern physics.* First, silver (Ag) atoms are heated in an oven. The oven has a small hole through which some of the silver atoms
escape. As shown in Figure 1.1, the beam goes through a collimator and is then subjected to an inhomogeneous magnetic field produced by a pair of pole pieces, one of which has a very sharp edge. We
must now work out the effect of the magnetic field on the silver atoms. For our purpose the following oversimplified model of the silver atom suffices. The silver atom is made up of a nucleus and 47
electrons, where 46 out of the 47 electrons can be visualized as forming a spherically symmetrical electron cloud with no net angular momentum. If we ignore the nuclear spin, which is irrelevant to
our discussion, we see that the atom as a whole does have an angular momentum, which is due solely to the spin—intrinsic as opposed to orbital—angular momentum of the single 47th (5s) electron. The 47 electrons are attached to the nucleus, which is about 2 × 10⁵ times heavier than the electron; as a result, the heavy atom as a whole possesses a magnetic moment equal to the spin magnetic moment of the 47th electron. In other words, the magnetic moment μ of the atom is
* For an elementary but enlightening discussion of the Stern-Gerlach experiment, see French and Taylor (1978, 432-38).
FIGURE 1.1. The Stern-Gerlach experiment (pole pieces producing an inhomogeneous field).

proportional to the electron spin S,

μ ∝ S,   (1.1.1)
where the precise proportionality factor turns out to be e/mec (e < 0 in this book) to an accuracy of about 0.2%. Because the interaction energy of the magnetic moment with the magnetic field is just
−μ·B, the z-component of the force experienced by the atom is given by

F_z = ∂(μ·B)/∂z ≃ μ_z ∂B_z/∂z,   (1.1.2)
where we have ignored the components of B in directions other than the z-direction. Because the atom as a whole is very heavy, we expect that the classical concept of trajectory can be legitimately
applied, a point which can be justified using the Heisenberg uncertainty principle to be derived later. With the arrangement of Figure 1.1, the μ_z > 0 (Sz < 0) atom experiences a downward force, while the μ_z < 0 (Sz > 0) atom experiences an upward force. The beam is then expected to get split according to the values of μ_z. In other words, the SG (Stern-Gerlach) apparatus "measures" the z-component of μ or, equivalently, the z-component of S up to a proportionality factor. The atoms in the oven are randomly oriented; there is no preferred direction for the orientation of μ. If the electron were like a classical spinning object, we would expect all values of μ_z to be realized between |μ| and −|μ|. This would lead us to expect a continuous bundle of beams coming out of the SG
apparatus, as shown in Figure 1.2a. Instead, what we
FIGURE 1.2. Beams from the SG apparatus; (a) is expected from classical physics, while (b) is actually observed.
experimentally observe is more like the situation in Figure 1.2b. In other words, the SG apparatus splits the original silver beam from the oven into two distinct components, a phenomenon referred to
in the early days of quantum theory as "space quantization." To the extent that |i can be identified within a proportionality factor with the electron spin S, only two possible values of the
z-component of S are observed to be possible, Sz up and Sz down, which we call Sz + and Sz −. The two possible values of Sz are multiples of some fundamental unit of angular momentum; numerically it turns out that Sz = ħ/2 and −ħ/2, where

ħ = 1.0546 × 10⁻²⁷ erg·s = 6.5822 × 10⁻¹⁶ eV·s.
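As a small editorial aside (not part of the original text), the consistency of the two quoted values of ħ can be checked with a one-line unit conversion; the erg-to-eV factor below is assumed from the standard definitions 1 erg = 10⁻⁷ J and 1 eV ≈ 1.602177 × 10⁻¹⁹ J:

```python
# Check that hbar quoted in erg*s matches the value quoted in eV*s.
hbar_erg_s = 1.0546e-27               # erg * s, as quoted in the text
erg_to_ev = 1e-7 / 1.602177e-19       # eV per erg (assumed conversion factor)
hbar_ev_s = hbar_erg_s * erg_to_ev
print(hbar_ev_s)                      # close to the quoted 6.5822e-16 eV * s
```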
This "quantization" of the electron spin angular momentum is the first important feature we deduce from the Stern-Gerlach experiment. Of course, there is nothing sacred about the up-down direction or
the z-axis. We could just as well have applied an inhomogeneous field in a horizontal direction, say in the x-direction, with the beam proceeding in the y-direction. In this manner we could have
separated the beam from the oven into an Sx + component and an Sx - component. Sequential Stern-Gerlach Experiments Let us now consider a sequential Stern-Gerlach experiment. By this we mean that the
atomic beam goes through two or more SG apparatuses in sequence. The first arrangement we consider is relatively straightforward. We subject the beam coming out of the oven to the arrangement shown
in Figure 1.3a, where SGz stands for an apparatus with the inhomogeneous magnetic field in the z-direction, as usual.

FIGURE 1.3. Sequential Stern-Gerlach experiments.

We then block the Sz − component coming out of the first SGz apparatus and let the remaining Sz + component be subjected to another SGz apparatus. This time there is only one beam component coming out of the second
apparatus—just the Sz + component. This is perhaps not so surprising; after all if the atom spins are up, they are expected to remain so, short of any external field that rotates the spins between
the first and the second SGz apparatuses. A little more interesting is the arrangement shown in Figure 1.3b. Here the first SG apparatus is the same as before but the second one (SGx) has an
inhomogeneous magnetic field in the x-direction. The Sz + beam that enters the second apparatus (SGx) is now split into two components, an Sx + component and an Sx - component, with equal
intensities. How can we explain this? Does it mean that 50% of the atoms in the Sz + beam coming out of the first apparatus (SGz) are made up of atoms characterized by both Sz + and Sx +, while the
remaining 50% have both Sz + and Sx −? It turns out that such a picture runs into difficulty, as will be shown below. We now consider a third step, the arrangement shown in Figure 1.3c, which most dramatically illustrates the peculiarities of quantum-mechanical systems. This time we add to the arrangement of Figure 1.3b yet a third apparatus, of the SGz type. It is observed experimentally that
two components emerge from the third apparatus, not one; the emerging beams are seen to have both an Sz + component and an Sz - component. This is a complete surprise because after the atoms emerged
from the first
apparatus, we made sure that the Sz - component was completely blocked. How is it possible that the Sz - component which, we thought, we eliminated earlier reappears? The model in which the atoms
entering the third apparatus are visualized to have both Sz + and Sx + is clearly unsatisfactory. This example is often used to illustrate that in quantum mechanics we cannot determine both Sz and Sx
simultaneously. More precisely, we can say that the selection of the Sx + beam by the second apparatus (SGx) completely destroys any previous information about Sz. It is amusing to compare this
situation with that of a spinning top in classical mechanics, where the angular momentum L = … the beam emerges from the y-filter despite the fact that right after the beam went through the x-filter it did not have any polarization component in the y-direction. In other words, once the x′-filter intervenes and selects the x′-polarized beam, it is immaterial whether the beam was previously x-polarized. The selection of the x′-polarized beam by the second Polaroid destroys any previous information on light polarization. Notice that this situation is quite analogous to the situation that we encountered earlier with the SG arrangement of Figure 1.3b, provided that the following correspondence is made:

Sz ± atoms ↔ x-, y-polarized light
Sx ± atoms ↔ x′-, y′-polarized light,   (1.1.7)

where the x′- and the y′-axes are defined as in Figure 1.5. Let us examine how we can quantitatively describe the behavior of 45°-polarized beams (x′- and y′-polarized beams) within the framework of
FIGURE 1.5. Orientations of the x′- and y′-axes.

classical electrodynamics. Using Figure 1.5 we obtain

E₀x̂′cos(kz − ωt) = E₀[(1/√2)x̂cos(kz − ωt) + (1/√2)ŷcos(kz − ωt)],
E₀ŷ′cos(kz − ωt) = E₀[−(1/√2)x̂cos(kz − ωt) + (1/√2)ŷcos(kz − ωt)].   (1.1.8)
In the triple-filter arrangement of Figure 1.4b the beam coming out of the first Polaroid is an x-polarized beam, which can be regarded as a linear combination of an x′-polarized beam and a y′-polarized beam. The second Polaroid selects the x′-polarized beam, which can in turn be regarded as a linear combination of an x-polarized and a y-polarized beam. And finally, the third Polaroid selects the y-polarized component. Applying correspondence (1.1.7) from the sequential Stern-Gerlach experiment of Figure 1.3c to the triple-filter experiment of Figure 1.4b suggests that we might
be able to represent the spin state of a silver atom by some kind of vector in a new kind of two-dimensional vector space, an abstract vector space not to be confused with the usual two-dimensional
(xy) space. Just as x̂ and ŷ in (1.1.8) are the base vectors used to decompose the polarization vector x̂′ of the x′-polarized light, it is reasonable to represent the Sx + state by a vector, which we
call a ket in the Dirac notation to be developed fully in the next section. We denote this vector by
|Sx; +⟩ and write it as a linear combination of two base vectors, |Sz; +⟩ and |Sz; −⟩, which correspond to the Sz + and the Sz − states, respectively. So we may conjecture

|Sx; +⟩ = (1/√2)|Sz; +⟩ + (1/√2)|Sz; −⟩,   (1.1.9a)
|Sx; −⟩ = −(1/√2)|Sz; +⟩ + (1/√2)|Sz; −⟩,   (1.1.9b)
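As an illustrative numerical aside (not part of the original text), the conjecture (1.1.9) can be checked directly: representing |Sz; ±⟩ by the standard basis of a two-dimensional complex vector space (NumPy assumed), the squared overlaps of |Sx; +⟩ with the Sz base kets reproduce the observed 50-50 splitting in the third apparatus:

```python
import numpy as np

# |Sz;+> and |Sz;-> as the standard basis of a two-dimensional complex vector space.
sz_plus = np.array([1.0, 0.0])
sz_minus = np.array([0.0, 1.0])

# The conjectured kets (1.1.9a) and (1.1.9b).
sx_plus = (sz_plus + sz_minus) / np.sqrt(2)
sx_minus = (-sz_plus + sz_minus) / np.sqrt(2)

# Weights with which the Sx+ beam entering the third (SGz) apparatus splits:
p_up = abs(np.dot(sz_plus.conj(), sx_plus)) ** 2
p_down = abs(np.dot(sz_minus.conj(), sx_plus)) ** 2
print(p_up, p_down)  # 0.5 and 0.5: both Sz components emerge
```

The same arithmetic also shows |Sx; +⟩ and |Sx; −⟩ are orthogonal, as base kets must be.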
in analogy with (1.1.8). Later we will show how to obtain these expressions using the general formalism of quantum mechanics. Thus the unblocked component coming out of the second (SGx) apparatus of Figure 1.3c is to be regarded as a superposition of Sz + and Sz − in the sense of (1.1.9a). It is for this reason that two components emerge from the third (SGz) apparatus. The next question of immediate concern is, How are we going to represent the Sy ± states? Symmetry arguments suggest that if we observe an Sz ± beam going in the x-direction and subject it to an SGy apparatus, the resulting situation will be very similar to the case where an Sz ± beam going in the y-direction is subjected to an SGx apparatus. The kets for Sy ± should then be regarded as a linear combination of |Sz; ±⟩, but it appears from (1.1.9) that we have already used up the available possibilities in writing |Sx; ±⟩. How can our vector space formalism distinguish Sy ± states from Sx ± states? An
analogy with polarized light again rescues us here. This time we consider a circularly polarized beam of light, which can be obtained by letting a linearly polarized light pass through a quarter-wave
plate. When we pass such a circularly polarized light through an x-filter or a y-filter, we again obtain either an x-polarized beam or a y-polarized beam of equal intensity. Yet everybody knows that the circularly polarized light is totally different from the 45°-linearly polarized (x′-polarized or y′-polarized) light. Mathematically, how do we represent a circularly polarized light? A right circularly polarized light is nothing more than a linear combination of an x-polarized light and a y-polarized light, where the oscillation of the electric field for the y-polarized component is 90° out of phase with that of the x-polarized component:*

E = E₀[(1/√2)x̂cos(kz − ωt) + (1/√2)ŷcos(kz − ωt + π/2)].

… X·(|α⟩) = X|α⟩,
and the resulting product is another ket. Operators X and Y are said to be equal, X=Y,
*Attempts to abandon this postulate led to physical theories with "indefinite metric." We shall not be concerned with such theories in this book. * For eigenkets of observables with continuous
spectra, different normalization conventions will be used; see Section 1.6.
if

X|α⟩ = Y|α⟩

for an arbitrary ket in the ket space in question. Operator X is said to be the null operator if, for any arbitrary ket |α⟩, we have

X|α⟩ = 0.

Operators can be added; addition operations are commutative and associative:

X + Y = Y + X,
X + (Y + Z) = (X + Y) + Z.

With the single exception of the time-reversal operator to be considered in Chapter 4, the operators that appear in this book are all linear, that is,

X(c_α|α⟩ + c_β|β⟩) = c_α X|α⟩ + c_β X|β⟩.

An operator X always acts on a bra from the right side, (⟨α|)·X = ⟨α|X, and the resulting product is another bra. … X(Y|α⟩) = (XY)|α⟩ = XY|α⟩,
… can be represented using our base kets. The expansion coefficients of |γ⟩ can be obtained by multiplying ⟨a′| on the left:

⟨a′|γ⟩ = ⟨a′|X|α⟩ = Σ_{a″} ⟨a′|X|a″⟩⟨a″|α⟩.

But this can be seen as an application of the rule for multiplying a square matrix with a column matrix, once the expansion coefficients of |α⟩ and |γ⟩ arrange themselves to form column matrices as follows:

|α⟩ ≐ (⟨a⁽¹⁾|α⟩, ⟨a⁽²⁾|α⟩, ⟨a⁽³⁾|α⟩, …)ᵀ,  |γ⟩ ≐ (⟨a⁽¹⁾|γ⟩, ⟨a⁽²⁾|γ⟩, ⟨a⁽³⁾|γ⟩, …)ᵀ.

Likewise, given ⟨β|, its expansion coefficients arrange themselves into a row matrix:

⟨β| ≐ (⟨a⁽¹⁾|β⟩*, ⟨a⁽²⁾|β⟩*, ⟨a⁽³⁾|β⟩*, …).   (1.3.29)
Note the appearance of complex conjugation when the elements of the column matrix are written as in (1.3.29). The inner product ⟨β|α⟩ can be written as the product of the row matrix representing ⟨β| with the column matrix representing |α⟩:

⟨β|α⟩ = Σ_{a′} ⟨β|a′⟩⟨a′|α⟩.
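As an illustrative numerical aside (not part of the original text), this row-times-column rule is exactly a conjugated dot product; the particular three-dimensional coefficient columns below are arbitrary choices for illustration (NumPy assumed):

```python
import numpy as np

# In a 3-dimensional ket space, kets are fixed by their expansion
# coefficients <a_k|alpha> along the base kets |a_k>.
alpha = np.array([0.5 + 0.5j, 0.5, 0.5j])      # column of <a_k|alpha> (arbitrary)
beta = np.array([1.0, 1j, 0.0]) / np.sqrt(2)   # column of <a_k|beta> (arbitrary)

# <beta|alpha> = sum over a' of <beta|a'><a'|alpha>:
# the conjugated row for <beta| times the column for |alpha>.
inner = np.dot(beta.conj(), alpha)
print(inner)
```

Note that NumPy's `vdot` builds in the same conjugation of the first argument, matching the bra convention.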
When the measurement is performed, the system is "thrown into" one of the eigenstates, say |a′⟩, of observable A. In other words,

|α⟩  → (A measurement) →  |a′⟩.

For example, a silver atom with an arbitrary spin orientation will change into either |Sz; +⟩ or |Sz; −⟩ when subjected to an SG apparatus of type SGz. Thus a measurement usually changes the state. The only exception is when the state is already in one of the eigenstates of the observable being measured, in which case

|a′⟩  → (A measurement) →  |a′⟩

with certainty, as will be discussed further. When the measurement causes |α⟩ to change into |a′⟩, it is said that A is measured to be a′. It is in this sense that the result of a measurement
yields one of the eigenvalues of the observable being measured. Given (1.4.1), which is the state ket of a physical system before the measurement, we do not know in advance into which of the various |a′⟩'s the system will be thrown as the result of the measurement. We do postulate, however, that the probability for jumping into some particular |a′⟩ is given by

Probability for a′ = |⟨a′|α⟩|²,   (1.4.4)

provided that |α⟩ is normalized. Although we have been talking about a single physical system, to determine probability (1.4.4) empirically, we must consider a great number of measurements performed
on an ensemble—that is, a collection—of identically prepared physical systems, all characterized by the same ket |a). Such an ensemble is known as a pure ensemble. (We will say more about ensembles
in Chapter 3.) As an example, a beam of silver atoms which survive the first SGz apparatus of Figure 1.3 with the Sz - component blocked is an example of a pure ensemble because every member atom of
the ensemble is characterized by |Sz; +⟩. The probabilistic interpretation (1.4.4) for the squared inner product |⟨a′|α⟩|² is one of the fundamental postulates of quantum mechanics, so it cannot be
proven. Let us note, however, that it makes good sense in extreme cases. Suppose the state ket is |a′⟩ itself even before a measurement is made; then according to (1.4.4), the probability for getting a′—or, more precisely, for being thrown into |a′⟩—as the result of the measurement is predicted to be 1, which is just what we expect. By measuring A once again, we, of course, get |a′⟩ only; quite generally, repeated measurements of the same observable in succession yield the same result.* If, on the other hand, we are interested in the probability for the system initially characterized by |a′⟩ to be thrown into some other eigenket |a″⟩ with a″ ≠ a′, then (1.4.4) gives zero because of the orthogonality between |a′⟩ and |a″⟩. From the point of view of measurement theory, orthogonal kets correspond to mutually exclusive alternatives; for example, if a spin ½ system is in |Sz; +⟩, it is not in |Sz; −⟩ with certainty. Quite generally, the
probability for anything must be nonnegative. Furthermore, the probabilities for the various alternative possibilities must add up to unity. Both of these expectations are met by our probability
postulate (1.4.4). We define the expectation value of A taken with respect to state |α⟩ as

⟨A⟩ ≡ ⟨α|A|α⟩.   (1.4.5)

To make sure that we are referring to state |α⟩, the notation ⟨A⟩_α is sometimes used. Equation (1.4.5) is a definition; however, it agrees with our intuitive notion of average measured value because it can be written as

⟨A⟩ = Σ_{a′} Σ_{a″} ⟨α|a″⟩⟨a″|A|a′⟩⟨a′|α⟩ = Σ_{a′} a′ |⟨a′|α⟩|²,   (1.4.6)

where a′ is a measured value and |⟨a′|α⟩|² is the probability for obtaining a′.
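As an illustrative numerical aside (not part of the original text), the agreement between the two sides of (1.4.6) can be verified for a simple two-level observable; the state amplitudes below are arbitrary choices for illustration (NumPy assumed):

```python
import numpy as np

# A Hermitian observable with eigenvalues a' = +1, -1 (think Sz in units of hbar/2).
A = np.array([[1.0, 0.0], [0.0, -1.0]])
eigvals, eigvecs = np.linalg.eigh(A)

alpha = np.array([np.sqrt(0.2), np.sqrt(0.8)])   # a normalized state |alpha>

direct = alpha.conj() @ A @ alpha                # <alpha|A|alpha>
# Average of measured values a', weighted by probabilities |<a'|alpha>|^2:
weighted = sum(a * abs(np.dot(eigvecs[:, k].conj(), alpha)) ** 2
               for k, a in enumerate(eigvals))
print(direct, weighted)  # both approximately -0.6
```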
It is very important not to confuse eigenvalues with expectation values. For example, the expectation value of Sz for spin ½ systems can assume any real value between −ħ/2 and +ħ/2, say 0.273ħ; in contrast, the eigenvalue of Sz assumes only two values, ħ/2 and −ħ/2. To clarify further the meaning of measurements in quantum mechanics we introduce the notion of a selective
measurement, or filtration. In Section 1.1 we considered a Stern-Gerlach arrangement where we let only one of the spin components pass out of the apparatus while we completely blocked the other
component. More generally, we imagine a measurement process with a device that selects only one of the eigenkets of A, say \a'), and rejects all others; see Figure 1.6. This is what we mean by a
selective measurement; it is also called filtration because only one of the A eigenkets filters through the ordeal. Mathematically we can say that such a selective measurement amounts to applying the projection operator Λ_{a′} to |α⟩:

Λ_{a′}|α⟩ = |a′⟩⟨a′|α⟩.   (1.4.7)

FIGURE 1.6. Selective measurement.

* Here successive measurements must be carried out immediately afterward. This point will become clear when we discuss the time evolution of a state ket in Chapter 2.
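As an illustrative numerical aside (not part of the original text), the projector Λ_{a′} = |a′⟩⟨a′| can be built as an outer product; the input state below is an arbitrary choice for illustration (NumPy assumed):

```python
import numpy as np

# Projection operator Lambda_{a'} = |a'><a'| for a selective measurement.
a_prime = np.array([1.0, 0.0])             # |a'> = |Sz;+> in the Sz basis
Lam = np.outer(a_prime, a_prime.conj())    # |a'><a'|

alpha = np.array([0.6, 0.8])               # a normalized input state (arbitrary)
filtered = Lam @ alpha                     # |a'><a'|alpha>: only the a' component survives

print(filtered)                            # the Sz- component is removed
print(np.allclose(Lam @ Lam, Lam))         # True: a projector is idempotent
```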
J. Schwinger has developed a formalism of quantum mechanics based on a thorough examination of selective measurements. He introduces a measurement symbol M(a′) in the beginning, which is identical to Λ_{a′} or |a′⟩⟨a′| in our notation, and deduces a number of properties of M(a′) (and also of M(b′, a′), which amount to |b′⟩⟨a′|) …

⟨(ΔA)²⟩⟨(ΔB)²⟩ ≥ ¼|⟨[A, B]⟩|².   (1.4.53)
To prove this we first state three lemmas.

Lemma 1. The Schwarz inequality

⟨α|α⟩⟨β|β⟩ ≥ |⟨α|β⟩|²,   (1.4.54)

which is analogous to |a|²|b|² ≥ |a·b|² in real Euclidean space.

Proof. First note

(⟨α| + λ*⟨β|)·(|α⟩ + λ|β⟩) ≥ 0,   (1.4.56)

where λ can be any complex number. This inequality must hold when λ is set equal to −⟨β|α⟩/⟨β|β⟩:

⟨α|α⟩⟨β|β⟩ − |⟨α|β⟩|² ≥ 0,   (1.4.57)

which is the same as (1.4.54). ∎
Lemma 2. The expectation value of a Hermitian operator is purely real.

Proof. The proof is trivial—just use (1.3.21). •

Lemma 3. The expectation value of an anti-Hermitian operator, defined by C† = −C, is purely imaginary.

Proof. The proof is also trivial. •
Armed with these lemmas, we are in a position to prove the uncertainty relation (1.4.53). Using Lemma 1 with

|α⟩ = ΔA| ⟩,  |β⟩ = ΔB| ⟩,

where the blank ket | ⟩ emphasizes the fact that our consideration may be applied to any ket, we obtain

⟨(ΔA)²⟩⟨(ΔB)²⟩ ≥ |⟨ΔA ΔB⟩|²,  (1.4.59)

where the Hermiticity of ΔA and ΔB has been used. To evaluate the right-hand side of (1.4.59), we note

ΔA ΔB = ½[ΔA, ΔB] + ½{ΔA, ΔB},

where the commutator [ΔA, ΔB], which is equal to [A, B], is clearly anti-Hermitian:

([A, B])† = (AB − BA)† = BA − AB = −[A, B].

In contrast, the anticommutator {ΔA, ΔB} is obviously Hermitian, so

⟨ΔA ΔB⟩ = ½⟨[A, B]⟩ + ½⟨{ΔA, ΔB}⟩,

where the first term on the right-hand side is purely imaginary and the second purely real by Lemmas 3 and 2, respectively. The right-hand side of (1.4.59) therefore becomes

|⟨ΔA ΔB⟩|² = ¼|⟨[A, B]⟩|² + ¼|⟨{ΔA, ΔB}⟩|².

Omitting the second (nonnegative) term can only strengthen the inequality, so the proof of (1.4.53) is complete. In this book, however, ΔA and ΔB are to be understood as operators [see (1.4.50)], not numbers.
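The relation (1.4.53) can be checked numerically. The sketch below is an illustration, not part of the text (ℏ = 1, with A = S_x and B = S_z for random spin-½ states):

```python
import numpy as np

hbar = 1.0
Sx = (hbar / 2) * np.array([[0, 1], [1, 0]], dtype=complex)
Sz = (hbar / 2) * np.array([[1, 0], [0, -1]], dtype=complex)

def expect(op, psi):
    """<psi|op|psi> for a normalized state psi."""
    return psi.conj() @ op @ psi

rng = np.random.default_rng(0)
for _ in range(100):
    # Random normalized spin-1/2 state
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)

    # Dispersions <(dA)^2> = <A^2> - <A>^2 (real for Hermitian operators)
    varx = np.real(expect(Sx @ Sx, psi) - expect(Sx, psi) ** 2)
    varz = np.real(expect(Sz @ Sz, psi) - expect(Sz, psi) ** 2)

    # Right-hand side: (1/4)|<[Sx, Sz]>|^2
    comm = Sx @ Sz - Sz @ Sx
    rhs = 0.25 * np.abs(expect(comm, psi)) ** 2

    assert varx * varz >= rhs - 1e-12  # uncertainty relation (1.4.53)
```

Dropping the anticommutator term is exactly why the inequality is generally strict; states saturating it make ⟨{ΔA, ΔB}⟩ vanish.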
1.5. Change of Basis
set of base kets is referred to as a change of basis or a change of representation. The basis in which the base eigenkets are given by {|a'⟩} is called the A representation or, sometimes, the A diagonal representation because the square matrix corresponding to A is diagonal in this basis. Our basic task is to construct a transformation operator that connects the old orthonormal set {|a'⟩} and the new orthonormal set {|b'⟩}. To this end, we first show the following.

Theorem. Given two sets of base kets, both satisfying orthonormality and completeness, there exists a unitary operator U such that
|b^(1)⟩ = U|a^(1)⟩,  |b^(2)⟩ = U|a^(2)⟩, …,  |b^(N)⟩ = U|a^(N)⟩.

By a unitary operator we mean an operator fulfilling the conditions

U†U = 1

as well as

UU† = 1.

Proof. We prove this theorem by explicit construction. We assert that the operator

U = Σ_k |b^(k)⟩⟨a^(k)|

will do the job, and we apply this U to |a^(l)⟩. Clearly, U|a^(l)⟩ = |b^(l)⟩ is guaranteed by the orthonormality of {|a'⟩}. Furthermore, U is unitary:

U†U = Σ_k Σ_l |a^(l)⟩⟨b^(l)|b^(k)⟩⟨a^(k)| = Σ_k |a^(k)⟩⟨a^(k)| = 1,  (1.5.5)

where we have used the orthonormality of {|b'⟩} and the completeness of {|a'⟩}. •
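The explicit construction U = Σ_k |b^(k)⟩⟨a^(k)| is easy to verify numerically. In this sketch (illustrative, not from the text) the two orthonormal bases are the columns of random unitary matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4

def random_basis(n):
    """Orthonormal basis from the QR decomposition of a random complex matrix."""
    q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q  # columns are the basis kets

A = random_basis(N)  # columns |a^(k)>
B = random_basis(N)  # columns |b^(k)>

# U = sum_k |b^(k)><a^(k)|
U = sum(np.outer(B[:, k], A[:, k].conj()) for k in range(N))

# U maps each |a^(l)> onto |b^(l)> ...
for l in range(N):
    assert np.allclose(U @ A[:, l], B[:, l])

# ... and is unitary: U†U = UU† = 1
assert np.allclose(U.conj().T @ U, np.eye(N))
assert np.allclose(U @ U.conj().T, np.eye(N))
```

In matrix form the same operator is simply U = B A†, the product of the two basis matrices.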
Even though a particular set of base kets {|a'⟩} is used in the definition

tr(X) ≡ Σ_{a'} ⟨a'|X|a'⟩,

the trace turns out to be independent of representation, as shown:

Σ_{a'} ⟨a'|X|a'⟩ = Σ_{a'} Σ_{b'} Σ_{b''} ⟨a'|b'⟩⟨b'|X|b''⟩⟨b''|a'⟩
               = Σ_{b'} Σ_{b''} ⟨b''|b'⟩⟨b'|X|b''⟩
               = Σ_{b'} ⟨b'|X|b'⟩.

We can also prove

tr(XY) = tr(YX),  (1.5.16b)

tr(U†XU) = tr(X).  (1.5.16c)
This deceptively simple result is quite profound. It tells us that the |b'⟩'s are eigenkets of UAU⁻¹ with exactly the same eigenvalues as the A eigenvalues. In other words, unitary equivalent observables have identical spectra. The eigenket |b^(l)⟩, by definition, satisfies the relationship

B|b^(l)⟩ = b^(l)|b^(l)⟩.  (1.5.26)

Comparing (1.5.25) and (1.5.26), we infer that B and UAU⁻¹ are simultaneously diagonalizable. A natural question is, is UAU⁻¹ the same as B itself? The answer quite often is yes in cases of physical interest. Take, for example, S_x and S_z. They are related by a unitary operator, which, as we will discuss in Chapter 3, is actually the rotation operator around the y-axis by angle π/2. In this case S_x itself is the unitary transform of S_z. Because we know that S_x and S_z exhibit the same set of eigenvalues—namely, +ℏ/2 and −ℏ/2—we see that our theorem holds in this particular example.
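For the spin-½ example the unitary operator can be written down explicitly. The sketch below (illustrative, not from the text; ℏ = 1) builds the rotation operator about the y-axis by π/2 and checks both that S_x is the unitary transform of S_z and that the spectra coincide:

```python
import numpy as np

hbar = 1.0
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Sx, Sz = (hbar / 2) * sx, (hbar / 2) * sz

# Spin-1/2 rotation about y by phi = pi/2: exp(-i sigma_y phi/2)
phi = np.pi / 2
U = np.cos(phi / 2) * np.eye(2) - 1j * np.sin(phi / 2) * sy

# Sx is the unitary transform of Sz ...
assert np.allclose(U @ Sz @ U.conj().T, Sx)

# ... so the two observables exhibit identical spectra: -hbar/2 and +hbar/2
assert np.allclose(np.linalg.eigvalsh(Sx), np.linalg.eigvalsh(Sz))
```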
1.6. POSITION, MOMENTUM, AND TRANSLATION
Continuous Spectra

The observables considered so far have all been assumed to exhibit discrete eigenvalue spectra. In quantum mechanics, however, there are observables with continuous eigenvalues. Take, for instance, p_z, the z-component of momentum. In quantum mechanics this is again represented by a Hermitian operator. In contrast to S_z, however, the eigenvalues of p_z (in appropriate units) can assume any real value between −∞ and ∞.

The rigorous mathematics of a vector space spanned by eigenkets that exhibit a continuous spectrum is rather treacherous. The dimensionality of such a space is obviously infinite. Fortunately, many of the results we worked out for a finite-dimensional vector space with discrete eigenvalues can immediately be generalized. In places where straightforward generalizations do not hold, we indicate danger signals.

We start with the analogue of eigenvalue equation (1.2.5), which, in the continuous-spectrum case, is written as

ξ|ξ'⟩ = ξ'|ξ'⟩,

where ξ is an operator and ξ' is simply a number. The ket |ξ'⟩ is, in other words, an eigenket of operator ξ with eigenvalue ξ', just as |a'⟩ is an eigenket of operator A with eigenvalue a'. In pursuing this analogy we replace the Kronecker symbol by Dirac's δ-function—a discrete sum over the eigenvalues {a'} by an integral over the
continuous variable ξ', so that

⟨ξ'|ξ''⟩ = δ(ξ' − ξ'').  (1.6.2a)

The simultaneous eigenket of the three components of the position operator satisfies

x|x'⟩ = x'|x'⟩,  (1.6.10a)
y|x'⟩ = y'|x'⟩,
z|x'⟩ = z'|x'⟩.

To be able to consider such a simultaneous eigenket at all, we are implicitly assuming that the three components of the position vector can be measured simultaneously to arbitrary degrees of accuracy; hence, we must have

[x_i, x_j] = 0,

where x₁, x₂, and x₃ stand for x, y, and z, respectively.
Translation

We now introduce the very important concept of translation, or spatial displacement. Suppose we start with a state that is well localized around x'. Let us consider an operation that changes this state into another well-localized state, this time around x' + dx', with everything else (for example, the spin direction) unchanged. Such an operation is defined to be an infinitesimal translation by dx', and the operator that does the job is denoted by 𝒯(dx'):

𝒯(dx')|x'⟩ = |x' + dx'⟩,  (1.6.12)

where a possible arbitrary phase factor is set to unity by convention. Notice that the right-hand side of (1.6.12) is again a position eigenket, but this time with eigenvalue x' + dx'. Obviously |x'⟩ is not an eigenket of the infinitesimal translation operator.

By expanding an arbitrary state ket |α⟩ in terms of the position eigenkets we can examine the effect of infinitesimal translation on |α⟩:

𝒯(dx')|α⟩ = 𝒯(dx') ∫d³x' |x'⟩⟨x'|α⟩ = ∫d³x' |x' + dx'⟩⟨x'|α⟩.  (1.6.13)
We also write the right-hand side of (1.6.13) as

∫d³x' |x' + dx'⟩⟨x'|α⟩ = ∫d³x' |x'⟩⟨x' − dx'|α⟩,
because the integration is over all space and x' is just an integration variable. This shows that the wave function of the translated state 𝒯(dx')|α⟩ is obtained by substituting x' − dx' for x' in ⟨x'|α⟩.

There is an equivalent approach to translation that is often treated in the literature. Instead of considering an infinitesimal translation of the physical system itself, we consider a change in the coordinate system being used such that the origin is shifted in the opposite direction, −dx'. Physically, in this alternative approach we are asking how the same state ket would look to another observer whose coordinate system is shifted by −dx'. In this book we try not to use this approach. Obviously it is important that we do not mix the two approaches! We now list the properties of the infinitesimal translation operator 𝒯(dx').

The expansion |α⟩ = ∫dx' |x'⟩⟨x'|α⟩ means that the expansion coefficient ⟨x'|α⟩
is interpreted in such a way that

|⟨x'|α⟩|² dx'

is the probability for the particle to be found in a narrow interval dx' around x'. In our formalism the inner product ⟨x'|α⟩ is what is usually referred to as the wave function for state |α⟩:

⟨x'|α⟩ = ψ_α(x').
In elementary wave mechanics the probabilistic interpretations for the expansion coefficient c_{a'} (= ⟨a'|α⟩) and for the wave function ψ_α(x') (= ⟨x'|α⟩) are often presented as separate postulates. One of the major advantages of our formalism, originally due to Dirac, is that the two kinds of probabilistic interpretations are unified; ψ_α(x') is an expansion coefficient [see (1.7.3)] in much the same way as c_{a'} is. By following in the footsteps of Dirac we come to appreciate the unity of quantum mechanics.

Consider the inner product ⟨β|α⟩. Using the completeness of |x'⟩, we have

⟨β|α⟩ = ∫dx' ⟨β|x'⟩⟨x'|α⟩ = ∫dx' ψ_β*(x') ψ_α(x'),

so ⟨β|α⟩ characterizes the overlap between the two wave functions. Note that we are not defining ⟨β|α⟩ as the overlap integral; the identification of ⟨β|α⟩ with the overlap integral follows from our completeness postulate for |x'⟩. The more general interpretation of ⟨β|α⟩, independent of representations, is that it represents the probability amplitude for state |α⟩ to be found in state |β⟩.
Let us now turn to momentum space. The momentum eigenkets satisfy

p|p'⟩ = p'|p'⟩,  ⟨p'|p''⟩ = δ(p' − p'').

The momentum eigenkets {|p'⟩} span the ket space in much the same way as the position eigenkets {|x'⟩}. An arbitrary state ket |α⟩ can therefore be expanded as follows:

|α⟩ = ∫dp' |p'⟩⟨p'|α⟩.  (1.7.24)

We can give a probabilistic interpretation for the expansion coefficient ⟨p'|α⟩; the probability that a measurement of p gives eigenvalue p' within a narrow interval dp' is |⟨p'|α⟩|² dp'. It is customary to call ⟨p'|α⟩ the momentum-space wave function; the notation φ_α(p') is often used:

⟨p'|α⟩ = φ_α(p').

If |α⟩ is normalized, we obtain

∫dp' ⟨α|p'⟩⟨p'|α⟩ = ∫dp' |φ_α(p')|² = 1.
Let us now establish the connection between the x-representation and the p-representation. We recall that in the case of the discrete spectra, the change of basis from the old set {|a'⟩} to the new set {|b'⟩} is characterized by the transformation matrix (1.5.7). Likewise, we expect that the desired information is contained in ⟨x'|p'⟩, which is a function of x' and p', usually called the transformation function from the x-representation to the p-representation. To derive the explicit form of ⟨x'|p'⟩, first recall (1.7.17); letting |α⟩ be the momentum eigenket |p'⟩, we obtain

⟨x'|p|p'⟩ = −iℏ (∂/∂x') ⟨x'|p'⟩

or

p'⟨x'|p'⟩ = −iℏ (∂/∂x') ⟨x'|p'⟩.  (1.7.28)

The solution to this differential equation for ⟨x'|p'⟩ is

⟨x'|p'⟩ = N exp(ip'x'/ℏ),  (1.7.29)

where N is the normalization constant to be determined in a moment. Even though the transformation function ⟨x'|p'⟩ is a function of two variables, x' and p', we can temporarily regard it as a function of x' with p' fixed. It can then be viewed as the probability amplitude for the momentum eigenstate specified by p' to be found at position x'; in other words, it is just the wave function for the momentum eigenstate |p'⟩, often referred to as the momentum eigenfunction (still in the x-space). So (1.7.29) simply says that the wave function of a momentum eigenstate is a plane wave. It is amusing that we have obtained this plane-wave solution without solving the Schrödinger equation (which we have not yet written down).

To get the normalization constant N let us first consider

⟨x'|x''⟩ = ∫dp' ⟨x'|p'⟩⟨p'|x''⟩ = |N|² ∫dp' exp[ip'(x' − x'')/ℏ] = 2πℏ|N|² δ(x' − x''),

so that we may take N = 1/√(2πℏ). The wave functions in the two representations are then related by

ψ_α(x') = [1/√(2πℏ)] ∫dp' exp(ip'x'/ℏ) φ_α(p'),

φ_α(p') = [1/√(2πℏ)] ∫dx' exp(−ip'x'/ℏ) ψ_α(x').
1.7. Wave Functions in Position and Momentum Space
This pair of equations is just what one expects from Fourier's inversion theorem. Apparently the mathematics we have developed somehow "knows" Fourier's work on integral transforms.

Gaussian Wave Packets

It is instructive to look at a physical example to illustrate our basic formalism. We consider what is known as a Gaussian wave packet, whose x-space wave function is given by

⟨x'|α⟩ = [1/(π^(1/4) √d)] exp[ikx' − x'²/(2d²)].  (1.7.35)

This is a plane wave with wave number k modulated by a Gaussian profile centered on the origin. The probability of observing the particle vanishes very rapidly for |x'| > d; more quantitatively, the probability density |⟨x'|α⟩|² has a Gaussian shape with width d.

We now compute the expectation values of x, x², p, and p². The expectation value of x is clearly zero by symmetry:

⟨x⟩ = ∫dx' ⟨α|x'⟩ x' ⟨x'|α⟩ = ∫dx' |⟨x'|α⟩|² x' = 0.  (1.7.36)

For x² we obtain

⟨x²⟩ = ∫dx' x'² |⟨x'|α⟩|² = d²/2,

which leads to

⟨(Δx)²⟩ = ⟨x²⟩ − ⟨x⟩² = d²/2  (1.7.38)

for the dispersion of the position operator. The expectation values of p and p² can also be computed as follows:

⟨p⟩ = ℏk,  (1.7.39a)

⟨p²⟩ = ℏ²/(2d²) + ℏ²k²,  (1.7.39b)

which is left as an exercise. The momentum dispersion is therefore given by

⟨(Δp)²⟩ = ⟨p²⟩ − ⟨p⟩² = ℏ²/(2d²).  (1.7.40)
Armed with (1.7.38) and (1.7.40), we can check the Heisenberg uncertainty relation (1.6.34); in this case the uncertainty product is given by

⟨(Δx)²⟩⟨(Δp)²⟩ = (d²/2)(ℏ²/2d²) = ℏ²/4,  (1.7.41)

independent of d, so for a Gaussian wave packet we actually have an equality relation rather than the more general inequality relation (1.6.34). For this reason a Gaussian wave packet is often called a minimum uncertainty wave packet.

We now go to momentum space. By a straightforward integration—just completing the square in the exponent—we obtain

⟨p'|α⟩ = √(d/(ℏ√π)) exp[−(p' − ℏk)² d²/(2ℏ²)].  (1.7.42)

This momentum-space wave function provides an alternative method for obtaining ⟨p⟩ and ⟨p²⟩, which is also left as an exercise.

The probability of finding the particle with momentum p' is Gaussian (in momentum space) centered on ℏk, just as the probability of finding the particle at x' is Gaussian (in position space) centered on zero. Furthermore, the widths of the two Gaussians are inversely proportional to each other, which is just another way of expressing the constancy of the uncertainty product ⟨(Δx)²⟩⟨(Δp)²⟩ explicitly computed in (1.7.41). The wider the spread in the p-space, the narrower the spread in the x-space, and vice versa.

As an extreme example, suppose we let d → ∞. The position-space wave function (1.7.35) then becomes a plane wave extending over all space; the probability of finding the particle is just constant, independent of x'. In contrast, the momentum-space wave function is δ-function-like and is sharply peaked at ℏk. In the opposite extreme, by letting d → 0, we obtain a position-space wave function localized like the δ-function, but the momentum-space wave function (1.7.42) is just constant, independent of p'.

We have seen that an extremely well localized (in the x-space) state is to be regarded as a superposition of momentum eigenstates with all possible values of momenta. Even those momentum eigenstates whose momenta are comparable to or exceed mc must be included in the superposition. However, at such high values of momentum, a description based on nonrelativistic quantum mechanics is bound to break down.* Despite this limitation

*It turns out that the concept of a localized state in relativistic quantum mechanics is far more intricate because of the possibility of "negative energy states," or pair creation (Sakurai 1967, 118-19).
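The results (1.7.38), (1.7.40), and (1.7.41) can be verified by direct numerical integration. This sketch is illustrative, not from the text (ℏ = 1; d = 1.3 and k = 0.7 are arbitrary choices):

```python
import numpy as np

hbar, d, k = 1.0, 1.3, 0.7
x = np.linspace(-20, 20, 40001)
dx = x[1] - x[0]

# Gaussian wave packet (1.7.35)
psi = (1 / (np.pi ** 0.25 * np.sqrt(d))) * np.exp(1j * k * x - x**2 / (2 * d**2))
rho = np.abs(psi) ** 2
assert np.isclose(np.sum(rho) * dx, 1.0)  # normalized

# Position dispersion <(dx)^2> = d^2/2, Eq. (1.7.38)
var_x = np.sum(rho * x**2) * dx - (np.sum(rho * x) * dx) ** 2
assert np.isclose(var_x, d**2 / 2)

# Momentum moments via p -> -i hbar d/dx acting on psi
dpsi = np.gradient(psi, dx)
d2psi = np.gradient(dpsi, dx)
p_mean = np.real(np.sum(psi.conj() * (-1j * hbar) * dpsi) * dx)
p2_mean = np.real(np.sum(psi.conj() * (-(hbar**2)) * d2psi) * dx)
var_p = p2_mean - p_mean**2

assert np.isclose(p_mean, hbar * k, rtol=1e-3)             # (1.7.39a)
assert np.isclose(var_p, hbar**2 / (2 * d**2), rtol=1e-3)  # (1.7.40)
assert np.isclose(var_x * var_p, hbar**2 / 4, rtol=1e-3)   # (1.7.41)
```

Changing d rescales var_x and var_p in opposite directions while their product stays pinned at ℏ²/4, the minimum uncertainty value.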
our formalism, based on the existence of the position eigenket |x'⟩, has a wide domain of applicability.

Generalization to Three Dimensions

So far in this section we have worked exclusively in one-space for simplicity, but everything we have done can be generalized to three-space, if the necessary changes are made. The base kets to be used can be taken as either the position eigenkets satisfying

x|x'⟩ = x'|x'⟩

or the momentum eigenkets satisfying

p|p'⟩ = p'|p'⟩.

They obey the normalization conditions

⟨x'|x''⟩ = δ³(x' − x''),  ⟨p'|p''⟩ = δ³(p' − p''),

where δ³ stands for the three-dimensional δ-function,

δ³(x' − x'') = δ(x' − x'') δ(y' − y'') δ(z' − z'').

The completeness relations read

∫d³x' |x'⟩⟨x'| = 1

and

∫d³p' |p'⟩⟨p'| = 1,

which can be used to expand an arbitrary state ket:

|α⟩ = ∫d³x' |x'⟩⟨x'|α⟩,

|α⟩ = ∫d³p' |p'⟩⟨p'|α⟩.
In contrast, if we follow approach 2, the state ket does not change in time:

|α, t₀ = 0; t⟩_H = |α, t₀ = 0⟩,

independent of t. This is in dramatic contrast with the Schrödinger-picture state ket,

|α, t₀ = 0; t⟩_S = 𝒰(t)|α, t₀ = 0⟩.

The expectation value ⟨A⟩ is obviously the same in both pictures:

_S⟨α, t₀ = 0; t|A^(S)|α, t₀ = 0; t⟩_S = ⟨α, t₀ = 0|𝒰†A^(S)𝒰|α, t₀ = 0⟩
                                    = _H⟨α, t₀ = 0; t|A^(H)(t)|α, t₀ = 0; t⟩_H.
This is known as the Ehrenfest theorem after P. Ehrenfest, who derived it in 1927 using the formalism of wave mechanics. When written in this expectation form, its validity is independent of whether we are using the Heisenberg or the Schrödinger picture; after all, the expectation values are the same in the two pictures. In contrast, the operator form (2.2.35) is meaningful only if we understand x and p to be Heisenberg-picture operators. We note that in (2.2.36) the ℏ's have completely disappeared. It is therefore not surprising that the center of a wave packet moves like a classical particle subjected to F(x).

Base Kets and Transition Amplitudes

So far we have avoided asking how the base kets evolve in time. A common misconception is that as time goes on, all kets move in the Schrödinger picture and are stationary in the Heisenberg picture. This is not the case, as we will make clear shortly. The important point is to distinguish the behavior of state kets from that of base kets. We started our discussion of ket spaces in Section 1.2 by remarking that the eigenkets of observables are to be used as base kets. What happens to the defining eigenvalue equation

A|a'⟩ = a'|a'⟩  (2.2.37)

with time? In the Schrödinger picture, A does not change, so the base kets, obtained as the solutions to this eigenvalue equation at t = 0, for instance, must remain unchanged. Unlike state kets, the base kets do not change in the Schrödinger picture.

The whole situation is very different in the Heisenberg picture, where the eigenvalue equation we must study is for the time-dependent operator

A^(H)(t) = 𝒰†A(0)𝒰.

From (2.2.37) evaluated at t = 0, when the two pictures coincide, we deduce

𝒰†A(0)𝒰 𝒰†|a'⟩ = a' 𝒰†|a'⟩,

which implies an eigenvalue equation for A^(H):

A^(H)(𝒰†|a'⟩) = a'(𝒰†|a'⟩).  (2.2.40)

If we continue to maintain the view that the eigenkets of observables form the base kets, then {𝒰†|a'⟩} must be used as the base kets in the Heisen-
Quantum Dynamics
berg picture. As time goes on, the Heisenberg-picture base kets, denoted by |a', t⟩_H, move as follows:

|a', t⟩_H = 𝒰†|a'⟩.  (2.2.41)

Because of the appearance of 𝒰† rather than 𝒰 in (2.2.41), the Heisenberg-picture base kets are seen to rotate oppositely when compared with the Schrödinger-picture state kets; specifically, |a', t⟩_H satisfies the "wrong-sign Schrödinger equation"

iℏ (∂/∂t)|a', t⟩_H = −H|a', t⟩_H.

As for the eigenvalues themselves, we see from (2.2.40) that they are unchanged with time. This is consistent with the theorem on unitary equivalent observables discussed in Section 1.5. Notice also the following expansion for A^(H)(t) in terms of the base kets and bras of the Heisenberg picture:

A^(H)(t) = Σ_{a'} |a', t⟩_H a' _H⟨a', t| = Σ_{a'} 𝒰†|a'⟩ a' ⟨a'|𝒰.
We can now successively apply the creation operator a† to the ground state |0⟩. Using (2.3.17), we obtain

|1⟩ = a†|0⟩,
|2⟩ = (a†/√2)|1⟩ = [(a†)²/√2!]|0⟩,
|3⟩ = (a†/√3)|2⟩ = [(a†)³/√3!]|0⟩,
…
|n⟩ = [(a†)ⁿ/√n!]|0⟩.

In this way we have succeeded in constructing simultaneous eigenkets of N and H with energy eigenvalues

E_n = (n + ½)ℏω  (n = 0, 1, 2, 3, …).
From (2.3.16), (2.3.17), and the orthonormality requirement for {|n⟩}, we obtain the matrix elements

⟨n'|a|n⟩ = √n δ_{n',n−1},  ⟨n'|a†|n⟩ = √(n+1) δ_{n',n+1}.

Using these together with

x = √(ℏ/2mω) (a + a†),  p = i√(mℏω/2) (−a + a†),

we derive the matrix elements of the x and p operators:

⟨n'|x|n⟩ = √(ℏ/2mω) (√n δ_{n',n−1} + √(n+1) δ_{n',n+1}),

⟨n'|p|n⟩ = i√(mℏω/2) (−√n δ_{n',n−1} + √(n+1) δ_{n',n+1}).
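These matrix elements are conveniently summarized by truncated matrices in the N-representation. The sketch below (illustrative, not from the text; ℏ = m = ω = 1) builds a and a†, checks an x matrix element, and confirms that H = p²/2m + mω²x²/2 is diagonal with eigenvalues (n + ½)ℏω:

```python
import numpy as np

hbar = m = omega = 1.0
nmax = 30  # truncate the Fock space

# <n'|a|n> = sqrt(n) delta_{n',n-1}: entries sqrt(1..nmax-1) on the superdiagonal
a = np.diag(np.sqrt(np.arange(1, nmax)), k=1)
adag = a.conj().T

x = np.sqrt(hbar / (2 * m * omega)) * (a + adag)
p = 1j * np.sqrt(m * hbar * omega / 2) * (-a + adag)

# H is diagonal with eigenvalues (n + 1/2) hbar omega
H = p @ p / (2 * m) + 0.5 * m * omega**2 * x @ x
n = np.arange(nmax - 1)  # drop the last slot, a truncation artifact
assert np.allclose(np.diag(H)[:-1], (n + 0.5) * hbar * omega)

# e.g. <2|x|3> = sqrt(hbar/2 m omega) * sqrt(3)
assert np.isclose(x[2, 3], np.sqrt(hbar / (2 * m * omega)) * np.sqrt(3))

# [a, a†] = 1 holds except in the last truncated slot
comm = a @ adag - adag @ a
assert np.allclose(np.diag(comm)[:-1], 1.0)
```

The (a)² and (a†)² pieces of x² and p² cancel in H, which is why H comes out exactly diagonal in this basis.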
2.3. Simple Harmonic Oscillator
Notice that neither x nor p is diagonal in the N-representation we are using. This is not surprising because x and p, like a and a†, do not commute with N.

The operator method can also be used to obtain the energy eigenfunctions in position space. Let us start with the ground state defined by

a|0⟩ = 0,

which, in the x-representation, reads

⟨x'|a|0⟩ = √(mω/2ℏ) ⟨x'|(x + ip/mω)|0⟩ = 0,
which also holds for the excited states. We therefore have

|λ⟩ = Σ_{n=0}^∞ f(n)|n⟩,

and the distribution of |f(n)|² with respect to n is of the Poisson type about some mean value n̄:

|f(n)|² = (n̄ⁿ/n!) exp(−n̄).
2. It can be obtained by translating the oscillator ground state by some finite distance. 3. It satisfies the minimum uncertainty product relation at all times. A systematic study of coherent states,
pioneered by R. Glauber, is very rewarding; the reader is urged to work out an exercise on this subject at the end of this chapter.*
2.4. SCHRÖDINGER'S WAVE EQUATION

Time-Dependent Wave Equation

We now turn to the Schrödinger picture and examine the time evolution of |α, t₀; t⟩ in the x-representation. In other words, our task is to study the behavior of the wave function

ψ(x', t) = ⟨x'|α, t₀; t⟩

as a function of time, where |α, t₀; t⟩ is a state ket in the Schrödinger

*For applications to laser physics, see Sargent, Scully, and Lamb (1974).
picture at time t, and ⟨x'| is a time-independent position eigenbra with eigenvalue x'. The Hamiltonian operator is taken to be

H = p²/2m + V(x).  (2.4.2)

The potential V(x) is a Hermitian operator; it is also local in the sense that in the x-representation we have

⟨x''|V(x)|x'⟩ = V(x') δ³(x' − x''),  (2.4.3)

where V(x') is a real function of x'. Later in this book we will consider more-complicated Hamiltonians—a time-dependent potential V(x, t); a nonlocal but separable potential where the right-hand side of (2.4.3) is replaced by v₁(x'')v₂(x'); a momentum-dependent interaction of the form p·A + A·p, where A is the vector potential in electrodynamics; and so on.

We now derive Schrödinger's time-dependent wave equation. We first write the Schrödinger equation for a state ket (2.1.27) in the x-representation:

iℏ (∂/∂t) ⟨x'|α, t₀; t⟩ = ⟨x'|H|α, t₀; t⟩,  (2.4.4)

where we have used the fact that the position eigenbras in the Schrödinger picture do not change with time. Using (1.7.20), we can write the kinetic-energy contribution to the right-hand side of (2.4.4) as

⟨x'|(p²/2m)|α, t₀; t⟩ = −(ℏ²/2m) ∇'² ⟨x'|α, t₀; t⟩.

As for V(x), we simply use

⟨x'|V(x) = ⟨x'|V(x'),
where V(x') is no longer an operator. Combining everything, we deduce

iℏ (∂/∂t) ⟨x'|α, t₀; t⟩ = −(ℏ²/2m) ∇'² ⟨x'|α, t₀; t⟩ + V(x') ⟨x'|α, t₀; t⟩.

Actually, in wave mechanics where the Hamiltonian operator is given as a function of x and p, as in (2.4.2), it is not necessary to refer explicitly to observable A that commutes with H because we can always choose A to be that function of the observables x and p which coincides with H itself. We may therefore omit reference to a' and simply write (2.4.10) as the partial differential equation to be satisfied by the energy eigenfunction u_E(x'):

−(ℏ²/2m) ∇'² u_E(x') + V(x') u_E(x') = E u_E(x').  (2.4.11)
This is the time-independent wave equation of E. Schrodinger—announced in the first of four monumental papers, all written in the first half of 1926—that laid the foundations of wave mechanics. In
the same paper he immediately applied (2.4.11) to derive the energy spectrum of the hydrogen atom.

To solve (2.4.11) some boundary condition has to be imposed. Suppose we seek a solution to (2.4.11) with

E < lim_{|x'|→∞} V(x'),  (2.4.12)

where the limit on the right-hand side is assumed to be finite; then the appropriate boundary condition to be used is

u_E(x') → 0  as  |x'| → ∞.  (2.4.13)

Physically this means that the particle is bound or confined within a finite region of space. We know from the theory of partial differential equations
that (2.4.11) subject to boundary condition (2.4.13) allows nontrivial solutions only for a discrete set of values of E. It is in this sense that the time-independent Schrodinger equation (2.4.11)
yields the quantization of energy levels.* Once the partial differential equation (2.4.11) is written, the problem of finding the energy levels of microscopic physical systems is as straightforward
as that of finding the characteristic frequencies of vibrating strings or membranes. In both cases we solve boundary-value problems in mathematical physics. A short digression on the history of
quantum mechanics is in order here. The fact that exactly soluble eigenvalue problems in the theory of partial differential equations can also be treated using matrix methods was already known to
mathematicians in the first quarter of the twentieth century. Furthermore, theoretical physicists like M. Born frequently consulted great mathematicians of the day—D. Hilbert and H. Weyl, in
particular. Yet when matrix mechanics was born in the summer of 1925, it did not immediately occur to the theoretical physicists or to the mathematicians to reformulate it using the language of
partial differential equations. Six months after Heisenberg's pioneering paper, wave mechanics was proposed by Schrodinger. However, a close inspection of his papers shows that he was not at all
influenced by the earlier works of Heisenberg, Born, and Jordan. Instead, the train of reasoning that led Schrodinger to formulate wave mechanics has its roots in W. R. Hamilton's analogy between
optics and mechanics, on which we will comment later, and the particle-wave hypothesis of L. de Broglie. Once wave mechanics was formulated, many people, including Schrodinger himself, showed the
equivalence between wave mechanics and matrix mechanics. It is assumed that the reader of this book has some experience in solving the time-dependent and time-independent wave equations. He or she
should be familiar with the time evolution of a Gaussian wave packet in a force-free region; should be able to solve one-dimensional transmission-reflection problems involving a rectangular potential
barrier, and the like; should have seen derived some simple solutions of the time-independent wave equation—a particle in a box, a particle in a square well, the simple harmonic oscillator, the
hydrogen atom, and so on—and should also be familiar with some general properties of the energy eigenfunctions and energy eigenvalues, such as (1) the fact that the energy levels exhibit a discrete
or continuous spectrum depending on whether or not (2.4.12) is satisfied and (2) the property that the energy eigenfunction in one dimension is sinusoidal or damped depending on whether E - V(x') is
positive or negative. In this book we will not cover these topics. A brief summary of elementary solutions to Schrodinger's equations is presented in Appendix A. * Schrödinger's paper that announced
(2.4.11) is appropriately entitled Quantisierung als Eigenwertproblem (Quantization as an Eigenvalue Problem).
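The boundary-value character of (2.4.11) is easy to exhibit numerically. The sketch below (illustrative, not from the text; ℏ = m = ω = 1) discretizes −(ℏ²/2m)u'' + Vu = Eu for the harmonic-oscillator potential, enforces u → 0 at the grid edges, and recovers the discrete levels E_n = (n + ½)ℏω:

```python
import numpy as np

hbar = m = omega = 1.0
N = 1500
x = np.linspace(-10, 10, N)
h = x[1] - x[0]

# Finite-difference Hamiltonian: -(hbar^2/2m) d^2/dx^2 + V(x),
# with u_E -> 0 enforced at the ends of the grid (bound-state condition 2.4.13)
V = 0.5 * m * omega**2 * x**2
kinetic = (-(hbar**2) / (2 * m * h**2)) * (
    np.diag(np.ones(N - 1), -1) - 2 * np.diag(np.ones(N)) + np.diag(np.ones(N - 1), 1)
)
H = kinetic + np.diag(V)

E = np.linalg.eigvalsh(H)

# The lowest eigenvalues approximate (n + 1/2) hbar omega
for n in range(5):
    assert np.isclose(E[n], (n + 0.5) * hbar * omega, rtol=1e-3)
```

Only a discrete set of E values admits solutions obeying the boundary condition, exactly as stated in the text; the continuum part of the spectrum would appear only for E above the confining potential.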
Interpretations of the Wave Function

We now turn to discussions of the physical interpretations of the wave function. In Section 1.7 we commented on the probabilistic interpretation of |ψ|² that follows from the fact that ⟨x'|α, t₀; t⟩ is to be regarded as an expansion coefficient of |α, t₀; t⟩ in terms of the position eigenkets {|x'⟩}. The quantity ρ(x', t) defined by

ρ(x', t) = |ψ(x', t)|² = |⟨x'|α, t₀; t⟩|²

is therefore regarded as the probability density in wave mechanics. Specifically, when we use a detector that ascertains the presence of the particle within a small volume element d³x' around x', the probability of recording a positive result at time t is given by ρ(x', t) d³x'. In the remainder of this section we use x for x' because the position operator will not appear.

Using Schrödinger's time-dependent wave equation, it is straightforward to derive the continuity equation

∂ρ/∂t + ∇·j = 0,  (2.4.15)

where ρ(x, t) stands for |ψ|² as before, and j(x, t), known as the probability flux, is given by

j(x, t) = −(iℏ/2m)[ψ*∇ψ − (∇ψ*)ψ] = (ℏ/m) Im(ψ*∇ψ).  (2.4.16)
The reality of the potential V (or the Hermiticity of the V operator) has played a crucial role in our obtaining this result. Conversely, a complex potential can phenomenologically account for the disappearance of a particle; such a potential is often used for nuclear reactions where incident particles get absorbed by nuclei.

We may intuitively expect that the probability flux j is related to momentum. This is indeed the case for j integrated over all space. From (2.4.16) we obtain

∫d³x j(x, t) = ⟨p⟩_t / m,

where ⟨p⟩_t is the expectation value of the momentum operator at time t.

Equation (2.4.15) is reminiscent of the continuity equation in fluid dynamics that characterizes a hydrodynamic flow of a fluid
in a source-free, sink-free region. Indeed, historically Schrödinger was first led to interpret |ψ|² as the actual matter density, or e|ψ|² as the actual electric charge density. If we adopt such a view, we are led to face some bizarre consequences.
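The relation between the integrated flux and ⟨p⟩/m can be checked on the Gaussian packet of Section 1.7. The sketch below (one dimension, ℏ = m = 1, illustrative) computes j = (ℏ/m) Im(ψ*∂ψ/∂x) on a grid:

```python
import numpy as np

hbar = m = 1.0
d, k = 1.0, 2.0
x = np.linspace(-15, 15, 30001)
dx = x[1] - x[0]

# Gaussian packet psi = pi^(-1/4) d^(-1/2) exp(ikx - x^2/2d^2)
psi = (np.pi ** -0.25 / np.sqrt(d)) * np.exp(1j * k * x - x**2 / (2 * d**2))

# Probability flux j = (hbar/m) Im(psi* dpsi/dx), Eq. (2.4.16)
dpsi = np.gradient(psi, dx)
j = (hbar / m) * np.imag(psi.conj() * dpsi)

# Integrated flux equals <p>/m = hbar k / m for this packet
flux_total = np.sum(j) * dx
assert np.isclose(flux_total, hbar * k / m, rtol=1e-3)

# Here the phase is S = hbar k x, so j = rho * (hbar k / m) pointwise
rho = np.abs(psi) ** 2
assert np.allclose(j, rho * hbar * k / m, atol=1e-5)
```

The pointwise check anticipates Eq. (2.4.20) below: for a linear phase the "velocity" ∇S/m is the constant ℏk/m.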
A typical argument for a position measurement might go as follows. An atomic electron is to be regarded as a continuous distribution of matter filling up a finite region of space around the nucleus;
yet, when a measurement is made to make sure that the electron is at some particular point, this continuous distribution of matter suddenly shrinks to a pointlike particle with no spatial extension.
The more satisfactory statistical interpretation of |ψ|² as the probability density was first given by M. Born.

To understand the physical significance of the wave function, let us write it as

ψ(x, t) = √ρ(x, t) exp[iS(x, t)/ℏ],  (2.4.18)

with S real and ρ > 0, which can always be done for any complex function of x and t. The meaning of ρ has already been given. What is the physical interpretation of S? Noting

ψ*∇ψ = √ρ ∇(√ρ) + (i/ℏ) ρ ∇S,

we can write the probability flux as [see (2.4.16)]

j = ρ∇S/m.  (2.4.20)

We now see that there is more to the wave function than the fact that |ψ|² is the probability density; the gradient of the phase S contains a vital piece of information. From (2.4.20) we see that the spatial variation of the phase of the wave function characterizes the probability flux; the stronger the phase variation, the more intense the flux. The direction of j at some point x is seen to be normal to the surface of a constant phase that goes through that point. In the particularly simple example of a plane wave (a momentum eigenfunction)

ψ(x, t) ∝ exp(ip·x/ℏ − iEt/ℏ),

we have S = p·x − Et, where p stands for the eigenvalue of the momentum operator. All this is evident because ∇S = p.

More generally, it is tempting to regard ∇S/m as some kind of "velocity,"

"v" = ∇S/m,  (2.4.22)

and to write the continuity equation (2.4.15) as

∂ρ/∂t + ∇·(ρ"v") = 0,

just as in fluid dynamics. However, we would like to caution the reader
against a too literal interpretation of j as ρ times the velocity defined at every point in space, because a simultaneous precision measurement of position and velocity would necessarily violate the uncertainty principle.
The Classical Limit

We now discuss the classical limit of wave mechanics. First, we substitute ψ written in form (2.4.18) into both sides of the time-dependent wave equation. Straightforward differentiations lead to

−(ℏ²/2m)[∇²√ρ + (2i/ℏ)(∇√ρ)·(∇S) − (1/ℏ²)√ρ|∇S|² + (i/ℏ)√ρ ∇²S] + √ρ V = iℏ[∂√ρ/∂t + (i/ℏ)√ρ (∂S/∂t)].

So far everything has been exact. Let us suppose now that ℏ can, in some sense, be regarded as a small quantity. The precise physical meaning of this approximation, to which we will come back later, is not evident now, but let us assume

ℏ|∇²S| ≪ |∇S|².

The solution (2.4.35), valid for E > V, must therefore be modified in the classically forbidden region. Fortunately an analogous solution exists in the E < V region; by direct substitution we can check that

ψ(x, t) ≈ [constant/[V(x) − E]^(1/4)] exp[±(1/ℏ) ∫^x dx' √(2m[V(x') − E])] exp(−iEt/ℏ)  (2.4.38)
*A similar technique was used earlier by H. Jeffreys; this solution is referred to as the JWKB solution in some English books.
satisfies the wave equation, provided that ℏ/√(2m(V − E)) is small compared with the characteristic distance over which the potential varies. Neither (2.4.35) nor (2.4.38) makes sense near the classical turning point defined by the value of x for which

V(x) = E,  (2.4.39)

because λ (or its purely imaginary analogue) becomes infinite at that point, leading to a violent violation of (2.4.37). In fact, it is a nontrivial task to match the two solutions across the classical turning point. The standard procedure is based on the following steps:

1. Make a linear approximation to the potential V(x) near the turning point x₀, defined by the root of (2.4.39).
2. Solve the resulting differential equation exactly to obtain a third solution involving the Bessel function of order ±1/3, valid near x₀.
3. Match this solution to the other two solutions by choosing appropriately various constants of integration.

We do not discuss these steps in detail, as they are discussed in many places (Schiff 1968, 268-76, for example). Instead, we content ourselves to present the results of such an analysis for a potential well, schematically shown in Figure 2.1, with two turning points, x₁ and x₂. The wave function must behave like (2.4.35) in region II and like (2.4.38) in regions I and III. The correct matching from region I into region II can be shown to be

FIGURE 2.1. Schematic diagram for behavior of wave function u_E(x) in potential well V(x) with turning points x₁ and x₂.
accomplished by choosing the integration constants in such a way that

$$\frac{1}{\sqrt{|p(x)|}}\exp\!\left[-\frac{1}{\hbar}\int_{x}^{x_1}dx'\,\sqrt{2m[V(x')-E]}\right] \;\to\; \frac{2}{\sqrt{p(x)}}\cos\!\left[\frac{1}{\hbar}\int_{x_1}^{x}dx'\,\sqrt{2m[E-V(x')]}-\frac{\pi}{4}\right]. \qquad (2.4.41)$$

Likewise, the correct matching from region III into region II is accomplished by

$$\frac{1}{\sqrt{|p(x)|}}\exp\!\left[-\frac{1}{\hbar}\int_{x_2}^{x}dx'\,\sqrt{2m[V(x')-E]}\right] \;\to\; \frac{2}{\sqrt{p(x)}}\cos\!\left[\frac{1}{\hbar}\int_{x}^{x_2}dx'\,\sqrt{2m[E-V(x')]}-\frac{\pi}{4}\right]. \qquad (2.4.42)$$

The uniqueness of the wave function in region II implies that the arguments of the cosine in (2.4.41) and (2.4.42) must differ at most by an integer multiple of π [not of 2π, because the signs of both sides of (2.4.42) can be reversed]. In this way we obtain a very interesting consistency condition,

$$\int_{x_1}^{x_2}dx\,\sqrt{2m[E-V(x)]} = \left(n+\tfrac12\right)\pi\hbar, \qquad n = 0,1,2,\ldots$$
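This consistency condition is the WKB quantization rule for the bound-state energies. As an illustration (a numerical sketch, not part of the text), for the harmonic oscillator V(x) = mω²x²/2 the rule happens to reproduce the exact spectrum E_n = (n + 1/2)ℏω; below, in units m = ω = ℏ = 1, the phase integral is evaluated numerically and the rule is solved for E by bisection:

```python
import math

# WKB quantization: integral_{x1}^{x2} sqrt(2m[E - V(x)]) dx = (n + 1/2)*pi*hbar
# Units m = omega = hbar = 1, so V(x) = x**2/2 and the turning points are +/- sqrt(2E).

def phase_integral(E, steps=20000):
    """Evaluate the WKB phase integral between the classical turning points."""
    x1, x2 = -math.sqrt(2.0 * E), math.sqrt(2.0 * E)
    h = (x2 - x1) / steps
    total = 0.0
    for i in range(steps):
        x = x1 + (i + 0.5) * h          # midpoint rule handles the endpoint zeros
        total += math.sqrt(max(2.0 * (E - 0.5 * x * x), 0.0)) * h
    return total

def wkb_energy(n, lo=1e-6, hi=100.0):
    """Solve phase_integral(E) = (n + 1/2)*pi for E by bisection."""
    target = (n + 0.5) * math.pi
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if phase_integral(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for n in range(4):
    print(n, round(wkb_energy(n), 4))   # close to the exact value n + 1/2
```

For this potential the phase integral is πE in closed form, so WKB is exact here; for a generic well the same bisection yields the approximate spectrum.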
Multiplying both sides by ⟨x″| on the left, we have

$$\psi(x'',t) = \int d^3x'\,K(x'',t;x',t_0)\,\psi(x',t_0),$$

where the kernel, known as the propagator, is

$$K(x'',t;x',t_0) = \sum_{a'}\langle x''|a'\rangle\langle a'|x'\rangle\exp\!\left[\frac{-iE_{a'}(t-t_0)}{\hbar}\right]. \qquad (2.5.8)$$
Because of these two properties, the propagator (2.5.8), regarded as a function of x″, is simply the wave function at t of a particle that was localized precisely at x′ at some earlier time t₀.
Indeed, this interpretation follows, perhaps more elegantly, from noting that (2.5.8) can also be written as

$$K(x'',t;x',t_0) = \langle x''|\exp\!\left[\frac{-iH(t-t_0)}{\hbar}\right]|x'\rangle, \qquad (2.5.10)$$

where the time-evolution operator acting on |x′⟩ is just the state ket at t of a system that was localized precisely at x′ at time t₀ (< t). If we wish to solve a more general problem where the
initial wave function extends over a finite region of space, all we have to do is multiply ψ(x′, t₀) by the propagator K(x″, t; x′, t₀) and integrate over all space (that is, over x′). In this manner we can add the various contributions from different positions (x′). This situation is analogous to one in electrostatics; if we wish to find the electrostatic potential due to a general charge distribution ρ(x′), we first solve the point-charge problem, multiply the point-charge solution with the charge distribution, and integrate:

$$\phi(x) = \int d^3x'\,\frac{\rho(x')}{|x-x'|}. \qquad (2.5.11)$$

The reader familiar with the theory of the Green's functions must have recognized by this time that the propagator is simply the Green's function for the time-dependent wave equation satisfying

$$\left[-\frac{\hbar^2}{2m}\nabla''^2 + V(x'') - i\hbar\frac{\partial}{\partial t}\right]K(x'',t;x',t_0) = -i\hbar\,\delta^3(x''-x')\,\delta(t-t_0), \qquad (2.5.12)$$
with the boundary condition

$$K(x'',t;x',t_0) = 0 \qquad \text{for } t < t_0.$$
The delta function δ(t − t₀) is needed on the right-hand side of (2.5.12) because K varies discontinuously at t = t₀. The particular form of the propagator is, of course, dependent on the particular potential to which the particle is subjected. Consider, as an example, a free particle in one dimension. The obvious observable that commutes with H is momentum; |p′⟩ is a simultaneous eigenket of the operators p and H:

$$p|p'\rangle = p'|p'\rangle, \qquad H|p'\rangle = \frac{p'^2}{2m}|p'\rangle.$$
The momentum eigenfunction is just the transformation function of Section 1.7 [see (1.7.32)], which is of the plane-wave form. Combining everything, we have

$$K(x'',t;x',t_0) = \frac{1}{2\pi\hbar}\int_{-\infty}^{\infty}dp'\,\exp\!\left[\frac{ip'(x''-x')}{\hbar} - \frac{ip'^2(t-t_0)}{2m\hbar}\right]. \qquad (2.5.15)$$

The integral can be evaluated by completing the square in the exponent. Here we simply record the result:

$$K(x'',t;x',t_0) = \sqrt{\frac{m}{2\pi i\hbar(t-t_0)}}\,\exp\!\left[\frac{im(x''-x')^2}{2\hbar(t-t_0)}\right]. \qquad (2.5.16)$$
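As a quick sanity check on this closed form (a sketch in units m = ℏ = 1, not part of the text), one can verify by finite differences that (2.5.16) satisfies the free-particle Schrödinger equation iℏ ∂K/∂t = −(ℏ²/2m) ∂²K/∂x²:

```python
import math
import cmath

def K(x, t, x0=0.0, t0=0.0, m=1.0, hbar=1.0):
    """Free-particle propagator (2.5.16); default units m = hbar = 1."""
    dt = t - t0
    return cmath.sqrt(m / (2j * math.pi * hbar * dt)) * \
        cmath.exp(1j * m * (x - x0) ** 2 / (2.0 * hbar * dt))

x, t, h = 0.7, 1.3, 1e-4

# central finite differences for dK/dt and d^2K/dx^2
dK_dt = (K(x, t + h) - K(x, t - h)) / (2 * h)
d2K_dx2 = (K(x + h, t) - 2 * K(x, t) + K(x - h, t)) / h ** 2

lhs = 1j * dK_dt            # i * dK/dt          (hbar = 1)
rhs = -0.5 * d2K_dx2        # -(1/2m) d^2K/dx^2  (m = 1)
print(abs(lhs - rhs))       # small compared with |lhs|
```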
This expression may be used, for example, to study how a Gaussian wave packet spreads out as a function of time. For the simple harmonic oscillator, where the wave function of an energy eigenstate is given by

$$u_n(x)\exp\!\left(\frac{-iE_n t}{\hbar}\right), \qquad E_n = \left(n+\tfrac12\right)\hbar\omega,$$

the propagator can likewise be obtained in closed form; the resulting K is periodic, returning to its initial form up to an overall sign at t − t₀ = 2π/ω, and exactly at 4π/ω (and so forth). Certain space and time integrals derivable from K(x″, t; x′, t₀) are of considerable interest. Without loss of generality we set t₀ = 0 in the following. The first
integral we consider is obtained by setting x″ = x′ and integrating over all space. We have

$$G(t) = \int d^3x'\,K(x',t;x',0) = \sum_{a'}\exp\!\left(\frac{-iE_{a'}t}{\hbar}\right). \qquad (2.5.20)$$

This result is anticipated; recalling (2.5.10), we observe that setting x′ = x″ and integrating are equivalent to taking the trace of the time-evolution operator in the x-representation. But the trace is independent of representations; it can be evaluated more readily using the basis where the time-evolution operator is diagonal, which immediately leads to the last line of (2.5.20). Now we see that (2.5.20) is just the "sum over states," reminiscent of the partition function in statistical mechanics. In fact, if we analytically continue in the t variable and make t purely imaginary, with

$$\beta = \frac{it}{\hbar} \qquad (2.5.21)$$

real and positive, we can identify (2.5.20) with the partition function itself:

$$Z = \sum_{a'}\exp(-\beta E_{a'}).$$
For this reason some of the techniques encountered in studying propagators in quantum mechanics are also useful in statistical mechanics.
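For the harmonic oscillator, for instance, the trace of the time-evolution operator continued to imaginary time can be computed in closed form, 1/[2 sinh(βℏω/2)], and it must agree with the sum over states. A small check in units ℏ = ω = 1 (my illustration, not from the text):

```python
import math

beta = 0.8   # inverse temperature, units hbar = omega = 1

# Partition function as a sum over oscillator states E_n = n + 1/2
Z_sum = sum(math.exp(-beta * (n + 0.5)) for n in range(200))

# Closed form for the trace of the oscillator time-evolution operator
# continued to imaginary time t = -i*hbar*beta
Z_closed = 1.0 / (2.0 * math.sinh(beta / 2.0))

print(Z_sum, Z_closed)   # the two agree
```

The closed form follows from summing the geometric series e^(-β/2)/(1 − e^(-β)).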
Next, let us consider the Laplace-Fourier transform of G(t):

$$G(E) = -i\int_0^{\infty}dt\,G(t)\exp(iEt/\hbar)/\hbar = -i\int_0^{\infty}dt\,\sum_{a'}\exp(-iE_{a'}t/\hbar)\exp(iEt/\hbar)/\hbar.$$

The integrand here oscillates indefinitely. But we can make the integral meaningful by letting E acquire a small positive imaginary part:

$$E \to E + i\varepsilon.$$

We then obtain, in the limit ε → 0,

$$G(E) = \sum_{a'}\frac{1}{E - E_{a'}}.$$

Observe now that the complete energy spectrum is exhibited as simple poles of G(E) in the complex E-plane. If we wish to know the energy spectrum of a physical system, it is sufficient to study the analytic properties of G(E).

Propagator as a Transition Amplitude

To gain further insight into the physical meaning of the propagator, we wish to relate it to the concept of transition amplitudes introduced in Section 2.2. But first, recall that the wave function, which is the inner product of the fixed position bra ⟨x′| with the moving state ket |α, t₀; t⟩, can also be regarded as the inner
product of the Heisenberg-picture position bra ⟨x′, t|, which moves "oppositely" with time, with the Heisenberg-picture state ket |α, t₀⟩, which is fixed in time. Likewise, the propagator can also be written as

$$K(x'',t;x',t_0) = \sum_{a'}\langle x''|a'\rangle\langle a'|x'\rangle\exp\!\left[\frac{-iE_{a'}(t-t_0)}{\hbar}\right] = \langle x'',t|x',t_0\rangle,$$
where |x′, t₀⟩ and ⟨x″, t| are to be understood as an eigenket and an eigenbra of the position operator in the Heisenberg picture. In Section 2.1 we showed that ⟨b′, t|a′⟩, in the Heisenberg-picture notation, is the probability amplitude for a system originally prepared to be an eigenstate of A with eigenvalue a′ at some initial time t₀ = 0 to be found at a later time t in an eigenstate of B with eigenvalue b′, and we called it the transition amplitude for going from state |a′⟩ to state |b′⟩. Because there is nothing special about the choice of t₀, and only the time difference t − t₀ is
2.5. Propagators and Feynman Path Integrals
relevant, we can identify ⟨x″, t|x′, t₀⟩ as the probability amplitude for the particle prepared at t₀ with position eigenvalue x′ to be found at a later time t at x″. Roughly speaking, ⟨x″, t|x′, t₀⟩ is the amplitude for the particle to go from a space-time point (x′, t₀) to another space-time point (x″, t), so the term transition amplitude for this expression is quite appropriate. This interpretation is, of course, in complete accord with the interpretation we gave earlier for K(x″, t; x′, t₀). Yet another way to interpret ⟨x″, t|x′, t₀⟩ is as follows. As we emphasized earlier, |x′,
t₀⟩ is the position eigenket at t₀ with the eigenvalue x′ in the Heisenberg picture. Because at any given time the Heisenberg-picture eigenkets of an observable can be chosen as base kets, we can regard ⟨x″, t|x′, t₀⟩ as the transformation function that connects the two sets of base kets at different times. So in the Heisenberg picture, time evolution can be viewed as a unitary transformation, in the sense of changing bases, that connects one set of base kets formed by {|x′, t₀⟩} to another formed by {|x″, t⟩}. This is reminiscent of classical physics, in which the time development of a
classical dynamic variable such as x(t) is viewed as a canonical (or contact) transformation generated by the classical Hamiltonian (Goldstein 1980, 407-8). It turns out to be convenient to use a notation that treats the space and time coordinates more symmetrically. To this end we write ⟨x″, t″|x′, t′⟩ in place of ⟨x″, t|x′, t₀⟩. Because at any given time the position kets in the Heisenberg
picture form a complete set, it is legitimate to insert the identity operator written as

$$\int d^3x''\,|x'',t''\rangle\langle x'',t''| = 1$$

at any place we desire. For example, consider the time evolution from t′ to t‴; by dividing the time interval (t′, t‴) into two parts, (t′, t″) and (t″, t‴), we have

$$\langle x''',t'''|x',t'\rangle = \int d^3x''\,\langle x''',t'''|x'',t''\rangle\langle x'',t''|x',t'\rangle, \qquad (t''' > t'' > t').$$
We call this the composition property of the transition amplitude. Clearly, we can divide the time interval into as many smaller subintervals as we wish. We have

$$\langle x_N,t_N|x_1,t_1\rangle = \int d^3x_{N-1}\int d^3x_{N-2}\cdots\int d^3x_2\,\langle x_N,t_N|x_{N-1},t_{N-1}\rangle\langle x_{N-1},t_{N-1}|x_{N-2},t_{N-2}\rangle\cdots\langle x_2,t_2|x_1,t_1\rangle. \qquad (2.5.31)$$
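The composition property can be checked explicitly for the free particle. To avoid the oscillatory integrals of real time, one may continue to imaginary time t → −iτ, where the free propagator becomes a real Gaussian heat kernel; convolving the kernels for two subintervals must then reproduce the kernel for the full interval. A sketch in units m = ℏ = 1 (the imaginary-time continuation is my device, not the text's):

```python
import math

def heat_kernel(x, tau):
    """Imaginary-time (t -> -i*tau) free propagator: a normalized Gaussian."""
    return math.sqrt(1.0 / (2.0 * math.pi * tau)) * math.exp(-x * x / (2.0 * tau))

tau1, tau2 = 0.3, 0.5
x_final = 0.4

# integrate over the intermediate position x'' (the composition property)
L, n = 12.0, 4000
h = 2.0 * L / n
composed = sum(
    heat_kernel(x_final - (-L + (i + 0.5) * h), tau2) *
    heat_kernel(-L + (i + 0.5) * h, tau1) * h
    for i in range(n)
)

direct = heat_kernel(x_final, tau1 + tau2)
print(composed, direct)   # the two agree
```

The agreement is just the statement that a Gaussian of variance τ₁ convolved with one of variance τ₂ is a Gaussian of variance τ₁ + τ₂.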
To visualize this pictorially, we consider a space-time plane, as shown in Figure 2.2. The initial and final space-time points are fixed to be (x₁, t₁) and (x_N, t_N), respectively. For each time segment, say between t_{n−1} and t_n, we are instructed to consider the transition amplitude to go from (x_{n−1}, t_{n−1}) to (x_n, t_n); we then integrate over x₂, x₃, …, x_{N−1}. This means that we must sum over all
possible paths in the space-time plane with the end points fixed. Before proceeding further, it is profitable to review here how paths appear in classical mechanics. Suppose we have a particle
subjected to a force field derivable from a potential V(x). The classical Lagrangian is written as

$$L_{\rm classical}(x,\dot x) = \frac{m\dot x^2}{2} - V(x).$$
Given this Lagrangian with the end points (x₁, t₁) and (x_N, t_N) specified, we do not consider just any path joining (x₁, t₁) and (x_N, t_N) in classical mechanics. On the contrary, there exists a unique path that corresponds to
FIGURE 2.2.
Paths in the xt-plane.
the actual motion of the classical particle. For example, given

$$V(x) = mgx, \qquad (x_1,t_1) = (h,0), \qquad (x_N,t_N) = \left(0,\sqrt{2h/g}\right), \qquad (2.5.33)$$

where h may stand for the height of the Leaning Tower of Pisa, the classical path in the xt-plane can only be

$$x = h - \frac{gt^2}{2}.$$

More generally, according to Hamilton's principle, the unique path is that which minimizes the action, defined as the time integral of the classical Lagrangian:

$$\delta\int_{t_1}^{t_2}dt\,L_{\rm classical}(x,\dot x) = 0,$$
from which Lagrange's equation of motion can be obtained.

Feynman's Formulation

The basic difference between classical mechanics and quantum mechanics should now be apparent. In classical mechanics a definite path in the xt-plane is associated with the particle's motion; in contrast, in quantum mechanics all possible paths must play roles, including those which do not bear any resemblance to the classical path. Yet we must somehow be able to reproduce classical mechanics in a smooth manner in the limit ℏ → 0. How are we to accomplish this?
As a young graduate student at Princeton University, R. P. Feynman tried to attack this problem. In looking for a possible clue, he was said to be intrigued by a mysterious remark in Dirac's book
which, in our notation, amounts to the following statement:

$$\exp\!\left[\frac{i}{\hbar}\int_{t_1}^{t_2}dt\,L_{\rm classical}(x,\dot x)\right] \quad \text{corresponds to} \quad \langle x_2,t_2|x_1,t_1\rangle.$$
Feynman attempted to make sense out of this remark. Is "corresponds to" the same thing as "is equal to" or "is proportional to"? In so doing he was led to formulate a space-time approach to quantum
mechanics based on path integrals. In Feynman's formulation the classical action plays a very important role. For compactness, we introduce a new notation:

$$S(n,n-1) \equiv \int_{t_{n-1}}^{t_n}dt\,L_{\rm classical}(x,\dot x).$$
Because L_classical is a function of x and ẋ, S(n, n−1) is defined only after a definite path is specified along which the integration is to be carried out. So even though the path dependence is not explicit in this notation, it is understood that we are considering a particular path in evaluating the integral. Imagine now that we are following some prescribed path. We concentrate our attention on a small segment along that path, say between (x_{n−1}, t_{n−1}) and (x_n, t_n). According to Dirac, we are instructed to associate exp[iS(n, n−1)/ℏ] with that segment. Going along the definite path we are set to follow, we successively multiply expressions of this type to obtain

$$\prod_{n=2}^{N}\exp\!\left[\frac{iS(n,n-1)}{\hbar}\right] = \exp\!\left[\frac{i}{\hbar}\sum_{n=2}^{N}S(n,n-1)\right] = \exp\!\left[\frac{iS(N,1)}{\hbar}\right]. \qquad (2.5.37)$$
This does not yet give ⟨x_N, t_N|x₁, t₁⟩; rather, this equation is the contribution to ⟨x_N, t_N|x₁, t₁⟩ arising from the particular path we have considered. We must still integrate over x₂, x₃, …, x_{N−1}. At the same time, exploiting the composition property, we let the time interval between t_{n−1} and t_n be infinitesimally small. Thus our candidate expression for ⟨x_N, t_N|x₁, t₁⟩ may be written, in some loose sense, as

$$\langle x_N,t_N|x_1,t_1\rangle \sim \sum_{\text{all paths}}\exp\!\left[\frac{iS(N,1)}{\hbar}\right], \qquad (2.5.38)$$
where the sum is to be taken over an innumerably infinite set of paths! Before presenting a more precise formulation, let us see whether considerations along this line make sense in the classical limit. As ℏ → 0, the exponential in (2.5.38) oscillates very violently, so there is a tendency for cancellation among various contributions from neighboring paths. This is
because exp[iS/ℏ] for some definite path and exp[iS/ℏ] for a slightly different path have very different phases because of the smallness of ℏ. So most paths do not contribute when ℏ is regarded as a small quantity. However, there is an important exception. Suppose that we consider a path that satisfies

$$\delta S(N,1) = 0, \qquad (2.5.39)$$

where the change in S is due to a slight deformation of the path with the end points fixed. This is precisely the classical path by virtue of Hamilton's principle. We denote the S that satisfies (2.5.39) by S_min. We now attempt to deform the path a little bit from the classical path. The resulting S is still equal to S_min to first order in deformation. This means that the phase of exp[iS/ℏ] does not vary very much as we deviate slightly from the classical path even if ℏ is small. As a result, as long as we stay near the classical path, constructive interference between neighboring paths is possible. In the ℏ → 0 limit, the major contributions must then arise from a very narrow strip (or a tube in higher dimensions) containing the classical path, as shown in Figure 2.3. Our (or Feynman's) guess based on Dirac's mysterious remark makes good sense because the classical path gets singled out in the ℏ → 0 limit. To formulate Feynman's conjecture more precisely, let us go back to ⟨x_n, t_n|x_{n−1}, t_{n−1}⟩, where the time difference t_n − t_{n−1} ≡ Δt is assumed to be infinitesimally small. We write

$$\langle x_n,t_n|x_{n-1},t_{n-1}\rangle = \frac{1}{w(\Delta t)}\exp\!\left[\frac{iS(n,n-1)}{\hbar}\right], \qquad (2.5.40)$$

where we evaluate S(n, n−1) in a moment in the Δt → 0 limit. Notice that we have inserted a weight factor, 1/w(Δt), which is assumed to depend only on the time interval t_n − t_{n−1} and not on
V(x). That such a factor is needed is clear from dimensional considerations; according to the way we
FIGURE 2.3. Paths important in the ℏ → 0 limit.
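The dominance of paths near the classical one rests on the action being stationary there. This can be made concrete with the falling-body example V(x) = mgx used earlier: discretize the action for the classical parabola plus a deformation vanishing at the end points, and observe that S grows quadratically with the deformation amplitude a, so the phase of exp(iS/ℏ) is flat near a = 0. A sketch in units m = g = 1 (the sine-shaped deformation is my arbitrary choice):

```python
import math

m, g = 1.0, 1.0
h0 = 1.0                      # drop height (the "Leaning Tower" of the text)
T = math.sqrt(2.0 * h0 / g)   # time to fall from h0 to 0
N = 2000                      # time slices

def action(a):
    """Discretized action for x(t) = classical parabola + a*sin(pi*t/T)."""
    dt = T / N
    S = 0.0
    for n in range(N):
        t0, t1 = n * dt, (n + 1) * dt
        x0 = h0 - 0.5 * g * t0 * t0 + a * math.sin(math.pi * t0 / T)
        x1 = h0 - 0.5 * g * t1 * t1 + a * math.sin(math.pi * t1 / T)
        v = (x1 - x0) / dt
        S += (0.5 * m * v * v - m * g * 0.5 * (x0 + x1)) * dt
    return S

S0 = action(0.0)
for a in (0.1, 0.2, 0.4):
    print(a, action(a) - S0)   # positive, and roughly quadratic in a
```

The first-order variation vanishes because the parabola obeys the equation of motion; what remains is the positive kinetic term of the deformation, quadratic in a.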
normalized our position eigenkets, ⟨x_n, t_n|x_{n−1}, t_{n−1}⟩ must have the dimension of 1/length. We now look at the exponential in (2.5.40). Our task is to evaluate the Δt → 0 limit of S(n, n−1). Because the time interval is so small, it is legitimate to make a straight-line approximation to the path joining (x_{n−1}, t_{n−1}) and (x_n, t_n) as follows:

$$S(n,n-1) = \int_{t_{n-1}}^{t_n}dt\left[\frac{m\dot x^2}{2}-V(x)\right] \simeq \Delta t\left\{\frac{m}{2}\left(\frac{x_n-x_{n-1}}{\Delta t}\right)^2 - V\!\left(\frac{x_n+x_{n-1}}{2}\right)\right\}. \qquad (2.5.41)$$

As an example, we consider specifically the free-particle case, V = 0. Equation (2.5.40) now becomes

$$\langle x_n,t_n|x_{n-1},t_{n-1}\rangle = \frac{1}{w(\Delta t)}\exp\!\left[\frac{im(x_n-x_{n-1})^2}{2\hbar\,\Delta t}\right]. \qquad (2.5.42)$$
We see that the exponent appearing here is completely identical to the one in the expression for the free-particle propagator (2.5.16). The reader may work out a similar comparison for the simple harmonic oscillator. We remarked earlier that the weight factor 1/w(Δt) appearing in (2.5.40) is assumed to be independent of V(x), so we may as well evaluate it for the free particle. Noting the orthonormality, in the sense of δ-function, of Heisenberg-picture position eigenkets at equal times,

$$\lim_{\Delta t\to 0}\langle x_n,t_n|x_{n-1},t_{n-1}\rangle = \delta(x_n-x_{n-1}), \qquad (2.5.43)$$

we obtain

$$\frac{1}{w(\Delta t)} = \sqrt{\frac{m}{2\pi i\hbar\,\Delta t}},$$

where we have used

$$\int_{-\infty}^{\infty}d\xi\,\exp\!\left(\frac{im\xi^2}{2\hbar\,\Delta t}\right) = \sqrt{\frac{2\pi i\hbar\,\Delta t}{m}}$$

and

$$\lim_{\Delta t\to 0}\sqrt{\frac{m}{2\pi i\hbar\,\Delta t}}\,\exp\!\left[\frac{im(x_n-x_{n-1})^2}{2\hbar\,\Delta t}\right] = \delta(x_n-x_{n-1}).$$

This weight factor is, of course, anticipated from the expression for the free-particle propagator (2.5.16). To summarize, as Δt → 0, we are led to

$$\langle x_n,t_n|x_{n-1},t_{n-1}\rangle = \sqrt{\frac{m}{2\pi i\hbar\,\Delta t}}\,\exp\!\left[\frac{iS(n,n-1)}{\hbar}\right].$$
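The complex Gaussian integral used in fixing w(Δt) is a Fresnel integral; it can be checked numerically by giving the exponent a small negative real part ε to tame the oscillations and comparing with √(π/(ε − ia)), where a = m/2ℏΔt. A sketch (the damping device is my illustration):

```python
import cmath

a = 1.0        # a = m / (2*hbar*dt) in the text's notation
eps = 0.05     # small damping that makes the oscillatory integral converge

# integral of exp(-(eps - i*a) * xi^2) over the real line, midpoint rule
L, n = 60.0, 400000
h = 2.0 * L / n
num = sum(cmath.exp(-(eps - 1j * a) * (-L + (k + 0.5) * h) ** 2) * h
          for k in range(n))

exact = cmath.sqrt(cmath.pi / (eps - 1j * a))
print(abs(num - exact))   # small; as eps -> 0 the exact value tends to sqrt(pi*i/a)
```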
The final expression for the transition amplitude with t_N − t₁ finite is

$$\langle x_N,t_N|x_1,t_1\rangle = \lim_{N\to\infty}\left(\frac{m}{2\pi i\hbar\,\Delta t}\right)^{(N-1)/2}\int dx_{N-1}\int dx_{N-2}\cdots\int dx_2\,\prod_{n=2}^{N}\exp\!\left[\frac{iS(n,n-1)}{\hbar}\right], \qquad (2.5.47)$$

where the N → ∞ limit is taken with x_N and t_N fixed. It is customary here to define a new kind of multidimensional (in fact, infinite-dimensional) integral operator

$$\int_{x_1}^{x_N}\mathcal{D}[x(t)] \equiv \lim_{N\to\infty}\left(\frac{m}{2\pi i\hbar\,\Delta t}\right)^{(N-1)/2}\int dx_{N-1}\int dx_{N-2}\cdots\int dx_2 \qquad (2.5.48)$$
and write (2.5.47) as

$$\langle x_N,t_N|x_1,t_1\rangle = \int_{x_1}^{x_N}\mathcal{D}[x(t)]\exp\!\left[\frac{i}{\hbar}\int_{t_1}^{t_N}dt\,L_{\rm classical}(x,\dot x)\right]. \qquad (2.5.49)$$
This expression is known as Feynman's path integral. Its meaning as the sum over all possible paths should be apparent from (2.5.47). Our steps leading to (2.5.49) are not meant to be a derivation. Rather, we (or Feynman) have attempted a new formulation of quantum mechanics based on the concept of paths, motivated by Dirac's mysterious remark. The only ideas we borrowed from the conventional form of quantum mechanics are (1) the superposition principle (used in summing the contributions from various alternate paths), (2) the composition property of the transition amplitude, and (3) classical correspondence in the ℏ → 0 limit. Even though we obtained the same result as the conventional theory for the free-particle case, it is not obvious, from what we have done so far, that Feynman's formulation is completely equivalent to Schrödinger's wave mechanics. We conclude this section by proving that Feynman's expression for ⟨x_N, t_N|x₁, t₁⟩ indeed satisfies Schrödinger's
time-dependent wave equation in the variables x_N, t_N, just as the propagator defined by (2.5.8). We start with

$$\langle x_N,t_N|x_1,t_1\rangle = \int_{-\infty}^{\infty}dx_{N-1}\,\langle x_N,t_N|x_{N-1},t_{N-1}\rangle\langle x_{N-1},t_{N-1}|x_1,t_1\rangle = \int_{-\infty}^{\infty}dx_{N-1}\sqrt{\frac{m}{2\pi i\hbar\,\Delta t}}\,\exp\!\left[\frac{im(x_N-x_{N-1})^2}{2\hbar\,\Delta t}\right]\langle x_{N-1},t_N-\Delta t|x_1,t_1\rangle,$$

where for simplicity only the kinetic part is displayed; expanding in powers of Δt and of (x_N − x_{N−1}) then yields Schrödinger's equation.
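The discretized construction (2.5.47) can also be exercised directly. In imaginary time each slice is a Gaussian kernel, and integrating out the intermediate positions x₂, …, x_{N−1} on a grid must reproduce the single-interval kernel. A sketch in units m = ℏ = 1 (imaginary time again keeps the integrals convergent; the grid and cutoff values are my choices):

```python
import math

def slice_kernel(x, dtau):
    """One imaginary-time slice: sqrt(m/(2*pi*hbar*dtau)) * Gaussian, m = hbar = 1."""
    return math.sqrt(1.0 / (2.0 * math.pi * dtau)) * math.exp(-x * x / (2.0 * dtau))

def compose(n_slices, tau_total, x1, xN, L=8.0, grid=400):
    """Discretized (2.5.47) in imaginary time: integrate out x_2 ... x_{N-1}."""
    dtau = tau_total / n_slices
    h = 2.0 * L / grid
    xs = [-L + (i + 0.5) * h for i in range(grid)]
    amp = [slice_kernel(x - x1, dtau) for x in xs]          # first slice
    for _ in range(n_slices - 2):                           # middle slices
        amp = [sum(slice_kernel(x - y, dtau) * a * h for y, a in zip(xs, amp))
               for x in xs]
    return sum(slice_kernel(xN - y, dtau) * a * h
               for y, a in zip(xs, amp))                    # last slice

composed4 = compose(4, 1.0, 0.0, 0.7)
exact = slice_kernel(0.7, 1.0)     # single-interval kernel
print(composed4, exact)            # the two agree
```

For a free particle the composition is exact at any N; with a potential one would insert a factor exp(−V Δτ) per slice and take N large.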
Next we consider V₀ that is spatially uniform but dependent on time. We then easily see that the analogue of (2.6.5) is

$$|\alpha,t_0;t\rangle \to \exp\!\left[-\frac{i}{\hbar}\int_{t_0}^{t}dt'\,V_0(t')\right]|\alpha,t_0;t\rangle.$$

Physically, the use of V(x) + V₀(t) in place of V(x) simply means that we are choosing a new zero point of the energy scale at each instant of time.
2.6. Potentials and Gauge Transformations
Even though the choice of the absolute scale of the potential is arbitrary, potential differences are of nontrivial physical significance and, in fact, can be detected in a very striking way. To
illustrate this point, let us consider the arrangement shown in Figure 2.4. A beam of charged particles is split into two parts, each of which enters a metallic cage. If we so desire, we can maintain
a finite potential difference between the two cages by turning on a switch, as shown. A particle in the beam can be visualized as a wave packet whose dimension is much smaller than the dimension of
the cage. Suppose we switch on the potential difference only after the wave packets enter the cages and switch it off before the wave packets leave the cages. The particle in the cage experiences no
force because inside the cage the potential is spatially uniform; hence no electric field is present. Now let us recombine the two beam components in such a way that they meet in the interference
region of Figure 2.4. Because of the existence of the potential, each beam component suffers a phase change, as indicated by (2.6.7). As a result, there is an observable interference term in the beam
intensity in the interference region, namely,

$$\cos(\phi_1-\phi_2), \qquad \sin(\phi_1-\phi_2),$$

where

$$\phi_1-\phi_2 = -\frac{1}{\hbar}\int dt\,[V_2(t)-V_1(t)],$$

with V₁(t) and V₂(t) the spatially uniform potential energies inside the two cages.
So despite the fact that the particle experiences no force, there is an observable effect that depends on whether V₂(t) − V₁(t) has been applied. Notice that this effect is purely quantum mechanical; in the limit ℏ → 0, the interesting interference effect gets washed out because the oscillation of the cosine becomes infinitely rapid.*

*This gedanken experiment is the Minkowski-rotated form of the Aharonov-Bohm experiment to be discussed later in this section.
Gravity in Quantum Mechanics

There is an experiment that exhibits in a striking manner how a gravitational effect appears in quantum mechanics. Before describing it, we first comment on the role of gravity in both classical and quantum mechanics. Consider the classical equation of motion for a freely falling body:

$$m\ddot{\mathbf{x}} = -m\nabla\Phi_{\rm grav} = -mg\hat{\mathbf{z}}. \qquad (2.6.10)$$

The mass term drops out; so in the absence of air resistance, a feather and a stone would behave in the same way, à la Galileo, under the influence of gravity. This is, of course, a direct consequence of the equality of the gravitational and the inertial masses. Because the mass does not appear in the equation of a particle trajectory, gravity in classical mechanics is often said to be a purely geometric theory. The situation is rather different in quantum mechanics. In the wave-mechanical formulation, the analogue of (2.6.10) is

$$\left[-\frac{\hbar^2}{2m}\nabla^2 + m\Phi_{\rm grav}\right]\psi = i\hbar\frac{\partial\psi}{\partial t}. \qquad (2.6.11)$$
The mass no longer cancels; instead it appears in the combination ℏ/m, so in a problem where ℏ appears, m is also expected to appear. We can see this point also using the Feynman path-integral formulation of a falling body based on

$$\langle x_n,t_n|x_{n-1},t_{n-1}\rangle = \sqrt{\frac{m}{2\pi i\hbar\,\Delta t}}\,\exp\!\left[\frac{i}{\hbar}\int_{t_{n-1}}^{t_n}dt\left(\frac{m\dot x^2}{2}-mgz\right)\right], \qquad (t_n - t_{n-1} = \Delta t).$$

Here again we see that m appears in the combination m/ℏ. This is in sharp contrast with Hamilton's classical approach, based on

$$\delta\int dt\left(\frac{m\dot x^2}{2}-mgz\right) = 0,$$

where m can be eliminated in the very beginning. Starting with the Schrödinger equation (2.6.11), we may derive the Ehrenfest theorem

$$\frac{d^2\langle\mathbf{x}\rangle}{dt^2} = -g\hat{\mathbf{z}}. \qquad (2.6.14)$$

However, ℏ does not appear here, nor does m. To see a nontrivial quantum-mechanical effect of gravity, we must study effects in which ℏ appears explicitly, and consequently where we expect the mass to appear, in contrast with purely
gravitational phenomena in classical mechanics.
Until 1975, there had been no direct experiment that established the presence of the mΦ_grav term in (2.6.11). To be sure, a free fall of an elementary particle had been observed, but the classical equation of motion, or the Ehrenfest theorem (2.6.14), where ℏ does not appear, sufficed to account for this. The famous "weight of photon" experiment of R. V. Pound and collaborators did not test gravity in the quantum domain either, because they measured a frequency shift where ℏ does not explicitly appear. On the microscopic scale, gravitational forces are too weak to be readily observable.
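The two numbers quoted in the surrounding discussion, the factor ~2×10³⁹ by which the Coulomb force exceeds the electron-neutron gravitational force, and the ~10³¹ cm gravitational Bohr radius, are easy to reproduce (SI constants; my arithmetic, not the text's):

```python
import math

G    = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.0546e-34    # J s
me   = 9.109e-31     # electron mass, kg
mn   = 1.675e-27     # neutron mass, kg
e2   = (1.602e-19) ** 2 / (4.0 * math.pi * 8.854e-12)   # e^2/(4*pi*eps0), J m

# ratio of Coulomb (electron-proton) to gravitational (electron-neutron) force
ratio = e2 / (G * me * mn)
print("force ratio ~ %.1e" % ratio)          # ~ 2e39

# gravitational Bohr radius: a0 = hbar^2 / (G * me^2 * mn)
a0 = hbar ** 2 / (G * me ** 2 * mn)
print("a0 = %.1e m = %.1e light years" % (a0, a0 / 9.461e15))
```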
To appreciate the difficulty involved in seeing gravity in bound-state problems, let us consider the ground state of an electron and a neutron bound by gravitational forces. This is the gravitational
analogue of the hydrogen atom, where an electron and a proton are bound by Coulomb forces. At the same distance, the gravitational force between the electron and the neutron is weaker than the Coulomb force between the electron and the proton by a factor of ~2×10³⁹. The Bohr radius involved here can be obtained simply by the substitution e² → G_N m_e m_n:

$$a_0 = \frac{\hbar^2}{e^2 m_e} \;\to\; \frac{\hbar^2}{G_N m_e^2 m_n},$$

where G_N is Newton's gravitational constant. If we substitute numbers in the equation, the Bohr radius of this gravitationally bound system turns out to be ~10³¹ cm, or ~10¹³ light years, which
is larger than the estimated radius of the universe by a few orders of magnitude! We now discuss a remarkable phenomenon known as gravity-induced quantum interference. A nearly monoenergetic beam of
particles—in practice, thermal neutrons—is split into two parts and then brought together as shown in Figure 2.5. In actual experiments the neutron beam is split and bent by silicon crystals, but the
details of this beautiful art of neutron interferometry do not concern us here. Because the size of the wave packet can be assumed to be much smaller than the macroscopic dimension of the loop formed
by the two alternate paths, we can apply the concept of a classical trajectory. Let us first suppose that path A → B → D and path A → C → D lie in a horizontal plane. Because the absolute zero of the potential due to gravity is of no significance, we can set V = 0 for any phenomenon that takes place in this plane; in other words, it is legitimate to ignore gravity altogether. The situation is very different if the plane formed by the two alternate paths is rotated around segment AC by angle δ. This time the potential at level BD is higher than that at level AC by m_n g l₂ sin δ, which means that the state ket associated with path BD "rotates faster." This leads to a gravity-induced phase difference between the amplitudes for the two wave packets arriving at D. Actually there is also a gravity-induced phase change associated with AB and also with CD, but the effects cancel as we compare the two alternate paths. The net result is that the wave packet
FIGURE 2.5.
Experiment to detect gravity-induced quantum interference.
arriving at D via path ABD suffers a phase change

$$\exp\!\left(\frac{-im_n g l_2 (\sin\delta)\,T}{\hbar}\right)$$

relative to that of the wave packet arriving at D via path ACD, where T is the time spent for the wave packet to go from B to D (or from A to C) and m_n is the neutron mass. We can control this phase difference by rotating the plane of Figure 2.5; δ can change from 0 to π/2, or from 0 to −π/2. Expressing the time spent T, or l₁/v_wavepacket, in terms of λ, the de Broglie wavelength of the neutron, we obtain the following expression for the phase difference:

$$\phi_{ABD}-\phi_{ACD} = -\frac{m_n^2 g\,l_1 l_2\,\lambda\sin\delta}{2\pi\hbar^2}. \qquad (2.6.17)$$
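With the phase difference written with 2πℏ² in the denominator (equivalently 2πm_n²gAλ sinδ/h² with A = l₁l₂), the numbers quoted below, roughly 55.6 rad and about 9 oscillations, follow directly; a check with standard constants (my arithmetic):

```python
import math

mn   = 1.675e-27     # neutron mass, kg
g    = 9.81          # m s^-2
hbar = 1.0546e-34    # J s
lam  = 1.42e-10      # de Broglie wavelength, m
A    = 10e-4         # l1*l2 = 10 cm^2, in m^2

# gravity-induced phase difference at sin(delta) = 1
phi = mn ** 2 * g * A * lam / (2.0 * math.pi * hbar ** 2)
print("phase = %.1f rad, oscillations = %.1f" % (phi, phi / (2 * math.pi)))
# close to the 55.6 rad quoted in the text; ~9 oscillations over 90 degrees
```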
In this manner we predict an observable interference effect that depends on angle δ, which is reminiscent of fringes in Michelson-type interferometers in optics. An alternative, more wave-mechanical way to understand (2.6.17) follows. Because we are concerned with a time-independent potential, the sum of the kinetic energy and the potential energy is constant:

$$\frac{p^2}{2m} + mgz = E.$$

The difference in height between level BD and level AC implies a slight difference in p, or λ. As a result, there is an accumulation of phase differences due to the λ difference. It is left as an exercise to show that this wave-mechanical approach also leads to result (2.6.17).
FIGURE 2.6. Dependence of gravity-induced phase on the angle of rotation δ.
What is interesting about expression (2.6.17) is that its magnitude is neither too small nor too large; it is just right for this interesting effect to be detected with thermal neutrons traveling through paths of "table-top" dimensions. For λ = 1.42 Å (comparable to interatomic spacing in silicon) and l₁l₂ = 10 cm², we obtain 55.6 for the magnitude of the phase difference at sin δ = 1. As we rotate the loop plane gradually by 90°, we predict the intensity in the interference region to exhibit a series of maxima and minima; quantitatively we should see 55.6/2π ≈ 9 oscillations. It is extraordinary that such an effect has indeed been observed experimentally; see Figure 2.6, taken from a 1975 experiment of R. Colella, A. Overhauser, and S. A. Werner. The phase shift due to gravity is seen to be verified to well within 1%. We emphasize that this effect is purely quantum mechanical, because as ℏ → 0 the interference pattern gets washed out. The gravitational potential has been shown to enter into the Schrödinger equation just as expected.
This experiment also shows that gravity is not purely geometric at the quantum level, because the effect depends on (m/ℏ)².*

*However, this does not imply that the equivalence principle is unimportant in understanding an effect of this sort. If the gravitational mass (m_grav) and inertial mass (m_inert) were unequal, (m/ℏ)² would have to be replaced by m_grav m_inert/ℏ². The fact that we could correctly predict the interference pattern without making a distinction between m_grav and m_inert shows some support for the equivalence principle at the quantum level.
Gauge Transformations in Electromagnetism

Let us now turn to potentials that appear in electromagnetism. We consider an electric and a magnetic field derivable from a time-independent scalar potential φ(x) and vector potential A(x):

$$\mathbf{E} = -\nabla\phi, \qquad \mathbf{B} = \nabla\times\mathbf{A}.$$

The Hamiltonian of a particle of charge e subjected to these fields is taken from classical physics to be

$$H = \frac{1}{2m}\left(\mathbf{p}-\frac{e\mathbf{A}}{c}\right)^2 + e\phi = \frac{1}{2m}\left[\mathbf{p}^2 - \frac{e}{c}\left(\mathbf{p}\cdot\mathbf{A}+\mathbf{A}\cdot\mathbf{p}\right) + \left(\frac{e}{c}\right)^2\mathbf{A}^2\right] + e\phi.$$
In this form the Hamiltonian is obviously Hermitian. To study the dynamics of a charged particle subjected to φ and A, let us first proceed in the Heisenberg picture. We can evaluate the time derivative of x in a straightforward manner as

$$\frac{dx_i}{dt} = \frac{[x_i,H]}{i\hbar} = \frac{p_i - eA_i/c}{m},$$
which shows that the operator p, defined in this book to be the generator of translation, is not the same as m dx/dt. Quite often p is called canonical momentum, as distinguished from kinematical (or mechanical) momentum, denoted by Π:

$$\boldsymbol{\Pi} \equiv m\frac{d\mathbf{x}}{dt} = \mathbf{p} - \frac{e\mathbf{A}}{c}.$$

Even though we have

$$[p_i,p_j] = 0$$

for canonical momentum, the analogous commutator does not vanish for mechanical momentum. Instead we have

$$[\Pi_i,\Pi_j] = \frac{i\hbar e}{c}\,\varepsilon_{ijk}B_k,$$

as the reader may easily verify. Rewriting the Hamiltonian as

$$H = \frac{\boldsymbol{\Pi}^2}{2m} + e\phi$$
and using the fundamental commutation relation, we can derive the quantum-mechanical version of the Lorentz force, namely,

$$m\frac{d^2\mathbf{x}}{dt^2} = \frac{d\boldsymbol{\Pi}}{dt} = e\left[\mathbf{E} + \frac{1}{2c}\left(\frac{d\mathbf{x}}{dt}\times\mathbf{B} - \mathbf{B}\times\frac{d\mathbf{x}}{dt}\right)\right].$$
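The commutator [Π_i, Π_j] = (iℏe/c)ε_ijk B_k invoked in this derivation can be verified numerically by letting the operators act on a trial wave function. Below, a uniform field B ẑ in the symmetric gauge A = (−By/2, Bx/2, 0) and units ℏ = e = c = 1; the gauge and the Gaussian trial function are my choices:

```python
import math

B = 1.0
h = 1e-4   # finite-difference step

def psi(x, y):
    return math.exp(-(x * x + y * y))          # smooth trial wave function

def Pi_x(f):
    """(p_x - A_x) f  with p_x = -i d/dx and A_x = -B*y/2 (symmetric gauge)."""
    return lambda x, y: -1j * (f(x + h, y) - f(x - h, y)) / (2 * h) \
        - (-B * y / 2.0) * f(x, y)

def Pi_y(f):
    """(p_y - A_y) f  with A_y = +B*x/2."""
    return lambda x, y: -1j * (f(x, y + h) - f(x, y - h)) / (2 * h) \
        - (B * x / 2.0) * f(x, y)

x0, y0 = 0.3, -0.2
comm = Pi_x(Pi_y(psi))(x0, y0) - Pi_y(Pi_x(psi))(x0, y0)
expected = 1j * B * psi(x0, y0)     # i*hbar*(e/c)*B_z with hbar = e = c = 1
print(abs(comm - expected))         # small
```

Analytically the residual is i(∂_x A_y − ∂_y A_x) = iB_z acting multiplicatively, which is what the finite-difference commutator reproduces.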
This then is Ehrenfest's theorem, written in the Heisenberg picture, for the charged particle in the presence of E and B. We now study Schrödinger's wave equation with φ and A. Our first task is to sandwich H between ⟨x′| and |α, t₀; t⟩. The only term with which we have to be careful is the cross term containing eA(x)/c.