The Stacks project
Lemma 15.74.16. Let $A$ be a ring. Let $(K_ n)_{n \in \mathbf{N}}$ be a system of perfect objects of $D(A)$. Let $K = \text{hocolim} K_ n$ be the derived colimit (Derived Categories, Definition
13.33.1). Then for any object $E$ of $D(A)$ we have
\[ R\mathop{\mathrm{Hom}}\nolimits_A(K, E) = R\mathop{\mathrm{lim}}\nolimits \left( E \otimes^{\mathbf{L}}_A K_n^\vee \right) \]
where $(K_n^\vee)$ is the inverse system of dual perfect complexes.
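As a reminder of the duality being used (a sketch of standard facts about perfect complexes, not part of the lemma's statement): the dual of a perfect complex $K_n$ is $K_n^\vee = R\mathop{\mathrm{Hom}}\nolimits_A(K_n, A)$, and evaluation gives a natural isomorphism

```latex
R\mathop{\mathrm{Hom}}\nolimits_A(K_n, E)
  \;\cong\; E \otimes^{\mathbf{L}}_A K_n^\vee .
```

Since $R\mathop{\mathrm{Hom}}\nolimits_A(-, E)$ turns the homotopy colimit $K = \text{hocolim}\, K_n$ into a homotopy limit, combining these two identifications yields the displayed formula.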
{"url":"https://stacks.math.columbia.edu/tag/0BKB","timestamp":"2024-11-11T16:14:32Z","content_type":"text/html","content_length":"15510","record_id":"<urn:uuid:b7c2a265-c2fa-4f53-aacd-b3f2860eda4e>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00140.warc.gz"}
Square root calculator with variables
Bing users came to this page yesterday by entering these keyword phrases :
│preparing for the nys 8th grade math test free ebook │video rational expressions │
│free old sats maths papers levels 5-7 │www.saxonmathprintables.com │
│Formula for Scale Factors │sample of 4th grade SAT math questions │
│iowa algebra aptitude test sample questions │solving second order differential equation │
│algebra solving program │greatest common multiple solver │
│numerical taylor theorem examples │McDougal Littell algebra 1 answers │
│ebook for cost accounting for ca course │permutation Combination applet │
│free algebra graph worksheets │simplifying complex rational numbers │
│Pre-algebra worksheet │how to get a mixed number with a square root │
│terms for adding subtracting multiplying divide │math worksheet lcd │
│free download tutorials mathcad mechanics │mean, mode, range free worksheets │
│how to solve radical equations │Ti 84 calculators software download │
│free printable pre-tests │Algebra solving problem with explanation │
│free homework cheats │QUADRATIC EQUATION GAME │
│answer key for prentice-hall solving equations by factoring algebra chapter 10│uses of hyperbolas in every day problems │
│how to change from decimal to fraction in C │beginner algebra "two equations" │
│triganomotry problems │aptitude test + edhelper │
│free online algabra workbook │foundation for algebra 1 answers │
│trigonomic properties │"third order" "differential equation" │
│algebra 1 workbook by Mcdougal Littell │completing the square activity │
│glencoe chemistry answer key │how do i find square root │
│worksheet on evaluating algebraic expression │simplify radicals solver │
│what is the greatest common factor of 36 and 99 │radical program (CAlculator) │
│solve algebra equation with decimal fractions denominator │solving 4th order quadratic equation │
│algebra and trigonometry mcdougal littell teacher's edition pdf │free ebooks on permutation and combination │
│worksheets of order of operations 3rd grade │how to factor polynomials with ti84 │
│fractions reducing to lowest terms worksheets │a level cambridge physics past years question and answers papers free │
│how to do approximate sol for percentage problems │free math printouts │
│2ND GRADE TRIVIA MATH │algebra ii tutoring in miami dade │
│boolean simplifying calculator │72889970927941 │
│orleans hanna sample questions │ti89 formulas │
│factor cubed polynomials │solving algebraic equations on ti-83 plus │
│substitution calculator for algebra │solve multi var equations+89 │
│graph differential equation on matlab │ti-89 inequalities programs │
│free 6th class history question paper │ti 83 radical equations │
│Algebra 2Worksheets │how to change a mixed number to a decimal │
│free online practice for the new york state integrated algebra state exam │NUMBER SEQUENCES FORMULA FOR CHANGING DIFFERENCE TYPE │
│first grade calculator lesson plans │taks practice worksheets to take online from math booklet for sixth │
│inequality worksheets │Graphing and Solving Quadratic Inequalities Practice Algebra 2 glencoe McGraw-Hill│
│solving systems equations in word problems caluclator │free year 8 maths tutoring │
│5th math printouts area of triangle │11+ Test papers free │
│beginner fraction worksheets │printable math crossword puzzle for 6th grade │
│aptitude questions in mathematics │Free Trigonomic Calculator Download │
│"free online math tutor" │trigonometry for idiots │
│simplifying square roots calculator │lcm of given terms problem solver │
│college algebra 1 free practice │BEGINNING COMBINING LIKE TERM WORKSHEETS │
│teaching algebra │linear algebra software trials │
│mathematics exercises │multivariable free lagrange calculator │
│fun with basic algebra solving for x quiz │advanced algebra equations │
│CPM Algebra I & IA │solve 2nd order differential equation ode23 │
│ALGEBRATOR │calculator pictures using polar equations │
│online scientific math calculator fractions │online square root simplifier │
│examples of math trivia mathematics │pdf ti-89 │
│completing the square questions answers │how do you write a decimal as a fraction or mixed number in its simplest form? │
│online algebra solver │factoring quadratic puzzle │
│convert numbers to decimals │4 symultaneous eqn solver │
│algebra 2 tutors │matlab decimal to fraction │
│basic principles used to simplify a polynomial │free ALGEBRA 2 HELP MADE EASY │
│Ebooks for Physics 10th standard for free download │complete the square calculator │
│polynomial solver │website that shows how to solve fractions │
│free exponentiation calculator │mathamatics quetions │
│algebra formula for diameter of circle │standard form calculator │
│polynomial long division solver │geometry tutor arlington heights il │
│aptitude ytpe objective question paper model │trig apps unit circle for ti-89 │
│kids math ppt │5th grade math word problems │
│gragh the equation │ti-83 manual logarithm │
│solving radical expressions and equations games │how to write a sqare root in its simplest form │
│accountancy notes+free download │solve quadratic equations, square roots rule │
│6th class maths question paper │divide polynomials by trinomial │
│solve equation with 2 parameters │binomial expansion for dummies │
│write a java program to find whether a string is palindrome or not │finite math for dummies │
│Polynomial long division solver │graphing equation worksheets for 4th grade │
│graph sequences using ti-84 │worksheets on "nuclear equations" │
│beginning fraction worksheets │trinomial calculator │
│GEOMETRY, TURNS AND SYMMETRY WORKSHEETS │aptitude skills solved test papers │
│algebra 1 for dummies │c aptitude question and answer │
│permutations, ppt │integer and equation calculator │
│algebra tiles inequalities │merrill physics answers │
│printable ged worksheets │maths worksheets for malta │
│printable homework papers │apptitude questions downloads │
│Solve Quadratic Sequences │math algebra poems │
│hard equations for calculators │math games to print 6 th grade │
│linear measures worksheets 2nd grade │nth term expression │
│geometry work sheets for 8th grade │the hardest math problem in the world │
│simultaneous linear equations with 4 unknowns │solving equation worksheets for kids │
│algerbra 1 │Algebra with pizzazz creative publications cheat sheet │
│Test of Genius Algebra with Pizzazz! Creative Publications answer key │free math tests for grade 9 │
│printable equations using graphs worksheets │algebra textbook best │
│california math workbook 6th │solving cubed equations │
│free saxon algebra half answers │subtracting fractions online calculator │
│EXCEL FORMULAES │worksheet mixed number multiply divide │
│pre algebra tutor │"literal equation worksheet" │
│McDougal Littell Algebra 2 teacher code │math for dummies │
│how to solve venn diagrams and sets │multiplying and adding variables │
│"Algebra and Trigonometry third edition answers" │8 GRADE PRE ALGEBRA │
│cube root worksheets │graphing inequality worksheets │
│fraction worksheets fourth grade │"intermediate accounting" "problem sets" │
│beginner algebra │free algebra equation calculator │
│free programs to help solve triginometric identities │free software algebra step by step │
│glencoe algebra 1 practice book answers │pre-algebra with pizzazz! book │
│equations with non whole number powers │repeated addition of fractions worksheets │
│How do I brush up on my simple math and junior high pre algebra? │printable third grade math sheets │
│convert 0.375 to a fraction │how to solve regular factoring problems │
│Solving Difference equations with MATLAB- pdf │Differential Instruction High School Mathematics - Exponents │
│multiplying decimals worksheets │rational expression answers │
│third grade fraction worksheet │ti-83 complex numbers ee │
│powerpoints on permutations │basketball expressions │
│fundamentals of physic "6th edition" free download │maths area KS2 │
│prentice hall pre-algebra workbook │worksheet math straight line equation │
Bing visitors found us yesterday by using these keywords :
│printable measurement worksheets fourth grade │MATHAMATICAL GAMES │
│calculate square root (3√5+√3)(√5+√3) │prentice hall PRE ALGEBRA BOOK │
│finding the degree of a parabola │square root polynomial │
│fraction reduction calculator │online polynomial solver │
│long division polynomials two variables calculator program │algebra workshets │
│mathematics equation rationalized │excel simultaneous equation maths │
│calculator cu radical │Walter Rudin Solution manual principle mathematical analysis │
│fraction square roots │matlab solve ode second order │
│algebra 2 inverse variation word problems, free │solving cubed trinomials │
│abstract algebra.ppt │solving Equations examples test answers │
│algebraic simplify solver+free online │calculator cu radical │
│Pre-algebra test worksheet │math book answers │
│7th grade Physics worksheets online │programming radical function │
│TI-86 HELP HOW TO PLAN QUADRATIC FORMULA │ti 89 text storing │
│solve 1-step equations worksheet │4th Grade Math Tutors in Orange County, CA │
│RADICAL EQUATIONS LESSON PLAN │factoring quadratic equation solver │
│lattice printable worksheets │Aptitude Test ShortCut for Age Problem │
│www.fractions.com │probability review help sat │
│all simplified radicals │ti 84 plus vector program pi unit circle decimal │
│online integer games │how to answer apptitude questions │
│complex quadratics factorize │square root algebra calculator │
│algabra worksheets │boolean equations font │
│ratio,proportion and variation free down loads │what advantage is there to using the percent instead of the decimal or fraction?│
│math writing equations for a parabola │basic college algebra practice questions │
│problem solvers for prealgerbra │parabola calculator │
│matlab differential equation solution division by zero │Erb Sample Tests │
│1st alegra math │"college physics ", "problems examples" │
│Ratio and Proportion worksheets │math games grad 5 area .com │
│how is FOIL math used in everyday life │algebra and trigonometry structures and methods book 2 │
│practice georgia algebra test │factoring roots │
│third order polynomial expression │positive negative numbers add subtract worksheet │
│Quadratic equations can be solved by graphing, using the quadratic formula, completing the square, and factoring.│step-by-step solutions to "geometric sequence" problems │
│word problems with scale factor │exponets algebra │
│worksheets on first grade symmetry │algebra volume of cell │
│trigonometry chart │ti 83 programs literal equations │
│lineal metre │graphing a quadratic equation on a TI-84 │
│multiply 2x2 matrices manually │Inequalities Worksheet │
│free online glencoe teacher workbook answers │how to teach slow learners 3rd grade math multiplication │
│algebra with pizzazz answers │permutation worksheets │
│factoring algebra equations │algebra games for grade 10 │
│algebra1 printouts │math games for 10th graders │
│least Common denominator calculator │School work Phone Numbers 3rd Grade math │
│8th grade worksheets for comparing and ordering numbers │root polynomial calculator │
│4 unknowns simultaneous equations │coordinate planes with negatives and positives worksheets │
│free worksheet locating point on a coordinate system graphing │adding integers online activities │
│find the range of function fx=3x2 │GCF calculator BASIC Program │
│College Algebra Third Edition Beecher Answers │chapter 9 exercise university of phoenix answers │
│math worksheet subtract fractions mixed numbers │+mathamatical definition of google │
│polynomial division equation solver │mathimatical symbols │
│"fraction roots" │math for 7th graders printouts │
│Free Grade 4 math worksheets on Coordinate Geometry │how to understand elementary algebra │
│free printable elementary lined paper │Equations with Fractional Coefficients free solving │
│solving algebraic equations with negative exponents │free algebra calculator online variable │
│matrix inverse using t1-83 │canadian grade six math worksheets │
│+mathmatical composition │math problems.com │
│rational expressions in lowest terms calculator │lcm calculator │
│aptitude question book │fourth grade algebra │
│Algebrator + download │convert second order differential equation to first order │
│sum of radicals │6th grade math, combinations │
│accelerated reader cheat sheets │free downloads of sats papers │
│determine least common denomintor java │cubed algebra equation │
│real life problems showing the meaning of slope and y-intercept │polynom division │
│algebrator manual │solve 3rd root of 81 │
│solving algebra equations square root │online Holt algebra 1 Texas edition │
│maths education problems year 9 ks3 free online │Solving Proportions using Cross Multiplication free worksheets │
│solved aptitude questions │9th grade math tutorial │
│teaching aptitude questions │measurement + conversions + VB6 │
│maths test paper 9th │Domain and Range using TI-83 │
│fration games │Free Printable 6th Grade 2 Step Equation worksheet │
│how to convert fractions and whole numbers to decimals? │the pie sign in math │
│maths-quadratic equations and graphs │aptitude question papers for software company │
│solving nonlinear matlab │quick math solutions lcm │
│polynomial function generate equation quadratic y=x │Algebra Dummies Free │
│do online sats papers │online exam papers on C language free download │
│math help 7th grade inequalities │balancing equations calculator │
│download algebra books │2cd grade free practice sheets │
│solve third order equation │subtract negative numbers printable │
│Fractions Printouts │produce an worded quadratic equation │
│STATISTICS PPT 8TH GRADE MATH │decimals to mixed number │
│IOWA ALGEBRA Test tips │solve first order difference equation │
│translating variable expression worksheets │holt mathematics sheet answers │
│Ti-89 + n(int) │ti-89 laplace transform │
│permutations real world │number grid gcse formula excel │
│examples solving second order ODE │online holt textbook algebra 1 │
│trig charts │calculation exponent fractions │
│write in radical form │algebra high school base and log │
│integrating to get root mean square │dummit foote solutions chapter 6 │
│mathematica simplify expression demo │graphing linear inequalities on a ti-89 │
│how to solve simple compound inequalities │free printable 8th grade math worksheets │
│free worksheet two step equations │calulator solve equations │
│Algebra Clep tips │free math quizzes( Quadratic Equations) │
│cubed polynomial │"factorization of quadratic equations" │
Search Engine users found us today by typing in these algebra terms:
• easy math worksheets for pre algebra
• math worksheets on integars
• online solutions to teaching algebra 2
• how to explain square roots to a child
• sofmath
• kumon download
• online + calculator + compound inequalities
• fun math worksheets slope
• fraction converted expressed in decimal
• solve step-by-step radicals problems
• two unknowns equations solver
• find greatest common factor on TI-84
• radical calculator
• 6th grade integer problems
• Online Fractions Demo (Free)
• combintions of permutations easy lesson
• how to use a ti-83 in algebra
• Algebra 2 answer
• aptitude question
• worksheets + solving quadratics by factoring
• java+program to ignore punctuation in string
• graphing algebraic functions in excel
• online maths test +10th
• fraction puzzles printouts
• square roots simplifier
• mcdougal littell geometry full online book
• online free test of seventh standard in maths
• third square root
• ALABAMA SATS SECOND GRADE
• subtracting square root fractions
• how the mathmatical equation "pie" was found and used
• "grade 2 maths"
• McDougal littell algebra two math problems and answers
• radical expressions +graphing calculator
• college algebra software
• converting number to closest fraction
• solve ti83
• solve system of equation by elimination calculator
• factor cubed binomial
• printable math exercises for second graders
• adding subtracting multiplying dividing decimal numbers
• free work books to print for seventh graders
• printable fourth grade geometry
• Introductory Algebra Tutoring
• simple solving rational expressions
• quadratic equation cubed polynomial
• equation solver ks3
• glencoe algebra 1 workbook answers
• factoring a sum or difference of two cubes, calculator
• elementary algebra practice quiz
• solving 2nd order differential equations with ti-89
• algebra programs solvers
• solve algebra equation
• how to solve generals equations in excel
• compound inequality solver
• finding the slope of a hyperbola
• how to program a radical simplification program into a TI-83
• simple steps to solve equations
• positive and negative integer multiplication table
• polynomial roots rational exponents
• how to solve ellipses alg 2
• mixture problem solvers
• How is doing operations (adding, subtracting, multiplying, and dividing) with rational expressions similar to or different from doing operations with fractions?
• "cool parametric equations"
• algebra two online tutor
• Square root property equation calculator
• slove power series
• sample orleans hanna tests
• how to operate casio calculator
• free 5th grade worksheets
• reading worksheets for third grade to print off
• free answers to rational expressions
• ti-84 plus download
• Test of Genius Algebra with Pizzazz! Creative Publications
• FACTORISING EQATIONS
• add/subtract rational expressions
• how to use a graphing calculator
• graphing calculator exercises
• mcdougal littell algebra 2
• how to cheat ti89 notes
• School Help +9th Grade +Printable Worksheets
• solve quadratic equation play
• "graphing a picture"
• solving nonlinear systems of differential equations
• online radical simplifier
• Online calculator factors polynomials
• gcse algebra minimum value
• Free Algebra Symbols
• 7thgrade reading printable worksheets
• algebra solver
• combining like terms worksheets
• solve aptitude tips
• coordinate plane grade 6 worksheet
• "discrete mathematics with applications"+"free download"
• download basic linear algebra pdf
• balancing equations online games
• ebooks pdf accounting
• logarithmic equations on ti 83 plus
• forgotten algebra
• rules for adding, multiplying, subtracting, dividing positive and negative numbers
• free algebra games for grade 10
• convert decimal point to fraction
• two step maths word solving problem yr3 free worksheets
• ti 84 "log base 2"
• answer key for north carolina test prep workbook for holt middle school math, course 3
• 9th grade algebra mid term review
• factorize quadratics
• ti 84 calculator programs - summation notation
• how to cheat aleks
• college algebra second edition bittinger beecher teachers copy
• free worksheet for graphing inequalities
• Algebra 2 Glencoe/McGraw Hill book answers
• formulas of trigonometry of class 10th
• gcse course work grids maths
• online math book- Mathematics- Structure and Method- Course 1
• determining linear equation worksheet 6th grade
• algebra + gallian
• differential equations second order solving
• logarithmic equation solver
• solve algebra equation with decimal fractions
• how to find the roots of a 3rd order
• equation isolate variable algebra calculator online
• 2 digit division problems decimal no remainder worksheets
• understand elimination in algebra
• factorizing algebra
• previous sats papers ks3
• simple fraction problems for third grade- free worksheets!
• standard form of a line calculator
• 5th grade algebra
• hard algebra problem
• adding fractions when using the gauss jordan elimination method
• free adding and subtracting integers worksheet
• Algebra LCM Chart
• factoring lesson plan completing the square
• mathematical variables worksheets
• dividing polynomials calculator
• glencoe Algebra 2 answers
• add and subtract algebra lesson plan
• Hardest math eqation
• percentage formulas
• Practice workbook algebra 2 holt,rinehart,and winston PDF
• solve algebra equations excel
• TI-84 quadratic equation
• calculator for dividing polynomials
• mcdougal littell+integrated 2 answer key
• integers rules listing adding, subtracting multi and divide
• merrill algebra 1 answer key online
• free algebra worksheets KS4
• how to solve second order differential equations using c++
• algebra and trigonometry mcdougal littell answers
• convert lineal metre
• common chemical equations
• african american math people using PI
• algebra solutions relating to depreciation
• holt mathematics worksheets
• polymath 6.0 download
• Algebra 2 An Integrated Approach
• radicals using a ti calculator
• "balancing equation" calculator
• multi step equations worksheets for 6th grade
• reducing radical fractions
• free math sheets with solving equations by adding and subtracting fractions
• free practice clep tests college mathematics
• multiple choice test questions finite differences and function equations
• radical algebra calculator
• algebra powerpoints
• common denominator calculator
• how do u simplify square root expressions?
• worksheets for finding square roots on fractions
• algebra test 1st grade
• algebra 1 concepts and skills unit 9 test
• glencoe algebra 2 skills practice
• solve multiple equation problems
• powerpoint presentation on solving quadratic equations using quadratic formula
• algebea brackets
• science past sats papers ks3
• Printable Worksheets on Coordinate Planes
• free algebra answers
• logarithms for dummies
• ti-84 plus use online
• 5th grade math t-tables
• multiplying + dividing + monomials + worksheets
• how to do simultaneous equations
• ks3 maths-inequalities
• rudin analysis answers
• find roots of equation using Excel
• algebra 1, algebra 2, proportions pc solver
• anton elementary linear algebra exercise solutions
• kumon math worksheets
• how to simplify boolean expressions using a calculator
• math lessons quadratic formula
• solution for problems on cost accounting
• radical in denominator worksheet
• Sample test questions on probability for middle level
• creative publications algebra with pizzazz answers
• arithmatic tables
• college algebra problems
• McDougal Littell Algebra 2 online book
• eighth grade study sheet on polynomials
• "logarithmic inequalities" "how to"
• algebra calculator two variables
• my space
• gmat iq
• free math worksheets with tree diagrams
• holt mathematics course 2 answers
• factoring polynomials solver
• sat 7grade
• McDougal Littell Modern World History workbook online
• learn algebra easy
• domain & range graphically
• 6th grade math problems.com
• algebra II online tutoring
• Aptitude question & answers
• exponents greatest common factor lesson
• simplify square root calculator
• free downloadable theorems of matric class
• mcdougal littell worksheets
• study guide for 6th grade math
• easy worksheets on volume of a cube
• spectroscopic notation practice worksheet
• Trigonomic software
• math printouts for kids
• Solving cubic equations using MATLAB
• Ti 83 algebra solver
• how to solve exponential formulas
• lowest common multiple in java
• hard math equation examples
• glencoe mathematics course 2
• online polar graphing calculator
• worksheets on direct and indirect variations
• worksheets + elementary algebra equations
• download aptitude questions book
• vertex form easy
• Prime Factorization of the Denominator
• operations with radical expressions
• combining like terms worksheet
• factoring polynomials grouping cubes
• download algebra solver for free
• How to teach using Prentice Math
• Foundation for Algebra: Year 1 solutions
• Common denominator calculator
• solving equations newton fortran
• solved aptitude question
• free online algebra calculator
• multiplying and dividing radical expressions power point
• Subtracting integers worksheets
• positive and negative numbers in a number line free worksheets
• how to calculate the greater common divisor
• solving third order equation
• free math worksheets on estimation
• multiplacation.com
• free online 6th grade math homework answers
• free printable book algebra
• numeracy worksheets for gr 2
• Maths worksheets on equations
• matlab solving non-linear equation
• practice for intro algebra on y intercept
• my algebra homework
• factoring trinomials equation solver
• convert decimal binary mathematica code
• multiplying decimal review worksheets
• hyperbola's on a graphing calculator
• adding mutually exclusive equations
• free printable lessons for teaching fractions to beginners
• is the TI-83 plus calculator usable in the ACT test
• simplifying square roots worksheet
• aptitude test question by software company from mahape
• c aptitude questions
• mix numbers
• how to graph sideways parabolas
• qball square 9 calculator
• algebra book online teacher answer key
• printable probability 2nd 3rd grade
• variable worksheets
• how to calculate gcd
• hyperbola basic equation
• math worksheets for seventh graders over similar figures and congruent geometric figures
• mathe type 5.0 equation
• Free Pre-Algebra Test
• TI-85 Rom operating download
• Exponents and power problems for children
• ti-84 plus emulator
• Simplifying Algebra equation question for free
• downloadable maths tests ks3 free
• Algebra 1 McDougal Littell textbook answers
• worksheets in math for the 2d grade
• third order polynomial equation from curve
• "developing skills in algebra 1"
• symbollic method?
• summations ti-84 plus
• scale factor worksheet
• simple radical equations
• 9th grade math worksheets
• conics.ppt
• how to pass algebra clep
• printable factoring quiz
• mathmatic formulas
• multiple solver
• solving multi-variable inequalities
• ti-89 on new york regents test
• online algebra charts
• thinkwell math answers
• a website that answers trigonometry for free
• free worksheets in math for the 2d grade
• math trivia
• worksheet on algebraic expressions + 5th grade
• convert square footage into percentages on a calculator
• algebra works
• algebra order of operations including radicals
• solving equation using addition and subtraction example
• 3rd order Polynomial Transform,matlab
• solving an quadratic equation to degrees of 3
• simultaneous nonlinear equation roots vb
• answers to Algebra 1 linear equations homework
• Mathematica for Physics book free down load
• convert equation line Ax+By=C
• algrebra 2
• using properties of square roots simplify
• algerbra 2 calculators
• Math Expanding vrs factoring
• graphing 3d regions by hand
• dividing fractions test
• polynomials solve square root solver
• expressions solver
• Algebra printouts
• math game 9th grade
• Saxon Algebra 1, 3rd edition - test 14, form a
• free equation calculator
• lowest common factor
• fraction completing the square
• mayan algebra
• GMAT math lessons notes pdf
• Free multivariable lagrange calculator
• ti84 emulator
• how to study for algebra cpt
• analytical trig answers
• simultaneous equations solver linear and quadratic
• definition quadratic nonlinearity
• math worksheets for 7th graders
• TI 89 2 equations 2 variables
• math note about conversion charts (6th grade conversion)
• In math, how does a term differ from a factor?
• Where are the squares? worksheet
• logarithm practice equations
• simple interest fifth grade
• picture of a derivative of a quadratic equations
• solve homogeneous partial differential equation
• McDougal Littell algebra 2
• algebra 1 question and answers
• worksheets on adding positive and negative integers
• solving simple kids inequalities worksheets
• special factorization cubes help
• 8th grade+english=games
• Printable math problems for 5th and 6th graders
• prentiss holt pre algebra
• solving equations by subtracting in maths
• free adding and subtracting money
• maths parabolas gcse
• App Equation Writer de Creative Software Design
• 7th grade geometry printouts online
• aptitude question download
• ALGEBRA GRADE 9 QUIZZES
• t 83 cheat notes
• KS2 SATS practice papers online uk
• find least commom multiple worksheet
• factor trinomials game
• solving algebraic equations by inverse operations--free worksheets
• yr2 maths
• scale factoring
• hyperbola graphs
• algebra square root chart
• algebraic equations in Excel
• systems of equations +exponential
• learn how to get a formula of fraction
• simplify radical expressions calculator
• teacher editions for worksheets from algebra 1 glencoe
• graphing calculator hyperbola
• free downloadable coordinate plane
• algebra worksheets online free
• FREE MONEY WORK SHEETS FOR GRADE1
• fluid mechanics MCQ questions
• calculators with the cubed root key
• inspiration algebra
• balancing equation problems 7th grade
• apptitude questions+maths
• probability formula in math solving
• Algebra solving problem with answer keys
• non-homogeneous second-order Ordinary Differential Equation
• mcdougal littell homework cheats
• how to solve quadratic sequences
• PRINTABLE ALGEBRA QUIZ
• english worksheets for 9th grade
• free materials on permutation and combination
• percentage equations
• "online calculator with trig functions"
• Basic Accounting Sample Test Questions
• math answers mcdougal littel middle school workbook 11.4
• free printable math worksheets multiplying fractions
• ks3 sats algebra questions
• Mathematika cheat free download
• translations math printable worksheets
• in integers how do you write an addition for a subtraction
• probability printable worksheets elementary
• solving systems of linear equations with ti 83 plus
• grade 6 algebra printable worksheets
• system of nonlinear equations in matlab
• substitution of linear equation graphing
• factor worksheets
• hardest math problem in the world ever made
• solving polynomial equations online programs
• how do you simplify cube root ?
• Orleans Hanna practice test
• comparing linear relations free
• quadratic equation for beginners
• int_alg_tut29_specfact.htm
• how to put vertex formulas in ti-84+
• free yr 3 maths test
• what is the least common multiple of 42 and 36
• "high school algebra projects"
• free intermediate algebra for college students solution manuel
• houghton science grade 3rd ebook password
• all algebric formulas
• free mathe
• squar rooting exponents
• ti89 solved laplace
• "how to find a square root"+math+free worksheet
• free algebrator download
• fractions questions maths year 8 answers
• Prentice Hall Mathematics Algebra 1 workbook
• easy algebra quiz
• quadratic equation calculator factor
• solving equations for ti92
• kumon addition practice printable sheets
• ks3 math
• High school Math- permutations and combinations- final exam
• scale maths
• solving simultaneous equations determinant matlab
• Algebra 2 answers
• finding the square root of x to the 8th
• system of linear equations exams
• print out math test
• 6th grade algebra problems
• Online Graphing Calculator Solver
• online factoring
• TI-85 Rom download
• mcdougal littell algebra 1 answers
• question of subtract mixed numbers with different denominators
• lessons square roots
• FREE KS3 WORKSHEETS
• english aptitude test papers
• graphing system of equalities
• teach yourself physics free
• nj pass sample test for 2nd grade
• Free Sample Math Star Test question for 7th grade
• mcdougal littell algebra and trigonometry structure and method book 2 answers
• free algebra worksheet
• free answer to maths problem
• trinomial factoring online calculator
• free middle school worksheets on symmetry
• pre-algebra problem-solving questions
• how to save things into a texas instruments t1-84 calculator
• math worksheets on adding and subtracting negative and +postive numbers
• derivative rule calculator
• simultaneous equation calculator
• solved maths problems for GRE
• Solving a second order differential equation
• THIRD GRADE MATH CONVERSION
• multiply+square+root+calculator
• factoring calculator shows steps
• boolean logic worksheet
• parabola powerpoint lessons
• instructions to convert lineal feet to square feet
• download ti 84 plus games
• radical expression calculator
• worksheet on midpoint and slope
• free aptitude ebooks
• homogeneous non homogeneous definition
• compound inequality calculator
• fractional expressions calculator
• ks3 example sats PAPER multiple choice
• simplifying division exponents
• factoring quadratic equations
• calculate x + y intercept on a ti 83
• coordinates worksheet grade 3
• find cube root on ti-89 plus
• free online + calculator + compound inequalities
• first grade worksheets
• +highschool algebra
• math solving program
• subtracting positive and negative numbers worksheet
• 1st grade SAT printable sample
• answers for pizzazz math
• maths - simultaneous equations - grade 11
• free worksheets solving equations symbolically
• algebra practice papers grade 8
• parabola find the range
• math workbook anwsers
• Radical Equations solver
• ti 89 convolution
• finding range,median mode and mean using jelly beans
• cheat sheets for math for 4th grade
• Algebra II parabola activities
• mcdougal littell algebra 2 honors book
• algebra printouts
• how to avoid nonlinear error solver excel
• "simple machine" sample exam
• solving multiple non linear equation with solver
• prealgerbra ratio work examples
• math games for factoring polynomials
• calculate grade percentage formulas
• McDougal Littel chapter 8 algebra 2 unit plan
• integration calculator by substitution
• complete factoring solver
• solve second order nonlinear differential equations
• algebra factoring questions
• easy algebra test year 10
• grade nine mathematics
• mathematics exercices multiple choice test
• Creating program QUAD on TI-84 plus
• online calculator for factoring by grouping
• convert decimal to mixed number
• simplifying using ti-89
• free math grade 8 liner equation exercise
• linear algebra and its applications answers to review problems
• free practise papers for SATS
• answers to algebra 2 homework
• McDougal Littell Algebra 2 online
• equation fourth degree calculator
• yoahoo multiplacation sheet
• "casio" "basic programme"
• algerba calculator
• easy math riddles 4th grade free
• Cost Accounting Problems soluition
• square roots ti 83 plus
• polar coordinate graphing calculator
• work sheets to print for free for sevnth graders
• linear equation powerpoint
• what is a scale factor in math
• abstract Algebra/ solved problem/ pdf
• the easyest way to understand albebra
• online equation solver steps
• rational expressions solver
• polynomial equations + work sheets
• simplifying conjugation / worksheet
• sixth ratio math
• nth term quiz
• how to work the quadratic formula on a calculator
• unity coherence emphasis worksheets
• holt general mathematics answers
• algebraic addition worksheet free
• measurement printable worksheets for third grade
• algebra + grade five + free teaching resources + variables
• algebra clep
• Algebra solving problems with explanation+answer keys
• EASY TO LEARN ALGEBRA
• slove equations
• substitution algebra practice step by step
• 3rd grade math mass printable worksheet
• "complex numbers" hyperbola
• factorize + cheat way
• download ks2 reading sats papers
• Prentice Hall Physics Review Book Answers
• algebra help Factorial expressions
• permutation combination problems and solutions
• printable SAT test for mathematics
• mixed number into a decimal
• alegra worksheets for grade 5
• O Level zimsec past exam papers
• it 83 plus radical expressions
• algebra formulas
• Math-GCF, binomials
• exponents adding, multiplying, dividing, subtracting for free
• worksheets on addition and subtraction of fractions
• solving a set of 7 equations in excel in excel
• solving linear systems by linear combinations
• 9th standard maths models to teach
• inverse log TI-89
• worksheets for beginner algebra
• regrouping equations with TI89 Titanium
• solving math using the symbolic method
• online scientific calculator fractions
• aptitude questions with step by step answers
• hardest maths equation
• Answers to pre algebra math book
• simultaneous equation 3 level word problem
• equation for a parabola finite math
• printable practice English reading questions KS3
• online graphing calculator that makes tables
• define logarithmic charts
• convert binary address to decimal address in JAVA
• cost accounting homework help
• physics grade 8 question papers
• how to multiply and divide rational expressions
• subtracting integer powerpoint
• completing the square calculator
• gcse maths worksheet percentages free download
• holt Algebra 1 book (free)
• visual basic math tutor program
• free math sats papers for ks2
• learn algebra easily
• engineering aptitude test papers and answers
• "free algebra calculator"
• year 8 equation solver
• graphing linear equations worksheets
• Algebra 1 McDougal Little
• formula trigonometry a curved line
• integers addition online games
• how to download test base for year 6 sats
• simplifying radicals calculator
• numerical partial differential equations-powerpoint
• solve a fourth order equation calculator
• factoring quadratic puzzles to print
• apTITUDE questions+answers
• FREE MATH CLEP PRACTICE TESTS
• real world combining like terms
• radical expression help
• free ratio word problem worksheets
• how does polynomial help mankind
• converting a negative octal digit into decimal
• Ti-83 R2 quadratic regression
• algabra
• rational exponents worksheets
• math test helper
• hyperbola definition
• Compound inequality word problems
• balancing chemical equations video
• McDougal Littell Inc. geometry resource book
• calculating log on a calculator
• interactive tutorial on fractional exponents
• compare dividing a polynomial by a bionomial to long division
• solving game theory word problems
• 4th grade worksheets
• math tutorial "coordinate transformation"
• algebra with pizzazz! creative publications
• spanish mcdougal littell answer key
• Multiplying and Dividing Integers w/ calculators
• simplified radicals
• convert int to time java
• college algebra combination
• What Is the Hardest Math Equation in the World?
• algebra;pizazz
• logarithms grade 10
• expanding logarithms expressions
• online homework help- algebra 1- factors of monomials and polynomials
• lenear programming
• radical simplifier algebra
• Algebrator review
• how to write a book using the least common denominator
• kumon answers
• vb polar Point method
• online worksheets with solustions
• law of logarithms worksheet
• online simplifying equations program
• free Mcdougal Littell Algebra 1 answers
• Basic Math aptitude questions
• prealgebra graph for 6th graders
• Factorization Quadratics
• math problem for 7th graders
• java program to calculate the values of two simultaneous equations
• Gr 5 Math Games Ontario
• 2nd order differential equations online solver
• find any value for which any rational fraction is undefined
• slope intercept formula worksheet
• maths worksheets for grade 6 indian level
• online exam papers C language download
• "algebra substitution solver"
• online tutorial for prentice hall mathematics algebra I
• help with algebra cd
• "trinomial calculator"
• fractions/4th grade
• graphing nonlinear algebraic equations using the newton-raphson method "matlab"
• ks3 maths test downloads free
• answers to houghton mifflin vocabulary and study guide chapter 9 lesson 2
• 2nd order binomial expansion
• algebra program for TI-84 plus
• fraction subtractor
• geometric probability worksheets
• sats paper+printable+tests
• beginner algebra "two equations" solving for x
• "TRANSITION TO ADVANCED MATHEMATICS" video lecture download
• c# simultaneous quadratic equations
• free printable all eighth grade math printables
• free management accounting books download site
• asymptote solver
• PDF downloading aptitude question
• math 4 kids.com
• free printable order of operation worksheets for third grade
• Essay about applicationof mathematics II in real life
• Common denominator solver
• free printable 9th grade curriculum with anwser keys
• worlds hardest math word problems
• the highest common factor of 29\
• algebra 2 skills practice workbook online
• proportion worksheets
• math lessons and graphing pictures on the coordinate plane
• abstract algebra-pdf
• online teaching calculater
• combination algebra
• solving equations using quadratic methods harder examples
• Printable Worksheets GED
• algebra helper download
• practise sat papers interactive
• exponent simplifier
• glencoe tutorials
• 9th grade algebra test
• online book for aptitude questions
• square worksheets
• matlab solving 3 unknowns
• GED algebra studying
• how to use the solver in ti 89
• adding positive and negative numbers free worksheets
• subtact and add fractions with diffrend denominators worksheets
• fre algebra equations
• graphing calculators + quadratic lessons
• graph the linear equation in two variables with calculator
• ks3 sats online test
• solve for x variable worksheets
• online calculator solving using replacement sets
• cube route excel
• algebra 4th grade expressions
• free online polynomial calculators
• math geometry trivia with answers
• solve rational expressions
• subtracting integers worksheets
• all maths formulas
• permutation and combination basics
• adding and subtracting negative and positive numbers
• why do we need to equal denominator
• algebrator download
• T-183 Texas Instruments User Manual
• solving second order nonhomogeneous differential equations
• rate of change formula algebra
• math worksheets/ slope / intercept
• how to solve algebra equations
• Chapter 7 Test A Geometry McDougal
• printable free maths worksheets ks3
• algebra program
• square root cube root symbols
• the foil method math inventor
• activities for graphing quadratic functions
• free printable 8th grade math work sheets
• "free" "printable" "math" "word Problems" "third grade"
• taking cube root with TI-83 calculator
• worksheets for real life problems with linear equations
• free tricks for rational expressons
• ratio problem solver
• algebra with pizzazz
• elimination algebra 1 problems
• steps for graphing calculators
• cube root calculator of fractions
• extraneous solutions positive and negative life example engineering
• prealgebra worksheets answers
• solving multiple non linear equation with solver excel
• decimals to fractions equivilant chart.
• aptitude ebook free downloads links
• graphing equalities
• solving equations worksheet
• addition and subtraction sign history
• iowa algebra aptitude test
• foil worksheet grade 9
• solve quadratic equations using perfect squares
• aptitude questions in c
• linear equations worksheet
• solving linear equations newton raphson fortran
• matrice complex number ti 84 plus
• permutations and combinations worksheet
• mathmatics-rule of three
• Everyday Uses of Hyperbolas
• physics equations square roots
• finding standard deviation on t1-83 graphing calculator
• factoring using ti-83
• glencoe quiz answers chapter six algebra 1
• add,subtract,multiply,and divide positive and negative integer
• "ppt converter free
• estimation worksheet eighth grade
• vb6 cube roots
• mathimatics for kids
• Worksheets Order of Operations
• free matrices worksheets
• inverting polynomial fractions
• six grade sample test probabilty
• factoring calculator
• evaluating agebraic expressions
• printable math ged practice test with answer sheet
• Aptitude questions
• how to Cube on a TI 83
• kumon answer book
• FREE MATH SHEET PROBLEM SOLVING FOR 5TH GRADES
• Operations with polynomials/ distributive property
• coast accounting book by arora free download
• fourth grade fraction questions
• equation solver third power
• free online simplifying solver
• adding and subtracting mixed fractions numbers for kids
• calculate parabola
• Converting a Mixed Number to a Decimal
• prentice hall mathematics pre-algebra teacher addition online book
• make algebra test free online
• Pre-Algebra Geometry printable work problems
• how do you add subtract times and divide percents
• ode23 nonlinear matlab m-file
• eog practice sheets - 3rd grade
• solve equations in excel
• examples of math poems about algebra
• factors worksheets
• iowa 8th grade math test in florida
• all algebraic formulas in used in indian maths book
• simultaneous equations calculaator
• worksheets on variables and expressions
• algebra 2 problem solver
• inequalities worksheet
• TWO STEP EQUATIONS - PRINTABLE WORKSHEETS
• adding and subtracting basic negative numbers
• visual basic geometry source code
• simplifying monomials worksheet
• Common Errors in Algebra
• boolean algebra simplifier
• system of equations powerpoint bittinger
• 7th grade cat6 practice test
• free sheet math problems solving
• uop aleks
• Mcdougal littell online Teacher codes
• solving rational expressions worksheet
• algebra solve for variable in power
• gcse algebra made easy
• scale factoring
• ti-84 rom download
• measurement student module 7th grade McDougal Littell
• printable school worksheets for third grade
• trinomial solver
• free+statistics calculator+graphing
• hard math trivia
• Cheat on My Math Homework
• solve equations in matlab
• solve my algebra questions
• how to use a graphing calculator to solve a system of four equations with four variables
• cost accounting solved questions
• wronskian calculator
• adding and subtracting negative and positive numbers worksheets
• perfect square worksheet elementary
• algebraic expression about common-ion effect
• solving third power equations
• math-median type formula
• hard ratio and fraction worksheet
• basic algebra Equation test
• Find the roots of equation. keep radicals in simplified expressions
• coordinate plane games activties in class
• functions and rational expressions algebra solver
• powerpoint presentation about writing equations
• FREE PRINTABLE 1ST GRADE MATH SHEETS
• free math worksheets for secondary - slope intercept
• worksheet, divide by fractions
• download program answer on mathématique exercices
• free year 5 math tests to do online
• understanding algebra expression
• algebra with pizzazz worksheets
• open source pre algebra
• List at least four things that a balanced chemical equation tells us and explain how each may be helpful.
• aptitude books to download
Online Mean and Variance Update without Full Recalculation
Imagine you are responsible for maintaining the mean and variance of a dataset that is frequently updated. For small-to-moderately sized datasets, much thought might not be given to the method used
for recalculation. However, with datasets consisting of hundreds of billions or trillions of observations, full recomputation of the mean and variance at each refresh may require significant
computational resources that may not be available.
Fortunately it isn’t necessary to perform a full recalculation of mean and variance when accounting for new observations. Recall that for a sequence of $n$ observations $x_{1}, \dots x_{n}$ the
sample mean $\mu_{n}$ and (population) variance $\sigma_{n}^{2}$ are given by:
\begin{align*} \mu_{n} &= \frac{1}{n}\sum_{i=1}^{n} x_{i} \\ \sigma_{n}^{2} &= \frac{1}{n}\sum_{i=1}^{n} (x_{i} - \mu_{n})^{2} \end{align*}
A new observation $x_{n+1}$ becomes available. To calculate the updated mean $\mu_{n+1}$ and variance $\sigma_{n+1}^{2}$ in light of this new observation without requiring full recalculation, we can
use the following:
\begin{align*} \mu_{n+1} &= \frac{1}{n+1}(n\mu_{n} + x_{n+1}) \\ \sigma_{n+1}^{2} &= \frac{n}{n+1}\sigma_{n}^{2} + \frac{1}{n}(x_{n+1} - \mu_{n+1})^{2} \end{align*}
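As a quick sanity check (not part of the original post), the two update identities above can be verified numerically against a full recalculation with NumPy on arbitrary data:

```python
import numpy as np

# Verify the one-step mean/variance update identities on random data.
rng = np.random.default_rng(0)
x = rng.normal(size=1000)

n = x.size - 1                      # first n points, then fold in x[n]
mu_n = x[:n].mean()
var_n = x[:n].var()                 # population variance of the first n points

mu_next = (n * mu_n + x[n]) / (n + 1)
var_next = (n / (n + 1)) * var_n + (x[n] - mu_next) ** 2 / n

assert np.isclose(mu_next, x.mean())
assert np.isclose(var_next, x.var())  # matches full (population) recalculation
```

The identities are exact algebra, so the agreement holds to floating-point precision regardless of the data.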
Consider the following values: 1154, 717, 958, 1476, 889, 1414, 1364, 1047.
The mean and variance for these observations are:
\begin{align*} \mu_{8} &\approx 1127.38 \\ \sigma_{8}^{2} &\approx 65096.48 \end{align*}
A new value, $1251$, becomes available. Full recalculation of mean and variance yields:
\begin{align*} \mu_{9} &\approx \mathbf{1141.11} \\ \sigma_{9}^{2} &\approx \mathbf{59372.99} \end{align*}
The mean and variance calculated using the online update are:
\begin{align*} \mu_{9} &= \frac{1}{9}(8(1127.38) + 1251) \approx \mathbf{1141.11} \\ \sigma_{9}^{2} &= \frac{8}{9} (65096.48) + \frac{1}{8}(1251 - 1141.11)^{2} \approx \mathbf{59372.99}, \end{align*}
confirming agreement between the two approaches.
Note that the variance returned using the online update formula is the population variance. In order to return the updated unbiased sample variance, we need to multiply the variance returned by the
online update formula by $(n+1)/n$, where $n$ represents the length of the original array excluding the new observation. Thus, the updated sample variance after accounting for the new value is:
\begin{align*} s_{n+1}^{2} &= \frac{n+1}{n}\big(\frac{n}{n+1}\sigma_{n}^{2} + \frac{1}{n}(x_{n+1} - \mu_{n+1})^{2}\big) \\ s_{9}^{2} &= \frac{9}{8}(59372.99) \approx 66794.61 \end{align*}
A straightforward implementation in Python to handle online mean and variance updates, incorporating Bessel’s correction to return the unbiased sample variance is provided below:
import numpy as np

def online_mean(mean_init, n, new_obs):
    """
    Return updated mean in light of new observation without
    full recalculation.
    """
    return (n * mean_init + new_obs) / (n + 1)

def online_variance(var_init, mean_new, n, new_obs):
    """
    Return updated variance in light of new observation without
    full recalculation. Includes Bessel's correction to return
    unbiased sample variance.
    """
    return ((n + 1) / n) * (((n * var_init) / (n + 1)) + (((new_obs - mean_new)**2) / n))

a0 = np.array([1154, 717, 958, 1476, 889, 1414, 1364, 1047])
a1 = np.array([1154, 717, 958, 1476, 889, 1414, 1364, 1047, 1251])

# Original mean and (population) variance.
mean0 = a0.mean()  # 1127.38
variance0 = a0.var()  # 65096.48

# Full recalculation of mean and unbiased sample variance with new observation.
mean1 = a1.mean()  # 1141.11
variance1 = a1.var(ddof=1)  # 66794.61

# Online update of mean and variance with bias correction.
mean2 = online_mean(mean0, a0.size, 1251)  # 1141.11
variance2 = online_variance(variance0, mean2, a0.size, 1251)  # 66794.61

print(f"Full recalculation mean : {mean1:,.5f}")
print(f"Full recalculation variance: {variance1:,.5f}")
print(f"Online calculation mean : {mean2:,.5f}")
print(f"Online calculation variance: {variance2:,.5f}")
Full recalculation mean : 1,141.11111
Full recalculation variance: 66,794.61111
Online calculation mean : 1,141.11111
Online calculation variance: 66,794.61111
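One natural extension (not shown in the original post): because the update formula takes the population variance as input while `online_variance` returns the Bessel-corrected sample variance, a streaming version is simplest if it tracks the population variance internally and applies the correction once at the end. A sketch:

```python
import numpy as np

def streaming_mean_var(values):
    """Fold observations in one at a time; return (mean, unbiased sample variance)."""
    mean, pop_var = float(values[0]), 0.0   # population variance of one point is 0
    for n, x in enumerate(values[1:], start=1):
        new_mean = (n * mean + x) / (n + 1)
        pop_var = (n / (n + 1)) * pop_var + (x - new_mean) ** 2 / n
        mean = new_mean
    n_total = len(values)
    return mean, pop_var * n_total / (n_total - 1)  # Bessel's correction

data = [1154, 717, 958, 1476, 889, 1414, 1364, 1047, 1251]
m, s2 = streaming_mean_var(data)
assert np.isclose(m, np.mean(data))
assert np.isclose(s2, np.var(data, ddof=1))  # 1141.11111, 66794.61111
```

This per-observation fold is mathematically equivalent to Welford's online algorithm, and each step costs O(1) regardless of how many observations have already been absorbed.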
Analysis Situs
When explaining the role of homotopy theory in physics, a recurring stumbling block is that the physicist says “topological” for phenomena that the mathematician would call homotopical (the
mathematician in their right mind, that is, ignoring here misnomers like “THH” and its cousins…): such as in “topological field theory” or “topological quantum computation”.
Vague reference question:
Is there a good reference which would be illuminating to point the interested outsider to on this terminology issue and maybe highlighting the joint etymology of “topology” and “homotopy”.
For instance, the article listed in the entry Analysis Situs starts out explaining topology to such an interested outsider, but at some point just starts using the word “homotopy” instead, without
warning, explanation or comment.
Re #2: We already have an article on this exact aspect: https://ncatlab.org/nlab/show/homotopy+theory+FAQ#intro. It has quite a few quotations from well-known sources.
Ah, thanks!! I had forgotten about the existence of that entry. These are useful quotes for what I am after, yes. Thanks.
$\lim_{x \to 0}\left(\dfrac{1+\sin x}{1+\tan x}\right)^{1/\sin x}$ is equal to
Let $y = \left(\dfrac{1+\sin x}{1+\tan x}\right)^{1/\sin x}$, so that $\log y = \dfrac{1}{\sin x}\left[\log(1+\sin x) - \log(1+\tan x)\right]$.
Clearly $\log y = \dfrac{\sin x - \tan x}{\sin x} + o(1) = 1 - \dfrac{1}{\cos x} + o(1) \to 0$ as $x \to 0$, so the limit equals $e^{0} = 1$.
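As a numerical sanity check (not part of the Filo solution), evaluating the expression at small $x$ in Python shows it approaching 1:

```python
import math

def f(x):
    # The expression whose limit is taken as x -> 0.
    return ((1 + math.sin(x)) / (1 + math.tan(x))) ** (1 / math.sin(x))

for x in (0.1, 0.01, 0.001):
    print(x, f(x))   # tends to 1 as x -> 0
```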
Question Text: $\lim_{x \to 0}\left(\frac{1+\sin x}{1+\tan x}\right)^{1/\sin x}$ is equal to
Updated On: Dec 31, 2022
Topic: Limits and Derivatives
Subject: Mathematics
Class: Class 11
Answer Type: Text solution: 1, Video solution: 2
Upvotes: 321
Avg. Video Duration: 5 min
How many in the Quintet today?
Perhaps my favorite of the new Hubble pictures is this one of Stephan's Quintet (and click here for the 1000 pixel wide version):
It's so stunning that it was chosen as today's Astronomy Picture of the Day. But yesterday, I got a very interesting question to go along with it:
These galaxies are far away, and so we're looking at them in the past. What do they look like today, and are there still five of them?
This is a great question. First off, looking at the image, some of you will count only four galaxies. That's because there are two galaxies that are in the process of merging. You can see both
galactic nuclei still, and some intense star formation (the white and pink areas) on the outskirts. Most galaxies have their own designation; the three isolated ones are known as NGC 7317, 7319, and
7320. However, these two merging ones are NGC 7318A and NGC 7318B.
Now, the galaxy in the upper left, NGC 7320, is not grouped with the other four. It's simply in the line-of-sight, and is roughly 40 million light-years away. However, the other four are 300 million
light-years away, meaning that we're looking at the light that was emitted 300 million years ago.
What do these things look like today? In other words, if we could magically transport ourselves over there, how many galaxies would there be? Four? Three? Two? Just one?
Well, for typical galaxies, it takes over a billion years for two of them to merge completely. But these four have already started merging. Two of them are really close to having merged completely,
and in the 300 million years that light has been on its way to us, those two galaxies "caught in the act" are certainly one by now. The one in the upper right will certainly merge eventually, but 300
million years is simply too short a time to see that happen.
And the pretty elliptical one in the lower left? Well, that will certainly take longer than 300 million years to merge, but it's highly likely that numerous galaxies have already merged to form it.
Most isolated galaxies are spirals, and most galaxies that undergo mergers become ellipticals. So when we look back on the quintet in two billion years, you'll simply have the foreground galaxy and
one large, background, elliptical galaxy. But right now? Over there? There are probably three.
But don't just take my word for it. Check out this video of merging spiral galaxies made by Patrik at UC Santa Cruz:
At least, check out the first 90 seconds or so. Because that is three billion years condensed into less than two minutes, and you can totally see for yourself how close to a completed merger some of
these actually are! Enjoy the view, and have a great weekend!
'Today' is a local term. I find the question about what happens faraway, today, meaningless. Causally-disconnected regions are just... disconnected. I mean, inherently disconnected... What do you think?
So why do the further galaxies look more orange? Is it due to intergalactic dust absorbing the shorter blue wavelengths, or is it a computer generated false colour image? Or some other reason?
First post after lurking for some time.
How violent and fast is this process of merging actually? Although it takes place over a very long period in human perspective, what in terms of speed and time are we talking about in the grand scale
of the universe. And how violent is it? Are the two central black holes going to merge and what are the consequences. Also, there is a lot of empty space between stars, but percentage speaking; about
how many stars will actually collide or have such an impact on each other that any star that has a planet with a chance of life on it will be affected in such a way that we can assume that life is no
longer possible.
I'd like to echo Arjen's question. How would such a merger affect individual stars and their planetary systems? For instance, would we ever see planets 'jumping' from a system to another and live to
tell the tale? I realize the odds of this happening would be incredibly slim, but is it even possible? And... does an elliptical galaxy feature more binary stars than another type of galaxy, as a
result of previous mergers? Fascinating topic :)
What a great question, and a superb answer! Well done to the questioner.
why are galaxies going to merge?
by what power are galaxies pulled?
Check whether 6^n can end with digit 6 for any natural number n.
1 thought on “Check whether 6^n can end with digit 6 for any natural number n.”
1. Answer:
No — 6^n cannot end with the digit 0.
To check: whenever a number's prime factorisation contains 2 × 5 (that is, a factor of ten), that number ends in 0.
But 6 is written as 2 × 3, so 6^n = 2^n × 3^n.
The factor 2 is present but 5 is missing, which is why 6^n can never end in 0.
For a better explanation you can check Example 5 of NCERT.
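A quick empirical check (not part of the original answer) supports the conclusion — the last digit of 6^n is always 6, never 0:

```python
# Last digit of 6**n for n = 1..49, computed with modular exponentiation.
last_digits = {pow(6, n, 10) for n in range(1, 50)}
print(last_digits)  # {6}
```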
Leave a Comment
Single route of administration
Demos: bolusLinear_project, bolusMM_project, bolusMixed_project, infusion_project, oral1_project, oral0_project, sequentialOral0Oral1_project, simultaneousOral0Oral1_project, oralAlpha_project,
Once a drug is administered, we usually describe subsequent processes within the organism by the pharmacokinetic (PK) process known as ADME: absorption, distribution, metabolism, excretion. A PK model is a dynamical system mathematically represented by a system of ordinary differential equations (ODEs) which describes transfers between compartments and elimination from the central compartment.
Mlxtran is remarkably efficient for implementing simple and complex PK models:
• The function pkmodel can be used for standard PK models. The model is defined according to the provided set of named arguments. The pkmodel function enables different parametrizations and different models of absorption, distribution and elimination, defined here and summarized in the following.
• Alternatively, PK macros can be used to define the different components of a compartmental model. Combining such PK components provides a high degree of flexibility for more complex PK models. They can also be combined with an ODE system.
• A system of ordinary differential equations (ODEs) can also be implemented very easily.
Note that the dataset is completely independent from the model: the same dataset (with one dose line per actual dose) can be used to model a simple first-order absorption, transit compartments or a double absorption mechanism with two dose fractions going via different routes. In particular, we make a clear distinction between administration (related to the data) and absorption (related to the model).
We will first explain how the pkmodel macro works and then show examples of its usage.
The pkmodel function
The PK model is defined by the names of the input parameters of the pkmodel function. These names are reserved keywords.
Absorption
• p: fraction of the dose which is absorbed
• ka: absorption rate constant (first-order absorption)
• or, Tk0: absorption duration (zero-order absorption)
• Tlag: lag time before absorption
• or, Mtt, Ktr: mean transit time & transit rate constant
Distribution
• V: volume of distribution of the central compartment
• k12, k21: transfer rate constants between compartments 1 (central) & 2 (peripheral)
• or V2, Q2: volume of compartment 2 (peripheral) & inter-compartment clearance between compartments 1 and 2
• k13, k31: transfer rate constants between compartments 1 (central) & 3 (peripheral)
• or V3, Q3: volume of compartment 3 (peripheral) & inter-compartment clearance between compartments 1 and 3
Elimination
• k: elimination rate constant
• or Cl: clearance
• Vm, Km: Michaelis-Menten elimination parameters
Effect compartment
• ke0: effect compartment transfer rate constant
In the background, the pkmodel function is replaced by the analytical solution (when it exists) or by the corresponding ODE system (when using Mtt, Ktr and/or Vm, Km).
Intravenous bolus injection
Linear elimination
A single iv bolus is administered at time 0 to each patient. The data file bolus1_data.txt contains 4 columns: id, time, amt (the amount of drug in mg) and y (the measured concentration). After
loading the dataset, the columns are automatically tagged by Monolix:
In this example, a row contains either a dose record (in which case y = ".") or an observation record (in which case amt = "."). Dose lines and observation lines are detected automatically based on the dots ".". We could equivalently use the data file bolus2_data.txt, which contains 2 additional columns: EVID (in the green frame) and IGNORED OBSERVATION (in the blue frame) to mark dose and observation lines:
Here, the EVENT ID column identifies the type of event: EVID=1 means that the record describes a dose, while EVID=0 means that the record contains an observed value. The IGNORED OBSERVATION column, on the other hand, is used to tag lines for which the information in the OBSERVATION column-type is missing: MDV=1 means that the observed value of the record should be ignored, while MDV=0 means that the record contains an observed value. The two data files bolus1_data.txt and bolus2_data.txt contain exactly the same information and give exactly the same results.
To model this dataset, we want to use a one compartment model with linear elimination. The ODE system of this model is ddt_Ac = -k*Ac, with concentration Cc = Ac/V. We use the model bolus_1cpt_Vk from the Monolix PK library:
input = {V, k}
Cc = pkmodel(V, k)
output = Cc
We could equivalently use the model bolusLinearMacro.txt (click on the button Model and select the new PK model in the library 6.PK_models/model)
input = {V, k}
compartment(cmt=1, amount=Ac)
elimination(cmt=1, k)
Cc = Ac/V
output = Cc
These two implementations generate exactly the same C++ code and therefore provide exactly the same results. Here, the ODE system is linear and Monolix uses its analytical solution. Of course, it is also possible (but not recommended with this model) to use the ODE-based PK model bolusLinearODE.txt:
input = {V, k}
depot(target = Ac)
ddt_Ac = - k*Ac
Cc = Ac/V
output = Cc
Results obtained with this model are slightly different from the ones obtained with the previous implementations, since a numerical scheme is used here for solving the ODE. Moreover, the computation time is longer (between 3 and 4 times longer in this case) when using the ODE compared to the analytical solution. Individual fits obtained with this model look nice:
but the VPCs show some misspecification in the elimination process:
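As a quick numerical sketch (not part of the Monolix demos; the dose and parameter values below are invented for illustration), the analytical solution of the linear bolus model can be checked against a crude explicit-Euler integration of ddt_Ac = -k*Ac:

```python
import math

# Hypothetical values for illustration: 100 mg bolus, V = 10 L, k = 0.2 1/h.
D, V, k = 100.0, 10.0, 0.2

def cc_analytic(t):
    """Analytical concentration after a bolus: Cc(t) = (D/V) * exp(-k*t)."""
    return D / V * math.exp(-k * t)

def cc_euler(t, n_steps=100_000):
    """Numerical solution of ddt_Ac = -k*Ac with an explicit Euler scheme."""
    ac, dt = D, t / n_steps
    for _ in range(n_steps):
        ac += dt * (-k * ac)
    return ac / V

t = 5.0
print(cc_analytic(t))  # ≈ 3.679
print(cc_euler(t))     # close to the analytical value, up to O(dt) error
```

This mirrors the trade-off noted above: the closed form is exact and cheap, while the numerical scheme only approximates it at extra cost.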
Michaelis Menten elimination
A nonlinear elimination is used with this project; following Michaelis-Menten kinetics, the ODE is ddt_Ac = -Vm*Ac/(V*Km + Ac), with Cc = Ac/V.
This model is available in the Monolix PK library as bolus_1cpt_VVmKm:
input = {V, Vm, Km}
Cc = pkmodel(V, Vm, Km)
output = Cc
Instead of this model, we could equivalently use PK macros with bolusNonLinearMacro.txt from the library 6.PK_models/model:
input = {V, Vm, Km}
compartment(cmt=1, amount=Ac, volume=V)
elimination(cmt=1, Vm, Km)
Cc = Ac/V
output = Cc
or an ODE with bolusNonLinearODE:
input = {V, Vm, Km}
depot(target = Ac)
ddt_Ac = -Vm*Ac/(V*Km+Ac)
Cc = Ac/V
output = Cc
Results obtained with these three implementations are identical, since no analytical solution is available for this nonlinear ODE. We can then check that this PK model describes the elimination process of the data much better:
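A small numerical sketch of the saturable elimination used in bolusNonLinearODE (parameter values are invented for illustration, not the estimates from the project):

```python
# Michaelis-Menten elimination: ddt_Ac = -Vm*Ac/(V*Km + Ac).
D, V, Vm, Km = 100.0, 10.0, 8.0, 2.0

def cc_mm(t_end, dt=1e-3):
    """Concentration Cc = Ac/V after explicit-Euler integration of the ODE."""
    ac, t = D, 0.0
    while t < t_end:
        ac += dt * (-Vm * ac / (V * Km + ac))
        t += dt
    return ac / V

# When Ac >> V*Km the elimination rate saturates near Vm (close to zero-order);
# when Ac << V*Km it is approximately first-order with rate constant Vm/(V*Km).
print(cc_mm(1.0))
```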
Mixed elimination
The Monolix PK library contains "standard" PK models; more complex models should be implemented by the user in a model file. For instance, we assume in this project that the elimination process is a combination of linear and nonlinear processes: ddt_Ac = -Vm*Ac/(V*Km + Ac) - k*Ac.
This model is not available in the Monolix PK library. It is implemented in bolusMixed.txt:
input = {V, k, Vm, Km}
depot(target = Ac)
ddt_Ac = -Vm*Ac/(V*Km+Ac) - k*Ac
Cc = Ac/V
output = Cc
This model, with a combined error model, seems to describe the data very well:
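For intuition, the mixed-elimination rate is simply the sum of a first-order term and a saturable Michaelis-Menten term, so it always exceeds either component alone (parameter values below are invented for illustration):

```python
V, k, Vm, Km = 10.0, 0.1, 5.0, 2.0

def linear_part(ac):
    return k * ac

def mm_part(ac):
    return Vm * ac / (V * Km + ac)

def mixed_rate(ac):
    """-ddt_Ac of the mixed model (cf. the ODE in bolusMixed.txt)."""
    return mm_part(ac) + linear_part(ac)

print(mixed_rate(100.0))  # ≈ 14.17  (= 500/120 + 10)
```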
Intravenous infusion
Intravenous infusion assumes that the drug is administered intravenously at a constant rate (the infusion rate) during a given time (the infusion time). Since the amount is the product of infusion rate and infusion time, an additional column, INFUSION RATE or INFUSION DURATION, is required in the data file; Monolix can use either one. The data file infusion_rate_data.txt has an additional column:
It can be replaced by infusion_tinf_data.txt which contains exactly the same information:
With this project we use a 2-compartment model with nonlinear elimination and parameters V1, Q, V2, Vm and Km.
This model is available in the Monolix PK library as infusion_2cpt_V1QV2VmKm:
input = {V1, Q, V2, Vm, Km}
V = V1
k12 = Q/V1
k21 = Q/V2
Cc = pkmodel(V, k12, k21, Vm, Km)
output = Cc
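To sketch how a constant-rate infusion input enters such a model, here is a simplified one-compartment version with linear elimination (the project's two-compartment Michaelis-Menten model only adds terms; all values are invented for illustration):

```python
import math

D, Tinf, V, k = 100.0, 2.0, 10.0, 0.2
rate = D / Tinf  # infusion rate = amount / infusion time

def cc_num(t, dt=1e-3):
    """Explicit-Euler integration: drug enters at a constant rate while infusing."""
    ac, s = 0.0, 0.0
    while s < t:
        inp = rate if s < Tinf else 0.0
        ac += dt * (inp - k * ac)
        s += dt
    return ac / V

def cc_during(t):
    """Closed form during the infusion (valid for t <= Tinf)."""
    return rate / (k * V) * (1.0 - math.exp(-k * t))
```

The concentration rises during the infusion, peaks near t = Tinf, then decays exponentially.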
Oral administration
first-order absorption
This project uses the data file oral_data.txt. For each patient, the dosing information is the time of administration and the amount. A one compartment model with first order absorption and linear elimination is used with this project. The parameters of the model are ka, V and Cl. We will then use the model oral1_kaVCl.txt from the Monolix PK library:
input = {ka, V, Cl}
Cc = pkmodel(ka, V, Cl)
output = Cc
Both the individual fits and the VPCs show that this model doesn’t describe the absorption process properly.
Many options exist for implementing this PK model with Mlxtran:
• using PK macros: oralMacro.txt:
input = {ka, V, Cl}
compartment(cmt=1, amount=Ac)
oral(cmt=1, ka)
elimination(cmt=1, k=Cl/V)
Cc = Ac/V
output = Cc
• using a system of two ODEs as in oralODEb.txt:
input = {ka, V, Cl}
k = Cl/V
ddt_Ad = -ka*Ad
ddt_Ac = ka*Ad - k*Ac
Cc = Ac/V
output = Cc
• combining PK macros and ODE as in oralMacroODE.txt (macros are used for the absorption and ODE for the elimination):
input = {ka, V, Cl}
compartment(cmt=1, amount=Ac)
oral(cmt=1, ka)
k = Cl/V
ddt_Ac = - k*Ac
Cc = Ac/V
output = Cc
• or equivalently, as in oralODEa.txt:
input = {ka, V, Cl}
depot(target=Ac, ka)
k = Cl/V
ddt_Ac = - k*Ac
Cc = Ac/V
output = Cc
Only models using the pkmodel function or PK macros use an analytical solution of the ODE system.
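The analytical solution used in those cases is the classical one-compartment first-order absorption ("Bateman") profile. A sketch, taking bioavailability F = 1 and invented parameter values:

```python
import math

D, ka, V, Cl = 100.0, 1.0, 10.0, 2.0
k = Cl / V  # elimination rate constant, here 0.2

def cc_oral1(t):
    """Cc(t) = D*ka / (V*(ka-k)) * (exp(-k*t) - exp(-ka*t)), valid for ka != k."""
    return D * ka / (V * (ka - k)) * (math.exp(-k * t) - math.exp(-ka * t))

t_max = math.log(ka / k) / (ka - k)  # time of the concentration peak
print(t_max)  # ≈ 2.01
```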
zero-order absorption
A one compartment model with zero order absorption and linear elimination is used to fit the same PK data with this project. Parameters of the model are Tk0, V and Cl. We will then use model
oral0_1cpt_Tk0Vk.txt from the Monolix PK library
input = {Tk0, V, Cl}
Cc = pkmodel(Tk0, V, Cl)
output = Cc
Implementing a zero-order absorption process using ODEs is not easy; on the other hand, it becomes extremely easy using either the pkmodel function or the PK macro oral(Tk0).
The duration of a zero-order absorption has nothing to do with an infusion time: it is a parameter of the PK model (exactly like the absorption rate constant ka, for instance), and it is not part of the dosing information in the dataset.
Sequential zero-order first-order absorption
• sequentialOral0Oral1_project
More complex PK models can be implemented using Mlxtran. A sequential zero-order first-order absorption process assumes that a fraction Fr of the dose is first absorbed during a time Tk0 with a zero-order process; the remaining fraction is then absorbed with a first-order process. This model is implemented in sequentialOral0Oral1.txt using PK macros:
input = {Fr, Tk0, ka, V, Cl}
compartment(amount=Ac)
absorption(Tk0, p=Fr)
absorption(ka, Tlag=Tk0, p=1-Fr)
elimination(k=Cl/V)
Cc = Ac/V
output = Cc
Both the individual fits and the VPCs show that this PK model describes the whole ADME process for the same PK data very well:
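A direct simulation sketch of the sequential mechanism (not the Monolix implementation; values are invented): a fraction Fr of the dose enters at a constant rate over [0, Tk0], and the remaining (1-Fr)*D is then absorbed first-order from a depot.

```python
D, Fr, Tk0, ka, V, Cl = 100.0, 0.4, 1.0, 1.2, 10.0, 2.0
k = Cl / V

def cc_seq(t_end, dt=1e-3):
    ad = 0.0        # depot for the first-order fraction
    ac = 0.0        # central compartment amount
    filled = False
    t = 0.0
    while t < t_end:
        if not filled and t >= Tk0:
            ad += (1.0 - Fr) * D  # depot loaded once the zero-order phase ends
            filled = True
        zero_in = Fr * D / Tk0 if t < Tk0 else 0.0
        first_in = ka * ad
        ad -= dt * first_in
        ac += dt * (zero_in + first_in - k * ac)
        t += dt
    return ac / V
```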
Simultaneous zero-order first-order absorption
• simultaneousOral0Oral1_project
A simultaneous zero-order first-order absorption process assumes that a fraction Fr of the dose is absorbed with a zero-order process while the remaining fraction is absorbed simultaneously with a
first-order process. This model is implemented in simultaneousOral0Oral1.txt using PK macros:
input = {Fr, Tk0, ka, V, Cl}
compartment(amount=Ac)
absorption(Tk0, p=Fr)
absorption(ka, p=1-Fr)
elimination(k=Cl/V)
Cc = Ac/V
output = Cc
alpha-order absorption
An alpha-order absorption process assumes an absorption rate proportional to Ad^alpha, with rate constant r. This model is implemented in oralAlpha.txt using ODEs:
input = {r, alpha, V, Cl}
depot(target = Ad)
dAd = Ad^alpha
ddt_Ad = -r*dAd
ddt_Ac = r*dAd - (Cl/V)*Ac
Cc = Ac/V
output = Cc
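A direct simulation sketch of an alpha-order absorption model (the flux r*Ad^alpha that leaves the depot is the same flux that enters the central compartment, so mass is conserved; parameter values are invented):

```python
D, r, alpha, V, Cl = 100.0, 0.5, 0.8, 10.0, 2.0
k = Cl / V

def cc_alpha(t_end, dt=1e-3):
    ad, ac, t = D, 0.0, 0.0
    while t < t_end:
        dad = r * max(ad, 0.0) ** alpha  # absorption flux r*Ad^alpha, guard Ad >= 0
        ad -= dt * dad
        ac += dt * (dad - k * ac)        # same flux enters the central compartment
        t += dt
    return ac / V
```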
transit compartment model
A PK model with a transit compartment of transit rate Ktr and mean transit time Mtt can be implemented using the PK macro oral(ka, Mtt, Ktr), or using the pkmodel function, as in oralTransitComp.txt:
input = {Mtt, Ktr, ka, V, Cl}
Cc = pkmodel(Mtt, Ktr, ka, V, Cl)
output = Cc
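A transit chain can be sketched directly as a cascade of compartments, each emptying at rate Ktr, delaying the dose before a first-order absorption step. This sketch uses a hand-picked integer chain length n; Monolix's (Mtt, Ktr) parametrization handles non-integer chain lengths internally. All values are invented for illustration.

```python
D, n, Ktr, ka, V, Cl = 100.0, 3, 2.0, 1.0, 10.0, 2.0
k = Cl / V

def cc_transit(t_end, dt=1e-3):
    a = [D] + [0.0] * n   # a[0] holds the dose, a[1..n] are transit compartments
    ad, ac, t = 0.0, 0.0, 0.0
    while t < t_end:
        flows = [Ktr * x for x in a]
        a[0] -= dt * flows[0]
        for i in range(1, n + 1):
            a[i] += dt * (flows[i - 1] - flows[i])
        ad += dt * (flows[n] - ka * ad)  # absorption compartment
        ac += dt * (ka * ad - k * ac)    # central compartment
        t += dt
    return ac / V
```

The chain produces the characteristic delayed, smooth absorption profile: almost nothing reaches the central compartment at early times.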
Using different parametrizations
The PK macros and the function pkmodel use some preferred parametrizations and some reserved names as input arguments: Tlag, ka, Tk0, V, Cl, k12, k21. It is however possible to use another
parametrization and/or other parameter names. As an example, consider a 2-compartment model for oral administration with a lag, a first order absorption and a linear elimination. We can use the
pkmodel function with, for instance, parameters ka, V, k, k12 and k21:
input = {ka, V, k, k12, k21}
Cc = pkmodel(ka, V, k, k12, k21)
output = Cc
Imagine now that we want i) to use the clearance Cl instead of the elimination rate constant k, ii) to use capital letters for the parameter names. We can still use the pkmodel function as follows:
input = {KA, V, CL, K12, K21}
Cc = pkmodel(ka=KA, V, k=CL/V, k12=K12, k21=K21)
output = Cc
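Such reparametrizations are just arithmetic on the way in. A tiny sketch converting clearance-style parameters into the rate constants used above (k = CL/V as in this example, and k12 = Q/V1, k21 = Q/V2 as in the infusion model earlier):

```python
def micro_constants(V1, CL, Q, V2):
    """Return (k, k12, k21) computed from clearance-style parameters."""
    return CL / V1, Q / V1, Q / V2

k, k12, k21 = micro_constants(V1=10.0, CL=2.0, Q=1.0, V2=20.0)
print(k, k12, k21)  # 0.2 0.1 0.05
```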
The Sum-of-Squares Paradigm in Statistics
The best way to contact me is by email. Please be sure to include "STATS 314" in the subject line.
This course will introduce and explore Sum-of-Squares (SoS) algorithms in the context of statistics. In recent years, the powerful SoS "proofs-to-algorithms" paradigm has led to numerous
breakthroughs in algorithmic statistics, yielding polynomial-time algorithms with provable guarantees for previously unreachable problems including learning latent variable models (via tensor
decomposition), clustering, numerous problems in robust statistics, and more.
The course will begin with introductory lectures designed to bring the uninitiated up to speed: we'll introduce the SoS "proofs-to-algorithms" paradigm, as well as the underlying tools from convex
optimization (semidefinite programming). We will then cover a number of applications, including clustering, robust mean estimation, robust regression, mean-field approximations of Ising models, and
tensor decompositions for learning latent variable models. In the wake of these applications, an important question is how to make these algorithms not only polynomial-time but practical, and we will
see the current best efforts on this front. Finally, we'll also explore the ways in which SoS has informed the study of information-computation tradeoffs, or the interaction of computational and
statistical resources.
Below is a preliminary schedule. It will be updated as we progress through the course.
Date Topic Notes
3/28 Introduction: robust mean estimation. [notes] online
3/30 Semidefinite programming and the SoS algorithm [notes] online
4/04 Matrix and tensor completion Section 2.2 of this survey covers the lecture content. Handwritten notes available on Canvas, until typed lecture notes are posted.
4/06 Community detection in block models & Emmanuel Abbe's survey is a great reference, with a thorough (though 5 year old) bibliography.
information-computation tradeoffs [notes]
4/11 Clustering mixtures of Gaussians [notes]
- Homework 0 available on Canvas, due Monday April 18 at
5:00pm Pacific.
4/13 Clustering 2, certifiable hypercontractivity (same notes as previous lecture)
4/18 Global correlation rounding [notes] Guest lecture by Frederic Koehler
4/20 Mean-field approximations in Ising Models Guest lecture by Frederic Koehler, see this paper
4/25 Robust linear regression [notes]
4/27 List-decodable linear regression The lecture is based on the paper of Karmalkar, Klivans, and Kothari. The concurrent paper of Raghavendra and Yau obtains similar
5/02 List-decodable regression cont. See the survey article of Monique Laurent for details on how to certify the non-negativity of univariate polynomials over an interval
(section 3.6).
5/04 Tensor decomposition for learning latent variable Quasi-polynomial time SoS algorithms for dictionary learning via tensor decomposition, a representative paper about using tensor
models (quasi-polynomial time dictionary learning) decompositions to learn latent variable models (one of several).
5/09 Efficient SoS algorithms for decomposing random Until the new lecture notes are up, see these notes from last year's STATS 319. The relevant references are this paper by Ge and Ma
overcomplete tensors and this paper by Ma, Shi, Steurer. Also see Tropp's matrix concentration survey.
5/11 Making SoS practical 1: spectral algorithms for The lecture was based on this recent paper of Ding, d'Orsi, Liu, Tiegel, and Steurer, which builds on this paper and this paper.
overcomplete tensor decomposition
5/16 Making SoS practical 2: fast SDP solvers See this paper about solving constraint satisfaction problems and this paper about robust mean estimation as two examples.
5/18 SoS & lower bounds 1: 3-XOR Lecture notes from the course by Barak and Steurer.
5/23 SoS & lower bounds 2: planted clique and Notes on lower bound for planted clique from the course by Barak and Steurer. See also this survey (section 3) on pseudocalibration,
pseudocalibration and this survey and Sam Hopkins' thesis for a conjuecture about the computational complexity of hypothesis testing.
5/25 Student Presentations
5/30 NO CLASS: Memorial Day
6/01 Student Presentations
Additional recommended readings and videos will be posted on a per-lecture basis, in the "notes" field of the schedule above.
Categorical background for A¹-homotopy theory (simplicial model categories)
Monday, December 07th, 2009 | Author: Konrad Voelkel
I decided to post some background needed in order to understand Morel-Voevodsky's paper "A¹-homotopy theory". I explain some notions of simplicial sets, topoi, monoidal categories, enriched
categories and simplicial model categories.
I tried to give many more references I found useful.
Standard model structure on simplicial sets
Let $f : A \rightarrow B$ be a morphism of simplicial sets. $f$ is said to be a topological weak equivalence if the geometric realization $|f| : |A| \rightarrow |B|$ is a weak equivalence (that is,
induces isomorphisms on all homotopy groups).
$f$ is said to be a Kan fibration if it has the right lifting property with respect to all horn inclusions. A horn inclusion is a map $\Lambda^n_k \rightarrow \Delta^n$, where the k-th horn $\Lambda^
n_k$ of the n-simplex is just the simplicial set generated by faces of the n-simplex except the k-th face (so the horn is a subcomplex of the boundary of the n-simplex).
The standard model structure on simplicial sets takes as weak equivalences the topological weak equivalences, as fibrations the Kan fibrations and as cofibrations the monomorphisms (which are just
degreewise injective maps).
In the standard model structure, all simplicial sets are cofibrant. A Kan complex is a simplicial set that satisfies the extension condition, which is: if you take (n+1) n-simplices $x_0,...,x_{k-1},x_{k+1},...,x_{n+1}$ that satisfy for all $i < j$, $i,j \neq k$ that $\partial_i x_j = \partial_{j-1} x_i$, then there exists an (n+1)-simplex $x$ whose faces are $\partial_i x = x_i$. The fibrant objects in the standard model structure are exactly the Kan complexes. This standard model structure is sometimes called the Kan model structure on simplicial sets. It is worth noting that the singular simplicial set of a topological space is always a Kan complex.
The fibrant replacement of a simplicial set is therefore a functor that turns every simplicial set into a weakly equivalent Kan complex. This is achieved either by taking the singular simplicial set of the geometric realization of a simplicial set, or via Kan's $Ex^\infty$ functor.
More details can be found in May's book "simplicial objects in algebraic topology".
Monoidal categories
Monoidal categories generalize various notions of tensor-like operations in categories. They will be useful to define enriched categories, which are then used to define what a simplicial model
structure is.
A (lax) monoidal category is a category $\mathcal{C}$ equiped with a bifunctor, often denoted $\otimes : \mathcal{C} \times \mathcal{C} \rightarrow \mathcal{C}$, an object $I \in Ob\mathcal{C}$
called identity, and natural transformations that make this $I$ the identity of $\otimes$ and the operation $\otimes$ associative, up to isomorphism of functors. There is a coherence condition to be
satisfied, so that all diagrams made out of the natural transformations corresponding to associativity, left unit and right unit, commute. It can be shown that every such lax monoidal category is
equivalent to a strict one, where the natural transformations are identities. This equivalence can always be done via monoidal functors, which are those functors that respect the bifunctor $\otimes$,
the identity $I$ and the natural transformations.
Good examples are the category $Set$ of sets with cartesian product and the one-point-set as identity and the category of abelian groups with tensor product over the integers and the integers as
identity. The category of small categories is a monoidal category, too, with the cartesian product of categories and the one-object-with-identity-category as identity.
The category of sets has the nice property that the functor $A \mapsto A \times B$ has a right adjoint $C \mapsto Hom(B,C)$. If a monoidal category has this property of having a right adjoint to $A \mapsto A \otimes B$, it is called (left-)closed, and the objects in the image of this right adjoint are called mapping objects, sometimes written as $Map(A,B)$. The bifunctor that sends $(A,B)$ to the mapping object of the right adjoint of $\otimes B$ evaluated at $A$ is called the internal Hom. It is important to differentiate between left-closed and right-closed categories, but in many cases the
monoidal structure is braided, which means there is a transformation $A \otimes B \rightarrow B \otimes A$ (satisfying some commutative diagram), and for the category of sets this braiding is
symmetric, which means it is an isomorphism, so left-closed and right-closed are equivalent notions. The category of sets and the category of small categories are examples of cartesian monoidal
categories, because their monoidal product coincides with the categorical product and the identity is the final object. In cartesian closed categories, the mapping objects are written as exponentials
$B^A := Map(A,B)$.
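For concreteness, the closedness condition and its simplicial instance can be written out as follows (standard facts, stated here for reference):

```latex
% Cartesian closedness: the exponential is right adjoint to the product,
% and in simplicial sets the exponential is given degreewise by maps out
% of the product with a standard simplex.
\mathrm{Hom}(A \times B,\, C) \;\cong\; \mathrm{Hom}(A,\, C^{B}),
\qquad
(Y^{X})_{n} \;=\; \mathrm{Hom}_{\mathbf{sSet}}(X \times \Delta^{n},\, Y).
```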
Contravariant functors from a category to a monoidal category form a monoidal category with pointwise monoidal operation. This is the general way which makes the category of simplicial sets a
monoidal category. Since the category of sets is cartesian closed, the inherited structure on simplicial sets is cartesian closed, too. It is an interesting fact, that geometric realization of
simplicial sets is actually a monoidal functor, when we take the standard cartesian structure on the category of compactly generated weak Hausdorff spaces. In formula, this means in particular $|A \
times B| \simeq |A| \times |B|$ for any two simplicial sets $A,B$ and the geometric realization functor $|\ \cdot\ | : Set^{\Delta^{op}} \rightarrow CGHaus$.
Now let's turn to monoidal model categories. For these, we need the notion of a Quillen bifunctor.
Let $\mathcal A, \mathcal B, \mathcal C$ be model categories. A left Quillen bifunctor is a functor $F : \mathcal A \times \mathcal B \rightarrow \mathcal C$ that preserves small colimits in each
variable (seperately) and satisfies this condition (sometimes called pushout-product axiom):
For all cofibrations $i : A \rightarrow A'$ in $\mathcal A$ and $j : B \rightarrow B'$ in $\mathcal B$, the induced morphism $i \wedge j : F(A',B) \coprod_{F(A,B)} F(A,B') \rightarrow F(A',B')$ is a
cofibration in $\mathcal C$. If either $i$ or $j$ is, in addition, a weak equivalence, then $i \wedge j$ is required to be a weak equivalence, too.
Now a monoidal model category is a closed monoidal category $(S,\otimes,I)$ equipped with a model structure such that the unit object $I$ is cofibrant and the tensor functor $S \times S \rightarrow
S$ is a left Quillen bifunctor. This definition ensures that the homotopy category will be a closed monoidal category. In some rare cases, the unit object is not cofibrant and one uses a slightly
weaker condition, but this isn't necessary for our purposes here.
The category of simplicial sets, with the usual cartesian monoidal structure and the standard model structure, is a monoidal model category. The (in my personal perspective) hardest part of the proof
is to see that the tensor functor preserves the trivial cofibrations (that are exactly the anodyne extensions). Hovey (see below) does a very good job at explaining this.
Further reading:
Enriched category theory
The definition of a category enriched over some monoidal category is a priori not directly related to the definition of a category, but a posteriori it's just "ordinary category + extra structure".
A category $\mathcal{C}$ enriched over a monoidal category $(M,\otimes,I)$ is a class of objects (as usual) and for each two objects $X,Y$ an object $Map(X,Y) \in Ob(M)$. The analog of identities are
morphisms $id_X : I \rightarrow Map(X,X)$ in the category $M$ and the composition is defined as morphism $\circ : Map(Y,Z) \otimes Map(X,Y) \rightarrow Map(X,Z)$. Of course, associativity of
composition and identity axioms are required to hold.
The usual definition of a category is included in the enriched definition if we look at categories enriched over $M = Set$ (well, depending on your definition of a category, you get only locally
small categories this way).
Every enriched category has an underlying ordinary category where the Hom-sets are given by $Hom(I,Map(X,Y))$, so one can speak of giving an ordinary category an enriched structure.
A category which is enriched over $Cat$ is usually called (strict) 2-category. Of course, $Cat$ is itself a 2-category. This is very common: every closed symmetric monoidal category is enriched over
itself, since it has internal Hom-functors.
One can define enriched functors and enriched transformations in the obvious manner, so it's possible to speak of functor categories and therefore the enriched categories over a fixed monoidal
category form a 2-category.
Since in a $M$-enriched category $\mathcal{C}$ we have $\mathcal{C}_M(X,Y)$ (morphisms from object $X$ to object $Y$) being an object of $M$, we can for every object $K$ of $M$ consider the morphisms
$M(K,\mathcal{C}_M(X,Y))$. If this has an adjoint, namely $M(K,\mathcal{C}_M(X,Y)) \simeq \mathcal{C}_M(K \odot X,Y)$, then the functor $X \mapsto K \odot X$ is called the copower of $X$ by $K$. It
is actually a bifunctor. In the case where $\mathcal{C} = M$, this is always the monoidal product functor of $M$, and thus it's often called tensor. A category is copowered if it has copowers, that
is there is a copower bifunctor satisfying the adjoint relation. The dual notion of power is sometimes called cotensor. I think speaking of tensors, in general, is not a good idea but it won't hurt
us for $A^1$-homotopy theory.
To get a better feeling for copowers, look at the category of topological spaces (which carries a natural monoidal model structure, see Quillen's Homotopical Algebra for details). The copower of a
topological space $X$ by a simplicial set $K$ is just the topological space $X \times |K|$ and the power of a topological space $X$ by a simplicial set $K$ is just the topological space $X^{|K|}$.
An enriched model category $\mathcal C$, enriched over a monoidal model category $M$ is defined to be a category $\mathcal C$ enriched over $M$, powered and copowered, whose underlying ordinary
category has a model structure such that the copower functor is a left Quillen bifunctor.
Now a simplicially enriched model category is just an enriched model category which is enriched over the monoidal model category of simplicial sets.
Further reading:
Topos theory
The topoi we're talking about are Grothendieck topoi. Those are, by definition, categories equivalent to the category of sheaves on a small site. A site is a category equipped with a Grothendieck
topology. A Grothendieck topology can be given by a pretopology although many different pretopologies may yield the same Grothendieck topology.
A pretopology consists of a set for each object, called the set of covering families. Each such covering family is supposed to be a set of morphisms into the object in question, such that these
morphisms are stable under refinement and pullback and contain all isomorphisms into the object. Refinement is, if you have a covering family $\{U_i \rightarrow A\}$ and for each $U_i$ a covering
family $\{V_{ij} \rightarrow U_i\}$ then $\{V_{ij} \rightarrow A\}$ is supposed to be a covering family as well. Pullback is, if you have a morphism $B \rightarrow A$ then the covering family
obtained by pullback of each morphism of a covering family $\{U_i \rightarrow A\}$ is a covering family $\{U_i \times_A B \rightarrow B\}$ is a covering family of $B$.
A sheaf on a category $\mathcal{S}$ equipped with a pretopology is a presheaf $F : \mathcal{S}^{op} \rightarrow Set$ that satisfies, for each object $X$ and each covering family $\{X_i \rightarrow X\}$, that
\[ F(X) \rightarrow \prod_i F(X_i) \rightrightarrows \prod_{i,j} F(X_i \times_X X_j) \]
is an equalizer.
Topoi have many useful categorical properties. To name same of them: they have all finite limits and all finite colimits and they are cartesian closed monoidal categories (so you can do some kind of
Lambda calculus inside a topos). Consider "broadening" a category by using the category of presheaves on it (via Yoneda embedding). The choice of a topology and therefore what we call a sheaf, thus
object of our topos, ensures categorical properties nice enough to think about the objects in our topos as the real "spaces" to define $A^1$-homotopy theory. Look, for analogy, at topological spaces,
which can be rather ill-behaved. Topologists work instead with the category of compactly generated spaces, which behave more like CW complexes. In this category, we know some nice (classical)
homotopy theory, while this is not the case with the category $Top$ of all topological spaces. For more heuristic arguments why this is the "right" way to proceed, look at Voevodsky's paper in
Documenta Mathematica.
The most common examples of topoi are the category of small sets (figure out how this is a topos as an exercise!) and the sheaves on the small/big Zariski sites of schemes. However, we're interested
in the sheaves on Nisnevich sites, which I will therefore describe here:
The big Nisnevich site of a scheme $S$ is the category $Sm/S$ of smooth schemes over the fixed base scheme $S$ equipped with the Nisnevich topology. The Nisnevich topology is in-between the Zariski
and the étale topology, so I want to describe those three topologies at once, for comparison. Nisnevich called his topology the completely decomposed topology, or just cd-topology.
The canonical topology is the biggest topology that makes all representable presheaves actually sheaves. All topologies finer than that are called subcanonical. Now look at three examples of
subcanonical pretopologies, ordered from coarsest to finest:
The Zariski topology is given by covering families that are surjective families of scheme-theoretic open immersions (by open immersion I mean a morphism that decomposes uniquely into an isomorphism
and the inclusion of an open subscheme; open immersions are always étale morphisms, that means flat and unramified).
The Nisnevich topology is given by covering families that are surjective families of étale morphisms $\{X_\alpha \rightarrow X\}$ with the property that for every point $x \in X$, there exists an $\
alpha$ and a point $u \in X_\alpha$ such that the induced map of residue fields $k(x) \rightarrow k(u)$ is an isomorphism.
The étale topology is given by covering families that are surjective families of étale morphisms.
Further reading:
• John Baez: Topos theory in a nutshell (comes with lots of extra references)
• Jacob Lurie: Higher Topos Theory (For higher dimensional analogues. I recommend the appendices) - free PDF available!
• Yevsey A. Nisnevich: "The completely decomposed topology on schemes and associated descent spectral sequences in algebraic K-theory" (1989). in J. F. Jardine and V. P. Snaith. Algebraic K-theory:
connections with geometry and topology. Proceedings of the NATO Advanced Study Institute held in Lake Louise, Alberta, December 7--11, 1987. NATO Advanced Science Institutes Series C:
Mathematical and Physical Sciences, 279. Dordrecht: Kluwer Academic Publishers Group. pp. 241-342.
• Nisnevich's website has this paper, a huge bibliography and more for you - for free!
• Vladimir Voevodsky: $\mathbf A^1$-homotopy theory (in: Documenta Mathematica, Extra Volume ICM I (1998), 579-604) - free PS available!
If someone would appreciate a posting about algebraic geometry related stuff (such as étale morphisms), leave a comment and I see what I can do.
I prefer another take on enriched categories:
Consider a category C that we'd like to enrich. We define an M-enrichment on C to be an M-valued distributor h: C^op x C -> M together with a monoid structure on h. Additionally we need some morphism c: V h -> hom in order to compare the enrichment with C. Here V is the "underlying set" functor V = M(I,-), assigning to each object in M the set of its "points".
Now why is this better? First of all it's a question of style. I prefer functorial definitions over loose ones involving coherence laws. In this case these come directly from the coherence laws that the convolution of distributors already fulfills.
Secondly, things like base change for enriched categories come without much work.
Furthermore, the universal property of the end involved in defining the convolution of distributors makes many coherences for further constructions work automatically.
Nice exposition - and thanks in particular for pointing out the link to Nisnevich' original paper!
Came across this nice entry here while googling around: I am in the process of expanding the nLab entries on motivic cohomology. Just created the section Homotopy stabilization of the (oo,1)-topos on Nis. Currently this starts with an attempt to expose the nice general picture and then just cites some references. I would like to eventually expand on that, but need to read more. All help is welcome.
George Harry Hitching (Oslo Metropolitan University): Nonemptiness and smoothness of twisted Brill-Noether loci of bundles over a curve
Abstract: Let V be a vector bundle over a smooth curve C. The twisted Brill-Noether locus B^k_{n, e} (V) parametrises stable bundles of rank n and degree e such that V \otimes E has at least k
independent sections. This is a determinantal variety, whose tangent spaces are controlled by a Petri trace map. Generalising a construction of Mercat, we show for many values of the parameters that
B^k_{n, e} (V) is nonempty. When C and V are general, we show that under certain numerical conditions, B^k_{n, e} (V) has a component which is generically smooth and of the expected dimension. (Joint
work with Michael Hoff (Saarbrücken) and Peter Newstead (Liverpool))
72 oz to gallons, solved (plus easy-to-use converter)
Convert 72 oz to gallons with our conversion calculator
Do you need to find the answer to ’72 oz to gallons’? We have the answer! 72 ounces equals 0.5625 gallons.
What if you don’t have precisely 72 fl oz? We know that 72 ounces equals 0.5625 gallons, but how do you convert ounces to gallons? That’s simple! Use our 72 ounces to gallons converter to turn
your ounces into gallons, one ounce at a time.
72 oz to gallons converter
Use our free 72 oz to gallons converter to quickly calculate how much your ounces are in gallons. Just type in how many ounces you have, and we will convert it into gallons for you!
Looking at the conversion calculator, you will see that we already typed in 72 oz, which gives us an answer of 0.5625 gallons. That’s the answer to ’72 oz to gallons’. 72 ounces equals 0.5625
Now it’s your turn! Just type in how many ounces you have, and our ounces to gallons calculator will tell you how much it is in gallons. We make converting from ounces to gallons easy, no matter how
many ounces you have. Whether you have 72 fl oz or 12 fl oz, we will answer all of your conversion questions.
Important note: Our calculator assumes that you’re converting from US liquid ounces to US liquid gallons. The math is different if you convert from US dry ounces to dry gallons or imperial fluid
ounces to imperial gallons. That’s because the US system of measurement is different from Britain’s system. In either case, a common abbreviation for gallon is ‘gal’.
Frequently asked questions about ounces to gallons
People often have specific questions about converting from ounces to gallons. Here are the answers to some of the most common conversions and questions people ask about ounces to gallons.
What is a fluid ounce?
A fluid ounce is a unit of measurement for liquid volumes. There are two types of fluid ounces: the US fluid ounce and the Imperial fluid ounce. As they are different units of measure, it’s essential
to use the appropriate conversion ratio when converting between the two.
The US fluid ounce is a US customary unit of volume, and its abbreviation is fl oz.
The imperial fluid ounce volume unit is used in the UK system for fluid ounce measures. Both are common measurement units used to measure liquids.
How much is 72 oz of water?
72 fluid ounces equals 0.5625 gallons of water.
To find the answer yourself, divide 72 ounces by 128, which is the number of ounces in a gallon. The answer is 0.5625, which is equivalent to the number of gallons of water in 72 ounces of water.
Is 72 oz a half-gallon?
No, 72 ounces is not a half-gallon. 72 ounces equals 0.5625 gallons. There are 64 ounces in a half-gallon.
How many glasses of water equals 72 ounces?
There are 9 eight-ounce glasses of water in 72 ounces.
If you have a different sized glass, divide 72 ounces by how many ounces your glass holds. For example, if you have a 10-ounce glass of water, you need 30 of these glasses to make 72 ounces.
How many liters is 72 ounces?
There are 2.12929 liters in 72 fluid ounces. In Europe, liters are written as litres.
A liter of water contains 1,000 milliliters and is equivalent to 61.0237 cubic inches (1,000 cubic centimeters).
How many quarts is 72 ounces?
There are 2.25 US liquid quarts in 72 fluid ounces.
A US liquid quart contains 32 fluid ounces. The abbreviation for a quart is ‘qt’.
How many milliliters are in 72 oz?
There are 2129.29 milliliters in 72 ounces.
Milliliters, or millilitres, are a unit of fluid volume in the metric system. The abbreviation for a milliliter is mL. The metric system, used in most of the world, makes for more straightforward
math because the system uses multiples of 10.
How many teaspoons are in 72 oz?
There are 432 teaspoons in 72 ounces of liquid.
A teaspoon is a culinary unit of measure used for recipe measurements. The abbreviation for a teaspoon is ‘tsp’.
How many tablespoons are in 72 oz?
There are 144 tablespoons in 72 ounces of liquid.
A tablespoon, equal to three teaspoons, is a culinary unit of volume measurement. The abbreviation for a tablespoon is ‘tbsp’.
How many cups is 72 oz?
There are 9 cups in 72 ounces of water. A US cup contains eight fluid ounces and is a volume unit.
To find the answer yourself, take 72 ounces and divide it by 8 ounces per cup. 72 divided by 8 equals 9, so there are 9 cups in 72 ounces.
How many cups of coffee is 72 oz?
There are 9 cups of coffee in 72 ounces of coffee. There are eight fluid ounces in one US cup.
If your coffee mug or large coffee cup holds more than eight ounces, divide 72 ounces by your cup size in ounces.
How many ounces is a cup?
There are 8 ounces in a cup.
How much is 72 fl oz in pints
There are 4.5 pints in 72 fluid ounces.
A pint is one-eighth of a gallon, so each pint has 16 ounces.
How much does 72 oz of water weigh?
72 ounces of water weighs 4.6943 pounds (2.1293 kilograms) at 39.2 degrees Fahrenheit (4 degrees Celsius). You will often see pounds abbreviated as lbs and kilograms as kg.
Ounces are a unit of volume rather than a unit of weight. The weight of an ounce varies based on the density of the liquid and the temperature. The density of pure water at 39.2 degrees Fahrenheit (4
degrees Celsius) is 62.4 lb/cubic foot (0.9998395 grams/milliliter).
How many ounces are in a gallon?
There are 128 ounces in a gallon.
The definition of a gallon is a quantity of liquid that occupies 231 cubic inches (0.00378541 cubic meters in SI units).
How many ounces are in a gallon of water?
There are 128 ounces in a gallon of water.
How many oz is a gallon of milk?
There are 128 fluid ounces in a gallon of milk. Ounces are a liquid volume measurement, and 128 ounces always equals one gallon.
How many 8 ounce cups are in a gallon of milk?
There are sixteen 8-ounce cups in a gallon of milk.
To solve this question yourself, divide 128, which is the number of ounces in a gallon of milk, by the 8-ounce cup size. The answer is sixteen, which is the number of 8-ounce cups in a gallon of
How many fl oz are in a gallon?
There are 128 fl oz in a gallon of liquid. Fl oz is an abbreviation for fluid ounces.
How many 32 ounce bottles are in a gallon?
There are four 32-ounce bottles in a gallon.
To find the answer yourself, take 128 ounces, the number of ounces in a gallon, and divide it by 32 ounces in a water bottle. 128 divided by 32 equals 4, so there are four 32 ounce bottles in a
How many 16.9 oz bottles makes a gallon?
There are 7.574 16.9-oz bottles in a gallon.
How many 16 oz bottles make up a gallon?
There are eight 16-ounce bottles in a gallon.
How many 8 oz bottles does it take to make a gallon?
It takes sixteen 8-ounce bottles to make a gallon.
What is the same as 1 gallon?
1 gallon is the same as 128 ounces, four quarts, 3.78541 liters, eight pints, sixteen cups, 256 tablespoons, 768 teaspoons, or 3785.41 milliliters.
How many fluid cups are in a gallon?
There are 16 fluid cups in a gallon. A US cup contains eight fluid ounces and is a volume unit.
To find the answer yourself, take 128, which is the number of ounces in a gallon, and divide it by 8 ounces per cup. 128 divided by 8 equals 16, so there are 16 fluid cups in a gallon.
How many pints are in a gallon?
There are 8 pints in a gallon.
A pint is one-eighth of a gallon, and since a gallon contains 128 ounces, a pint is 16 ounces.
How many oz is 2 gallons of water?
There are 256 ounces in 2 gallons of water.
Two gallons occupy 462 cubic inches (0.00757082 cubic meters in SI units).
How to convert fl oz to gallons
To convert from fluid ounces to gallons, divide the number of fluid ounces you have by 128 fluid ounces per gallon. This formula converts your fluid ounces to a gallon value.
For example, if you have 72 fluid ounces, 72 fluid oz value divided by 128 ounces per gallon equals 0.5625 gallons.
A second approach is to use a conversion factor. To convert from ounces to gallons, multiply the number of ounces by 0.0078125 to find the number of gallons. 0.0078125 is the oz to gal conversion factor.
The conversion formula is: Ounces x 0.0078125 = gallons
A third approach uses a gallon conversion table that shows fluid ounces in one column with the corresponding value for gallons in the second column. A conversion chart allows you to find the answer
quickly without the need for math.
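The approaches above all boil down to one division by 128. Here is a minimal sketch in Python (the function names are just for illustration):

```python
OUNCES_PER_US_GALLON = 128  # US fluid ounces in one US liquid gallon

def oz_to_gallons(ounces):
    """Convert US fluid ounces to US liquid gallons."""
    return ounces / OUNCES_PER_US_GALLON

def gallons_to_oz(gallons):
    """Convert US liquid gallons to US fluid ounces."""
    return gallons * OUNCES_PER_US_GALLON

print(oz_to_gallons(72))   # 0.5625
print(gallons_to_oz(0.5))  # 64.0
```

Note that this is for US units only; the imperial gallon (160 imperial fluid ounces) would need a different constant.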
Is a gallon of water a day too much?
No, a gallon of water a day is not too much to drink. The Institute of Medicine (IOM) recommends that adult males drink 131 ounces of water a day, while adult females should drink 95 ounces of water
daily. This amount of water ensures adequate hydration.
The IOM recommendation is a relatively recent development, as past recommendations followed the so-called ‘8×8’ rule. This recommendation was to drink eight glasses of water a day, each glass having
eight ounces of water, for a total daily water intake of 64 oz. While different amounts of water are commonly suggested for your daily water intake, drinking enough water is essential to avoid dehydration.
If you’re worried about your daily water intake and whether you might be dehydrated, watch for symptoms, including fatigue, headaches, and muscle cramps. If you think you might be dehydrated, drink
extra water.
Is 1 gallon or 64 oz bigger?
1 gallon is bigger than 64 ounces. A gallon contains 128 ounces of liquid, while 64 ounces is equal to a half-gallon.
Is 64 oz of liquid a gallon?
64 ounces of liquid is equal to a half-gallon. A gallon contains 128 ounces of liquid.
How many ounces is half a gallon of water?
There are 64 ounces in half a gallon of water.
Is 32 oz half a gallon?
No, 32 ounces is a quarter of a gallon. Half a gallon equals 64 oz.
There are 128 ounces in a US fluid gallon, so to find the answer to how many ounces are in a half-gallon by yourself, divide 128 ounces by two. The answer is 64 ounces, which is half a gallon.
How many quarts are in a gallon?
There are 4 quarts in a gallon. Each quart contains 32 ounces, and a gallon contains 128 ounces. Therefore, there are 4 quarts in one gallon.
How many ounces are in a quart?
There are 32 ounces in a quart. The abbreviation for a quart is ‘qt’.
How many pints are in 2 quarts?
There are 4 pints in 2 quarts.
A pint contains 16 ounces, while a quart contains 32 ounces.
What is the rule for converting quarts to cups?
To convert a quart to a cup, multiply the number of quarts that you have by 4. This is because there are 4 cups in a quart.
For example, if you have 2 quarts, multiply 2 quarts by 4. The answer is 8, which is the number of cups in 2 quarts.
Are US fluid ounces and dry ounces the same?
No, US fluid ounces and the lesser used US dry ounces are not the same. US fluid ounces are a liquid measure for liquid materials, while dry ounces are a dry measure of weight for dry materials. You
can think of dry ounces as ounces of weight, while fluid ounces are ounces of volume.
If you use a liquid-ounce measuring cup on dry materials, you can end up with a major difference. That’s why it’s important to use the correct measuring cups.
Are Canadian and US gallons the same?
No, Canadian and US gallons are not the same. Canada uses the Imperial gallon, also called a UK gallon. An Imperial gallon contains about 20% more fluid than a US gallon.
Are Canadian gallons Imperial?
Yes, Canadian gallons are Imperial gallons. While Canada uses Imperial gallons, most quantities are measured in metric units, such as the liter.
How many ounces are in a Canadian gallon?
There are 160 ounces in a Canadian gallon. Canada uses the Imperial gallon, also called a UK gallon.
An Imperial gallon contains about 20% more fluid than a US gallon. A US ounce contains 4.083 percent more liquid than an imperial ounce.
Are US and UK gallons the same?
No, the US and UK gallons are not the same, as the size of a gallon is different under each system. A UK gallon, called an imperial gallon, contains about 20% more fluid than a US gallon. It’s important to remember that the US measurement and UK measurement systems are not the same, to avoid a significant difference in your math if you need to do an imperial gallon conversion.
One US liquid gallon has 128 ounces, which is the same as 3.785 liters. Meanwhile, there are 160 fluid ounces in one UK liquid gallon, which is the same as 4.546 liters. Don’t confuse the US
system with the British Imperial system of units to avoid math errors and misunderstandings. While they are both used to describe quantities, such as liquid gallon measures, they’re not the same.
There are actually three gallons in current use: the imperial gallon used in the United Kingdom and semi-officially within Canada, the US gallon used in the United States, and the lesser-used US dry gallon used for measuring dry goods by volume.
Why are UK and US gallons different?
UK and US gallons are different because, in 1824, the UK decided to standardize their measurement systems under the UK Imperial System, while the US did not. Interestingly, before 1824, UK and US
gallons were the same because they both used the British Imperial System! Today, the US system is considered to be a variation of the Imperial system.
The United States Customary Measurement Systems still uses the measurement of Queen Anne’s gallon of wine, which was 3.785 liters, as their standard liquid measurement. Nowadays, America still uses
the old British imperial measurement system as part of its own system of US customary units.
In conclusion, 72 ounces equals 0.5625 gallons.
MIRR - Excel docs, syntax and examples
The MIRR function in Excel calculates the modified internal rate of return for a series of cash flows that may have different reinvestment and finance rates. It is a useful tool in financial analysis
for evaluating investment returns under varying conditions.
=MIRR(values, finance_rate, reinvest_rate)
values An array or reference to a range of cells containing cash flows.
finance_rate The interest rate paid on money used in financing the investments.
reinvest_rate The interest rate received on the reinvestment of cash flows.
About MIRR 🔗
Delve into the world of investment analysis with the MIRR function in Excel. When dealing with cash flows over time, especially where reinvestment rates differ from financing rates, MIRR stands out
as a key tool for determining the modified internal rate of return. This metric aids in assessing the profitability of investments considering both incoming and outgoing cash flows, along with
reinvestment conditions and financial costs incurred along the way. Essentially, MIRR provides a comprehensive view of the true return on an investment, factoring in varying rates pertinent to
reinvestment and financing. By utilizing the MIRR function, you gain insights into the overall performance of your investments, allowing for more informed decision-making in financial matters.
Examples 🔗
Suppose you have a series of cash flows from an investment project: -$100, $20, $30, $50, and $40 over five periods. The finance rate is 6%, and the reinvestment rate is 8%. To calculate the MIRR for
these cash flows, the formula would be: =MIRR({-100, 20, 30, 50, 40}, 0.06, 0.08), where the cash flows are supplied as a single array (or as a reference to a cell range containing them)
Consider a scenario where an initial investment of $500 generates future cash flows of $100, $150, and $200 at the end of three periods. The finance rate is 5%, and the reinvestment rate is 7%. To
find the MIRR for this investment, use the formula: =MIRR({-500, 100, 150, 200}, 0.05, 0.07)
Ensure that the cash flows and rates are appropriately organized when using the MIRR function. The values should be input as an array or reference to a range of cells. Additionally, keep in mind that
the finance_rate and reinvest_rate must be consistent with the time periods of the cash flows for accurate calculations.
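For readers who want to verify Excel's result outside a spreadsheet, the standard MIRR formula can be sketched in a few lines of Python. This is an illustrative implementation (not Excel's internal code): take the future value of the positive cash flows compounded at the reinvestment rate, divide by the present value of the negative cash flows discounted at the finance rate, raise to the power 1/(n-1), and subtract 1.

```python
def mirr(values, finance_rate, reinvest_rate):
    """Modified internal rate of return for periodic cash flows.

    values[0] occurs at period 0, values[1] at period 1, and so on.
    """
    n = len(values)
    # Future value (at the final period) of the positive cash flows,
    # compounded at the reinvestment rate.
    fv_positive = sum(cf * (1 + reinvest_rate) ** (n - 1 - t)
                      for t, cf in enumerate(values) if cf > 0)
    # Present value (at period 0) of the negative cash flows,
    # discounted at the finance rate.
    pv_negative = sum(cf / (1 + finance_rate) ** t
                      for t, cf in enumerate(values) if cf < 0)
    return (fv_positive / -pv_negative) ** (1 / (n - 1)) - 1

# First example from above: roughly 11.4%
print(round(mirr([-100, 20, 30, 50, 40], 0.06, 0.08), 4))  # 0.1143
```

Running the second example the same way gives a small negative MIRR, since the reinvested inflows fall short of the $500 outlay under these rates.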
Questions 🔗
How does the MIRR function differ from the IRR function in Excel?
While both MIRR and IRR are used to evaluate the potential profitability of investments, MIRR considers different reinvestment and finance rates. IRR assumes that all cash flows are reinvested at the
same rate, which may not always be realistic in practical financial scenarios.
What does a higher MIRR value indicate?
A higher MIRR value indicates a more favorable return on an investment, considering both reinvestment and finance rates. It suggests that the investment may be more profitable, factoring in the costs
and benefits associated with cash flows and reinvestments.
Can the MIRR function handle irregular cash flow patterns?
Yes, the MIRR function in Excel can handle irregular cash flow patterns by accurately calculating the modified internal rate of return based on the provided cash flows, finance rate, and reinvestment
rate. It offers flexibility in analyzing investments with varying cash flow timings.
Related functions 🔗
Leave a Comment
Network-based Dynamic Output Feedback Control Under An Event-triggered Communication Scheme
Posted on: 2017-06-25  Degree: Master  Type: Thesis
Country: China  Candidate: X L Hao  Full Text: PDF
GTID: 2348330512450998  Subject: Computational Mathematics
This thesis is concerned with event-triggered dynamic output feedback H∞ control for networked control systems with a bilateral network. An output-based event-triggered communication scheme is introduced on the sensor side of a networked control system; that is, the sampled data is transmitted to the controller side over a network only when a certain event occurs. Such an event-triggered communication scheme can effectively avoid the transmission of redundant data, reduce the energy consumption required for transmission, and decrease the implementation cost of the whole control system to some extent while retaining a satisfactory system performance. It is physically difficult to measure all state variables of a control system in many practical situations due to complex environments and technological and economic constraints, and thus a dynamic output feedback controller is considered for a networked control system in this thesis. The main research contents of this thesis consist of two parts. The first part studies dynamic output feedback H∞ control for a continuous-time linear system under an event-triggered communication scheme, taking into consideration the effects of network-induced delays in the bilateral network. The second part deals with event-triggered dynamic output feedback H∞ control for a discrete-time linear system, considering the effects of network-induced delays and random packet dropouts in the bilateral network on the system. The work of this thesis is organized as follows:

(1) The first chapter elaborates the research background and significance of this work, recalls the state of research at home and abroad, and then introduces the main research problems of this thesis.

(2) The second chapter studies the problem of network-based dynamic output feedback H∞ control for a continuous-time linear system under an event-triggered communication scheme. An output-based event-triggered communication scheme is introduced to determine which sampled outputs should be transmitted, and at which time instants, to a dynamic output feedback controller through the network. Under the event-triggered communication scheme, and taking network-induced delays in both the sensor-to-controller channel and the controller-to-actuator channel into account, the resulting closed-loop system is modeled as a linear delay system with two interval time-varying delays by using an input delay approach. A Lyapunov-Krasovskii functional, using the information of the upper and lower bounds of these two interval time-varying delays, is constructed to derive an H∞ performance criterion and a controller design method in terms of linear matrix inequalities. Finally, a numerical example is given to validate the proposed method.

(3) In the third chapter, network-based dynamic output feedback H∞ control is considered for a discrete-time linear system under an event-triggered communication scheme. Firstly, an event-triggered communication scheme is introduced to decide whether or not the current sensor measurement should be sent to the controller via a network. Secondly, taking into consideration the effects of network-induced delays and random packet dropouts in the bilateral network, where the random packet dropouts are assumed to obey a Bernoulli binary distribution, the resulting closed-loop system is modeled as a stochastic system with two discrete time-varying delays. By applying Lyapunov-Krasovskii stability theory, a sufficient condition guaranteeing that the system is stochastically stable with an H∞ performance is obtained, and a design method for the corresponding dynamic output feedback controller is presented. Finally, the proposed method is verified to be effective by a numerical example.

(4) In the fourth chapter, we summarize the present research work and make predictions about future work.
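The abstract does not spell out its triggering condition. Purely as an illustration, the sketch below assumes a commonly used relative-threshold rule (transmit the current sampled output only when it deviates sufficiently from the last transmitted value); the actual scheme in the thesis may differ.

```python
def simulate_event_trigger(samples, sigma=0.3):
    """Return indices of samples that an event trigger would transmit.

    Trigger rule (assumed, not from the thesis): send sample y_k when
    the squared deviation from the last transmitted value exceeds
    sigma times the squared magnitude of the current sample.
    """
    transmitted = [0]        # the first sample is always sent
    last_sent = samples[0]
    for k, y in enumerate(samples[1:], start=1):
        if (y - last_sent) ** 2 > sigma * y ** 2:
            transmitted.append(k)
            last_sent = y
    return transmitted

# A decaying output signal sampled 50 times: the trigger sends only a
# fraction of the samples, illustrating the reduced network traffic.
ys = [5.0 * (0.9 ** k) for k in range(50)]
sent = simulate_event_trigger(ys)
print(len(sent), "of", len(ys), "samples transmitted")
```

The point of the illustration is only that far fewer than 50 transmissions occur, which is the traffic reduction the abstract describes.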
Keywords/Search Tags: Networked control systems, Event-triggered communication scheme, Network-induced delays, Random packet dropouts, Dynamic output feedback control
RD Sharma Class 8 Solutions Chapter 7 - Factorization (Ex 7.7) Exercise 7.7 - Free PDF
Free PDF download of RD Sharma Class 8 Solutions Chapter 7 - Factorization Exercise 7.7 solved by Expert Mathematics Teachers on Vedantu. All Chapter 7 - Factorization Ex 7.7 Questions with Solutions
for RD Sharma Class 8 Maths to help you to revise complete Syllabus and Score More marks. Register for online coaching for IIT JEE (Mains & Advanced) and other Engineering entrance exams. Register
Online for Class 8 Science tuition on Vedantu to score more marks in CBSE board examination. Every NCERT Solution is provided to make the study simple and interesting on Vedantu.
FAQs on RD Sharma Class 8 Solutions Chapter 7 - Factorization (Ex 7.7) Exercise 7.7
1. What are the important topics discussed in the RD Sharma Class 8 Solutions Chapter 7?
Students will learn about the technique of common factors, factorization by regrouping terms, factorization using identities, and other topics in this chapter. The RD Sharma solutions for Class 8 Chapter 7 presented here were created by experts specifically to make this chapter clear. The solutions consist of solved exercises with explanations that will help students quickly grasp each topic and understand how to tackle the various problems. They will also help students resolve all of their questions about this chapter and prepare for the tests. Students may download and view the RD Sharma solutions for Class 8 Chapter 7 for free on the Vedantu website.
2. How can RD Sharma Solutions Class 8 help students learn Chapter 7?
The following are the steps to help students make their basics of mathematical concepts strong:
• The first step in moving forward is understanding and comprehending the concepts taught in the chapter.
• Do not mix and match chapters; always read them in the order they are presented in your textbook.
• Attendance at all classes is required; no class should be skipped under any circumstances.
• After class, study the textbook's theory section. While reading, try to grasp the subject and note any significant formulae.
• Please jot down the maths formula in your mind. This may be performed by revising and meticulously taking notes throughout class.
• You should proceed when you are sure of your understanding of the theory and the topic. Revise the chapter's formulae and keep a separate mathematical formula notebook at all times. Next, work on
your problem-solving skills.
3. What is the best technique and strategy to follow when solving Class 8 Maths Chapter 7 Questions?
Technique and strategy to follow when solving Class 8 Maths Chapter 7 Questions are:
• Make a list of all related concepts and equations for quick reference and review. You won't miss any crucial topics if you list the chapters, and you'll easily review each subtopic and concept.
Just glancing at the list might help you remember what you've learned.
• The second and equally crucial step is to ask your teacher to clarify any doubts or questions you may have as soon as possible. Hesitating to express your uncertainty will just add to your
confusion and lead to a loss of confidence.
• Try to solve as many previous year's question papers as you can; this will help you figure out which questions have been asked and how many times they have been asked, as well as the marks
allocated to each question. Additionally, it provides an indication of the needed minimum length of the response.
• Study your concepts, but also make sure you can write them. Writing an idea down helps you remember it better than just reading it. Writing helps you practise, refine your replying style, and
structure your responses.
• Practice on the questions that you're most comfortable with. In order to get the most out of your day, break it up into smaller chunks. You have plenty of time to prepare for everything that may
come your way.
4. What are the benefits of studying from RD Sharma Class 8 Solutions for Chapter 7?
Benefits of studying from RD Sharma Class 8 Solutions are:
• After performing significant research, skilled mathematicians prepared solutions for students.
• The language used to construct solutions is quite simple and straightforward.
• Every question in the RD Sharma textbook has been thoroughly explained step by step.
• To deal with a certain problem, different explanations are presented in different ways.
• All of the chapters and exercises in each chapter may be accessed without difficulty using the solutions.
• Students can learn how to solve problems quickly by using shortcut tips and tactics.
• Assist students in developing a solid conceptual foundation.
• The solutions provide explanations for the complete Mathematics curriculum.
5. Where can I find RD Sharma Class 8 Solutions Chapter 7?
The solutions to RD Sharma Class 8 Maths are available on the Vedantu website, where they can be downloaded for free in PDF format. The solutions are organized by topic rather than alphabetically, which makes it simpler for students to locate what they are looking for. In addition to being one of the most widely read books, RD Sharma offers an almost unlimited number of practice problems, and being able to find solutions to all of them is a significant part of effective practice. The book covers all of the important subjects and reinforces the students' principles. Students in Class 8 are almost certainly already making their way through it.
Free Online ATI TEAS Test Practice with Answers 5 - GkFeed
When preparing for the ATI TEAS test, engaging in thorough online simulated practice is one of the keys to success. By participating in free ATI TEAS online practice tests, examinees can familiarize
themselves with the exam’s format, content, and time constraints, thereby better preparing and strategizing for the test. We will provide you with a carefully prepared set of free ATI TEAS online
practice tests, aimed at helping you assess your preparedness and adequately ready yourself for the exam.
1. A teacher wants to know if there is a relationship between the recent test scores from a math test her algebra class took and the time spent studying for the test. The study times of 10 students from her class are shown below in minutes.
According to the sample, what is the mode for the number of minutes the students studied for the test?
A. 73 minutes
B. 60 minutes
C. 65 minutes
D. 100 minutes
2. You work a part time job after school. Below is a table for the number of hours you worked for a particular school week.
What is the mean number of hours you worked for the week?
A. 3.5 hours
B. 1.8 hours
C. 4.2 hours
D. 4.3 hours
3. Simplify \( \frac{5}{2}+\frac{7}{5}-3 \).
A. \( \frac{9}{30} \)
B. \( \frac{69}{10} \)
C. \( \frac{9}{10} \)
D. \( \frac{9}{7} \)
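As an aside for self-checkers, question 3 can be verified by putting the fractions over a common denominator of 10 (\( \frac{25}{10}+\frac{14}{10}-\frac{30}{10}=\frac{9}{10} \)), or with Python's fractions module:

```python
from fractions import Fraction

# 5/2 + 7/5 - 3, computed exactly
result = Fraction(5, 2) + Fraction(7, 5) - 3
print(result)  # 9/10, matching option C
```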
Ordinary differential equations - (History of Mathematics) - Vocab, Definition, Explanations | Fiveable
Ordinary differential equations
from class: History of Mathematics
Ordinary differential equations (ODEs) are equations that involve functions of a single variable and their derivatives. They play a crucial role in modeling dynamic systems where the change of a
variable depends on the current state of that variable, leading to rich mathematical theories and applications in various fields, such as physics and engineering.
5 Must Know Facts For Your Next Test
1. Ordinary differential equations can be classified into linear and nonlinear types, with linear ODEs being easier to solve due to their structure.
2. The existence and uniqueness theorem guarantees that under certain conditions, an ODE will have a unique solution for given initial values.
3. Solutions to ordinary differential equations can often be expressed in terms of known functions like exponentials, trigonometric functions, or polynomials.
4. Applications of ODEs include modeling population dynamics, mechanical systems, electrical circuits, and even financial markets.
5. Methods for solving ODEs include separation of variables, integrating factors, and numerical approaches like the Runge-Kutta method.
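As a sketch of the numerical approach mentioned in fact 5, here is a minimal classical Runge-Kutta (RK4) integrator applied to the test equation y' = -y, whose exact solution is e^{-t}. This is illustrative only, not tied to any particular textbook implementation:

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Integrate y' = -y from y(0) = 1 to t = 1; the exact answer is e^{-1}.
f = lambda t, y: -y
y, h = 1.0, 0.01
for i in range(100):
    y = rk4_step(f, i * h, y, h)
print(abs(y - math.exp(-1)))  # tiny: the global error here is far below 1e-8
```

With only 100 steps the fourth-order method already reproduces the exact solution to roughly ten decimal places, which is why RK4 is a standard workhorse for non-stiff ODEs.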
Review Questions
• How do ordinary differential equations differ from partial differential equations in terms of their variables and applications?
□ Ordinary differential equations involve functions of a single independent variable and their derivatives, while partial differential equations deal with functions of multiple independent
variables. This fundamental difference affects their applications; ODEs are often used in simpler dynamic systems, such as population growth or mechanical motion, whereas PDEs model more
complex phenomena like heat conduction or fluid flow, which require consideration of multiple dimensions.
• What are the implications of the existence and uniqueness theorem in solving ordinary differential equations?
□ The existence and uniqueness theorem asserts that for certain ordinary differential equations with specified initial conditions, there exists a unique solution within a certain interval. This
theorem is crucial because it provides assurance that the mathematical model represented by the ODE will yield consistent results under defined conditions, thus allowing researchers and
engineers to rely on these models for accurate predictions in real-world scenarios.
• Evaluate the significance of methods used to solve ordinary differential equations in various fields such as engineering and physics.
□ The methods used to solve ordinary differential equations are significant because they provide essential tools for understanding and predicting the behavior of dynamic systems across various
fields. Techniques like separation of variables or numerical methods allow engineers to design stable structures and physicists to model natural phenomena accurately. The ability to derive
solutions helps professionals analyze stability, optimize performance, and innovate technologies that impact daily life.
|
Lecture Series: Models of Time and Probability | Stephan Hartmann: Reasoning in Physics: The Bayesian Approach
The lecture series are part of the Thematic Einstein Forum "Scales of Temporality: Modeling Time and Predictability in the Literary and the Mathematical Sciences". The forum is organised within the
framework of the Berlin Mathematics Research Center MATH+ in collaboration with EXC 2020 “Temporal Communities” and supported by the Einstein Foundation Berlin.
The collaborative Thematic Einstein Forum Scales of Temporality: Modeling Time and Predictability in the Literary and the Mathematical Sciences aims to explore the shared interests, common ground, and similar problems of both the mathematical sciences and the humanities, particularly the philologies and literary studies.
In the lecture series "Models of Time and Probability", experts from mathematics (dynamical systems, analysis, probability theory, applications and modelling, biomathematics) and from literary
studies (narratology, rhetoric, literary history and philosophy) will give lectures to an open audience.
Stephan Hartmann (Munich Center for Mathematical Philosophy/LMU Munich) on "Reasoning in Physics: The Bayesian Approach":
In recent centuries, physics has greatly influenced the way philosophers have thought about the scientific method and the nature of good scientific reasoning. Generations of philosophers have aimed
to taxonomize, formalize, and evaluate these patterns of argumentation. While this has been an extremely fruitful task, the major challenges facing physics today have led to fundamental changes in
the way physicists formulate, evaluate, and apply their theories. The most prominent examples of this trend are found in the field of contemporary fundamental physics, where many of the most
influential theories are beyond the reach of existing experimental methods and are therefore extremely difficult to test empirically. Now, the fact that entire communities of physicists have spent so
much time and effort evaluating theories that are largely disconnected from experiment and empirical testing suggests that existing philosophical accounts of the epistemology of physics, based on a
largely empiricist conception of physics, are no longer entirely accurate, or at least somewhat outdated. This, in turn, suggests that it is time to draw attention to and analyze the distinctive
justificatory strategies of contemporary physics. In this talk, I will discuss these recent developments and show that the Bayesian framework is flexible enough to reconstruct and evaluate the
proposed reasoning strategies. This points the way to a clarification of the epistemic structure of contemporary physics and, furthermore, shows how philosophers can constructively engage in
methodological discussions within physics on an equal footing.
Upcoming lectures:
8.12.22 | Anne Eusterschulte (FU Berlin, EXC 2020): TBA
5.01.23 | Manfred Laubichler (Arizona State University, Max Planck Institute for the History of Science (MPIWG)): TBA
12.01.23 | Jürgen Jost (Max Planck Institute for Mathematics in the Sciences (MiS) in Leipzig): TBA
19.01.23 | Hannes Leitgeb (LMU München, Munich Center for Mathematical Philosophy): TBA
26.01.23 | Xue-Mei Li (Imperial College London): Noise and Scales
2.02.23 | Paul Hager (HU Berlin): Time Scales in Rough Volatility
More information about the lecture series can also be found on the MATH+ website. The lectures start at 6:00 pm. Participation is possible both in person and online.
Time & Location
Nov 17, 2022 | 06:00 PM
Zuse Institute Berlin (ZIB)
Takustraße 7
14195 Berlin
Further Information
For online link register by email to scales@mathplus.de.
|
If f' is the derivative of f, then the derivative of the inver... | Filo
Question asked by Filo student
If `f' is the derivative of f, then the derivative of the inverse of f is the inverse of f'
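As stated, the claim is false in general: the correct rule is (f⁻¹)'(x) = 1/f'(f⁻¹(x)), which is not the same as applying the inverse function of f'. A numerical counterexample with f(x) = eˣ, so that f⁻¹ = ln and f' = exp (this check is illustrative and not part of the original solution):

```python
import math

x = 2.0
# Derivative of the inverse: (f^-1)'(x) = 1 / f'(f^-1(x)), with f = exp
deriv_of_inverse = 1 / math.exp(math.log(x))  # equals 1/x = 0.5
# "Inverse of the derivative": f' = exp, whose inverse function is log
inverse_of_deriv = math.log(x)                # equals ln 2, about 0.693
print(deriv_of_inverse, inverse_of_deriv)     # roughly 0.5 vs 0.693: the two differ
```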
Question Text If `f' is the derivative of f, then the derivative of the inverse of f is the inverse of f'
Updated On Apr 5, 2024
Topic Differentiation
Subject Mathematics
Class Class 11
Answer Type Text solution:1
|
Transitions in double-diffusive convection
Results are presented for explicit calculations of flow transitions that occur in two-dimensional double-diffusive convection in the case where the motion is governed by a set of coupled nonlinear
partial differential equations. Attention is restricted to an oceanographic application of the Rayleigh-Benard problem, where the fluid is considered to occupy the space between two infinite planes
separated by some distance, with the upper plane maintained at a certain temperature and salinity and the lower plane maintained at a higher temperature and salinity. Criteria for linear instability
of the governing equations are obtained by neglecting nonlinear Jacobian terms and representing the solutions in terms of the lowest normal modes with an exponential time dependence. Effects of
increasing and decreasing the nondimensional parameter R are examined. It is shown that in the most general case, as R increases, there is a transition from the conduction state to an oscillatory
motion, followed in turn by a transition to a more complicated oscillatory motion, a transition to an aperiodic random state, and a transition to steady motion.
Pub Date:
September 1976
□ Convective Flow;
□ Flow Equations;
□ Molecular Diffusion;
□ Numerical Stability;
□ Transition Flow;
□ Turbulent Diffusion;
□ Benard Cells;
□ Branching (Mathematics);
□ Differential Equations;
□ Dimensionless Numbers;
□ Time Dependence;
□ Fluid Mechanics and Heat Transfer
|
Jumpgate 10.24 Shader code
Oct 17 2020
This is the exhaustively documented shader code of the Jumpgate 10.24 1 kilobyte intro.
Jumpgate 10.24 by Seven/Fulcrum
This is the documented source code for the shader of my 1 kilobyte intro for Assembly
2020. Apart from the whitespace for readability, it's identical to the shader used
in the final 720p version.
I'm sharing this so you can learn from it (especially from my mistakes), so don't copy
it wholesale, don't use it for work purposes :D and of course I have no
responsibility if anything bad happens. It's a 1K, not a shining example of
safety and clarity!
// m is the time. A simple frame counter is passed via gl_Color, I rely on automatic
// truncation to get the x component. There is a scaling factor to speed up or slow down the entire
// intro. The offset .65 ensures the ship reaches the jumpgate at time 0. We're going to see this number
// a lot. f is used for various things, d is the random seed for the ship generation routine.
float m=10.5*gl_Color-.65,f,d;
// standard 2D rotation routine, rotates a vec2 m radians. I think H4rdy/Lemon shared this variant first.
// Named s for StandardRotationFunction
vec2 s(vec2 y,float m)
{
  return y*cos(m)+vec2(y.y,-y.x)*sin(m);
}
// NSA-approved random function. We're just trying to get some variation and be very compressible,
// not to be cryptographically secure :) The seed is increased locally, and some simple math is done.
// Note the use of frac() instead of fract(), which is an HLSL function that the nvidia drivers accept
// with a warning, but AMD errors on it. I normally don't stoop to exploiting brand-dependent differences,
// but since I already made a mistake that made the intro nvidia-only, I might as wel save some bytes...
// The .65 could have been any number, but I reused a number for maximum compression.
// Named s for SomewhatRandom
float s()
{
  return d++,frac(d*.65*frac(d*.65));
}
// The heart of the intro: the Signed Distance Function (SDF) for the procedurally generated ships/jumpgate.
// y is the 3D point for which to evaluate the SDF, m is the random seed for this specific spaceship.
// Named s for ShipSDF
float s(vec3 y,float m)
// Initialize the distance as "far away". Any big number is OK so try to use one that's needed elsewhere.
float f = 154;
// initialize the random seed before calling s() with abandon.
// The basic shape of the ship is a randomly-oriented plane, shifted from the origin, and then mirrored
// around 2 axis. This typically gives you an octahedron (a pyramid on top of an upside-down pyramid).
// Then I uses IQ's elongation function, which add bevels to the octahedron. But it's simplified which
// causes the elongation to only happen on the positive side of each axis, so the shape is not centered
// on the origin anymore.
// We create 4 of those, but each time, they're 4 times as many (due to mirroring), a bit smaller and
// shifted more to the rear. So we get one big octahedron (body/cockpit), with 4 smaller ones a bit more
// to the back (wings?), with 16 even smaller(engines?), and 64 smallest (random parts) at the end.
// Depending on the random function, some of these might not be visible. We also rotate the octahedra to
// generate wings etc.
// 4 levels of parts
for(float d=0;d<4;d++)
// elongate a random amount.
// mirror X and Y axis.
// Combine the SDFs of each part with the min operator
pow(.7,d) // Scale the result up. This SHOULD be the same as 1/scalefactor (which is 1.6), but you
// can abuse this as a kind of safety factor (to hide artifacts of other bugs) or as an
// overstepping factor (to speed up marching, if other functions are too conservative).
// So you can cheat a bit and re-use some number (like .65), and then suffer from it when
// your intro contains artifacts and you can't fix it without breaking the filesize...
*(dot(abs(y), // use the dot function as a SDF for a plane oriented by it's normal.
normalize(vec3(s(),s(),s()*.2) // random normal, tweaked to get octahedra stretched along Z-axis
// ALSO UNDEFINED BEHAVIOR THAT BREAKS ON AMD!
+.01) // to prevent too many very thin needle shapes, add a constant before normalizing.
-s())); // this is the thickness of the plane, randomly picked.
// So, did you figure out the undefined behavior? The order of evaluation of paramaters is not defined in GLSL.
// So if your random function generates 0.3, 1, and 0.5, nvidia will give you vec3(.3, 1, .5), but AMD might
// give you vec3(.5,1, .3)... In the safe versions, this is fixed with extra variables: a=s(), b=s(), c=s(),
// ... vec3(a,b,c); , But I didn't had room for that in the compo version :( Sorry, AMD fans.
// Scale the octahedra down, and shift them a random amount, but mostly to the back of the plane.
// Also undefined behavior again.
//Finally, rotate the next part a random amount around the z-axis
return f;
// This is the SDF for the entire fleet and the jumpgate.
// Named s for SceneSDF
float s(vec3 y)
// The jumpgate is just the back end of a carefully-chosen ship, scaled up, and combined with a sphere.
// Remember that the elongation operation was not symmetrical anymore? That means we have to shift the
// gate back a bit (.15) so it matches the sphere.
float f=min(length(y)-14, // the sphere
s(y/9+.15,8.8)*9); // jumpgate, seed 8.8 (I tried hundreds of combinations), scaled by 9.
// This generates the fleet. For maximum compression (have you heard that before?), I use the same
// loop variable and iteration count as the ship SDF. So we have 4 different ship types. Of course, the
// higher the amount of ship types, the longer you have to look for a seed that generates decent ones
// for ALL types. So this is another giant timesink, and every time you tweak a constant in the ShipSDF,
// you have to start over :(
// The fleet is build up the same as each individual ship: 4 layers, each layer scaled down and mirrored.
// So that should give us 1 big ship, 4 medium, 16 small, 64 tiny. There is no rotation, because then it
// looked like a traffic accident instead of an organized fleet. But the resulting pyramid (big ship at
// top, smaller ones below) looked far too regular, with perfectly-aligned layers of ships, so I
// added another trick.
for(float d=0;d<4;d++)
// Offset the ships from each other. This would make a line from big ship to small ship.
// mirror the X and Y direction, causes more smaller ships.
// Here's the extra trick: shift mirrored space back! This cause another 4 extra ships
// to appear in front of each original. Depending on the camera angle, it also wreaks havoc with your
// SDF accuracy, so you better like futzing around with parameters to hide marching artifacts :(
// Combine the SDFs of each ship with the min operator
pow(m+1.1,d) // scaling correction. Giant hack ahoy: the smallest ships had bad overstepping artifacts
// which were fixed by decreasing this, but that caused the jumpgate to disappear behind the
// big ships. Since they are at different times in the intro, I use the time m (which starts at
// -.65) to adjust the scaling differently during the intro.
*s(y,d+.31)); // Ship SDF, with seed .31 + the layer index.
// Scale the fleet (same factor as ship parts, FOR GREAT COMPRESSION!!). Shifting the ships around
// is kind of useless since each layer is shifted the same amount, but it improved compression at some point.
y=y*1.6-vec3(s(),s(),s() -3);
return f;
void main()
vec3 v, // the current point we're evaluation on the camera ray from this pixel.
z=normalize(-vec3(1,.55,2)+.0015*gl_FragCoord); // the raymarch direction, for a 720p screen.
// Note abuse of truncation of gl_FragCoord again.
// When the camera reaches the gate, we want it to enter straight ahead. So get a time value that becomes 0 near the gate.
// rotate the camera left/right (one sweep) and up/down (sine wave).
// The f*f*(2-f) is IQ's near-identity function to smoothly halt the camera motion.
// The actual raymarching. No early exit at all. The camera position is defined in the parameter.
// f is the total distance traveled. It starts as the time value instead of 0,but that's OK as
// we're not doing close-ups
// z is the direction, v is the current position.
for(float d=0;d<104;d++)
vec3 y=-vec3(1,0,.2), // the sun direction, slightly higher than 1 for dramatic lighting and better compression
x=normalize(vec3(s(v+vec3(.01,0,0)), // the normal, calculated by the usual 3 evaluation method
s(v+vec3(0,.01,0)), // plus the assumption we are perfectly on the surface.
r=pow(max(0,dot(z,reflect(y,x))),104) // specular lighting
+vec3(.31,.4,.5)*(max(0,dot(y,x)) // diffuse lighting, on blue-ish material
+.5); // ambient lighting
z=s(v)<.01?reflect(z,x):z; // if we've hit something, reflect the ray direction using the normal.
// The first of 3 very similar fractals. This is based on Knighty's Cosmos fractal (on Shadertoy.com)
// but simplified because I just need a static background. The main concern is to get stars that are
// not too big, but not tiny either because those flicker like hell in motion.
for(float d=0;d<15;d++) // 15 layers
y=vec3(.65,.2,d*.004)-z*d*.004; // startpoint depends on ray direction and carefully recycled constants.
for(float d=0;d<15;d++) // 15 iterations
f=dot(y,y),y=abs(y)/f-.65; // get square of distance (dot), mirror (abs) and shift (-.65)
x-=f*y; // color the distance f with the endposition y, which gives pretty coherent colors.
// The ships red coloring. Only show it if we didn't hit the gate sphere.
y=v-84; // scaling factor to get somewhat interesting patterns on most ships (not full red/blank)
for(float d=0;d<15;d++) // 15 iterations
f=length(y),y=abs(y)/f-.65; // get distance (length), mirror (abs) and shift (-.65)
// blank the green and blue components. The threshold makes the red or blank parts dominate.
r.yz=0; // add this to the material color, not the background
// The jumpgate sphere fractal. Only show if we hit the sphere.
for(float d=0;d<15;d++) // 15 layers
y=vec3(0,0,-m*30-1)-z; // the startpoint is chosen so the inversion happens at the right time.
for(float d=0;d<15;d++) // 15 iterations
f=length(y),y=abs(y)/f-.65; // This fractal really, really depends on the .65 value.
x+=f*154; // add the fractal to the background, with appropriate brightness
// depending on whether we hit something, show the background, or the material with reflected background.
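The rotation helper at the top of the shader can be sanity-checked outside GLSL. The Python port below (illustrative only, not part of the intro) confirms that y*cos(m) + vec2(y.y, -y.x)*sin(m) applies a standard rotation matrix with angle -m, i.e. a clockwise rotation:

```python
import math

def rotate(x, y, m):
    """Python port of the shader helper: y*cos(m) + vec2(y.y, -y.x)*sin(m)."""
    return (x * math.cos(m) + y * math.sin(m),
            y * math.cos(m) - x * math.sin(m))

# Compare against an explicit rotation matrix with angle -m (clockwise).
m, x, y = 0.65, 1.0, 2.0
rx, ry = rotate(x, y, m)
ex = x * math.cos(-m) - y * math.sin(-m)
ey = x * math.sin(-m) + y * math.cos(-m)
print(abs(rx - ex) < 1e-12, abs(ry - ey) < 1e-12)  # True True
```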
|
Gravity Turn Maneuver
From mintOC
Gravity Turn Maneuver
State dimension: 1
Differential states: 5
Continuous control functions: 1
Path constraints: 11
Interior point inequalities: 2
Interior point equalities: 7
The gravity turn or zero lift turn is a common maneuver used to launch spacecraft into orbit from bodies that have non-negligible atmospheres. The goal of the maneuver is to minimize atmospheric drag
by always orienting the vehicle along the velocity vector. In this maneuver, the vehicle's pitch is determined solely by the change of the velocity vector through gravitational acceleration and
thrust. The goal is to find a launch configuration and thrust control strategy that achieves a specific orbit with minimal fuel consumption.
Physical description and model derivation
For the purposes of this model, we start with the following ODE system proposed by Culler et. al. in [Culler1957]The entry doesn't exist yet.:
$\begin{array}{rcl} \dot{v} &=& \frac{F}{m} - g \cdot \cos \beta \\ v \dot{\beta} &=& g \cdot \sin{\beta} \end{array}$
where $v$ is the speed of the vehicle, $g$ is the gravitational acceleration at the vehicle's current altitude, $F$ is the accelerating force and $\beta$ is the angle between the vertical and the
vehicle's velocity vector. In the original version of the model, the authors neglect aspects of the problem:
□ Variation of $g$ over altitude
□ Decrease of vehicle mass due to fuel consumption
□ Curvature of the surface
□ Atmospheric drag
Changes in gravitational acceleration
To account for changes in $g$, we make the following substitution:
$g = g_0 \cdot \left(\frac{r_0}{r_0 + h}\right)^2$
where $g_0$ is the gravitational acceleration at altitude zero and $r_0$ is the distance of altitude zero from the center of the reference body.
Decrease in vehicle mass
To account for changes in vehicle mass, we consider $m$ a differential state with the following derivative:
$\dot{m} = -\frac{F}{I_{sp} \cdot g_0}$
where $I_{sp}$ is the specific impulse of the vehicle's engine. Specific impulse is a measure of engine efficiency. For rocket engines, it directly correlates with the engine's exhaust velocity and may vary with atmospheric pressure,
velocity, engine temperature and combustion dynamics. For the purposes of this model, we will assume it to be constant.
The vehicle's fuel reserve is modelled by two parameters: $m_0$ denotes the launch mass (with fuel) and $m_1$ denotes the dry mass (without fuel).
Curvature of the reference body's surface
To accomodate the reference body's curvature, we introduce an additional differential state $\theta$ which represents the change in the vehicle's polar angle with respect to the launch site. The
derivative is given by
$\dot{\theta} = \frac{v \cdot \sin \beta}{r_0 + h}.$
Note that the vertical changes as the vehicle moves around the reference body meaning that the derivative of $\beta$ must be changed as well:
$\dot{\beta} = \frac{g \cdot \sin{\beta}}{v} - \frac{v \cdot \sin \beta}{r_0 + h}.$
Atmospheric drag
To model atmospheric drag, we assume that the vehicles draf coefficient $c_d$ is constant. The drag force is given by
$F_{drag} = \frac{1}{2} \rho A c_d v^2$
where $\rho$ is the density of the atmosphere and $A$ is the vehicle's reference area. We assume that atmospheric density decays exponentially with altitude:
$\rho = \rho_0 \cdot e^{-\frac{h}{H}}$
where $\rho_0$ is the atmospheric density at altitude zero and $H$ is the scale height of the atmosphere. The drag force is introduced into the acceleration term:
$\dot{v} = \frac{F - F_{drag}}{m} - g \cdot \cos \beta.$
Note that if the vehicle is axially symmetric and oriented in such a way that its symmetry axis is parallel to the velocity vector, it does not experience any lift forces. This model is
simplified. It does not account for changes in temperature and atmospheric composition with altitude. Also, $c_d$ varies with fluid viscosity and vehicle velocity. Specifically, drastic changes
in $c_d$ occur as the vehicle breaks the sound barrier. This is not accounted for in this model.
Mathematical formulation
The resulting optimal control problem is given by
$\begin{array}{llcll} \displaystyle \min_{T, m, v, \beta, h, \theta, u} & m_0 - m(T) \\[1.5ex] \mbox{s.t.} & \dot{m} & = & -\frac{F_{max}}{I_{sp} \cdot g_0} \cdot u ,\\ & \dot{v} & = & (F - \frac{1}{2} \rho_0 \cdot e^{-\frac{h}{H}} A c_d v^2) \cdot \frac{1}{m} - g_0 \cdot \left(\frac{r_0}{r_0 + h}\right)^2 \cos \beta ,\\ & \dot{\beta} & = & g_0 \cdot \left(\frac{r_0}{r_0 + h}\right)^2 \cdot \frac{\sin{\beta}}{v} - \frac{v \cdot \sin \beta}{r_0 + h} ,\\ & \dot{h} & = & v \cdot \cos \beta ,\\ & \dot{\theta} & = & \frac{v \cdot \sin \beta}{r_0 + h},\\[1.5ex] & m(0) &=& m_0 , \\ & v(0) &=& \varepsilon , \\ & \beta(0) &\in& \left[ 0,\frac{\pi}{2} \right], \\ & h(0) &=& 0, \\ & \theta(0) &=& 0, \\[1.5ex] & \beta(T) &=& \hat{\beta} , \\ & h(T) &=& \hat{h} , \\ & v(T) &=& \hat{v} , \\[1.5ex] & m(t) &\in& [m_1, m_0] \qquad & \forall t \in [0,T] , \\ & v(t) &\geq& \varepsilon \qquad & \forall t \in [0,T] , \\ & \beta(t) &\in& [0, \pi] \qquad & \forall t \in [0,T] , \\ & h(t) &\geq& 0 \qquad & \forall t \in [0,T] , \\ & \theta(t) &\geq& 0 \qquad & \forall t \in [0,T] , \\[1.5ex] & u(t) &\in& [0,1] \qquad & \forall t \in [0,T] , \\ & T &\in& [T_{min}, T_{max}]. \end{array}$

where $F_{max}$ is the maximum thrust of the vehicle's engine and $\varepsilon$ is a small number that is strictly greater than zero. The differential states of the problem are $m, v, \beta, h, \theta$, while $u$ is the control function.
For testing purposes, the following parameters were chosen:
$\begin{array}{rcl} m_0 &=& 11.3 \; t \\ m_1 &=& 1.3 \; t \\ I_{sp} &=& 300 \; s \\ F_{max} &=& 0.6 \; MN \\ c_d &=& 0.021 \\ A &=& 1 \; m^2 \\[1.5ex] g_0 &=& 9.81 \cdot 10^{-3} \, \frac{km}{s^2} \\ r_0 &=& 600.0 \; km \\ H &=& 5.6 \; km \\ \rho_0 &=& 1.2230948554874 \; \frac{kg}{m^3} \\[1.5ex] \hat{\beta} &=& \frac{\pi}{2} \\ \hat{v} &=& 2.287 \; \frac{km}{s} \\ \hat{h} &=& 75 \; km \\[1.5ex] T_{min} &=& 120 \; s \\ T_{max} &=& 600 \; s \\ \varepsilon &=& 10^{-6} \; \frac{km}{s} \end{array}$
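For concreteness, the right-hand side of the state equations can be sketched in Python using the test parameters above. This is an illustrative transcription, not the reference CasADi implementation; in particular, the unit conversion inside the drag term (ρ in kg/m³, A in m², v in km/s, m in t) is deliberately glossed over and the term is kept in the same symbolic form as the model.

```python
import math

# Test parameters from the list above (km, t, s units where stated)
g0  = 9.81e-3          # km/s^2
r0  = 600.0            # km
H   = 5.6              # km
Isp = 300.0            # s

def gravity(h):
    """g(h) = g0 * (r0 / (r0 + h))^2"""
    return g0 * (r0 / (r0 + h)) ** 2

def density(h, rho0=1.2230948554874):
    """rho(h) = rho0 * exp(-h / H)"""
    return rho0 * math.exp(-h / H)

def rhs(m, v, beta, h, u, F_max=0.6, c_d=0.021, A_ref=1.0):
    """Time derivatives (m_dot, v_dot, beta_dot, h_dot, theta_dot).

    F_max = 0.6 MN = 0.6 t*km/s^2. The drag term is kept in the same
    symbolic form as the model; the rho*A unit conversion is not handled
    in this sketch.
    """
    F = F_max * u
    drag = 0.5 * density(h) * A_ref * c_d * v ** 2
    g = gravity(h)
    m_dot = -F / (Isp * g0)
    v_dot = (F - drag) / m - g * math.cos(beta)
    beta_dot = g * math.sin(beta) / v - v * math.sin(beta) / (r0 + h)
    h_dot = v * math.cos(beta)
    theta_dot = v * math.sin(beta) / (r0 + h)
    return m_dot, v_dot, beta_dot, h_dot, theta_dot

# Sanity checks: at altitude zero, g reduces to g0 and rho to rho0
print(gravity(0.0) == g0, abs(density(0.0) - 1.2230948554874) < 1e-15)
```

At the initial condition (m = m_0, v = ε, β = 0, h = 0, u = 0) the sketch reproduces the expected behavior: β and θ do not change, and the vehicle decelerates at essentially g_0.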
Reference solution
The reference solution was generated using a direct multiple shooting approach with 300 shooting intervals placed equidistantly along the variable-length time horizon. The resulting NLP was
implemented using CasADi 2.4.2 using CVodes as an integrator and IPOPT as a solver. The exact code used to solve the problem alongside detailed solution data can be found under Gravity Turn
Maneuver (Casadi). The solution achieves the desired orbit in $T = 338.042 \; s$ spending $m_0 - m(T) = 9.6508 \; t$ of fuel. The launch angle is approximately $\beta(0) = 2.7786081986275378 \
cdot 10^{-6}$. The downrange motion throughout the entire launch amounts to a change in polar angle of approximately $\theta(T) = 0.3383474346340064 \approx 19.39^{\circ}$.
Source Code
Model descriptions are available in
[Culler1957] The entry doesn't exist yet.
|
A Beginner's Guide To Understanding cos(arctan(x)) - MES
A Beginner's Guide To Understanding cos(arctan(x))
How do you simplify cos(arctan(x))?
The term "cos(arctan)" refers to the cosine of the inverse of the tangent function. The inverse tangent function, also known as the arctangent, is the function that "undoes" the tangent function. In
other words, if you apply the tangent function to a number and then apply the arctangent to the result, you will get the original number back. The cosine function is one of the basic trigonometric
functions, and it is defined as the ratio of the side adjacent to an angle in a right triangle to the hypotenuse of the triangle. When you apply the cosine function to the result of the arctangent
function, you are finding the cosine of the angle that would produce the original number when the tangent function is applied to it. This can be useful in a variety of mathematical contexts.
Let y = arctan(x)
Therefore x = tan(y)
Since we let y=arctan(x) our problem goes from finding cos(arctan(x)) to finding cos(y).
So if x=tan(y) we need to find cos(y).
tan = opposite/adjacent in a right angled triangle.
So we can re-write tan(y) explicitly as tan(y)=x/1 and we can see that x=opposite and 1=adjacent in a right angled triangle.
Hence, using Pythagoras, we can find the missing side (the hypotenuse).
Hypotenuse = √(x^2+1)
Now we can find cos(y).
Therefore cos(y) = 1/√(x^2+1)

Finally we can conclude that cos(arctan(x)) = 1/√(x^2+1)
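The identity can also be spot-checked numerically; a short Python snippet (illustrative only):

```python
import math

# cos(arctan(x)) should equal 1/sqrt(x^2 + 1) for every real x,
# since arctan(x) lies in (-pi/2, pi/2), where cosine is positive.
for x in (-3.0, -0.5, 0.0, 1.0, 10.0):
    lhs = math.cos(math.atan(x))
    rhs = 1 / math.sqrt(x ** 2 + 1)
    assert abs(lhs - rhs) < 1e-12
print("identity holds at all sampled points")
```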
If you want to learn how to differentiate cos(arctan(x)), watch my tiktok - here
|
How To Understand Human Efficiency and Its Impact on Triathlon Achievement
Triathlon coaches worldwide have been enamored with the approach taken by the coaches of the Norwegian Triathlon Federation. One of these is their seemingly mad scientist Olaf Aleksander Bu, who has
yet to receive formal education in triathlon coaching. Bu’s background is one of engineering and physics, which has led him to great success as he recognizes his quest for peak human performance can
be pursued through understanding, measuring and improving the gross efficiency of his athletes. The idea of efficiency is an exciting concept that most coaches already know — improve your athlete’s
output through the same level of mechanical input. Athletes can find efficiency gains from improving swim technique, bike aerodynamics, running mechanics and many more aspects of body and machine
Understanding what efficiency is
To understand efficiency within sport, we must consider some physics. Now, the laws of physics tell us that total energy will remain constant in an isolated system. We can show how a system uses
energy with a Sankey diagram.
[Figure: Sankey diagram of energy use and losses]
The width of the bar is proportional to the amount of energy it represents, meaning that we have a clean visualization of where our system can make improvements.
Efficiency is easiest to illustrate by imagining a car with a full petrol tank. This tank is our energy budget. The speed at a given time can measure the output of this car, and it can travel some
distance at each speed. How can we make the car more efficient? A vehicle burns petrol in the engine, which drives the wheels to turn, but the amount of gas we have is fixed. Let’s say we want to go
100 miles and get there as fast as possible.
If we drive at 100 miles per hour, our car will run out of petrol after 98 miles. We will need to push the car the final 2 miles — the equivalent of the vehicle bonking. Instead, we drive at 98 miles
per hour and get there in just over an hour with a perfectly empty tank. To improve and get under the hour, we must determine what energy the car is losing to inefficiency.
When a car drives, it must fight against two main adversaries: the resistance from the air and the resistance from the road. Of course, there are other factors to consider, but let’s stick to these
for simplicity’s sake. If we remove the body of our car and replace it with a more aerodynamic one and our tires have the ideal inflation, we suddenly find ourselves able to drive at 105 mph for the
same amount of fuel as 98 mph! We can also find improvements by cleaning the engine, replacing leaky valves and pipes and so on — many efficiency improvements could be made.
The human body differs significantly from a car, but we can highlight some analogies. We have a specific energy budget for getting from point A to B with whatever tools we have at our disposal, and
efficiently utilizing this energy budget can allow us to get there faster than others with a larger budget. We can agree that in endurance sports, efficiency is king. It’s not just about increasing
the amount of energy (or power) we can put into a system but reducing the amount of input energy our body wastes.
How can we measure efficiency?
Measuring the efficiency of a human is somewhat complicated. We need to understand the journey of energy input to generate movement. Most coaches are familiar with a power meter on a bicycle, these
measure the amount of power an athlete puts out, but for our purposes, we will think of them as energy meters. Power is the time derivative of energy. That is to say that it’s the amount of energy we
use per second. If we add up every power number at every tiny moment — a mathematical trick called ‘integration’ — we get the energy measured over that duration.
Here is an example shown in this graph from a 10-minute interval. It shows power (in purple) and heart rate (in red). The area under the purple line is calculated by TrainingPeaks and is labeled as
“work,” measured in kJ. Power is an instantaneous measure of energy — 1W = 1J/s. Energy and power are the same. And efficiency savings allow us to use the same power for more speed, meaning less
energy is used overall.
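The integration idea can be sketched in a few lines; the fixed sampling interval below is an assumption (many power meters record at 1 Hz):

```python
# Integrate a stream of power samples (watts) into total work (kJ).
# Samples are assumed to be evenly spaced, dt seconds apart.
def work_kj(power_watts, dt=1.0):
    joules = sum(power_watts) * dt  # rectangle rule: sum of P_i * dt
    return joules / 1000.0

# A steady 250 W held for 10 minutes (600 one-second samples):
samples = [250.0] * 600
print(work_kj(samples))  # 150.0 kJ
```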
Measuring Mechanical Efficiency
Mechanical efficiency is the amount of energy that is output by our body (in cycling, our power meter tells us this) that is lost to the environment. On a bike, this would be the amount of energy
lost to gravity (hills) or wind (aerodynamics) — so being lighter and more aerodynamic would make a rider more mechanically efficient.
The relationship of power to speed is studied quite intensively. An example field of study devoted to it is aerodynamics. These values are highly measurable, too. We can see how much power and speed
we generate, then make some changes and see if we go faster using the same power. If so, then we can declare ourselves more efficient.
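One way to quantify "same power, more speed" is the standard flat-road cycling power model, P = 0.5 * rho * CdA * v^3 + Crr * m * g * v. The sketch below solves for speed at a fixed power and shows that lowering CdA yields a higher speed for the same 250 W. All parameter values here are illustrative assumptions, and drivetrain losses are ignored:

```python
# Steady-state flat-road model (a simplifying assumption):
#   P = 0.5 * rho * CdA * v^3 + Crr * m * g * v
def speed_at_power(power, cda, crr=0.004, mass=80.0, rho=1.225, g=9.81):
    def demand(v):
        return 0.5 * rho * cda * v**3 + crr * mass * g * v
    lo, hi = 0.0, 30.0  # bracket in m/s; demand(30) far exceeds any rider
    for _ in range(60):  # bisection: demand(v) is increasing in v
        mid = (lo + hi) / 2
        if demand(mid) < power:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# The same 250 W, before and after an aerodynamic improvement:
v_before = speed_at_power(250.0, cda=0.32)
v_after = speed_at_power(250.0, cda=0.28)
print(round(v_before, 2), round(v_after, 2))  # v_after > v_before
```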
There is an arms race in many aspects of the cycling industry to find better aerodynamics to provide better mechanical efficiency. It’s also one reason you will see updated bike equipment, helmets,
skin suits and so on.
What if we want to take it further? How can we measure the energy our body generates through our legs? Yes, we can, but it’s less accessible as we can’t just install a power meter in our bodies. We
could describe this as biochemical efficiency, as in how efficiently the processes in the body contribute to producing mechanical power on the bike.
Measuring Biochemical Efficiency
The ability to measure biochemical efficiency is in its infancy and is an example of where theory is ahead of practice. Wasted energy from the body tends to come in the form of heat. If we can
measure core body temperature changes and useful power at the legs, we can plausibly measure the biochemical efficiency of an athlete.
Hypothetically, an athlete could test at a given power, heart rate and core temperature to establish a baseline. The athlete would then do a heat tolerance block to train the body to adapt and
generate more watts for the same heart rate and core temperature. This would be a result of improved biochemical efficiency.
Measuring this exactly isn’t a simple task yet. Calculating efficiency as a function of core body temperature will likely develop over the next several years and become more accessible to amateurs
thanks to devices like CORE. We could be on the cusp of the next leap in human performance through various biochemical efficiency improvements measured by new technologies.
Adding it Up to Gross Efficiency
Improvements in the body and equipment ought to lead to overall improvements in efficiency. Again, this means we can go faster with the finite energy we have available. Indeed, training can increase
the amount of power available, but it can also make the body incrementally more efficient so that we can convert more of that power into speed! The challenge for a coach is to identify areas of
inefficiency, then implement a training intervention to address this.
If we put these ideas into a formula, it would simply be:
Gross efficiency = Biochemical efficiency + Mechanical efficiency
From Calories to Speed
Maybe you’ve read or heard this statement: “There’s no speed without power and no power without calories.” The relationship between the amount of energy available and the amount of food an athlete
eats cannot be forgotten!
There are also efficiencies to be found in the fuel and hydration our athletes use. Nutrition has many variables to consider based on each athlete’s needs, so for now, we’ll stick to understanding
these gross efficiencies.
Conclusions and Key Takeaways
• The human body is not a simple machine but can be studied like one to find performance gains.
• Mechanical efficiency measures the proportion of energy we lose to the environment through factors like air resistance and rolling resistance in cycling. Other sports also have other resisting
forces to factor.
• Biochemical efficiency measures the proportion of energy our body wastes moving our limbs in the desired pattern, like pedaling a bike or running.
• Gross efficiency measures how much energy we waste and is the sum of mechanical and biochemical efficiency.
• By breaking gross efficiency down into its constituent parts, we can make performance gains easier to find and quantify, making areas where an athlete has room for improvement easier to spot.
Triathlon coaches can improve their athletes’ efficiency by understanding the core concepts of efficiency and quantifying opportunities for free speed.
Mixed Sums of Primes and Other Terms
I. Mixed Sums of Primes and Polygonal Numbers
Conjecture on Sums of Primes and Triangular Numbers (Zhi-Wei Sun, 2008). (i) Each natural number n ≠ 216 can be written in the form p+T[x], where p is a prime or zero, and T[x]=x(x+1)/2 is a
triangular number with x nonnegative. (ii) Any odd integer greater than 3 can be written in the form p+x(x+1), where p is an odd prime and x is a positive integer.
Remark. Asymptotically the n-th prime is about n*log(n), so the conjecture looks harder than the famous Goldbach conjecture. Parts (i) and (ii) have been verified up to 10^12 by T. D. Noe and D.
S. McNeil respectively. Sun would like to offer 1000 US dollars for the first positive solution or $200 for the first explicit counterexample.
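Part (i) is straightforward to test by brute force for small n. The sketch below uses an arbitrary bound of 10^4 (far below the 10^12 already verified) and reports every non-representable n:

```python
# Check Sun's conjecture (i): every natural number n != 216 is p + T[x],
# with p a prime or zero and T[x] = x(x+1)/2 a triangular number (x >= 0).
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return sieve

N = 10000
is_prime = primes_up_to(N)
triangulars = []
x = 0
while x * (x + 1) // 2 <= N:
    triangulars.append(x * (x + 1) // 2)
    x += 1

def representable(n):
    # p may be zero (n itself triangular) or a prime.
    return any(n == t or (n - t > 0 and is_prime[n - t]) for t in triangulars)

exceptions = [n for n in range(1, N + 1) if not representable(n)]
print(exceptions)  # [216]
```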
General Conjecture on Sums of Primes and Triangular Numbers (Zhi-Wei Sun, 2008). Let a and b be any nonnegative integers, and let r be an odd integer. Then all sufficiently large integers can be
written in the form 2^ap+T[x], where p is either zero or a prime congruent to r mod 2^b, and x is an integer. Also, all sufficiently large odd numbers can be written in the form p+x(x+1), where p is
a prime congruent to r mod 2^b, and x is an integer.
Examples for the General Conjecture on Sums of Primes and Triangular Numbers:
(1) (Zhi-Wei Sun, 2008) Any integer n>88956 can be written in the form p+T[x], where p is either zero or a prime congruent to 1 mod 4, and x is a positive integer.
(2) (Zhi-Wei Sun, 2008) Each integer n>90441 can be written in the form p+T[x], where p is either zero or a prime congruent to 3 mod 4, and x is a positive integer.
(3) (Zhi-Wei Sun, 2008) Except for 30 multiples of three (the largest of which is 49755), odd integers larger than one can be written in the form p+x(x+1), where p is a prime congruent to 1 mod
4, and x is an integer.
(4) (Zhi-Wei Sun, 2008) Except for 15 multiples of three (the largest of which is 5397), odd numbers greater than one can be written in the form p+x(x+1), where p is a prime congruent to 3 mod
4, and x is an integer.
(5) For a=0,1,2,... let f(a) denote the largest integer not in the form 2^ap+T[x], where p is zero or a prime, and x is an integer. In 2008 Zhi-Wei Sun conjectured that f(0)=216, f(1)=43473 and
f(2)=849591. In 2009 D. S. McNeil suggested that f(3)=7151445.
Remark. Jing Ma has verified the above (1)-(5) up to 10^11.
Conjecture on Sums of Primes and Squares (Zhi-Wei Sun, 2009). If a positive integer a is not a square, then all sufficiently large integers relatively prime to a can be written in the form p+ax^2, where p is a prime and x is an integer. In particular, any integer greater than one and relatively prime to 6 can be expressed as p+6x^2 with p a prime and x an integer.
Remark. In 1753 Goldbach asked whether any odd integer n>1 is the sum of a prime and twice a square. In 1856 M. A. Stern and his students found counterexamples 5777 and 5993.
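Stern's counterexamples are easy to reproduce with a short search (a sketch; the bound 9000 matches the range Stern's group examined, and x = 0 is allowed so that primes themselves count):

```python
# Find odd numbers not of the form p + 2*x^2, p prime, x >= 0.
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return sieve

N = 9000
is_prime = primes_up_to(N)

def is_p_plus_2sq(n):
    x = 0
    while 2 * x * x < n:
        if is_prime[n - 2 * x * x]:
            return True
        x += 1
    return False

bad = [n for n in range(3, N + 1, 2) if not is_p_plus_2sq(n)]
print(bad)  # [5777, 5993]
```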
Conjecture on Sums of Primes and Polygonal Numbers (Zhi-Wei Sun, 2009). If a positive integer m>4 is not congruent to 2 mod 8, then all sufficiently large odd numbers can be written as the sum
of a prime and twice an m-gonal number p[m](n)=(m-2)n(n-1)/2+n (n=0,1,2,...). In particular, any odd integer n>1 other than 135, 345, 539 can be represented by p+2p[5](x) = p+3x^2-x with p a prime
and x a nonnegative integer.
Remark. Let m>2 be an integer. In 1638 Fermat asserted that any natural number can be expressed as the sum of m m-gonal numbers; this was finally proved by Cauchy in 1813. In 2009 Zhi-Wei Sun
conjectured that every natural number n can be written as the sum of an (m+1)-gonal number, an (m+2)-gonal number, an (m+3)-gonal number and a number among 0,...,m-3; in the case m=3 this has been
verified for n up to 3*10^7 by Qing-Hu Hou.
Related References
1. Zhi-Wei Sun, A new conjecture: n=p+x(x+1)/2, 2008.
2. Zhi-Wei Sun, A message to Number Theory List, 2008.
3. Zhi-Wei Sun, A general conjecture on sums of primes and triangular numbers, 2008.
4. Zhi-Wei Sun, Odd numbers in the form p+x(x+1), 2008.
5. Zhi-Wei Sun, Mixed sums of primes and triangular numbers, Journal of Combinatorics and Number Theory 1(2009), no.1, 65-76.
6. Zhi-Wei Sun, Offer prizes for solutions to my main conjectures involving primes, 2009.
7. Zhi-Wei Sun, Various new conjectures involving polygonal numbers and primes, 2009.
8. Zhi-Wei Sun, A challenging conjecture on sums of polygonal numbers, 2009.
9. Zhi-Wei Sun, A curious conjecture and a mysterious sequence, 2009.
10. Zhi-Wei Sun, On universal sums of polygonal numbers, 2009.
11. Jing Ma, Verifying SUN Zhi-Wei's conjectures on sums of primes and triangular numbers, Master Thesis (Anhui Normal Univ., China), 2009.
12. Z. W. Sun, Mixed sums of squares and triangular numbers, Acta Arith. 127(2007), no.2, 103-113.
13. B. K. Oh and Z. W. Sun, Mixed sums of squares and triangular numbers (III), J. Number Theory 129(2009), no.4, 964-969.
14. Y. Wang, Goldbach Conjecture, World Sci., Singapore, 1984.
15. Sequences A132399, A154752, A137996, A137997, A144590, A117054, A160324, A160325, A160326, A165141 in N. J. A. Sloane's OEIS (On-Line Encyclopedia of Integer Sequences).
II. Mixed Sums of Primes and Recurrences
The well-known Fibonacci numbers are given by
F[0]=0, F[1]=1, and F[n+1] = F[n] + F[n-1] (n=1,2,3,...).
Here is the list of the initial 18 positive Fibonacci numbers:
1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584.
The companion of the Fibonacci sequence is the sequence of Lucas numbers given by
L[0]=2, L[1]=1, and L[n+1] = L[n] + L[n-1] (n=1,2,3,...).
It is easy to see that L[n] = F[n+1] + F[n-1] for n=1,2,3,....
Conjecture on Sums of Primes and Fibonacci Numbers (i) (Weak Version) [Zhi-Wei Sun, Dec. 23, 2008 and March 19, 2009] Each integer n>4 can be expressed as the sum of an odd prime and two or
three positive Fibonacci numbers.
(ii) (Strong Version) [Zhi-Wei Sun, Dec. 26, 2008] Each integer n>5 can be expressed as the sum of an odd prime, a positive Fibonacci number and twice a positive
integer n>4 can be rewritten as the sum of an odd prime, a positive Fibonacci number and a Lucas number.
Remark. Since Fibonacci numbers and Lucas numbers grow exponentially, they are much more sparse than prime numbers. Thus the conjecture seems much more difficult than the Goldbach conjecture. It
has been verified up to 10^12 by D. S. McNeil (Univ. of London). Sun would like to offer $5000 for the first positive solution or $250 for the first explicit counterexample to the above conjecture or
the conjecture (an earlier version) that any integer n>4 can be expressed as the sum of an odd prime and two positive Fibonacci numbers.
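The weak version can be brute-force checked for small n; the bound below is tiny compared with the 10^12 already verified, and 2 is excluded since the prime must be odd:

```python
# Check: every n > 4 is an odd prime plus two or three positive
# Fibonacci numbers (weak version of Sun's conjecture).
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return sieve

N = 2000
is_prime = primes_up_to(N)
fibs = [1, 1]
while fibs[-1] <= N:
    fibs.append(fibs[-1] + fibs[-2])
fibs = sorted(set(f for f in fibs if f <= N))  # positive Fibonacci values

def representable(n):
    for a in fibs:
        for b in fibs:
            r = n - a - b
            if r > 2 and r % 2 == 1 and is_prime[r]:
                return True  # odd prime + two Fibonacci numbers
            for c in fibs:
                r3 = n - a - b - c
                if r3 > 2 and r3 % 2 == 1 and is_prime[r3]:
                    return True  # odd prime + three Fibonacci numbers
    return False

assert all(representable(n) for n in range(5, N + 1))
print("verified up to", N)
```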
Values of the Representation Function r(n) for n=p+F[s]+2F[t] with s,t>1 (n=1,...,100000)
Values of the Representation Function r(n) for n=p+F[s]+L[t] (s>1, and F[s] or L[t] is odd) (n=1,...,100000)
Values of the Representation Function r(n) for n=p+F[s]+F[t] with s,t>1 and F[s] or F[t] odd (n=1,...,100000)
Values of the Representation Function r(n) for n=p+F[s]+F[t]/2 with s,t>1 (n=1,...,100000)
The Pell numbers are defined by
P[0]=0, P[1]=1, and P[n+1] = 2P[n] + P[n-1] (n=1,2,3,...).
Here is the list of the initial 18 positive Pell numbers:
1, 2, 5, 12, 29, 70, 169, 408, 985, 2378, 5741, 13860, 33461, 80782, 195025, 470832, 1136689, 2744210.
Conjecture on Sums of Primes and Pell Numbers (Zhi-Wei Sun, Jan. 10, 2009). Any integer n>5 can be written as the sum of an odd prime, a positive Pell number and twice a positive Pell number.
Remark. This has been verified up to 5*10^13 by D. S. McNeil. Using a variant of B. Poonen's heuristic arguments, on March 10, 2009 Sun predicted that there should be infinitely many positive
integers congruent to 120 modulo 210 not of the form p+P[s]+2*P[t] with p an odd prime and s,t positive integers.
List of n ≤ 100000 in the Form P[s]+2*P[t] (s,t=1,2,3,...)
Values of the Representation Function r(n) for n=p+P[s]+2P[t] with s,t>0 (n=1,...,100000)
The Catalan numbers C[n]=(2n)!/(n!(n+1)!) (n=0,1,2,...) play important roles in combinatorics. Asymptotically C[n] ∼ 4^n/(n^(3/2)*π^(1/2)). It is well known that
C[0]=1 and C[n+1] = C[0]C[n] + C[1]C[n-1] + … + C[n]C[0] for n=0,1,2,....
Here is the list of the initial 15 Catalan numbers:
1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862, 16796, 58786, 208012, 742900, 2674440.
Motivated by the conjecture on sums of primes and Fibonacci numbers due to Sun, Qing-Hu Hou and Jiang Zeng made the following conjecture during their visit to Nanjing Univ.
Conjecture (Qing-Hu Hou and Jiang Zeng, Jan. 9, 2009). Every integer n>4 can be expressed as the sum of an odd prime, a positive Fibonacci number and a Catalan number.
Remark. Note that the generating function of the Catalan numbers is not rational. The conjecture has been verified up to 3*10^13 by D. S. McNeil. Hou and Zeng would like to offer $1000 for the
first positive solution or $200 for the first explicit counterexample.
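The Hou-Zeng conjecture can likewise be brute-force checked for small n. The sketch below generates Catalan numbers with the recurrence C[k+1] = C[k]*2(2k+1)/(k+2):

```python
# Check: every n > 4 is an odd prime plus a positive Fibonacci number
# plus a Catalan number (Hou-Zeng conjecture, small-n test).
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return sieve

N = 3000
is_prime = primes_up_to(N)

fibs = [1, 2]
while fibs[-1] <= N:
    fibs.append(fibs[-1] + fibs[-2])
fibs = [f for f in fibs if f <= N]  # distinct positive Fibonacci values

catalans, c, k = [], 1, 0
while c <= N:
    catalans.append(c)
    c = c * 2 * (2 * k + 1) // (k + 2)  # C[k+1] from C[k]
    k += 1
catalans = sorted(set(catalans))  # 1, 2, 5, 14, 42, ...

def representable(n):
    return any(
        n - f - c > 2 and (n - f - c) % 2 == 1 and is_prime[n - f - c]
        for f in fibs for c in catalans
    )

assert all(representable(n) for n in range(5, N + 1))
print("Hou-Zeng conjecture verified up to", N)
```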
Values of the Representation Function r(n) for n=p+F[s]+C[t] with s>1 and t>0 (n=1,...,100000)
The following conjecture is similar to the Hou-Zeng conjecture.
Conjecture (Zhi-Wei Sun, Jan. 16, 2009). Any integer n>4 can be written as the sum of an odd prime, a Lucas number and a Catalan number.
Remark. This has been verified up to 10^13 by D. S. McNeil.
Values of the Representation Function r(n) for n=p+L[s]+C[t] with t>0 (n=1,...,100000)
Related References
III. Mixed Sums of Primes and Powers of Two
Conjecture on Sums of Primes and Powers of Two (i) (Weak Version) [Zhi-Wei Sun, Dec. 23, 2008]. Any odd integer larger than 8 can be expressed as the sum of an odd prime and three positive
powers of two.
(ii) (Strong Version) [Zhi-Wei Sun, Jan. 21, 2009] Each odd number greater than 10 can be written in the form p+2^x+3*2^y = p+2^x+2^y+2^(y+1), where p is an odd prime, and x and y are positive integers.
Remark. On Sun's request, Charles Greathouse (USA) verified the conjecture for odd numbers below 10^10. It is known that there are infinitely many positive odd integers none of which is the sum
of a prime and two powers of 2 (R. Crocker, 1971). Paul Erdos once asked whether there is a positive integer r such that each odd number greater than 3 can be written as the sum of an odd prime and
at most r positive powers of 2.
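The strong version is easy to check for small odd numbers; since n is odd and 2^x + 3*2^y is even, p = n - 2^x - 3*2^y is automatically odd:

```python
# Check: every odd n > 10 is p + 2^x + 3*2^y with p an odd prime
# and x, y >= 1 (strong version, small-n test).
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return sieve

N = 9999
is_prime = primes_up_to(N)
pow2 = [2**x for x in range(1, 14)]  # 2, 4, ..., 8192

def representable(n):
    # The guard keeps the index positive; p > 2 plus primality gives
    # an odd prime automatically because n is odd.
    return any(
        n - a - 3 * b > 2 and is_prime[n - a - 3 * b]
        for a in pow2
        for b in pow2
    )

assert all(representable(n) for n in range(11, N + 1, 2))
print("no exceptions among odd numbers up to", N)
```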
Values of the Representation Function r(n) for 2n-1=p+2^x+3*2^y (n=1,...,50000)
Here is Charles Greathouse's list (Feb. 5, 2009) of positive odd integers below 4.45*10^13 not in the form p+2^x+2^y. The data above 2.5*10^11 were obtained subject to his following conjecture.
Conjecture (Charles Greathouse, 2009). For an odd integer greater than 5, if it cannot be written as the sum of an odd prime and two positive powers of two, then it must be a multiple of 3*5*17=255.
Remark. Greathouse's conjecture implies that any odd number n>8 can be written in the form p+2^x+2^y+2^z with z equal to 1 or 2 since n-2 or n-4 is not divisible by 3. This gives another strong
version for Sun's first assertion in the conjecture on sums of primes and powers of two. D. S. McNeil has verified Greathouse's conjecture for odd numbers below 10^13.
Open Problem on Sums of Primes and Powers of Two (i) (Zhi-Wei Sun, Jan. 21, 2009) Prove or disprove that for each k=3,5,...,45,49,51,...,61, any odd integer n>2k+3 can be written in the form p+2^x+k*2^y with the only exception k=51 and n=353, where p is an odd prime, and x and y are positive integers.
(ii) (Zhi-Wei Sun, Feb. 27, 2009) Prove or disprove the following 47 conjecture: There are infinitely many odd integers greater than 100 and not of the form p+2^x+47*2^y with p an odd prime and
x,y positive integers; moreover, such an odd number is always a multiple of 3*5*7*13=1365.
Remark. Zhi-Wei Sun verified the assertion in part (i) for odd numbers below 2*10^8. D. S. McNeil continued the verification for odd numbers below 10^12 and found no counterexample. 22537515 is
the first odd number greater than 100 not of the form p+2^x+47*2^y (Qing-Hu Hou). Based on D. S. McNeil's search for odd numbers not of the form p+2^x+47*2^y, Zhi-Wei Sun formulated the 47 conjecture
on Feb. 27, 2009. On Feb. 28, D. S. McNeil yielded a complete list of odd numbers below 10^13 not of the form p+2^x+47*2^y, and Sun checked the data and found no counterexample to the 47 conjecture.
For odd integer k>61, the number 2k+127 is not of the form p+2^x+k*2^y.
Values of the Representation Function r(n) for 2n-1=p+2^x+5*2^y (n=1,...,50000)
Values of the Representation Function r(n) for 2n-1=p+2^x+11*2^y with p ≡ 1 (mod 6) (n=1,...,200000)
Values of the Representation Function r(n) for 2n-1=p+2^x+11*2^y with p ≡ 5 (mod 6) (n=1,...,200000)
Values of the Representation Function r(n) for 2(n+50)-1=p+2^x+51*2^y (n=1,...,200000)
D. S. McNeil's List (Feb. 28, 2009) of Odd Integers in the Interval (100,10^13) not of the Form p+2^x+47*2^y
List of n ≤ 100000 in the Form 2^a+k*2^b (a,b=0,1,2,...) with k=1,3,...,61
In March, 2009, Bjorn Poonen (MIT) used his heuristic arguments to make the following prediction based on the Generalized Riemann Hypothesis (GRH).
Poonen's Prediction (March 6, 2009). For each positive integer k, any infinite arithmetic progression of positive odd integers contains infinitely many integers not of the form p+2^x+k*2^y,
where p is an odd prime, and x and y are positive integers.
Related References
1. Y. G. Chen, R. Feng and N. Templier, Fermat numbers and integers of the form a^k+a^l+p^α, Acta Arith. 135(2008), 51-61.
2. R. Crocker, On a sum of a prime and two powers of two, Pacific J. Math. 36(1971), 103-107.
3. P. Erdos, On integers of the form 2^k+p and some related problems, Summa Brasil. Math. 2(1950), 113-123.
4. P. Erdos, Some of my favorite problems and results, in: The Mathematics of Paul Erdos, I (R. L. Graham and J. Nesetril, eds.), Algorithms and Combinatorics 13, Springer, Berlin, 1997, pp. 47-67.
5. P. X. Gallagher, Primes and powers of 2, Invent. Math. 29(1975), 125-142.
6. D. R. Heath-Brown and J.-C. Puchta, Integers represented as a sum of primes and powers of two, Asian J. Math. 6(2002), 535-565.
7. M. B. Nathanson, Additive Number Theory: The Classical Bases, Grad. Texts in Math., Vol. 164, Springer, New York, 1996.
8. J. Pintz and I. Z. Ruzsa, On Linnik's approximation to Goldbach's problem, Acta Arith. 109(2003), 169-194.
9. B. Poonen, The 47 conjecture, 2009.
10. Z. W. Sun, Mixed sums of primes and other terms, in: Additive Number Theory (edited by D. Chudnovsky and G. Chudnovsky), pp. 341-353, Springer, New York, 2010.
11. Z. W. Sun, A project for the form p+2^x+k*2^y with k=3,5,...,61, 2009.
12. Z. W. Sun, A curious conjecture about p+2^x+11*2^y, 2009.
13. Z. W. Sun, The 47 conjecture and my concluding remarks, 2009.
14. Z. W. Sun and M. H. Le, Integers not of the form c(2^a +2^b)+p^α, Acta Arith. 99(2001), 183-190.
15. T. Tao, A remark on primality testing and decimal expansions, J. Austral. Math. Soc., in press.
16. Sequences A155860, A155904, A157237, A157242, A157372, A156695, A118955 in N. J. A. Sloane's OEIS.
How do you solve the rational equation 1 / (x+1) = (x-1)/ x + 5/x? | HIX Tutor
Answer 1
First, recognize that the fractions on the right hand side can be added since they have the same denominator.
You will have to FOIL on the right hand side.
From here, you could find the roots by using the quadratic formula, completing the square, or simply by factoring and recognizing this is a perfect square trinomial.
When working with rational functions, always check that this won't cause any domain errors (making a denominator equal 0). In this case, that won't happen.
Answer 2
To solve the rational equation 1 / (x+1) = (x-1)/x + 5/x, first combine the fractions on the right-hand side, which already share the denominator x:

(x-1)/x + 5/x = (x - 1 + 5)/x = (x+4)/x

The equation becomes:

1/(x+1) = (x+4)/x

Multiplying both sides by x(x+1) (equivalently, cross-multiplying), we get:

x = (x+4)(x+1)

Expanding the right-hand side:

x = x^2 + 5x + 4

Bringing every term to one side:

x^2 + 4x + 4 = 0

The left-hand side is a perfect square trinomial:

(x + 2)^2 = 0

Setting the factor equal to zero gives x = -2. Finally, check the domain: x = -2 makes neither denominator zero (x ≠ 0 and x ≠ -1), so it is a valid solution.

Therefore, the rational equation 1 / (x+1) = (x-1)/x + 5/x has the single solution x = -2.
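Whichever method is used, candidate roots should be substituted back into the original equation and checked against the domain (x = 0 and x = -1 are excluded). Exact rational arithmetic avoids any round-off ambiguity:

```python
from fractions import Fraction

# Substitute candidates into 1/(x+1) = (x-1)/x + 5/x exactly.
def satisfies(x):
    x = Fraction(x)
    if x == 0 or x == -1:
        return False  # a denominator would be zero: outside the domain
    return 1 / (x + 1) == (x - 1) / x + 5 / x

print([x for x in (-3, -2, -1) if satisfies(x)])  # [-2]
```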
PCK and TPCK as a math teacher
I first came across the concept of PCK and TPCK in ETEC 511 last summer and the concept has stuck with me since then because I think it perfectly captures what I wish to learn from my experiences in
the MET program. Before I was a Math teacher, I worked several co-op jobs in the IT industry, where I worked with many people that had strong technical backgrounds. They had experience not only in
using technology, but also in building software and the likes. Upon reflection, if those people were put in a classroom, would it make them effective teachers? Not necessarily, because teaching with
technology versus utilizing technology are very different things. On the flip side, being a seasoned veteran teacher doesn’t mean that they would be able to pick up any piece of educational software,
and be effective at using it to teacher. Between knowing how to use technology, and teaching, there must be a bridge between these two very different knowledge domains, and I think Mishra and Koehler
(2006)’s TPCK presents the idea quite well.
One of the most classic example what I consider to be TPCK in the realm of Mathematical teaching comes with the use of graphing technology. As a high school math teacher, one of the most important
tools at the senior levels is graphing technology because it allows learners to visualize many of the concepts taught in class. The graphing calculator is a tool that can be used to simplify
calculations, to assess learning, and for users to potentially explore creating mathematical tools through programming. In order to effectively teach with a graphing calculator, a teacher must first
have the requisite mathematical knowledge and also the pedagogical skills to deliver the content, or otherwise, they must have the PCK needed to teach the course. TPCK takes this knowledge to another
level, as teachers must learn ways to teach students how to use the calculator effectively, or in different situations, use graphing software to demonstrate concepts to students.
• Mishra, P., & Koehler, M. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. The Teachers College Record, 108(6), 1017-1054. Text accessible from Google
One comment
1. HI Gary
Thanks for sharing your perspective on the IT co-op experience. I agree with you on the statement: ” if those people were put in a classroom, would it make them effective teachers? Not
necessarily, because teaching with technology versus utilizing technology are very different things." For example, I am in the IT industry and have met tons of talented software engineers who can write code in no time. However, they often have difficulty explaining even simple programming concepts. I believe that subject-matter expertise doesn't guarantee effective teaching.
Assessing the Fit of a Line (2 of 4)
Learning Objectives
• Use residuals, standard error, and r^2 to assess the fit of a linear model.
Now we move from calculating the residual for an individual data point to creating a graph of the residuals for all the data points. We use residual plots to determine if the linear model fits the
data well.
Residual Plots
The graph below shows a scatterplot and the regression line for a set of 10 points. The blue points represent our original data set, that is, our observed values. The red points, lying directly on
the regression line, are the predicted values.
The vertical arrows from the predicted to observed values represent the residuals. The up arrows correspond to positive residuals, and the down arrows correspond to negative residuals.
Now consider the following pair of graphs. The top graph is a copy of the graph we looked at above. In the graph below, we plotted the values of the residuals on their own. (The explanatory variable
is still plotted on the horizontal axis, though it is not indicated this here.) This is called a residual plot.
In the residual plot, each point with a value greater than zero corresponds to a data point in the original data set where the observed value is greater than the predicted value. Similarly, negative
values correspond to data points where the observed value is less than the predicted value.
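A residual is just observed minus predicted, once the least-squares line is fitted. The sketch below uses made-up data for illustration; a handy sanity check is that least-squares residuals always sum to (numerically) zero:

```python
# Fit a least-squares line y = a + b*x, then compute residuals
# (observed - predicted) for each data point.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

xs = [1, 2, 3, 4, 5, 6]
ys = [2.1, 3.9, 6.2, 7.8, 10.1, 11.9]
a, b = fit_line(xs, ys)
residuals = [y - (a + b * x) for x, y in zip(xs, ys)]
print([round(r, 3) for r in residuals])  # mix of signs, summing to ~0
```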
What are we looking for in a residual plot?
We use residual plots to determine if a linear model is appropriate. In particular, we look for any unexpected patterns in the residuals that may suggest that the data is not linear in form.
To help us identify an unexpected pattern, we start by looking at what we expect to see in a residual plot when the form is linear.
No Pattern in Residual Plot
Consider the pair of graphs below. Here we have a scatterplot for a data set consisting of 400 observations. The regression line is shown in the scatterplot. The residual plot is below the scatterplot.
In this example, the line in the scatterplot is a good summary of the positive linear pattern in the data. Notice that the points in the residual plot seem to be randomly scattered. As we examine the
residuals from left to right, they don’t appear to follow a particular path, nor does the cloud of points widen or narrow in any systematic way. We see no particular pattern. Thus, in the ideal case,
when a linear model is really a good fit, we expect to see no pattern in the residual plot.
Our general principle when looking at residual plots, then, is that a residual plot with no pattern is good because it suggests that our use of a linear model is appropriate.
However, we must be flexible in applying this principle because what we see usually lies somewhere between the extremes of no pattern and a clear pattern. Let’s look at some specific examples.
Patterns in Residual Plots
At first glance, the scatterplot appears to show a strong linear relationship. The correlation is r = 0.84. However, when we examine the residual plot, we see a clear U-shaped pattern. Looking back
at the scatterplot, this movement of the data points above, below and then above the regression line is noticeable. The residual plot, particularly when graphed at a finer scale, helps us to focus on
this deviation from linearity.
The pattern in the residual plot suggests that our linear model may not be appropriate because the model predictions will be too high for values in the middle of the range of the explanatory variable
and too low for values at the two ends of that range. A model with a curvilinear form may be more appropriate.
Patterns in Residual Plots 2
This scatterplot is based on datapoints that have a correlation of r = 0.75. In the residual plot, we see that residuals grow steadily larger in absolute value as we move from left to right. In other
words, as we move from left to right, the observed values deviate more and more from the predicted values. Again, we have chosen a smaller vertical scale for the residual plot to help amplify the
pattern to make it easier to see.
The pattern in the residual plot suggests that predictions based on the linear regression line will result in greater error as we move from left to right through the range of the explanatory
Highway Sign Visibility
Let’s return now to our original example and take a look at what the residual plot tells us about the appropriateness of applying a linear model to this data.
Note that the residuals are fairly randomly dispersed. However, they seem to be a bit more spread out on the left and right than they are in the middle. As we look at higher ages, there seems to be
greater variation in the residuals, which suggests that we may want to be more cautious if we are trying to predict distances for older drivers. And the risks associated with extrapolation beyond the
range of the data seem to be even greater here. In this case, we may still use this linear model but condition the use of it on our analysis of the residual plot.
Here again are four scatterplots with regression lines shown and four corresponding residual plots.
Busbar Size Calculation Formula | Aluminium and Copper Examples | Wira Electrical
Explaining busbar sizing can give us a hard time, but it is necessary for every electrical installation.
In every electrical installation, we need to take caution of everything that may cause faults and fires. It can be caused by an accident, natural incident, or incendiary.
If you have read about fire incidents happening in a house, factory, or big building, you will know that natural incidents are very rare. Natural incidents are caused by natural factors, and of course buildings won’t be affected by nature easily.
The most important thing we need to prevent is accidents. This one can occur if we didn’t plan, design, analyze, or calculate carefully when doing and using electrical installation.
These faults and fires are caused by the most common element we know: HEAT.
When looking at the source of the HEAT from electrical perspective, we can list its causes:
• Short circuit.
• Overload.
• Poor quality of electrical designs and cablings.
• Poor quality of electrical devices and materials.
• Poor quality of electrical connections.
• Poor quality of earthing designs and materials.
• Lacks of ventilation in the panels.
Those points are quite hard to detect before happening. The best thing we can do is to prevent the faults by eliminating those causes above.
What is Busbar
Electrical wires are commonly used to deliver currents from one point to another point. Of course it doesn’t have to be a wire, it can be anything that can conduct electricity such as copper.
Electrical wires are very flexible because we can bend it, roll it, put insulation on it, move it around, string it to our liking and many more.
But electrical wires sometimes are not an efficient and wise choice when we are dealing with high currents. You will find it easier to use a conductor bar or solid conductor to carry high currents
from place to place.
This solid conductor bar is known as a busbar. It is made from copper in the shape of a “bar”. Of course we can’t bend it, roll it, or string it like wires. This busbar is capable of carrying high
currents where most electrical wires will burn out.
Even if you insist on using electrical wires, you need really big and thick electrical wires so it is not convenient for prices and installations.
Don’t worry about its designs and installations, we can use bolts to connect one bar to another bar to our liking. This bolt is installed on the insulator to attach any busbar altogether without
causing any accident.
Over its advantages, busbar has its own disadvantages. We absolutely do not want to unplug a busbar or move it without proper procedure. You may come across “Busbar Down for Maintenance” warning
signs somewhere.
This sign indicates that the power line has been shut down so we can do maintenance on them, unplug them, clean them, replace them, or anything.
Keep in mind that a busbar is literally a copper bar where it rarely has insulation on them. We have to keep it safe from animals, birds, or rodents touching it. It may cause short circuits between
busbars and of course kill animals that touched it.
How to Calculate Busbar Size
On this occasion, we will talk about busbar size calculation to prevent any overheat occurring in your electrical systems. We will study how important it is to calculate busbar size to prevent
overheat that further causes faults.
The busbar size calculation is not only for HT (High Tension or High Voltage) systems. You would be wrong to think that an LT (Low Tension or Low Voltage) system is not worth calculating and analyzing.
Hence, we will study for both HT and LT systems.
There is a common rule used by most electricians, electrical engineers, and consultants called the “Thumb Rule” method.
Back in the day, busbar size calculation was done manually; of course it took quite a lot of time, and people grew impatient. This is where the Thumb Rule helped them. But don’t worry, nowadays there is a lot of software for busbar size calculation. It is easy to use and of course saves you a lot of time.
Thumb Rule for Busbar Amp Size
This Thumb Rule shows how much current a 1 square mm (Sq.mm) busbar can withstand.
There are two common materials for producing a busbar, they are aluminium and copper. Both aluminium and copper have their own ability to withstand currents.
A 1 Sq.mm of aluminium busbar can withstand 0.7 Amperes.
A 1 Sq.mm of copper busbar can withstand 1.2 Amperes.
Of course the examples above did not come from an international standard because we can’t find the tolerance values. Some people may still use an aluminium busbar to deliver 1 Amp. Some other people
use a copper busbar to deliver 1.5 Amps.
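The thumb rule above can be sketched as a small helper. The function name and dictionary are ours; the 0.7 and 1.2 A per Sq.mm densities are the rule-of-thumb values quoted above:

```python
# Thumb-rule current densities quoted in the text (amps per Sq.mm).
CURRENT_DENSITY = {"aluminium": 0.7, "copper": 1.2}

def thumb_rule_ampacity(width_mm, thickness_mm, material):
    """Rough current-carrying capacity of a rectangular busbar."""
    area_sqmm = width_mm * thickness_mm
    return area_sqmm * CURRENT_DENSITY[material]

# A 10 x 10 mm aluminium bar (100 Sq.mm) carries roughly 70 A by this rule.
print(thumb_rule_ampacity(10, 10, "aluminium"))
```

As the article notes, this is only a first approximation; the proper calculation with derating factors follows later.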
Later on, because this primitive method became unreliable for high current in thousand amps, we needed to do proper calculation with a proper standard.
Electrical Busbar Size
Even further, electrical consultants and engineers have to analyze and calculate other supporting factors that are important to consider:
• Minimum clearance for phase-to-phase and phase-to-ground.
• Proper selection of busbar insulator deadlock.
• Safe and adequate bolt installation for multiple busbar connections.
• Thermal effects produced by busbar and insulator for both normal and extreme (faulty) conditions.
• Mechanical resonances and electrodynamic forces under normal and extreme (faulty) conditions.
We should consider the two factors below:
• Maximum Allowable Temperature Rise for Bolt
• Busbar Minimum Clearances
The maximum allowable temperature for a bolt connecting busbar to busbar, or busbar to panel, needs to be planned properly.
Right now we will look at the international standard IEC 62271-1, summarized in the table below:
The second is the busbar clearance where we will use IEC 62271-1 as an example. Observe the table below:
Standard Busbar Size in mm
The size of a busbar is determined by the current rating, type of material, shape, and cross-sectional area. Of course the maximum allowable temperature rise for each type of material is also considered.
From IEC 62271-1 we can also study the thermal rise effect, thermal limit, bar dimensions, and withstand current rating in the table below:
How to Size Busbar
Busbar size is not solely determined by the current alone. Its temperature rise has to be in allowable specification in national or international standard. The standards we are talking about are:
• British Standard, BS 159,
• American Standard, ANSI C37.20, and
• etc.
The British Standard, BS 159, states that the maximum temperature rise is 50°C above ambient over 24 hours. The ambient temperature is 35°C to 40°C at its peak.
The American Standard, ANSI C37.20, states that the maximum temperature rise is 65°C above ambient over 24 hours. The ambient temperature is 40°C and silver-plated termination bolts are used. If there is no bolt installed, the allowable temperature rise is 30°C.
The very basic idea on how to size a copper busbar is 2 Amps/1 Sq.mm (mm^2) or 1250 Amps/1 Sq.in (in^2), these can be different in some countries. Of course this is like a “first-aid” decision, but
the final decision should count on more factors. You should check the catalog of the manufacturer.
Busbar Size Depends On
Check the list below to learn what we mentioned about “more factors” above. We should take the “application areas” into account when doing busbar size calculation.
1. Voltage drop
Busbar has lower impedance thus the voltage drop is lower than electrical wires.
2. Main switchboard
Only one output for each riser hence the cost and size for the main panel are reduced.
3. Shaft size
The common size for a busbar with a 1600 A current rating is 185 x 180 mm. Compared with electrical wires carrying the same current, a busbar needs a much smaller and cheaper riser shaft.
4. Number of circuits
Only one circuit is needed for all floors.
5. Fire and safety
Insulator materials used for busbar don’t produce toxic gasses and corrosive effects to cause a fire.
6. Fault withstand level
A busbar has a much higher maximum current rating, normally a 1600 A riser can withstand 60 – 70 kA.
7. Installation time
Busbar installation takes less time.
Busbar Size vs Current
Observe the short circuit rating for a busbar below:
1. Current rating 0 – 400 A = 25 kA for 1 second.
2. Current rating 600 – 1000 A = 50 kA for 1 second.
3. Current rating 1000 – 2000 A = 65 – 100 kA for 1 second.
4. Current rating 2000 – 5000 A = 100 – 225 kA for 1 second.
After we listed the current rating along with its fault current rating, we can list them further along with its cross-section of 1 squared mm (Sq.mm / mm^2).
Aluminium Busbar Size
Let us do a simple example of aluminium busbar size calculation.
Assume that we need a busbar to carry 2000 A current and have to withstand 35 kA current fault for 1 second. Looking back at the table above, the minimum cross-section area of the busbar we need is
443 Sq.mm.
To get this 443 Sq.mm aluminium busbar, we can use a 100 x 5 mm busbar. This is the minimum cross-section size.
Assuming that we have a current density of 1 A/Sq.mm, skin effect, and temperature rise, we might need a 4 x 100 x 5 mm busbar.
Copper Busbar Size
Similar to the calculation above, the copper busbar size calculation is quite straightforward.
Assume that we need a busbar to carry 2000 A and withstand a 35 kA fault current for 1 second. Scrolling a bit above to our table, we found that at least 285 Sq.mm is needed. We can use a 60 x 5 mm
busbar as a minimum cross-section.
Assuming that we have a current density of 1.6 A/Sq.mm, skin effect, and temperature rise, we might need a 4 x 60 x 5 mm busbar.
How to Size Busbar
At long last, we will do some busbar size calculation with some known formulas.
Assume we have a busbar with current rating as stated below:
Rated Voltage = 415 V, 50 Hz
Desired Maximum Current Rating of Busbar = 630 Amp
Fault Current (I[sc]) = 50 kA
Fault Duration (t) = 1 sec
The operating temperature rises for the busbar is:
Operating Temperature of Bus bar (θ) = 85°C.
Final Temperature of Bus bar during Fault (θ[1]) =185°C.
Temperature rise of Busbar during Fault (θ[t] = θ[1] − θ) = 100°C.
Ambient Temperature (θ[n]) =50°C.
Maximum Busbar Temperature Rise = 55°C.
Busbar is covered with an enclosure with specifications below:
Installation of Panel= Indoors (well Ventilated)
Altitude of Panel Installation on Site= 2000 Meter
Panel Length= 1200 mm ,Panel width= 600 mm, Panel Height= 2400 mm
Our busbar’s material details:
Bus bar Material= Copper
Bus bar Strip Arrangements = Vertical
Current Density of Bus Bar Material =1.6
Temperature Coefficient of Material Resistance at 20°C (α[20]) = 0.00403
Material Constant(K) = 1.166
Busbar Material Permissible Strength = 1200 kg/cm^2
Busbar Insulating Material = Bare
Busbar Position = Edge-mounted bars
Busbar Installation Medium = Non-ventilated ducting
Busbar Artificial Ventilation Scheme = without artificial ventilation
Our busbar size is:
Busbar Width (e) = 75 mm
Busbar Thickness (s) = 10 mm
Number of Bus Bar per Phase (n) = 2
Bus bar Length per Phase (a) = 500 mm
Distance between Two Bus Strip per Phase (e) = 75 mm
Busbar Phase Spacing (p) = 400 mm
Total No of Circuit = 3
Since busbar doesn’t have its own insulation, we provide it with insulator as written below:
Distance between insulators on Same Phase (l) = 500 mm
Insulator Height (H) = 100 mm
Distance from the head of the insulator to the bus bar center of gravity (h) = 5 mm
Permissible Strength of Insulator (F’)=1000 Kg/cm2
And now we will calculate busbar size with coefficient factors “K” below.
Derating Factor of Busbar
We will calculate eight derating factors of a busbar step by step.
1. Bus Strip Derating Factors (K1)
Derating factor per phase of busbar:
Busbar Width (e) is 75 mm and Busbar Length per Phase (a) is 500 mm.
Number of busbars per phase is 2.
From the following table, the value of the derating factor K1 is 1.83.
Number of Bus Bar Strip per Phase (K1)
2. Insulator Material Derating Factor (K2)
Busbar doesn’t have insulating material. So we say it is “bare”, thus the derating factor is 1 from the table below.
3. Position Derating Factor (K3)
The position of our busbar is an Edge-mounted bar, thus the derating factor is 1 from the table below.
4. Installation Medium Derating Factor (K4)
Our installation for the busbar is non-ventilated ducting, thus the derating factor is 0.8 from the table below.
5. Artificial Ventilation Derating Factor (K5)
We don’t use artificial ventilation, thus the derating factor is 0.9 from the table below.
6. Enclosure and Ventilation Derating Factor (K6)
Busbar cross-section area per phase (A) = 2 × 75 × 10 = 1500 Sq.mm
Total busbar cross-section area for enclosure =
Here the size of the neutral bus is taken equal to the size of the phase bus.
Total busbar Area for Enclosure
Total enclosure Area
Total busbar Area for Enclosure / Total Enclosure Area = 9,000,000 / 1,728,000,000 ≈ 0.52%
Busbar artificial ventilation plan is without artificial ventilation so the derating factor is 0.95 from the table below.
7. Proxy Effect Derating Factor (K7)
Busbar Phase Spacing (p) is 400mm.
Busbar Width (e) is 75mm
Space between each bus of phase is 75mm
Hence, total bus length of phase with spacing (w) = 75 + 75 + 75 = 225 mm
Busbar phase spacing (p) / total bus length of phase with spacing (w) = 400 / 225 ≈ 1.78
From the table below, the derating factor is 0.82.
8. Altitude of Busbar Installation Derating Factor (K8)
We installed the busbar 2000 m above the ground so the derating factor is 0.88 based on the table below.
Total Derating Factor
After we get the eight derating factors, we multiply them all together.
Total derating factor K = K1 × K2 × K3 × K4 × K5 × K6 × K7 × K8 = 1.83 × 1 × 1 × 0.8 × 0.9 × 0.95 × 0.82 × 0.88 ≈ 0.90
Busbar Size Calculation Formula
Desired Current Rating of Busbar (I[2]) = 630 Amp
Current rating of busbar after derating factor (I[1]) = I[2] / K = 630 / 0.90 ≈ 697 Amp
Busbar Cross Section Area as per Current (A) = I[1] / current density = 697 / 1.6 ≈ 436 Sq.mm
Busbar Cross Section Area as per Short Circuit (A[sc]) = 626 Sq.mm
Select the higher busbar cross-section area between 436 Sq.mm and 626 Sq.mm.
Final Calculated Busbar Cross Section Area = 626 Sq.mm
Actual selected busbar size is 75 × 10 = 750 Sq.mm
We have selected 2 busbars per phase, hence:
Actual Busbar Cross Section Area per Phase (A[P]) = 2 × 750 = 1500 Sq.mm
The calculated busbar size is less than the actual busbar size, so the selection is adequate.
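The derating and sizing arithmetic above can be reproduced in a short script. The eight K factors are the table values quoted earlier; the 626 Sq.mm short-circuit area is taken as given from the article rather than recomputed:

```python
# K1..K8 derating factors from the tables quoted in the article.
derating_factors = [1.83, 1.0, 1.0, 0.8, 0.9, 0.95, 0.82, 0.88]

K = 1.0
for k in derating_factors:
    K *= k                      # total derating factor, about 0.90

I2 = 630.0                      # desired current rating (A)
I1 = I2 / K                     # rating needed before derating, about 697 A

current_density = 1.6           # A per Sq.mm for copper (from the text)
A_current = I1 / current_density    # about 436 Sq.mm
A_short_circuit = 626.0             # Sq.mm, taken as given from the article

# The required cross-section is the larger of the two criteria.
required = max(A_current, A_short_circuit)
print(round(K, 3), round(I1), round(A_current), required)
```

The selected 2 × (75 × 10 mm) bars per phase give 1500 Sq.mm, comfortably above the 626 Sq.mm requirement.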
Electromagnetic Forces Generated by Busbar when Short Circuit
Peak electromagnetic forces between phase conductors (F[1])
Total width of busbar per phase(w)=75+75+75=225mm =2.25cm
Busbar Phase to Phase Distance (d)=400+225=625mm=6.25cm
Actual Forces at the head of the supports or busbar (F)
Permissible strength of the insulator (F’) is 10 Kg/mm2.
Actual Forces at the head of the Supports or Bus Bar is less than Permissible Strength.
Forces on Insulation are within Limits.
Mechanical Strength of the Busbar
Mechanical strength of a busbar can be calculated using:
Since we have two bus strips per phase then our inertia modulus is 14.45.
The mechanical strength of our busbar works out to about 0.7 Kg/mm^2.
The allowable strength of a busbar is 12 Kg/mm^2.
Our actual busbar’s strength is still within allowable value.
Temperature Rise of the Busbar
The maximum temperature rise (T[1]) is 35^oC.
The calculated maximum temperature rise (T[2]) is about 30°C.
Calculated busbar temperature rise is less than specified maximum temperature rise.
Final Result
Size of the busbar = 2 busbars 75x10mm each Phase.
Number of feeders = 3.
Total number of busbar = 6 busbars 75x10mm for phase and 1 busbar 75x10mm for neutral.
Electromagnetic forces at the tip of the supports of busbar (F) = 3 Kg/mm^2.
Mechanical strength of the busbar = 0.7 Kg/mm^2.
Maximum temperature rise = 30^oC.
Earthing Busbar Size Calculation
Earth conductor size PE measured in Sq.mm can be calculated from:
I[fault] = fault current (A)
t(s) = operating time (s)
k = constant of the material
The constant of the material can be read from the list below:
Assume that we have to calculate an earthing busbar size for 20 kA fault current at 0.5s using GI material.
You could use a 50×3.5 mm or 25×8 mm busbar.
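The earthing formula can be sketched as follows. Note that k = 80 for GI is an assumed constant (the material-constant list did not survive extraction here); it roughly reproduces the suggested bar sizes:

```python
import math

# Adiabatic earthing-conductor sizing: PE = I_fault * sqrt(t) / k.
# k = 80 for GI is our assumption, not a value taken from the article's table.
def earth_conductor_area(i_fault_amps, t_seconds, k):
    return i_fault_amps * math.sqrt(t_seconds) / k

# 20 kA fault cleared in 0.5 s on a GI conductor.
area = earth_conductor_area(20_000, 0.5, 80)
print(round(area, 1))  # about 177 Sq.mm, so a 25 x 8 mm (200 Sq.mm) bar fits
```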
Calculate Busbar Size and Voltage Drop
Since we have done the busbar size calculation, we will skip to its voltage drop calculation.
And we need to remind you that we can’t calculate voltage without knowing the values of the current and resistance.
When you have those values, you can use the simple Ohm’s law. The voltage drop is equal to the I x R.
Where I is the current carried by the busbar and the R is the busbar’s resistance (aluminium or copper).
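A rough voltage-drop sketch using Ohm’s law follows. The copper resistivity and the 10 m run length are illustrative assumptions, not values from the article:

```python
# Busbar voltage drop via Ohm's law, V = I * R, with R = rho * L / A.
RHO_COPPER = 1.72e-8  # ohm-metre at 20 degrees C (assumed typical value)

def busbar_voltage_drop(current_a, length_m, area_sqmm, rho=RHO_COPPER):
    resistance = rho * length_m / (area_sqmm * 1e-6)  # convert Sq.mm to Sq.m
    return current_a * resistance

# 630 A through an assumed 10 m run of a 75 x 10 mm (750 Sq.mm) copper bar.
print(round(busbar_voltage_drop(630, 10, 750), 3))
```

As the article notes, busbars have low impedance, which is why the drop here comes out to only a fraction of a volt.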
Frequently Asked Questions
How do I choose a busbar size?
Using Thumb Rule for busbar:
A 1 Sq.mm of aluminium busbar can withstand 0.7 Amperes.
A 1 Sq.mm of copper busbar can withstand 1.2 Amperes.
How many amps is a busbar?
Assume we have an aluminium busbar with Width x Depth = 10 x 10 mm. It means this busbar has 100 Sq.mm with current capacity 100 x 0.7 A = 70 A.
How is busbar current calculated?
Using Thumb Rule for busbar:
A 1 Sq.mm of aluminium busbar can withstand 0.7 Amperes.
A 1 Sq.mm of copper busbar can withstand 1.2 Amperes.
What size is the copper busbar?
Copper busbar has current-carrying capacity of 1.2 Amperes per 1 Sq.mm.
5 thoughts on “Busbar Size Calculation Formula | Aluminium and Copper Examples”
1. Dear Sir ,
Writing Style is Really good an will be help full for any body. But few things could not be digested 1) How a Bus Length is considered 500 mm, where a panel length is 2.4/3.2 meters 2)
Orientation of Bus How you have considered 3) 630X0.9 = 697 really too much.
Similar two three parties gave the same idea , mainly from K1 to K8 & next calculations are made copy & paste better look into it otherwise smooth to read
Thanks & regards R. Roy I am not in any Social Media.
2. To all concerned,
How is per phase spacing calculated (p)? Where does the 400mm come from?
Also, for clarification can you please help with calculation Width x Length x Height, 1200 x 600 x 2400 = 1,728,000,000 sq mm. (mm x mm x mm = mm^3) or am I missing something?
Thank you
3. Dear sir please explain the thumb rule according to the norms of CPWD or indian electrical rule
4. from where have you got the value of 1.2amps/sqmm for copper
5. from where have you got the value of 1.2amps/sqmm for copper
Leave a Comment
Problem D
Some numbers are just, well, odd. For example, the number $3$ is odd, because it is not a multiple of two. Numbers that are a multiple of two are not odd, they are even. More precisely, if a number
$n$ can be expressed as $n = 2 \cdot k$ for some integer $k$, then $n$ is even. For example, $6 = 2 \cdot 3$ is even.
Some people get confused about whether numbers are odd or even. To see a common example, do an internet search for the query “is zero even or odd?” (Don’t search for this now! You have a problem to solve.)
Write a program to help these confused people.
Input begins with an integer $1 \leq n \leq 20$ on a line by itself, indicating the number of test cases that follow. Each of the following $n$ lines contain a test case consisting of a single
integer $-10 \leq x \leq 10$.
For each $x$, print either ‘$x$ is odd’ or ‘$x$ is even’ depending on whether $x$ is odd or even.
Sample Input 1
3
10
9
-5

Sample Output 1
10 is even
9 is odd
-5 is odd
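One minimal Python solution matching the format above (the function split is just for testability):

```python
def classify(x):
    # x % 2 is 0 for even integers and 1 for odd ones; Python's % returns a
    # non-negative result for a positive modulus, so -5 % 2 == 1 works too.
    return f"{x} is odd" if x % 2 else f"{x} is even"

def solve(tokens):
    n = int(tokens[0])
    return [classify(int(t)) for t in tokens[1:1 + n]]

# On the judge you would feed sys.stdin.read().split() to solve() and print
# the joined result; here we just run the sample case.
print("\n".join(solve(["3", "10", "9", "-5"])))
```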
Team for Advanced Flow Simulation and Modeling
For more information:
  tezduyar@gmail.com
Cerebral Aneurysm -- Variable Wall Thickness, High Blood Pressure
One of the major computational challenges in cardiovascular fluid mechanics is accurate modeling of the fluid-structure interaction (FSI) between the blood flow and arterial walls. The blood flow
depends on the arterial geometry, and the deformation of the arterial wall depends on the blood flow. The mathematical equations governing the blood flow and arterial deformations need to be solved
simultaneously, with proper kinematic and dynamic conditions coupling the two physical systems.
The arterial geometry used here (see Fig. 1) is a close approximation to the computed tomography (CT) model of a single-artery segment of the middle cerebral artery of a 57 year-old male with
aneurysm. The arterial wall (i.e. the structural mechanics part of the problem) is modeled with the membrane element.
The computations highlighted here are some of the earliest cardiovascular FSI computations carried out by the T*AFSM. The numerical methods used in these computations were introduced and implemented
on parallel computing platforms by the T*AFSM. The set of numerical methods introduced by the T*AFSM over the years and used in these computations includes the DSD/SST formulation [1-4], the
quasi-direct FSI method [5, 6], the stabilized space-time FSI (SSTFSI) technique [7], and special techniques for arterial FSI computations [8]. The CT model of the artery approximated in these
computations was reported in [9]. The inflow velocity used in the computations during the cardiac cycle is a close approximation to the one reported in [10], and is shown below. We consider normal
and high blood pressure profiles (NBP and HBP), which are also shown below, in combination with uniform and variable wall thicknesses (UWT and VWT). In VWT, the wall thickness for the aneurysm is 1/3
of the wall thickness for the rest of the artery segment. The computations were carried out on the ADA system at Rice University. For more details on these computations, see [8].
Fig. 1. Arterial geometry. For details, see [8].
Fig. 2. Inflow-velocity and blood-pressure profiles for the cardiac cycle. For details, see [8].
Fig. 3. Blood-flow patterns at an instant during the cardiac cycle, obtained from the NBP-UWT computation. For details, see [8].
Fig. 4. Blood-flow patterns at an instant during the cardiac cycle, obtained from the NBP-UWT computation. For details, see [8].
Fig. 5. Arterial shape at three different instants during the cardiac cycle, obtained from the NBP-UWT computation. For details, see [8].
Fig. 6. Deformation of the aneurysm for all four cases. The order of the cases from minimum to maximum deformation is NBP-UWT, HBP-UWT, NBP-VWT and HBP-VWT. For details, see [8].
1. T.E. Tezduyar, "Stabilized Finite Element Formulations for Incompressible Flow Computations", Advances in Applied Mechanics, 28 (1992) 1-44.
2. T.E. Tezduyar, M. Behr and J. Liou, "A New Strategy for Finite Element Computations Involving Moving Boundaries and Interfaces -- The Deforming-Spatial-Domain/Space-Time Procedure: I. The Concept
and the Preliminary Numerical Tests", Computer Methods in Applied Mechanics and Engineering, 94 (1992) 339-351.
3. T.E. Tezduyar, M. Behr, S. Mittal and J. Liou, "A New Strategy for Finite Element Computations Involving Moving Boundaries and Interfaces -- The Deforming-Spatial-Domain/Space-Time Procedure: II.
Computation of Free-surface Flows, Two-liquid Flows, and Flows with Drifting Cylinders", Computer Methods in Applied Mechanics and Engineering, 94 (1992) 353-371.
4. T.E. Tezduyar, "Computation of Moving Boundaries and Interfaces and Stabilization Parameters", International Journal for Numerical Methods in Fluids, 43 (2003) 555-575.
5. T.E. Tezduyar, S. Sathe, R. Keedy and K. Stein, "Space-Time Techniques for Finite Element Computation of Flows with Moving Boundaries and Interfaces", Proceedings of the III International Congress
on Numerical Methods in Engineering and Applied Sciences, Monterrey, Mexico, CD-ROM (2004).
6. T.E. Tezduyar, S. Sathe, R. Keedy and K. Stein, "Space-Time Finite Element Techniques for Computation of Fluid-Structure Interactions", Computer Methods in Applied Mechanics and Engineering, 195
(2006) 2002-2027.
7. T.E. Tezduyar and S. Sathe, "Modeling of Fluid-Structure Interactions with the Space-Time Finite Elements: Solution Techniques", International Journal for Numerical Methods in Fluids, 54 (2007)
8. T.E. Tezduyar, S. Sathe, T. Cragin, B. Nanna, B.S. Conklin, J. Pausewang and M. Schwaab, "Modeling of Fluid-Structure Interactions with the Space-Time Finite Elements: Arterial Fluid Mechanics",
International Journal for Numerical Methods in Fluids, 54 (2007) 901-922.
9. R. Torii, M. Oshima, T. Kobayashi, K. Takagi and T.E. Tezduyar, "Influence of Wall Elasticity in Patient-Specific Hemodynamic Simulations", Computers & Fluids, 36 (2007) 160-168.
10. R. Torii, M. Oshima, T. Kobayashi, K. Takagi and T.E. Tezduyar, "Computer Modeling of Cardiovascular Fluid-Structure Interactions with the Deforming-Spatial-Domain/Stabilized Space-Time
Formulation", Computer Methods in Applied Mechanics and Engineering, 195 (2006) 1885-1895.
Inclusion Probability is a Likelihood Ratio: Implications for DNA Mixtures
M.W. Perlin, "Inclusion probability is a likelihood ratio: Implications for DNA mixtures", Promega's Twenty First International Symposium on Human Identification, San Antonio, TX, 14-Oct-2010.
There has been much discussion recently amongst forensic scientists about the relative merits of inclusion and likelihood ratio (LR) methods for interpreting DNA mixtures. Advocates for the
probability of inclusion (PI; also termed CPI, RNME or CPE) approach contend that it is a simpler statistic that is easier to explain in court. LR enthusiasts rejoin that theirs is a more informative
method that preserves more of the identification information present in the DNA data. The debate implicitly assumes that there is some essential difference between PI and LR, suggesting that each
perspective should be understood and evaluated on its own merits.
In fact, there are many different LR statistics for DNA mixture interpretation. And PI happens to be just one of them. However, amongst all currently used LRs, the PI version does have a special
distinction - it is the least informative.
Recognizing that PI is just another LR has important consequences for forensic science practice.
1. The current PI vs. LR controversy can be finally put to rest.
2. Inclusion efficacy can be measured in terms of how well it preserves the data's identification information. The logarithm of the LR is a standard information measure, and PI is a LR, so this
assessment is easily accomplished.
3. The inclusion method can be supported in court based on its scientific status as a valid LR.
4. The PI statistic can be better understood through the inclusion likelihood function used in its LR construction.
5. The relevance of PI can be challenged on particular DNA evidence by examining the appropriateness of its (inclusion likelihood) modeling assumptions for that data.
In the paper, we show by construction that PI is a LR. We first describe the inclusion likelihood function, and see how it naturally explains binary allele data. We next use Bayes theorem to form the
inclusion genotype, represented by its probability mass function (pmf). Using an easily understood form of the LR (genotype probability gain), we then insert the inclusion genotype pmf into this LR
expression to obtain the standard PI statistic. Having thus derived the PI as a LR, we then discuss what this result means for DNA mixture interpretation.
The talk visually explains the underlying concepts to the forensic practitioner, and uses no mathematics besides basic probability. We focus primarily on the forensic science implications of PI
actually being a LR. This proper scientific foundation for PI may invite a re-examination of some prevalent DNA mixture interpretation practices.
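As a purely illustrative sketch (not the construction used in the paper), here is the textbook single-locus probability of inclusion and the corresponding LR = 1/PI for an included contributor, with made-up allele frequencies:

```python
# Illustrative only: the standard single-locus probability of inclusion (PI),
# i.e. the chance a random person's genotype uses only the observed mixture
# alleles, and the LR form 1/PI for an included contributor. The paper's own
# derivation goes through an inclusion likelihood function and Bayes theorem.
def probability_of_inclusion(allele_freqs):
    return sum(allele_freqs) ** 2

freqs = [0.10, 0.20, 0.30]            # made-up population frequencies
pi = probability_of_inclusion(freqs)  # (0.6)^2 = 0.36
lr = 1.0 / pi                          # about 2.78
print(pi, round(lr, 2))
```

The log of this LR is the information measure the abstract refers to, which is how inclusion efficacy can be compared against other LR statistics.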
About How Many People Moved Away From The Great Plains States During The Depression? (2024)
Answer 1
10 million
i just need this right
Answer 2
About how many people moved away from the Great Plains states during the Depression?
100,000
250,000
2.5 million <<< CORRECT
10 million
December 2021 EDGE TEST
Related Questions
make/compose a poem related to supersaturated solution
there is a whole bunch of solutions that we need to solve, do you even know how molecules dissolve
what is the correct way to write the fraction 5/6 in words
The answer is five-sixths.
enter the simplified form of the complex fraction in the box. assume no denominator equals zero. 2x−1+1x8x
just took the test
The boxed complex fraction's abbreviated form. Assume that no numerator is 0. 3x-1/8 = 2x1+1x8x (x-1)
What is a numerator?
The portion of a fraction above the line that designates the number to be divided by another number below the line is called the numerator. The numerator in the example given below is the number that
is above the line, so in the fraction 1/8, the numerator is 1 and the denominator is 8.
The simplified form of the complex fraction in the box. assume no denominator equals zero. 2x−1+1x8x can be The boxed complex fraction's abbreviated form. Assume that no numerator is 0. 3x-1/8 =
2x1+1x8x (x-1)
Therefore, The boxed complex fraction's abbreviated form. Assume that no numerator is 0. 3x-1/8 = 2x1+1x8x (x-1)
Learn more about numerators here:
Who was the Spanish conqueror that overthrew the Aztec Empire? Question 3 options: Cortés Pizarro Moctezuma Díaz
the Spanish conqueror that overthrew the Aztec Empire is Cortés
In reviewing your summary account activity (#1), you notice fees charged of $69.45. This fee was assessed on your account for the following reasons except….
Based on the amount and card type, it is very unlikely that the fee in question was a d. Annual fee.
The fee in question could have been:
A late payment fee charged for not paying for something on time A balance transfer fee for moving debt from one card to another Cash advance fee when you ask for an advance on the card
Annual fees are not available for all cards as only select cards have them and these are usually cards that offer a wide range of services. It is therefore unlikely that this is an annual fee.
In conclusion, the fee above is most likely not an annual fee.
Options for this question include:
The options include:
a. Late Payment Fee
b. Balance Transfer Fee
c. Cash Advance Fee
d. Annual Fee
Find out more at https://brainly.com/question/23536165.
Which of the following was not a criticism made of the masons?.
Hello, user, which following are you speaking about?
ernesto y su familia viven en las afueras de chicago.
Ernesto and his family live on the outskirts of Chicago.
Which of the following statements is TRUE about heart rate and exercise? A. It is best to exercise your heart as hard as possible. B. Maximum heart rate can be calculated by taking 220 minus your
age. C. Your recommended exercise heart rate is 30% of your maximum heart rate. D. All of the above are true statements. Please select the best answer from the choices provided. A B C D
Answer: maximum heart rate can be calculated by taking 220 minus your age.
Did the assignment
Maximum heart rate can be calculated by taking 220 minus your age is true about heart rate and exercise. Hence, option B is correct.
What is heart rate?
The quantity of heartbeats in a given period of time, often one minute. The heartbeat can be felt in various places on the body when an artery is close to the skin, including the wrist, side of the
neck, back of the knees, top of the foot, groin, and others.
Adults' resting heart rates normally range from 60 to 100 beats per minute. A lower resting heart rate often denotes better cardiac function and cardiovascular health. For instance, a well-trained
athlete may have a typical resting heart rate closer to 40 beats per minute.
Your heart rate or pulse rate is the number of times your heart beats each minute. A normal resting heart rate should be.
Thus, option B is correct.
For more information about heart rate, click here:
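As a quick sketch of the formula in the correct option above (the commonly cited 220-minus-age estimate, not a clinical measurement), in Python:

```python
def max_heart_rate(age: int) -> int:
    """Commonly cited estimate of maximum heart rate: 220 minus age."""
    return 220 - age

print(max_heart_rate(30))  # 190 beats per minute
```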
Hormones help control mood, growth and development, the way our organs work, metabolism, and reproduction Question 2 options: a) True b) False.
True or False Uruguay has one of Latin America's highest adult literacy rates.
i got 100
Uruguay has one of Latin America's highest adult literacy rates. The statement is True.
What is Literacy rate?
The adult literacy rate measures the number of adults, defined as those aged 15 and older, who can read and write while interpreting a simple, straightforward statement about their daily lives.
A country's economic prosperity increases with its literacy rate. Lower literacy rates correlate with lower GDP, and higher education levels and greater specialization are indicators of a nation's development.
The literacy rate of Uruguay is 98.62%, and it rises yearly, according to UNESCO. Compared to men, more women are literate in the country showing tremendous growth and development.
Therefore, the statement is True that Uruguay has one of Latin America's highest adult literacy rates.
Learn more about the rate of literacy, here:
If z1 = 2+3i and z2 = -5i +9, find re (z1 + z2) is.
Given z₁ = 2 + 3i and z₂ = 9 - 5i, we have
z₁ + z₂ = (2 + 3i) + (9 - 5i) = 11 - 2i
so that
Re(z₁ + z₂) = 11
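The arithmetic can be checked with Python's built-in complex type (Python writes the imaginary unit as j):

```python
z1 = 2 + 3j
z2 = 9 - 5j  # -5i + 9 written in standard form
total = z1 + z2

print(total)       # (11-2j)
print(total.real)  # 11.0
```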
To make a 0.250 m solution, one could take 0.250 moles of solute and add.
Answer: 1.00 kg of solvent.
Explanation: b/c m = moles of solute/kg of solvent
To make a 0.250 m solution, one could take 0.250 moles of solute and add 1 kg of solvent.
What is solution?
Any mixture of one or more solutes that have been dissolved in a solvent is referred to as a solution. The material known as a solute dissolves in a solvent to form a homogenous mixture.
A solute and a solvent make up a solution. The thing that dissolves in the solvent is called the solute. Solubility is the measure of a solute's ability to dissolve in a solvent.
The chemicals present in lesser concentrations are solutes, whilst the substances present in greater abundance are the solvent in solutions with components in the same phase.
Therefore, to make a 0.250 m solution, one could take 0.250 moles of solute and add 1 kg of solvent.
To learn more about solution, refer to the link:
which hybrid orbitals overlap to form the sigma bond between oxygen-1 and nitrogen-2?
sp³–sp³
They both have four electron domains, and four domains corresponds to sp³ hybridization.
the british crown’s response to actions like those in the excerpt was to
The British Crown's response was to declare the American colonies to be in open rebellion.
how will you make a cheerdance routine in a solo performance explain further brainly
To make a cheer dance routine in a solo performance would still do the usual sections a group routine has, without the ones that are done with a partner. For example, I would have an opening to show
my abilities, then I would do a standing tumbling sequence and finally different jumps. I would mix all these sequences with different dance moves.
In this exercise, you have to explain how you would make a cheer dance routine in a solo performance. It's important to be confident because you're the only one who is going to be performing and that
implies more pressure. When you make a group performance, you can do different and exciting sequences (like a pyramid) but when you do a solo performance you have to give your best.
Answer the following question in 1-2 complete sentences. How did the use of printmaking change the world of art?
The creation of printmaking made mass producing art feasible. it's also made art more affordable because it no longer took months to create a single artwork.
Hope this helps! :)
What is the speed of the anther sac as it releases its pollen?.
A short quiz has two true-false questions and one multiple-choice question with four choices. A student guesses at each question. Assuming the choices are all equally likely, what is the probability
that the student gets all three correct?.
1/16 chance
There are 2 true-or-false questions (2 choices each) and one multiple-choice question with four choices, so
2 × 2 × 4 = 16 possible, equally likely outcomes, and only one of them gets all three right.
I think it is 1/16 or 0.0625
0.5 × 0.5 × 0.25 = 0.0625
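The counting argument can be verified by enumerating every equally likely combination of guesses (the answer-key labels below are arbitrary):

```python
from itertools import product

tf = [True, False]         # choices for each true/false question
mc = ["A", "B", "C", "D"]  # choices for the multiple-choice question
outcomes = list(product(tf, tf, mc))

key = (True, False, "C")   # an arbitrary answer key
p = sum(1 for o in outcomes if o == key) / len(outcomes)

print(len(outcomes), p)    # 16 0.0625
```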
a rock is thrown straight up with an initial velocity of 21.0 m/s. ignore energy lost to air friction. how high will the rock rise?
The rock will rise about 22.5 m.
From the question we are told: a rock is thrown straight up with an initial velocity of 21.0 m/s, and energy lost to air friction is ignored.
At the top of the flight the velocity is zero, so from v² = u² − 2gh the maximum height is
h = u² / (2g) = (21.0 m/s)² / (2 × 9.8 m/s²) ≈ 22.5 m
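The same result in code, from the kinematic relation h = u² / (2g) (a sketch taking g ≈ 9.8 m/s² and ignoring air friction, as the question states):

```python
def max_height(u: float, g: float = 9.8) -> float:
    """Maximum rise of an object thrown straight up: h = u**2 / (2 * g)."""
    return u ** 2 / (2 * g)

print(round(max_height(21.0), 2))  # 22.5 (metres)
```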
which cyber protection condition establishes a protection priority
The Cyberspace Protection Conditions (CPCON) process was set up to state, establish, and communicate protection means so as to make sure of unity of effort across the DoD.
The cyber protection condition which establishes a protection priority focus on critical and essential functions only is CPCON 2.
There are five steps in CPCON. they are:
CPCON 1 (Very High: Critical Functions)CPCON 2 (High: Critical and Essential Functions)CPCON 3 (Medium: Critical, Essential, and Support Functions)CPCON 4 (Low: All Functions)CPCON 5 (Very Low: All
CPCON is known to be dynamic and systematic in its approach to escalation and de-escalation of cyber protection postures.
Learn more from
Gene paid a deposit on a leased car. The deposit earns 2.8 percent simple annual interest. At the end of the year, the interest that is earned is $22.40. What was the amount of the original deposit?
With simple interest I = P × r, the original deposit is P = I / r = $22.40 / 0.028 = $800.
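The deposit can be recovered from the simple-interest relation I = P × r × t with t = 1 year (a minimal sketch):

```python
def principal_from_interest(interest: float, rate: float, years: float = 1.0) -> float:
    """Solve I = P * r * t for the principal P."""
    return interest / (rate * years)

print(round(principal_from_interest(22.40, 0.028), 2))  # 800.0
```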
What bargain did many Americans suspect had been made between presidents Nixon and Ford? that Ford had taken the blame for Watergate in return for becoming president that Ford had served out Nixon’s
term in return for a term of his own that Nixon had resigned with the understanding that Ford would pardon him that Nixon’s impeachment had resulted in Ford’s assumption of the presidency.
The bargain many Americans suspect had been made between Presidents Nixon and Ford was: C. that Nixon had resigned with the understanding that Ford would pardon him.
Who is President Nixon?
Richard Milhous Nixon was born on the 9th of January, 1913 in Yorba Linda, California, United States of America. He was elected and served as the 37th president of the United States of America
between 1969 to 1974.
President Nixon was a member of the Republican Party and he promised "Peace with Honor" to the people of America (Americans).
During his tenure, there was a bargain between him and President Ford which many Americans suspected had been made with the thinking that Nixon had resigned with the understanding that Ford would
pardon him.
Read more on Nixon here: https://brainly.com/question/26476498
it's c
Which details support the central idea that people in power use lies and deceit to control others? Check all that apply. "‘Comrades!’ he cried." "You do not imagine, I hope, that we pigs are doing
this in a spirit of selfishness and privilege?" "We pigs are brainworkers." "Day and night we are watching over your welfare." "It is for YOUR sake that we drink that milk and eat those apples.
B, D, E
The details that support the central idea that people in power use lies and deceit to control others are;
"You do not imagine, I hope, that we pigs are doing this in a spirit of selfishness and privilege?" "Day and night we are watching over your welfare." "It is for YOUR sake that we drink that milk and
eat those apples.
What is an central idea?
Central idea can be regarded as the main point that talks about the mind of the author.
Therefore, as he says " You do not imagine, I hope, that we pigs are doing this in a spirit of selfishness and privilege?" express the central idea of the passage.
Learn more about central idea at;
Which country had the greatest increase in immigration during the years 1881-1890?.
Germany would be the correct answer because Germany was in the millions while others where still in the thousands
Consider the following incomplete deposit ticket: A deposit ticket. The amounts deposited were 520 dollars and 15 cents, 57 dollars and 68 cents, 40 dollars and 25 cents. The cash received was 213
dollars and 51 cents. The total deposited is 498 dollars and 1 cent. How much did Kay deposit in cash and coins? a. $404.57 b. $213.51 c. $93.44 d. $284.50 Please select the best answer from the
choices provided A B C D
The net amount that Kay deposited in cash and coins is: D: $284.50.
What is the net amount deposited?
The formula for net amount deposited is;
Net Amount Deposited = total deposited - the cash received.
We are given;
The total deposited = 498 dollars and 1 cent = $498.01
The cash received = 213 dollars and 51 cents = $213.51
Net amount deposited = 498.01 − 213.51
Net amount deposited = $284.50
Read more about Net amount deposited at; https://brainly.com/question/13449567
mary picks 15 flowers from her garden. if 3 out of 5 of the flowers are red, how many red flowers does mary pick?
Answer: Mary picks 9 red flowers.
3/5 = x/15
First cross-multiply: 5 × x = 3 × 15, which gives 5x = 45.
5x/5 = 45/5
Then divide 45 by 5 to get 9.
x = 9 red flowers
which of the following describes a key difference between arbitration and mediation
In arbitration, the arbitrator hears evidence and has the power to make a decision while in mediation, a third party is brought in to help resolve a conflict and not make decisions.
A mediator is used negotiate a settlement in which all parties benefit while an arbitrator is like a judge.
The difference between arbitration and mediation is that in arbitration, the arbitrator hears evidence and has the power to make a decision while in mediation, a third party is brought in to help
resolve a conflict and not make decisions.
Find out more at: https://brainly.com/question/22233883
The arbitrator has the power to render a binding decision
A solution is prepared by dissolving 40. 0 g of sucrose, c12h22o11, in 250. G of water at 25°c. What is the vapor pressure of the solution if the vapor pressure of water at 25°c is 23. 76 mm hg?.
By Raoult's law, the vapor pressure of the solution is P = x(water) × P°(water).
Moles of sucrose = 40.0 g ÷ 342.3 g/mol ≈ 0.117 mol; moles of water = 250 g ÷ 18.02 g/mol ≈ 13.87 mol.
So x(water) = 13.87 / (13.87 + 0.117) ≈ 0.9917, and P ≈ 0.9917 × 23.76 mm Hg ≈ 23.56 mm Hg.
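A minimal Raoult's-law sketch of this calculation (the molar masses below are standard values assumed for the worked numbers):

```python
def solution_vapor_pressure(m_solute, mm_solute, m_solvent, mm_solvent, p_pure):
    """Raoult's law for a non-volatile solute: P = x_solvent * P_pure."""
    n_solute = m_solute / mm_solute      # moles of solute
    n_solvent = m_solvent / mm_solvent   # moles of solvent
    x_solvent = n_solvent / (n_solvent + n_solute)
    return x_solvent * p_pure

# 40.0 g sucrose (342.3 g/mol) in 250 g water (18.02 g/mol) at 25 degrees C
p = solution_vapor_pressure(40.0, 342.3, 250.0, 18.02, 23.76)
print(round(p, 2))  # 23.56 (mm Hg)
```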
using v = lwh, what is an expression for the volume of the following rectangular prism?
The answer would be A.) 15/2x
A: 15/2x
I just did the Quiz on EDGE2020 and it's 200% correct!
Also, heart and rate if you found this answer helpful!! :) (P.S It makes me feel good to know I helped someone today!!) :)
Coefficient of Variation vs. Standard Deviation: The Difference
by Tutor Aspire
The standard deviation of a dataset is a way to measure how far the average value lies from the mean.
To find the standard deviation of a given sample, we can use the following formula:
s = √(Σ(x[i] – x)^2 / (n-1))
• Σ: A symbol that means “sum”
• x[i]: The value of the i^th observation in the sample
• x: The mean of the sample
• n: The sample size
The higher the value for the standard deviation, the more spread out the values are in a sample. However, it’s hard to say if a given value for a standard deviation is “high” or “low” because it
depends on the type of data we’re working with.
For example, a standard deviation of 500 may be considered low if we’re talking about annual income of residents in a certain city. Conversely, a standard deviation of 50 may be considered high if
we’re talking about exam scores of students on a certain test.
One way to understand whether or not a certain value for the standard deviation is high or low is to find the coefficient of variation, which is calculated as:
CV = s / x
• s: The sample standard deviation
• x: The sample mean
In simple terms, the coefficient of variation is the ratio between the standard deviation and the mean.
The higher the coefficient of variation, the higher the standard deviation of a sample relative to the mean.
Example: Calculating the Standard Deviation & Coefficient of Variation
Suppose we have the following dataset:
Dataset: 1, 4, 8, 11, 13, 17, 19, 19, 20, 23, 24, 24, 25, 28, 29, 31, 32
Using a calculator, we can find the following metrics for this dataset:
• Sample mean (x): 19.29
• Sample standard deviation (s): 9.25
We can then use these values to calculate the coefficient of variation:
• CV = s / x
• CV = 9.25 / 19.29
• CV = 0.48
Both the standard deviation and the coefficient of variation are useful to know for this dataset.
The standard deviation tells us that the typical value in this dataset lies 9.25 units away from the mean. The coefficient of variation then tells us that the standard deviation is about half the
size of the sample mean.
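The numbers in this example can be reproduced with Python's standard library; statistics.stdev already uses the sample formula with n − 1 in the denominator:

```python
import statistics

data = [1, 4, 8, 11, 13, 17, 19, 19, 20, 23, 24, 24, 25, 28, 29, 31, 32]

mean = statistics.mean(data)
s = statistics.stdev(data)  # sample standard deviation (divides by n - 1)
cv = s / mean               # coefficient of variation

print(round(mean, 2), round(s, 2), round(cv, 2))  # 19.29 9.25 0.48
```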
Standard Deviation vs. Coefficient of Variation: When to Use Each
The standard deviation is most commonly used when we want to know the spread of values in a single dataset.
However, the coefficient of variation is more commonly used when we want to compare the variation between two datasets.
For example, in finance the coefficient of variation is used to compare the mean expected return of an investment relative to the expected standard deviation of the investment.
For example, suppose an investor is considering investing in the following two mutual funds:
Mutual Fund A: mean = 9%, standard deviation = 12.4%
Mutual Fund B: mean = 5%, standard deviation = 8.2%
The investor can calculate the coefficient of variation for each fund:
• CV for Mutual Fund A = 12.4% / 9% = 1.38
• CV for Mutual Fund B = 8.2% / 5% = 1.64
Since Mutual Fund A has a lower coefficient of variation, it offers a better mean return relative to the standard deviation.
Here’s a brief summary of the main points in this article:
• Both the standard deviation and the coefficient of variation measure the spread of values in a dataset.
• The standard deviation measures how far the average value lies from the mean.
• The coefficient of variation measures the ratio of the standard deviation to the mean.
• The standard deviation is used more often when we want to measure the spread of values in a single dataset.
• The coefficient of variation is used more often when we want to compare the variation between two different datasets.
Additional Resources
How to Calculate the Mean and Standard Deviation in Excel
How to Calculate the Coefficient of Variation in Excel
ML Aggarwal Class 8 Solutions for ICSE Maths Chapter 13 Understanding Quadrilaterals Ex 13.1
Question 1.
Some figures are given below.
Classify each of them on the basis of the following:
(a) Simple curve
(b) Simple closed curve
(c) Polygon
(d) Convex polygon
(e) Concave polygon
Question 2.
How many diagonals does each of the following have?
(a) A convex quadrilateral
(b) A regular hexagon
(a) A convex quadrilateral: It has two diagonals.
(b) A regular hexagon: It has 9 diagonals as shown.
Question 3.
Find the sum of measures of all interior angles of a polygon with the number of sides:
(i) 8
(ii) 12
Question 4.
Find the number of sides of a regular polygon whose each exterior angles has a measure of
(i) 24°
(ii) 60°
(iii) 72°
Question 5.
Find the number of sides of a regular polygon if each of its interior angles is
(i) 90°
(ii) 108°
(iii) 165°
Question 6.
Find the number of sides in a polygon if the sum of its interior angles is:
(i) 1260°
(ii) 1980°
(iii) 3420°
Question 7.
If the angles of a pentagon are in the ratio 7 : 8 : 11 : 13 : 15, find the angles.
Question 8.
The angles of a pentagon are x°, (x – 10)°, (x + 20)°, (2x – 44)° and (2x – 70)°. Calculate x.
Question 9.
The exterior angles of a pentagon are in ratio 1 : 2 : 3 : 4 : 5. Find all the interior angles of the pentagon.
Question 10.
In a quadrilateral ABCD, AB || DC. If ∠A : ∠D = 2 : 3 and ∠B : ∠C = 7 : 8, find the measure of each angle.
Question 11.
From the adjoining figure, find
(i) x
(ii) ∠DAB
(iii) ∠ADB
Question 12.
Find the angle measure x in the following figures:
Question 13.
(i) In the given figure, find x + y + z.
(ii) In the given figure, find x + y + z + w.
Question 14.
A heptagon has three equal angles each of 120° and four equal angles. Find the size of equal angles.
Question 15.
The ratio between an exterior angle and the interior angle of a regular polygon is 1 : 5. Find
(i) the measure of each exterior angle
(ii) the measure of each interior angle
(iii) the number of sides in the polygon.
Question 16.
Each interior angle of a regular polygon is double of its exterior angle. Find the number of sides in the polygon.
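All of the questions above reduce to two identities for an n-sided polygon: the interior angles sum to (n − 2) × 180°, and the exterior angles of any convex polygon sum to 360° (so each exterior angle of a regular polygon is 360°/n). A small sketch:

```python
def interior_angle_sum(n: int) -> int:
    """Sum of the interior angles of an n-sided polygon, in degrees."""
    return (n - 2) * 180

def sides_from_exterior_angle(exterior: float) -> float:
    """Regular polygon: each exterior angle is 360/n, so n = 360/exterior."""
    return 360 / exterior

def sides_from_interior_angle(interior: float) -> float:
    """Regular polygon: exterior angle = 180 - interior angle."""
    return 360 / (180 - interior)

print(sides_from_exterior_angle(24))   # 15.0 -> Question 4 (i)
print(sides_from_interior_angle(165))  # 24.0 -> Question 5 (iii)
print(interior_angle_sum(9))           # 1260 -> matches Question 6 (i)
```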
What Is ROI (Return On Investment)? How To Calculate ROI
What is ROI (Return On Investment)?
ROI (Return on investment) is a measurement parameter to evaluate the effectiveness of a trade. It is often represented in the form of a percentage or a ratio. Traders can use ROI to simply and
quickly determine whether a trade is profitable or not.
A positive ROI indicates that the trade is producing returns, while a negative one suggests losses. A good trader would likely maintain a high ROI and mitigate losses by closing the position if a
trade has an overwhelming negative ROI.
ROI is an extremely useful tool to track crypto trades, especially the ones with leverage. A lot of traders use ROI as an indicator of when to take profits and when to stop losses.
For example, an ROI of 100% means your trade has doubled in value, and it should be closed to maintain profitability. On the other hand, an ROI of -20% may be incredibly alarming, especially for a
leverage position - with a 5x leverage, this would already result in liquidation.
How to calculate ROI
ROI can be calculated by using the following formula:
ROI = Net profit / Cost of investment
In which:
Net profit = Current investment value - Initial investment value
You can also add a multiplication of 100 at the end of the formula to get the percentage value.
Let’s put it in an example. Say that Alice invested in Bitcoin (BTC) with $1,000 at the price of $40,000. If the price of Bitcoin rises to $44,000, Alice’s investment is now worth $1,100. The ROI of
her trade can be calculated as:
(1,100 - 1,000) / 1,000 = 0.1
If you want to calculate it in the percentage form, then multiply the above result with 100:
0.1 * 100 = 10 (%)
An ROI of 10% indicates that Alice is earning a profit of 10% from her investment, which equals to $100. Now let’s try to complicate the problem by adding a 10x leverage into Alice’s investment.
The value of Alice’s investment now becomes $10,000. After the increase in Bitcoin’s price, her trade is worth $11,000. The ROI of her trade with a 10x leverage can be calculated as:
(11,000 - 10,000) / 1,000 = 1
If you want to calculate it in the percentage form, then multiply the above result with 100:
1 * 100 = 100 (%)
As you can see, with a 10x leverage, Alice’s investment has increased 10 times in profit: from 10% to 100%. This makes total sense and should give you a comprehensive overview of how to calculate
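Alice's two scenarios can be sketched as follows (a minimal illustration of the formula, not trading advice):

```python
def roi(cost: float, current_value: float) -> float:
    """ROI = net profit / cost of investment, as a fraction."""
    return (current_value - cost) / cost

# Unleveraged: Alice's $1,000 grows to $1,100 after a 10% price rise
print(roi(1_000, 1_100))  # 0.1 -> 10%

# 10x leverage: the $10,000 position grows to $11,000, but the
# $1,000 profit is measured against Alice's own $1,000 margin
print((11_000 - 10_000) / 1_000)  # 1.0 -> 100%
```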
For those who just want a shortcut, you can use this online tool to calculate ROI: www.calculator.net/roi-calculator.html
How to determine a good ROI
A good ROI varies between traders based on one’s preferences and investing strategy. For instance, some traders want to take profits with an ROI of 20 to 50%, while others would want to close their
positions only with an ROI of 100% or higher. Therefore, it is difficult to determine a good ROI: this depends on you individually.
Nevertheless, there is one certain thing when it comes to determining a good ROI: the ROI has to be positive. As mentioned above, a positive ROI means a profitable trade, while a negative one
indicates losses. Therefore, it can be safely said that a positive ROI is a good ROI.
Limitations of ROI
ROI is a useful tool for investors to help them improve their trading efficiency. It can be calculated easily and quickly, giving traders the opportunity to make decisions on their trades
immediately. Nevertheless, as a simple indicator, ROI has a few limitations:
• It completely disregards the value of time. The duration of a trade is not considered in ROI. For that reason, a high ROI does not necessarily mean a good trade: If a 20% ROI can be reached
within a short period of time, it indicates a decent trade. For longer durations like years, it might not be as efficient.
• Other factors may affect ROI. This is especially true for low-cap assets in DeFi. In some cases, your ROI may be pre-calculated to be 100%, but your realized profits may not be able to maintain
such a number. Due to other factors like price slippage, lack of liquidity,... the actual profits can be much smaller than the calculated ROI.
• For long-term investments, inflation also needs to be considered. In crypto, this might not be as relevant. Nevertheless, in traditional financial markets, a 10-year investment can be heavily
impacted by inflation, hence reducing the actual profits. ROI does not acknowledge the influence of inflation.
FAQs about ROI
Return On Investment (ROI) vs. Rate Of Return (ROR)
Return On Investment (ROI) is closely related to Rate of Return (ROR). In fact, the only difference is that Rate of Return takes the matter of time into account, and is often calculated on a yearly basis.
As a result, ROR may be more efficient than ROI as a measurement parameter for investment since time is also an important factor, as mentioned above. In some cases, ROR can be equal to ROI.
Does crypto have the highest ROI?
Even though we cannot say or guarantee that the crypto market has the highest ROI compared to others, it has been able to maintain an incredible ROI rate. This is an infographic made by our team,
showing how profitable investing in Bitcoin on a monthly basis is.
The monthly ROI of Bitcoin
Do high ROIs mean high risk?
“High risk, high reward” is an exemplary quote for this question. Even though high-risk investments usually return high profits, it does not necessarily mean vice versa. Some of the safest investment
choices can bring even higher profits than the risky ones.
Return on investment (ROI) is a measurement parameter to evaluate the effectiveness of a trade. It is often represented in the form of a percentage or a ratio. Traders can use ROI to simply and
quickly determine whether a trade is profitable or not.
ROI can be calculated by using the following formula: ROI = Net profit / Cost of investment. You can also add a multiplication of 100 at the end of the formula to get the percentage value.
ROI is a useful tool for investors to help them improve their trading efficiency. It can be calculated easily and quickly, giving traders the opportunity to make decisions on their trades
immediately. Nevertheless, as a simple indicator, ROI has a few limitations. Other methods, like Rate Of Return (ROR), can also be used as an alternative for better efficiency in some cases.
Introduction: Hypothesis Test for a Population Proportion
What you’ll learn to do: Conduct a hypothesis test for a population proportion.
• Recognize when a situation calls for testing a hypothesis about a population proportion.
• Conduct a hypothesis test for a population proportion. State a conclusion in context.
• Interpret the P-value as a conditional probability in the context of a hypothesis test about a population proportion.
• Distinguish statistical significance from practical importance.
What is the difference between one way slab and two way slab
In this topic we discuss the difference between a one way slab and a two way slab, and what each type of slab is.
Let us discuss what a one way slab and a two way slab are. In a structural system the slab is a heavy concrete element that resists the gravity loads acting on it and transfers them to the brick walls and beams, from the beams to the columns, from the columns to the footing of the foundation, and from the foundation to the soil bed.
In civil construction work slabs are provided over the roofs of buildings, bridges, floors, chhajjas (sunshades) and verandas. A slab is a horizontal flat surface which is supported by brick walls, beams and columns.
Difference between one way slab and two way slab
What is one way slab?
● One way slab: a slab which is supported by beams on two opposite sides only, so it spans in a single direction. That is why a one way slab bends in one direction, and the loads acting on it (dead load and live load) are distributed to the two opposite supporting sides only.
If the ratio of the longer span to the shorter span is equal to or greater than 2, we adopt a one way slab. It bends in one direction, along the shorter span.
A one way slab usually has two parallel beams which carry the load imposed on them by the slab.
Reinforcement used in one way slab
As we know, there are two types of bars used in a slab: one is the main bar, also known as the crank bar, and the second is the distribution bar.
In a one way slab the main (crank) bar is used only in one direction, along the shorter span, at the bottom of the slab, and the distribution bar is a straight bar placed over the main bar along the longer span.
What is a two way RCC slab?
● Two way slab: a slab supported by beams on all four sides, which carries load in both directions and distributes it equally to all four sides.
That is why a two way slab bends in both directions, and resistance to the gravity loads (dead load and live load) acting on it is provided by the beams on all four sides.
In a two way slab the ratio of the longer span to the shorter span is less than 2.
reinforcement used in two way rcc slab
In a two-way slab, the main (crank) bars are placed at the bottom of the slab in both directions, along the longer span as well as the shorter span, while the distribution bars are straight bars placed over the main bars at the top of the slab.
What is the difference between a one-way slab and a two-way slab?
1) A one-way slab is supported by beams on the two opposite sides only, while a two-way slab is supported by beams on all four sides.
2) In a one-way slab, the main (crank) bars run along the shorter span only, in a single direction; in a two-way slab, the main bars run in both the longer and shorter spans.
3) In a one-way slab the load is carried in one direction only, but in a two-way slab the load is carried to all four sides.
4) In a one-way slab the ratio of the longer span to the shorter span is equal to or greater than 2; in a two-way slab that ratio is less than 2.
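The span-ratio rule above can be captured in a few lines of code. This is a sketch of the classification rule only, not a structural design calculation:

```python
def classify_slab(longer_span, shorter_span):
    """Classify an RCC slab by the span-ratio rule described above.

    Ratio (longer/shorter) >= 2 -> one-way slab (bends in the short direction)
    Ratio (longer/shorter) <  2 -> two-way slab (bends in both directions)
    """
    if shorter_span <= 0 or longer_span < shorter_span:
        raise ValueError("spans must be positive, with longer_span >= shorter_span")
    ratio = longer_span / shorter_span
    return ("one-way" if ratio >= 2 else "two-way"), ratio

# Example: a 6 m x 2.5 m panel has ratio 2.4, so it is a one-way slab.
kind, r = classify_slab(6.0, 2.5)
print(kind, round(r, 2))  # one-way 2.4
```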
Review of Mercury 18
The Mercury 18 is a small sailboat designed by the naval architect Ernest Nunes in the late thirties.
The hull is made of fibreglass. Generally, a fibreglass hull requires only a minimum of maintenance during the sailing season; outside the sailing season, just bottom cleaning and perhaps anti-fouling painting once a year - a few hours of work, that's all.
Some boats of this type have a hull made of plywood.
The boat is equipped with a masthead rig. The advantage of a masthead rig is its simplicity and the fact that a given sail area, compared with a fractional rig, can be carried lower and thus with less heeling moment.
The Mercury 18 is equipped with a long keel. A long keel provides better directional stability than a similar boat with a fin keel; on the other hand, better directional stability also means that the boat is more difficult to handle in a harbour with little space.
The keel is made of lead. Compared with iron, lead has the advantage of being 44% heavier, which allows a smaller keel and hence less water resistance and higher speed.
The boat can enter even shallow marinas, as the draft is just about 0.94 - 1.04 meter (3.08 - 3.38 ft), depending on the load. See immersion rate below.
Sailing characteristics
This section covers widely used rules of thumb to describe the sailing characteristics. Please note that even though the calculations are correct, the interpretation of the results might not be valid
for extreme boats.
Stability and Safety
Theoretical Maximum Hull Speed
The theoretical maximum speed of a displacement boat of this length is 4.8 knots. The term "Theoretical Maximum Hull Speed" is widely used even though a boat can sail faster; it should be interpreted to mean that above this speed, a great deal of additional power is necessary for a small gain in speed.
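The 4.8-knot figure follows from the classic displacement-hull rule of thumb: hull speed in knots is about 1.34 times the square root of the waterline length in feet. A quick sketch (the waterline length of roughly 12.8 ft is inferred back from the quoted speed, since the review does not list the LWL):

```python
import math

def hull_speed_knots(lwl_ft):
    """Theoretical maximum hull speed of a displacement hull.

    Classic rule of thumb: v [kn] = 1.34 * sqrt(LWL [ft]).
    Above this speed the boat starts climbing its own bow wave
    and the required power grows very quickly.
    """
    return 1.34 * math.sqrt(lwl_ft)

# A waterline length around 12.8 ft reproduces the ~4.8 kn quoted above.
print(round(hull_speed_knots(12.8), 1))  # 4.8
```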
Immersion rate
The immersion rate is defined as the weight required to sink the boat by a certain amount. The immersion rate for the Mercury 18 is about 42 kg/cm, or alternatively 240 lbs/inch.
Meaning: if you load 42 kg of cargo on the boat, it will sink 1 cm. Alternatively, if you load 240 lbs of cargo on the boat, it will sink 1 inch.
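In code, the sinkage for an arbitrary load is a one-line division by the quoted rate:

```python
def sinkage_cm(load_kg, immersion_rate_kg_per_cm=42.0):
    """How far the hull sinks for a given extra load.

    The review quotes ~42 kg/cm (240 lbs/inch) for the Mercury 18.
    """
    return load_kg / immersion_rate_kg_per_cm

# Two adults (~150 kg) stepping aboard sink the boat roughly 3.6 cm.
print(round(sinkage_cm(150), 1))  # 3.6
```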
Sailing statistics
This section is statistical comparison with similar boats of the same category. The basis of the following statistical computations is our unique database with more than 26,000 different boat types
and 350,000 data points.
Motion Comfort Ratio
The Motion Comfort Ratio for Mercury 18 is 12.6.
Comparing this ratio with similar sailboats shows that it is more comfortable than 73% of all similar sailboat designs. This comfort value is just above average.
L/B (Length Beam Ratio)
The l/b ratio for Mercury 18 is 3.39.
Compared with other similar sailboats, it is slimmer than 100% of all other designs. It seems that the designer has chosen a significantly speedier hull design. This type of design is also referred to as a 'needle'.
Ballast Ratio
The ballast ratio for Mercury 18 is 58%.
This ballast ratio shows a righting moment that is higher than 99% of all similar sailboat designs. A righting moment (ability to resist heeling) significantly above average.
D/L (Displacement Length Ratio)
The DL-ratio for the Mercury 18 is 224, which categorizes this boat among 'light cruisers & offshore racers'.
21% of all similar sailboat designs are categorized as heavier. A heavier displacement combined with a smaller waterplane area gives lower acceleration and is more comfortable.
SA/D (Sail Area Displacement ratio)
The SA/D for the Mercury 18 with the ISO 8666 reference sail is 25.0; with a 135% genoa the SA/D is 28.6.
The SA/D ratio indicates that it is faster than 99% of all similar sailboat designs in light wind.
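The SA/D ratio is conventionally computed as the sail area in square feet divided by (displacement in pounds / 64) raised to the two-thirds power, i.e. sail area over displaced volume to the 2/3 power. A sketch with purely illustrative numbers, since the review does not list the Mercury 18's sail area or displacement:

```python
def sa_d_ratio(sail_area_ft2, displacement_lbs):
    """Sail Area / Displacement ratio.

    Standard definition: SA [ft^2] / (displacement [lbs] / 64)^(2/3).
    Salt water weighs ~64 lbs/ft^3, so the denominator is the displaced
    volume raised to the two-thirds power.
    """
    return sail_area_ft2 / (displacement_lbs / 64.0) ** (2.0 / 3.0)

# Illustrative inputs only -- not the Mercury 18's actual figures.
print(round(sa_d_ratio(180.0, 1200.0), 1))
```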
Over- / underrigged
The Mercury 18 has more rig than 62% of all similar sailboats, which indicates that the boat is slightly overrigged.
Bottom Paint
When buying anti-fouling bottom paint, it's nice to know how much to buy. The surface of the wet bottom is about 8 m² (86 ft²).
Based on this, your favourite maritime shop can tell you the quantity you need.
Note: If you use a paint roller you will need more paint than if you use a paintbrush.
Dimensions of sail for masthead rig.
Are your sails worn out? You might find your next sail here: Sails for Sale
If you need to renew parts of your running rig and is not quite sure of the dimensions, you may find the estimates computed below useful.
Guiding dimensions of running rig
Usage Length Diameter
Mainsail halyard 18.0 m (59.1 feet) 6 mm (1/4 inch)
Jib/genoa halyard 18.0 m (59.1 feet) 6 mm (1/4 inch)
Spinnaker halyard 18.0 m (59.1 feet) 6 mm (1/4 inch)
Jib sheet 5.5 m (18.0 feet) 8 mm (5/16 inch)
Genoa sheet 5.5 m (18.0 feet) 8 mm (5/16 inch)
Mainsheet 13.7 m (45.0 feet) 8 mm (5/16 inch)
Spinnaker sheet 12.1 m (39.6 feet) 8 mm (5/16 inch)
Cunningham 2.8 m (9.1 feet) 6 mm (1/4 inch)
Kickingstrap 5.5 m (18.2 feet) 6 mm (1/4 inch)
Clew-outhaul 5.5 m (18.2 feet) 6 mm (1/4 inch)
Boat owner's ideas
This section shows boat owners' changes, improvements, etc. Here you might find inspiration for your own boat.
Tsunami Propagation and Flooding in Sicilian Coastal Areas by Means of a Weakly Dispersive Boussinesq Model
Department of Engineering (DI), University of Palermo, Viale delle Scienze, Bd. 8, 90128 Palermo, Italy
Author to whom correspondence should be addressed.
Submission received: 27 April 2020 / Revised: 14 May 2020 / Accepted: 15 May 2020 / Published: 19 May 2020
This paper addresses tsunami propagation and the subsequent flooding of coastal areas by means of a depth-integrated numerical model. Such an approach is fundamental in order to assess the inundation hazard generated in coastal areas by seismogenic tsunamis. In this study we adopted an interdisciplinary approach, relating the tsunami propagation both to the geomorphological characteristics of the coast and to the bathymetry. In order to validate the numerical model, comparisons with the results of other studies were performed. This manuscript presents the first applied results achieved using the weakly dispersive Boussinesq model in the field of tsunami propagation and coastal inundation. The Ionian coast of Sicily (Italy) was chosen as a case study due to its high level of exposure to tsunamis. Indeed, a tsunami could be generated by an earthquake in the external Calabrian arc or in the Hellenic arc, both active seismic zones. Finally, in order to demonstrate the possibility of giving indications to local authorities, an inundation map of a small area was produced by means of the numerical model.
1. Introduction
In the study of the tsunami propagation phenomenon, it is very important to model the wave frequency dispersion because of its significant role during wave transformation from deep to intermediate waters. During the propagation towards the shore, dispersive waves refract and shoal due to the morphology of the coastal bathymetry. Long waves have a high impact on surf-zone dynamics, sediment transport and beach erosion. In the Mediterranean Sea, the effects of tsunamis on the coasts can be similar to the effects of large storms [
] and the detailed modeling of the shoreline movement is important in order to avoid big uncertainties [
]. Tsunamis are huge waves generated by earthquakes, submarine volcanic eruptions or landslides. In very deep oceanic waters a tsunami does not dramatically increase in height, but as the waves travel onshore they grow as the bathymetry gradually decreases, becoming potentially destructive (e.g., the tsunami in the Indian Ocean in December 2004, or in Japan in March 2011). Tsunamis are usually modelled as solitary waves, and the shoaling, breaking and run-up are obviously phenomena of major interest for researchers [
]. The high computational power of modern computers and parallel computing makes it possible to solve more and more complex fluid dynamics problems. Indeed, it is possible to solve the 3D Reynolds-Averaged Navier–Stokes (RANS) equations and to use methods like Smoothed Particle Hydrodynamics (SPH) or Volume of Fluid (VOF) [
]. Unfortunately, 3D tsunami modeling requires high computational efforts that are not consistent with practical purposes. To overcome this problem we used the weakly dispersive model described by [
]. This kind of approach, also called the non-hydrostatic Non-Linear Shallow Water Equations (NLSWE) method, solves the free-surface motion as a single-valued function of the horizontal coordinates and time. This requires a much lower vertical resolution than 3D methods. Moreover, the model used has an accurate modeling technique for wetting/drying processes [
]. The Western Ionian area, due to the clash between the African and Eurasian tectonic plates, is exposed to a high seismic risk causing possible tsunamis whose origin is directly related to
earthquakes. In particular, the Greek and Italian (Calabrian and Sicilian) coastal areas are among the sites most exposed to such a hazard. These coastal areas have undergone strong anthropization, which makes them more vulnerable [
]. In order to plan actions useful for risk mitigation, it is necessary to produce flooding and exposure maps that can be used for prevention and protection purposes. In the last decades, due to the recent tsunami disasters, researchers have developed numerical models of increasing quality. Samaras et al. [
], to simulate the effects of a tsunami striking the Greek and Sicilian coasts, used two-dimensional Boussinesq equations with a high order of approximation. Schambach et al. [
] used a non-hydrostatic three-dimensional model coupled with a non-linear and dispersive two-dimensional model to simulate the coastal propagation of the tsunami generated by the earthquake that struck the city of Messina on 28 December 1908. Mueller et al. [
] used, instead, the well-known Cornell Multi-grid Coupled Tsunami model (COMCOT), which solves the NLSWE in spherical and Cartesian coordinates, to analyze flooding scenarios near the Maltese coasts. The same two-dimensional model was adopted by [
] to examine the characteristics of an earthquake-induced tsunami in the south of the province of Bali (Indonesia). In this paper, we present preliminary results regarding a hypothetical strike of an earthquake-induced tsunami on the Sicilian coast. Furthermore, an inundation map of a very small area was produced by means of the weakly dispersive Boussinesq model [].
2. Materials and Methods
Many Mediterranean coastal areas are potentially exposed to the tsunami risk [
]. Specifically, the Sicilian coasts are highly exposed, because they have morphological characteristics able to enhance flooding effects and because they are densely populated and full of infrastructure. One of the Sicilian coastal areas most exposed to probable tsunami events is the Ionian area [
]. In fact, two important tectonic structures are located in this area, the external Calabrian Peloritano Arch and the Hellenic Arch, both originated by the clash between the Eurasian Plate and the African Plate (Figure 1).
The tectonic structure of this area also includes several smaller plates [
], making the analysis of earthquake-induced tsunamis more difficult (Figure 1). The outer Calabrian Arc has impressive deep reverse fault systems (Figure 1), with a predominantly NW-SE direction [
], which could originate earthquakes with significant magnitudes [
]. The Hellenic Arc (about 1000 km long) is also one of the most seismically active areas near Greece (Figure 1). This structure consists of three main elements: an outer area (South) consisting of three ocean trenches, an intermediate area with an island arc, and a northern area characterized by a volcanic island arc [
]. The coastal area of study was the south of the Ionian Sicilian coast. The coast is articulated, with low and rocky coastlines and with sandy beaches of slight slope and variable width, delimited by small promontories. This area, depicted in Figure 2, was used for numerical tests adopting a weakly dispersive numerical model (see Section 2.1). The location chosen as a case study is Marzamemi (from the Arabic marsa for port and memi for small), a little coastal village on the Sicilian Ionian coast (Italy). This village was selected because it falls into specific typologies: (a) it has a high exposure to seismic areas that can cause tsunamis; (b) the coast has a flat topography (at about 300 m from the coastline, altitudes range between 1 and 6 m above sea level); (c) in this coastal sector the continental shelf is narrow (about 17 km) and is incised by little canyons; (d) despite being a fishing village, Marzamemi is densely populated both during the summer and during the international frontier film festival; (e) it is a site of archaeological-industrial interest because an old tuna factory is still present in its main square.
Figure 2 shows an overview of the studied coastal area, a magnification of the promontory area of Marzamemi and the boundaries of the numerical domain. The village develops on the promontory northward of the small fishery port; this promontory is partially exposed to wave action. In particular, the shoreline of the northern part of the promontory is preceded by a rocky shelf with small water depths (about 0.2 m).
2.1. The Numerical Model
The numerical model adopted here is a weakly dispersive Boussinesq-type model ([
]), i.e. a depth-integrated model derived from the incompressible continuity and Reynolds-averaged Navier–Stokes momentum equations. Generally, a good numerical model for water waves should guarantee a balance between frequency dispersion and nonlinearity, and Boussinesq-type models are among the most suitable. Basically, the governing equations include a non-hydrostatic pressure term in order to reproduce the frequency dispersion better than the classical hydrostatic model (NLSWE). The dispersive properties of the model were achieved by adding the non-hydrostatic pressure component to the governing equations. In the vertical momentum equation, both the local and convective vertical acceleration terms were kept. The numerical solver has shock-capturing capabilities and easily addresses wetting/drying problems. The governing equations are written in conservative form; this property guarantees that the model can properly simulate discontinuous flows (e.g., breaking, hydraulic jumps, and bores) [
]. The non-hydrostatic depth-integrated continuity and momentum equations are listed below:
$\frac{\partial h}{\partial t} + \frac{\partial (Uh)}{\partial x} + \frac{\partial (Vh)}{\partial y} = 0 \qquad (1)$

$\frac{\partial (Uh)}{\partial t} + \frac{\partial (U^{2}h)}{\partial x} + \frac{\partial (UVh)}{\partial y} = -gh\frac{\partial h}{\partial x} - gh\frac{\partial z_b}{\partial x} - \frac{1}{2}\frac{\partial (q_b h)}{\partial x} - q_b\frac{\partial z_b}{\partial x} - \frac{g n^{2}(Uh)\sqrt{(Uh)^{2}+(Vh)^{2}}}{h^{7/3}} \qquad (2)$

$\frac{\partial (Vh)}{\partial t} + \frac{\partial (UVh)}{\partial x} + \frac{\partial (V^{2}h)}{\partial y} = -gh\frac{\partial h}{\partial y} - gh\frac{\partial z_b}{\partial y} - \frac{1}{2}\frac{\partial (q_b h)}{\partial y} - q_b\frac{\partial z_b}{\partial y} - \frac{g n^{2}(Vh)\sqrt{(Uh)^{2}+(Vh)^{2}}}{h^{7/3}} \qquad (3)$

$\frac{\partial (Wh)}{\partial t} + \frac{\partial (UWh)}{\partial x} + \frac{\partial (VWh)}{\partial y} = q_b \qquad (4)$

where $U$, $V$, $W$ are the depth-integrated velocity components and $Uh$, $Vh$, $Wh$ are the specific flow rate components; $q_b = \hat{q}_b/\rho$, where $\hat{q}_b$ is the dynamic pressure at the bottom, and $n$ is the Manning coefficient. Indeed, the total pressure was decomposed by means of:

$p = \rho g (H - z) + \hat{q} \qquad (5)$

Figure 3 shows the definition scheme of the adopted variables; the subscript $b$ refers to the bottom.
The governing equations above form a system of Partial Differential Equations in the unknown variables $h$, $Uh$, $Vh$, $Wh$ and $q_b$. The solution of the system was obtained using a fractional time step procedure [
], in which a hydrostatic problem and a non-hydrostatic problem are solved sequentially. The dynamic pressure terms in the momentum equations are neglected when solving the hydrostatic problem and are kept in the non-hydrostatic problem. Furthermore, the hydrostatic problem is solved by a prediction-correction scheme; in the corrector step of the hydrostatic problem, a large linear system for the unknown water levels and dynamic pressures is solved.
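As a minimal, self-contained illustration of the hydrostatic sub-step described above (not the authors' 2D unstructured solver), the sketch below advances the 1D shallow-water equations with a first-order Lax-Friedrichs finite-volume scheme; the dynamic-pressure terms are simply dropped, exactly as in the hydrostatic predictor:

```python
import numpy as np

g = 9.81  # gravitational acceleration [m/s^2]

def hydrostatic_step(h, hu, dx, dt):
    """One explicit Lax-Friedrichs step of the 1D hydrostatic problem:
    d/dt [h, hu] + d/dx [hu, hu^2/h + g h^2/2] = 0
    (flat bottom, no friction, dynamic pressure neglected)."""
    u = hu / h
    f_h, f_hu = hu, hu * u + 0.5 * g * h * h      # physical fluxes
    a = (np.abs(u) + np.sqrt(g * h)).max()        # global wave-speed bound
    # numerical fluxes at the interior cell interfaces i+1/2
    Fh = 0.5 * (f_h[:-1] + f_h[1:]) - 0.5 * a * (h[1:] - h[:-1])
    Fhu = 0.5 * (f_hu[:-1] + f_hu[1:]) - 0.5 * a * (hu[1:] - hu[:-1])
    h_new, hu_new = h.copy(), hu.copy()
    h_new[1:-1] -= dt / dx * (Fh[1:] - Fh[:-1])
    hu_new[1:-1] -= dt / dx * (Fhu[1:] - Fhu[:-1])
    return h_new, hu_new

# Small dam-break test on a 10 m flume; the wave stays away from the walls,
# so the conservative interior update preserves the water volume to round-off.
x = np.linspace(0.0, 10.0, 201)
h = np.where(x < 5.0, 2.0, 1.0)
hu = np.zeros_like(h)
vol0 = h.sum()
for _ in range(50):
    h, hu = hydrostatic_step(h, hu, dx=0.05, dt=0.005)
print(abs(h.sum() - vol0) < 1e-8)  # True: mass is conserved
```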
3. Validation of the Numerical Model
3.1. The Carrier and Greenspan Numerical Solution
In order to validate the numerical model, two comparisons with the results of other authors were performed. The first test propagates a sinusoidal wave train incident on an inclined plane. Carrier and Greenspan [
] proposed an analytical solution derived from the Airy approximation of the NLSWE; this analytical solution has become a standard test for run-up and run-down modelling. A sinusoidal wave,
m in height and with a period of 10 s, was used to force the weakly dispersive Boussinesq model. The wave train propagates in a numerical flume with a water depth of
m and a slope of 1:25. The envelope of the free surface, computed by the weakly dispersive Boussinesq model at different time steps, is plotted in
Figure 4, superimposed with the analytical solution of [
]. Figure 4b shows a magnification of an intermediate time step, identified with a black circle in Figure 4.
Figure 5 shows the oscillations of the run-up, compared to the analytical solution of Carrier and Greenspan [
]; despite the slight underestimation of the maxima, the results of the non-hydrostatic weakly dispersive model are very good. The shoreline horizontal velocity of the proposed model was also compared with the analytical solution (Figure 6); the horizontal shoreline velocity is almost the same in the numerical and in the analytical solutions.
3.2. The Fringing Reef Experiment
The second test case was the propagation of a solitary wave over a reef. The wave transformation over an idealized fringing reef highlights the model's capability in resolving nonlinear dispersive solitary waves, including wave breaking. The experimental results used for the comparison were obtained at the O.H. Hinsdale Wave Research Laboratory of Oregon State University, where a model of a flat dry reef was used to represent a real fringing reef [
]. The numerical model replicated the real flume, which was 48.8 m long, 2.16 m wide, and 2.1 m high. The computational grid was simply built with equilateral triangles of edge length equal to 0.08 m. The total number of triangles was 55,550, the number of nodes was 28,813, and the time step was $dt = 0.02$ s.
Figure 7 shows the numerical domain and the surface elevation at $t^* = t\sqrt{g/h_0} = 55.1$; the red line shows a local zoom of the triangular mesh.
A solitary wave with a dimensionless wave height of $H/h_0 = 0.5$ was generated at the inlet of the numerical flume, and a Manning coefficient $n = 0.012$ was adopted to reproduce the roughness of the bottom of the flume. Figure 8 shows the model results compared to the measured data [
] at 13 dimensionless time steps $t^* = t\sqrt{g/h_0}$.
The solitary wave becomes steeper as it propagates over the slope and it rises over the coral reef shoals without breaking into a typical plunge.
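For reference, a solitary wave of this kind is often seeded with the first-order profile $\eta = H\,\mathrm{sech}^2\!\big(\sqrt{3H/4h_0^3}\,(x-ct)\big)$, with celerity $c=\sqrt{g(h_0+H)}$. The sketch below evaluates it for the dimensionless height $H/h_0 = 0.5$ used in this test, with an assumed still-water depth $h_0 = 1$ m (the actual flume depth is reported in Roeber et al.):

```python
import numpy as np

g = 9.81

def solitary_eta(x, t, H, h0):
    """First-order solitary-wave free-surface elevation.

    eta(x, t) = H * sech^2( sqrt(3H / (4 h0^3)) * (x - c t) ),
    c = sqrt(g (h0 + H)).
    """
    k = np.sqrt(3.0 * H / (4.0 * h0 ** 3))
    c = np.sqrt(g * (h0 + H))
    return H / np.cosh(k * (x - c * t)) ** 2

# Evaluate the profile at t = 0 for H/h0 = 0.5, h0 = 1 m (assumed depth).
x = np.linspace(-20.0, 20.0, 801)
eta = solitary_eta(x, t=0.0, H=0.5, h0=1.0)
print(round(float(eta.max()), 3))  # crest equals the wave height H: 0.5
```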
At the dimensionless time step $t^* = 64.3$, the numerical model shows a draw-down in front of the flat reef that produces a back-reflected wave (Figure 8), while the high-speed water sheet over the reef quickly runs up. The numerical model accurately replicated the physics of the phenomenon at each time step (Figure 8). Moreover, the numerical model reproduces well the propagation of the solitary wave over the reef edge, as shown in Figure 9. The time series refers to a point 22 m from the wavemaker, as described in [
]; once more the agreement with the numerical model is good.
4. Results and Discussions of a Real Case of Tsunami Propagation
In order to produce an inundation map of a small area of interest, the numerical model was applied to the case study of Marzamemi (see Section 2).
A triangular mesh was built using the code proposed by Engwirda [
]. The unstructured mesh has the following characteristics: 11,714 triangles and 5990 nodes (see Figure 10a). The triangle size was determined by means of a density function related to the bathymetry, giving larger triangles in deeper waters.
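A density function of the kind mentioned above can be sketched as follows; the functional form and the coefficients are illustrative assumptions, since the paper does not report the actual function used with the mesh generator:

```python
import numpy as np

def target_edge_length(depth_m, h_min=5.0, h_max=200.0, coeff=20.0):
    """Illustrative mesh-density function: the target triangle edge length
    grows with the local water depth, so deep-water regions get coarse
    triangles while the nearshore stays finely resolved.

    Edge length [m] = coeff * sqrt(depth), clipped to [h_min, h_max].
    All coefficients here are made up for illustration.
    """
    return np.clip(coeff * np.sqrt(np.maximum(depth_m, 0.0)), h_min, h_max)

# 0.2 m (rocky shelf), 4 m (nearshore) and 100 m (offshore) depths:
print(target_edge_length(np.array([0.2, 4.0, 100.0])))
```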
The bathymetry of the model was obtained from regional digital bathymetric charts, whereas the digital elevation model was built from the regional topographic maps at 1:10,000 scale (see Figure 10b). The two lateral sides of the domain were treated as walls with a free-slip condition, while at the bottom a friction term was calculated using a Manning roughness coefficient of 0.012 s/m$^{1/3}$ (see the momentum equations above). The adopted time step was $\Delta t$ = 0.2 s and the incoming tsunami wave was simulated as a 2 m high solitary wave. The propagation of the tsunami wave and the subsequently flooded areas are shown in Figure 11, in which each subplot shows the surface elevation (above Mean Water Level (MWL)) at a specific time after the start of the simulation. At the initial time step a solitary wave, 2 m high, was generated on the eastern side of the domain, about 500 m from the coastline. This wave is linked to a possible earthquake with a return period of about 2000 years [
], neglecting a statistical study regarding catastrophic events. The earthquake-induced tsunami is related to a hypocentral point near the seafloor, and the fault mechanism is reverse. In particular, an earthquake about 200 km from the coast was taken into account as a potential tsunami source. It is important to point out that this manuscript presents a preliminary study; analyses of tsunami propagation in the Ionian Sea area, including statistical studies of the earthquake-induced tsunami hazard, are ongoing.
Figure 11 reports the water surface elevation at several time steps. The water elevation was calculated with respect to the initial condition (still water level); thus it coincides with the water depth at points that were originally dry. Subplot (a) of Figure 11 shows the tsunami propagation at time t = 12 s. At this time the wave is about 300 m from the coastline and its shape begins to change due to frequency dispersion. Simultaneously, the wavefront starts to rotate as a result of the changing bathymetry; at t = 24 s, in Figure 11b, the refraction process is clearly distinguishable. At t = 36 s, Figure 11c, the wave reaches the coast near the most exposed stretch, where the main square of the village of Marzamemi is located. Figure 11d, t = 48 s, shows the shoaling of the tsunami and the initial flooding of the main streets of the village, with the wave reaching an elevation of 2 m. At this time the northern part of the village is not yet flooded. The next subplot, (e), t = 54 s, shows a complete inundation of the main square and the littoral promenade, although with small water depths (20–40 cm). At the same time step, the wave can be seen breaking over the seawall that should protect the houses and the road infrastructure. The last time step, (f), shows the flooding of the whole studied coastal area. A magnification of the last time step of the simulation is shown in Figure 12, with the coordinates in the local reference system and the water height measured above the MWL. At t = 65 s, the church and the ancient tuna factory (XV century) are flooded.
5. Concluding Remarks
In this paper, preliminary studies regarding tsunami flooding hazards were presented. A tsunami is a highly nonlinear and dispersive wave and must be modeled using an appropriate numerical model. Indeed, the adopted Boussinesq-type model, within the category of depth-integrated models, guarantees a balance between frequency dispersion and non-linearity. Moreover, the Delaunay mesh grid made it possible to obtain very detailed results at a minor computational cost; as a consequence, the model better assesses the coastal flooding hazard. The numerical model was validated against the analytical solution of Carrier and Greenspan [
] and against the experimental study presented by Roeber et al. [
]. The comparisons with the numerical model show excellent agreement. Finally, the numerical modeling procedure was applied to a real case in order to simulate the propagation of the tsunami and to evaluate its impact on the coast and the subsequent coastal flooding. To assess both the possibility of coastal flooding and its extent, an earthquake causing a tsunami was generated off the Ionian coast (return period ca. 2000 years). The results of the tsunami propagation show that the extent of the flooded areas is about 100 m inland of the shoreline. All roads near the shoreline, including the historic village square, are flooded. The water levels, although not extremely high, could cause danger mainly in the periods of the year when the population density grows considerably. These results, although preliminary, highlight the extreme fragility of this coastal site. For this reason, further numerical modeling is ongoing, taking into account also the structural response of the buildings in the village of Marzamemi. This preliminary investigation is the basis for further studies that are already underway, whose results will be useful for civil protection agencies in order to design emergency management plans.
Author Contributions
Data curation, C.L.R. and G.M.; Formal analysis, C.L.R. and G.M.; Methodology, C.L.R., G.M., and G.C.; Software, C.L.R. and G.M.; Supervision, G.C.; Geological and geomorphological supervision, G.M.; Hydraulic modelling, C.L.R.; Writing—original draft, C.L.R., G.M. and G.C.; Writing—review & editing, C.L.R., G.M. and G.C. All authors have read and agreed to the published version of the manuscript.
This research was funded by SIMIT THARSY Tsunami Hazard Reduction System C1-3.2-5-INTERREG V-A Italia-Malta. The APC was funded by SIMIT THARSY.
We thank our scientific coordinator, Goffredo La Loggia, for his assistance in the development of the project tasks during SIMIT-THARSY.
Conflicts of Interest
The authors declare no conflict of interest.
1. Molina, R.; Manno, G.; Lo Re, C.; Anfuso, G.; Ciraolo, G. Storm Energy Flux Characterization along the Mediterranean Coast of Andalusia (Spain). Water 2019, 11, 509.
2. Molina, R.; Manno, G.; Lo Re, C.; Anfuso, G.; Ciraolo, G. A Methodological Approach to Determine Sound Response Modalities to Coastal Erosion Processes in Mediterranean Andalusia (Spain). J. Mar. Sci. Eng. 2020, 8, 154.
3. Manno, G.; Lo Re, C.; Ciraolo, G. Uncertainties in shoreline position analysis: The role of run-up and tide in a gentle slope beach. Ocean Sci. 2017, 13, 661–671.
4. Lo Re, C.; Musumeci, R.E.; Foti, E. A shoreline boundary condition for a highly nonlinear Boussinesq model for breaking waves. Coast. Eng. 2012, 60, 41–52.
5. Roeber, V.; Cheung, K.F.; Kobayashi, M.H. Shock-capturing Boussinesq-type model for nearshore wave processes. Coast. Eng. 2010, 57, 407–423.
6. Roeber, V.; Cheung, K.F. Boussinesq-type model for energetic breaking waves in fringing reef environments. Coast. Eng. 2012, 70, 1–20.
7. Zhang, H.; Zhang, M.; Ji, Y.; Wang, Y.; Xu, T. Numerical study of tsunami wave run-up and land inundation on coastal vegetated beaches. Comput. Geosci. 2019, 132, 9–22.
8. Marivela, R.; Weiss, R.; Synolakis, C. The Temporal and Spatial Evolution of Momentum, Kinetic Energy and Force in Tsunami Waves during Breaking and Inundation. arXiv 2016, arXiv:1611.04514.
9. Manoj Kumar, G.; Sriram, V.; Didenkulova, I. A hybrid numerical model based on FNPT-NS for the estimation of long wave run-up. Ocean Eng. 2020, 202, 107181.
10. Wei, Z.; Dalrymple, R.A.; Rustico, E.; Hérault, A.; Bilotta, G. Simulation of Nearshore Tsunami Breaking by Smoothed Particle Hydrodynamics Method. J. Waterway Port Coast. Ocean Eng. 2016, 142, 05016001.
11. Qin, X.; Motley, M.; LeVeque, R.; Gonzalez, F.; Mueller, K. A comparison of a two-dimensional depth-averaged flow model and a three-dimensional RANS model for predicting tsunami inundation and fluid forces. Nat. Hazards Earth Syst. Sci. 2018, 18, 2489–2506.
12. Qu, K.; Ren, X.; Kraatz, S. Numerical investigation of tsunami-like wave hydrodynamic characteristics and its comparison with solitary wave. Appl. Ocean Res. 2017, 63, 36–48.
13. Arico, C.; Lo Re, C. A non-hydrostatic pressure distribution solver for the nonlinear shallow water equations over irregular topography. Adv. Water Resour. 2016, 98, 47–69.
14. Presti, V.L.; Antonioli, F.; Auriemma, R.; Ronchitelli, A.; Scicchitano, G.; Spampinato, C.; Anzidei, M.; Agizza, S.; Benini, A.; Ferranti, L.; et al. Millstone coastal quarries of the Mediterranean: A new class of sea level indicator. Quat. Int. 2014, 332, 126–142.
15. Samaras, A.; Karambas, T.V.; Archetti, R. Simulation of tsunami generation, propagation and coastal inundation in the Eastern Mediterranean. Ocean Sci. 2015, 11, 643–655.
16. Schambach, L.; Grilli, S.T.; Kirby, J.T.; Shi, F. Landslide Tsunami Hazard Along the Upper US East Coast: Effects of Slide Deformation, Bottom Friction, and Frequency Dispersion. Pure Appl. Geophys. 2019, 176, 3059–3098.
17. Mueller, C.; Micallef, A.; Spatola, D.; Wang, X. The Tsunami Inundation Hazard of the Maltese Islands (Central Mediterranean Sea): A Submarine Landslide and Earthquake Tsunami Scenario Study. Pure Appl. Geophys. 2020, 177, 1617–1638.
18. Suardana, A.M.A.P.; Sugianto, D.N.; Helmi, M. Study of Characteristics and the Coverage of Tsunami Wave Using 2D Numerical Modeling in the South Coast of Bali, Indonesia. Int. J. Oceans Oceanogr. 2019, 13, 237–250.
19. Papadopoulos, G.A.; Fokaefs, A. Strong tsunamis in the Mediterranean Sea: A re-evaluation. ISET J. Earthq. Technol. 2005, 42, 159–170.
20. DISS Working Group. Database of Individual Seismogenic Sources (DISS). In A Compilation of Potential Sources for Earthquakes Larger than M 5.5 in Italy and Surrounding Areas; Istituto Nazionale di Geofisica e Vulcanologia: Rome, Italy, 2018.
21. Caputo, R.; Pavlides, S. The Greek Database of Seismogenic Sources (GreDaSS), version 2.0.0: A Compilation of Potential Seismogenic Sources (Mw > 5.5) in the Aegean Region; University of Ferrara: Ferrara, Italy, 2013.
22. Gutscher, M.A.; Roger, J.; Baptista, M.A.; Miranda, J.M.; Tinti, S. Source of the 1693 Catania earthquake and tsunami (southern Italy): New evidence from tsunami modeling of a locked subduction fault plane. Geophys. Res. Lett. 2006, 33, L08309.
23. Catalano, R.; Doglioni, C.; Merlini, S. On the mesozoic Ionian basin. Geophys. J. Int. 2001, 144, 49–64.
24. Papadopoulos, G.A.; Daskalaki, E.; Fokaefs, A.; Giraleas, N. Tsunami hazard in the Eastern Mediterranean sea: Strong earthquakes and tsunamis in the west Hellenic arc and trench system. J.
Earthq. Tsunami 2010, 4, 145–179. [Google Scholar] [CrossRef]
25. LeVeque, R.J. Numerical Methods for Conservation Laws; Birkhäuser Boston: Basel, Switzerland, 2013. [Google Scholar]
26. Toro, E.F. Riemann Solvers and Numerical Methods for Fluid Dynamics: A Practical Introduction; Springer Science & Business Media: New York, NY, USA, 2013. [Google Scholar]
27. Stelling, G.S.; Duinmeijer, S.P.A. A staggered conservative scheme for every Froude number in rapidly varied shallow water flows. Int. J. Numer. Methods Fluids 2003, 43, 1329–1354. [Google
Scholar] [CrossRef]
28. Carrier, G.; Greenspan, H. Water waves of finite amplitude on a sloping beach. J. Fluid Mech. 1958, 4, 97–109. [Google Scholar] [CrossRef]
29. Roeber, V. Boussinesq-Type Model for Nearshore Wave Processes in Fringing Reef Environment. Ph.D. Thesis, University of Hawaii at Manoa, Honolulu, HI, USA, December 2010. [Google Scholar]
30. Engwirda, D. Locally-Optimal Delaunay-Refinement and Optimisation-Based Mesh Generation. Ph.D. Thesis, The University of Sydney, Sydney, Australia, 2014. [Google Scholar]
31. Basili, R.; Brizuela, B.; Herrero, A.; Iqbal, S.; Lorito, S.; Maesano, F.E.; Murphy, S.; Perfetti, P.; Romano, F.; Scala, A.; et al. NEAM Tsunami Hazard Model 2018 (NEAMTHM18): Online Data of the
Probabilistic Tsunami Hazard Model for the NEAM Region from the TSUMAPS-NEAM Project; Istituto Nazionale di Geofisica e Vulcanologia (INGV): Roma, Italy, 2018. [Google Scholar] [CrossRef]
Figure 1. Possible seismic sources in the Ionian Sea. The data-set is taken from the DISS Working Group [20] and Caputo and Pavlides [21].
Figure 2. The case study area. The red rectangle shows the Marzamemi promontory, the yellow dash-dot line highlights the boundaries of the numerical model.
Figure 4. (a) Envelope of free surface of sine wave run-up on a planar beach. Comparison between the weakly dispersive model (blue dotted lines) and the Carrier and Greenspan [28] analytical solution (continuous red lines). (b) A zoom of the surface elevation near the planar beach at an intermediate time step.
Figure 5. Shoreline vertical motion of sine wave run-up on a planar beach. Comparison between the adopted model (blue dotted lines) and the analytical solution by Carrier and Greenspan [28] (continuous red lines).
Figure 6. Shoreline velocity of monochromatic wave run-up on a planar beach. Comparison between the adopted model (blue dotted lines) and the analytical solution by Carrier and Greenspan [28] (continuous red lines).
Figure 7. The numerical domain and the triangular equilateral mesh used in the flat reef run-up test. The red box highlights a magnified area of the mesh. The surface elevation corresponds to $t^* = 55.1$.
Figure 8. Surface elevations of a solitary wave over a flat reef with $H/h_0 = 0.5$ and 1:5 slope. Solid blue lines are the weakly dispersive model results and the red triangles are measured data. Each subplot shows the results at a dimensionless time step.
Figure 9. Time series of the surface elevation at the edge of the reef. The blue line is the numerical model results; the red circles are the measurements by [29].
Figure 10. The Marzamemi numerical domain. (a) The used triangular mesh, (b) the bathymetry of the studied area; the color bar shows the elevation in meters above mean water level. In the subplots the coordinate origin is E = 510,569 m; N = 4,066,207 m; WGS84-UTM33N reference system.
Figure 11. Water surface elevation. The origin of the coordinate axes is E = 510,569 m; N = 4,066,207 m; WGS84 UTM33N reference system.
Figure 12. Magnification of subplot (f) of Figure 11. The red lines show the MWL; the color bar shows the water level above the MWL.
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http:/
Lo Re, C.; Manno, G.; Ciraolo, G. Tsunami Propagation and Flooding in Sicilian Coastal Areas by Means of a Weakly Dispersive Boussinesq Model. Water 2020, 12, 1448. https://doi.org/10.3390/w12051448
Measures of Variability: Range, Interquartile Range, Variance, and Standard Deviation
A measure of variability is a summary statistic that represents the amount of dispersion in a dataset. How spread out are the values? While a measure of central tendency describes the typical value,
measures of variability define how far away the data points tend to fall from the center. We talk about variability in the context of a distribution of values. A low dispersion indicates that the
data points tend to be clustered tightly around the center. High dispersion signifies that they tend to fall further away.
In statistics, variability, dispersion, and spread are synonyms that denote the width of the distribution. Just as there are multiple measures of central tendency, there are several measures of
variability. In this blog post, you’ll learn why understanding the variability of your data is critical. Then, I explore the most common measures of variability—the range, interquartile range,
variance, and standard deviation. I’ll help you determine which one is best for your data.
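As a concrete illustration (the sample values below are made up for the example), Python's standard library computes each of these measures of variability directly:

```python
import statistics

def spread_summary(sample):
    """Common measures of variability for a numeric sample."""
    s = sorted(sample)
    q1, _, q3 = statistics.quantiles(s, n=4)   # quartile cut points
    return {
        "range": s[-1] - s[0],                 # max minus min
        "iqr": q3 - q1,                        # interquartile range
        "variance": statistics.variance(s),    # sample variance (n - 1 denominator)
        "stdev": statistics.stdev(s),          # sample standard deviation
    }

summary = spread_summary([4, 7, 7, 8, 9, 10, 12, 15])
assert summary["range"] == 11   # 15 - 4
```

A small, tightly clustered sample will show low values for all four measures; adding an outlier inflates the range and variance much more than the interquartile range.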
Question Video: Finding the Coordinates of a Vector (Mathematics • First Year of Secondary School)
If 𝐀𝐁 = 𝐢 − 2𝐣 and 𝐁 = ⟨4, 3⟩, then the coordinates of vector 𝐀 are _.
Video Transcript
If vector 𝐀𝐁 is equal to 𝐢 minus two 𝐣 and vector 𝐁 equals four, three, then the coordinates of vector 𝐀 are blank.
We begin by recalling that vector 𝐀𝐁 is equal to vector 𝐁 minus vector 𝐀. We also know that any vector 𝐕 written in terms of unit vectors 𝐢 and 𝐣 such that 𝐕 is equal to 𝑥𝐢 plus 𝑦𝐣 can be rewritten
such that vector 𝐕 has components 𝑥 and 𝑦. The vector 𝐢 minus two 𝐣 can, therefore, be rewritten in terms of its components as one, negative two. This must be equal to the vector four, three minus
the vector 𝑥, 𝑦 where 𝑥 and 𝑦 are the components of vector 𝐀.
We recall that when adding and subtracting vectors, we simply add or subtract the corresponding components. When considering the 𝑥-components, we have the equation one is equal to four minus 𝑥.
Adding 𝑥 and subtracting one from both sides of this equation gives us 𝑥 is equal to four minus one. This gives us a value of 𝑥 equal to three. Repeating this for our 𝑦-components, we have the
equation negative two is equal to three minus 𝑦. We can then add 𝑦 and two to both sides of this equation such that 𝑦 is equal to three plus two. This gives us a 𝑦-component equal to five.
Vector 𝐀, therefore, has coordinates three, five.
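The same rearrangement, A = B − AB applied componentwise, can be written as a short function (a Python sketch; the function name is mine, not from the video):

```python
def point_from_displacement(B, AB):
    """Recover the initial point A from AB = B - A, i.e. A = B - AB,
    working componentwise on the coordinate tuples."""
    return tuple(b - d for b, d in zip(B, AB))

# AB = i - 2j  ->  (1, -2); terminal point B = (4, 3).
A = point_from_displacement(B=(4, 3), AB=(1, -2))
assert A == (3, 5)
```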
Mathematics At Leicester — College Of Leicester - TJF
Is there anyone on this planet, whether Indian or European, who is absolutely free from superstitions? Software is available that can measure a child's math skill level. Specifically, μαθηματικὴ τέχνη (mathēmatikḗ tékhnē), Latin: ars mathematica, meant "the mathematical art". There are numerous websites in existence today that can help students with many different aspects of mathematics, including fractions, algebra, geometry, trigonometry, even calculus and beyond.

Letter-based symbols: Many mathematical symbols are based on, or closely resemble, a letter in some alphabet. Discrete mathematics is the branch of math that deals with objects that can assume only distinct, separated values. Discrete mathematics is the mathematical language of computer science, as it includes the study of algorithms.

Mathematics is used to create the complex programming at the heart of all computing. Time spent in school learning math, or not learning math, can never be replaced. This can be a community outreach for the colleges and universities, but children will develop social and academic skills that carry over to their classroom and on to adulthood.

The term applied mathematics also describes the professional specialty in which mathematicians work on practical problems; as a profession focused on practical problems, applied mathematics concentrates on the "formulation, study, and use of mathematical models" in science, engineering, and other areas of mathematical practice.

Fields of discrete mathematics include combinatorics, graph theory, and the theory of computation. In inches, that would be 13,526.5, and in centimeters, 34,357.31; this is how far, in linear distance, sound travels per second at sea level, at about 70 degrees Fahrenheit, or about 21 degrees Celsius.
This first edition was written for Lua 5.0. While still largely relevant for later versions, there are some differences.
The fourth edition targets Lua 5.3 and is available at Amazon and other bookstores.
By buying the book, you also help to support the Lua project.
Programming in Lua
Part I. The Language Chapter 2. Types and Values
2.3 – Numbers
The number type represents real (double-precision floating-point) numbers. Lua has no integer type, as it does not need it. There is a widespread misconception about floating-point arithmetic errors
and some people fear that even a simple increment can go weird with floating-point numbers. The fact is that, when you use a double to represent an integer, there is no rounding error at all (unless
the number is greater than 100,000,000,000,000). Specifically, a Lua number can represent any long integer without rounding problems. Moreover, most modern CPUs do floating-point arithmetic as fast
as (or even faster than) integer arithmetic.
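The exactness claim can be checked directly. Python floats are the same IEEE-754 double-precision numbers Lua 5.0 uses for its number type, so the sketch below is in Python rather than Lua; note that the text's bound of 100,000,000,000,000 (10^14) is conservative, since doubles actually represent every integer exactly up to 2^53 (about 9 × 10^15).

```python
# Python floats are IEEE-754 doubles, the same representation Lua 5.0
# uses for its number type, so the exactness claim can be checked here.

LIMIT = 2 ** 53  # doubles represent every integer up to 2**53 exactly

assert float(LIMIT) == LIMIT             # the limit itself is representable
assert float(LIMIT - 1) + 1 == LIMIT     # increments below the limit are exact
# Above 2**53 adjacent doubles are more than 1 apart, so an increment is lost:
assert float(LIMIT) + 1 == float(LIMIT)
```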
It is easy to compile Lua so that it uses another type for numbers, such as longs or single-precision floats. This is particularly useful for platforms without hardware support for floating point.
See the distribution for detailed instructions.
We can write numeric constants with an optional decimal part, plus an optional decimal exponent. Examples of valid numeric constants are:
4 0.4 4.57e-3 0.3e12 5e+20
Copyright © 2003–2004 Roberto Ierusalimschy. All rights reserved.
Broadband, angle-dependent optical characterization of asymmetric self-assembled nanohole arrays in silver
Plasmonic nanostructured materials made of nanohole arrays in metal are significant plasmonic devices exhibiting resonances and strong electromagnetic confinement in the visible and near-infrared
range. As such, they have been proposed for use in many applications such as biosensing and communications. In this work, we introduce the asymmetry in nanoholes, and investigate its influence on the
electromagnetic response by means of broadband experimental characterization and numerical simulations. As a low-cost fabrication process, we use nanosphere lithography, combined with tilted silver
evaporation, to obtain a 2D hexagonal array of asymmetric nanoholes in Ag. Our experimental set-up is based on a laser, widely tunable in the near-infrared range, with precise polarization control in
the input and in the output. We next resolve the circular polarization degree of the transmitted light when the nanohole array is excited with linear polarization. We attribute the disbalance of left
and right transmitted light to the asymmetry of the nanohole, which we support by numerical simulations. We believe that the optimization of such simple plasmonic geometry could lead to
multifunctional flat-optic devices.
Distribution — ACHIVX
1. In statistics, the range of values a random variable may take, together with the probability that each value or range of values will occur. 2. In algebra, distribution means multiplying each of the terms within the parentheses by a term that is outside the parentheses: to distribute a term over several other terms, you multiply each of the other terms by the first term. 3. In the crypto industry, the term means the distribution of coins or tokens among participants of various programs, contests, etc. "When Distribution?" is a common crypto meme.
1. Statistics: Mapping the Random
Imagine rolling a die. The possible outcomes (1 through 6) form a range of values. Distribution in statistics tells us the likelihood of each outcome, essentially painting a picture of how randomness
2. Algebra: Sharing is Caring
Remember the distributive property? It’s like sharing a bag of candy! Let’s say you have the expression 2(x + 3). Distribution means multiplying that “2” with both “x” and “3” inside the parentheses,
resulting in 2x + 6. Everyone gets their fair share!
3. Crypto: Spreading the Wealth (and Memes)
In the exciting world of cryptocurrency, “distribution” takes on a whole new meaning. It refers to how tokens or coins are allocated among different groups, like early investors or participants in
special events. And yes, “When Distribution?” has indeed become a rallying cry (and meme) in the crypto community, often expressing anticipation for token rewards.
1. Statistics: Predicting the Roll of a Die
Imagine rolling a fair six-sided die. The possible outcomes (1, 2, 3, 4, 5, 6) represent the range of values. Each outcome has an equal probability of occurring (1/6). This pattern of probabilities
across the range of values is what we call a distribution.
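A quick simulation (a Python sketch, not from the glossary) makes the idea concrete: rolling a fair die many times produces an empirical distribution in which each face occurs close to its theoretical probability of 1/6.

```python
import random
from collections import Counter

random.seed(0)  # fixed seed so the illustration is reproducible

# 60,000 rolls of a fair six-sided die.
rolls = [random.randint(1, 6) for _ in range(60_000)]

# Empirical distribution: relative frequency of each outcome.
freq = {face: count / len(rolls) for face, count in Counter(rolls).items()}

# Every face should occur close to its theoretical probability of 1/6.
for face in range(1, 7):
    assert abs(freq[face] - 1 / 6) < 0.01
```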
2. Algebra: Sharing is Caring (and Multiplying)
Think of distributing cookies in a classroom. Let’s say you have 3 boxes of cookies, each containing 10 cookies. To distribute the cookies evenly to 5 students, you’d use multiplication: 3 boxes * 10
cookies/box = 30 cookies. Then, you divide the total cookies by the number of students: 30 cookies / 5 students = 6 cookies/student. Each student gets 6 cookies!
3. Crypto: Free Tokens for Everyone! (Well, Maybe Not Everyone…)
In the crypto world, distribution often refers to how a project initially shares its tokens or coins. This can happen through:
• Airdrops: Think of it like a surprise gift. Projects give away free tokens to users who meet certain criteria, like holding another cryptocurrency.
• Mining rewards: Just like miners dig for gold, crypto miners solve complex problems to validate transactions and earn new tokens as a reward.
And yes, “When Distribution?” is a common meme in the crypto community, highlighting the anticipation and excitement (or sometimes impatience) around a project’s token distribution plans.
This is the classic definition. It describes the range of potential outcomes (like the price of a stock) and how likely each outcome is. Understanding distributions is crucial for risk management and
making informed trading decisions.
Pros:
• Helps analyze potential risks and rewards.
• Provides a framework for probability-based trading strategies.
Cons:
• Real-world data doesn’t always follow theoretical distributions.
• Can be complex and require a solid understanding of statistics.
2. Algebraic Distribution
This is less relevant to trading, but it’s about multiplying terms in an equation. It’s important to know the basics of algebra when dealing with financial formulas.
3. Crypto Distribution
This is where it gets interesting! This refers to how crypto projects allocate their tokens to different groups, like early investors or the community. A fair distribution is crucial for a project’s
long-term success.
Pros:
• Can incentivize early adoption and community participation.
• Transparency in distribution can build trust in a project.
Cons:
• Uneven distribution can lead to centralization and manipulation.
• “When Distribution?” is a meme for a reason – delays and unfair practices are common.
Overall, “distribution” is a multifaceted term with significant implications in finance and crypto. Understanding its different meanings is essential for navigating these markets effectively.
Project Austin Part 2 of 6: Page Curling - C++ Team Blog
Hi, my name is Eric Brumer. I’m a developer on the C++ compiler optimizer, but I’ve spent some time working on Project Code Name Austin to help showcase the power and performance of C++ in a
real-world program. For a general overview of the project, please check out the original blog post. The source code for Austin, including the bits specific to page curling described here, can be
downloaded on CodePlex.
In this blog post, I’ll explain how we implemented page turning in the “Full page” viewing mode. We wanted to make flipping through the pages in Austin to feel like flipping through pages in a real
book. To that end, we built on some existing published work to achieve performant and realistic page curling.
Before going further, take a look at a video of page curling in action!
(you can download the video in mp4 format using this link)
Realistic page curling
A brilliant paper by Hong et. al. called “Turning Pages of 3D Electronic Books” claims that turning a page of a physical book can be simulated as deforming a page around a cone. See [1] for the
Here’s a (poorly drawn) diagram to help explain the concept in the paper. The flat sheet of paper is deformed around the cone to simulate curling. By changing the shape and position of the cone you
can simulate more or less curling.
Similarly, you can also curl a flat sheet of paper around a cylinder. Here’s another (poorly drawn) diagram to help explain that concept.
To simulate curling, we use a combination of curling around a cone and curling around a cylinder:
• If the user is trying to curl from the top-right of the page, we simulate pinching the top right corner of a piece of paper by deforming around a cone.
• If the user is trying to curl from the center-right of the page, we simulate pinching the center of a piece of paper by deforming around a cylinder.
• If the user is trying to curl from the bottom-right of the page, we simulate pinching the bottom right corner of a piece of paper by deforming around the cone flipped upside down.
Anywhere in between and we use a combination of cone & cylinder deforming.
Some geometry
Here are the details to transform a page around a cylinder. There is similar geometry to transform a page to a cylinder described in [1]. Given the point P[flat] with coordinates {x[1], y[1], z[1] =
0} of a flat page, we want to transform it into P[curl] with coordinates {x[2], y[2], z[2]} the point on a cylinder with radius r that is lying on the ‘spine’ of the book. Consider the following
diagram. Note the x & z axes (the y axis is in & out of your computer screen). Also keep in mind I am representing the flat paper & cylinder using the same colors as in the diagrams above.
The key insight is that the distance from the origin to P[flat] (x[1]) is the same arc distance as from the origin to P[curl] along the cylinder. Then, from simple geometry, we can say that β = x[1] / r. Now, to get P[curl], we take the origin, move it down by ‘r’ on the z axis, rotate about β, then move it up by ‘r’ on the z axis. So, the math ends up being:

x[2] = r sin(β), y[2] = y[1], z[2] = r (1 − cos(β))
The above equations compute P[curl] by wrapping a flat page around cylinder. [1] contains the equations to compute a different P[curl] by wrapping a flat page around a cone. Once we compute both P
[curl ]values, we combine the results based on where the user is trying to curl the page. Lastly, after we have computed the two curled points, we rotate the entire page about the spine of the book.
The specific parameters are tuned by hand: the cone parameters, the cylinder width, and the rotation about the spine of the book.
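As a standalone sanity check (a Python sketch, not part of the Austin sources), the cylinder mapping above can be verified numerically: every transformed point should land on the cylinder of radius r tangent to the page along the spine.

```python
import math

def curl_around_cylinder(x1, y1, r):
    """Deform a flat-page point (x1, y1, 0) around a cylinder of radius r
    tangent to the page along the spine (the line x = 0)."""
    beta = x1 / r                      # the flat distance x1 subtends angle beta = x1 / r
    x2 = r * math.sin(beta)
    y2 = y1                            # curling does not move points along the spine axis
    z2 = r * (1.0 - math.cos(beta))
    return x2, y2, z2

# Every curled point must lie on the cylinder x^2 + (z - r)^2 = r^2.
r = 0.8
for x1 in (0.0, 0.1, 0.3, 0.7):
    x2, _, z2 = curl_around_cylinder(x1, 0.0, r)
    assert abs(x2 ** 2 + (z2 - r) ** 2 - r ** 2) < 1e-12
```

Points on the spine itself (x1 = 0) map to themselves, which is what keeps the bound edge of the page fixed while the free edge curls.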
The source code for Austin, including the bits specific to page curling described here, can be downloaded on CodePlex. The page curling transformation is done in journal/views/page_curl.cpp,
specifically in page_curl::curlPage(). The rest of the code in that file is to handle uncurling pages (forwards or backwards) when the user lifts their finger off the screen. I’m omitting some
important details, but this code gives the rough idea.
for (b::int32 j = 0; j < jMax; j++)
{
    for (b::int32 i = 0; i < iMax; i++)
    {
        float x, y, z; // loaded from the flat page vertex (z = 0)

        float coneX = x;
        float coneY = y;
        float coneZ = z;

        // Compute conical parameters coneX, coneY, coneZ

        float cylX = x;
        float cylY = y;
        float cylZ = z;

        float beta = cylX / cylRadius;

        // Rotate (0,0,0) by beta around the line given by x = 0, z = cylRadius,
        // i.e. rotate (0,0,-cylRadius) by beta, then add cylRadius back to the z coordinate
        cylZ = -cylRadius;
        cylX = -cylZ * sin(beta);
        cylZ = cylZ * cos(beta);
        cylZ += cylRadius;

        // Then rotate by angle about the y axis; temporaries keep the second
        // line from reading the already-updated cylX
        float rotX = cylX * cos(angle) - cylZ * sin(angle);
        float rotZ = cylX * sin(angle) + cylZ * cos(angle);
        cylX = rotX;
        cylZ = rotZ;

        // Transform coordinates to the page
        cylX = (cylX * pageCoordTransform) - pageMaxX;
        cylY = (-cylY * pageCoordTransform) + pageMaxY;
        cylZ = cylZ * pageCoordTransform;

        // Combine the cone & cylinder systems
        x = conicContribution * coneX + (1 - conicContribution) * cylX;
        y = conicContribution * coneY + (1 - conicContribution) * cylY;
        z = conicContribution * coneZ + (1 - conicContribution) * cylZ;

        vertexBuffer[jOffset + i].position.x = x;
        vertexBuffer[jOffset + i].position.y = y;
        vertexBuffer[jOffset + i].position.z = z;
    }
}
Automatic Vectorization
A new feature in the Visual Studio 2012 C++ compiler is automatic vectorization. The C++ compiler analyzes loop bodies and generates code targeting the SSE2 instruction set to take advantage of CPU
vector units. For an introduction to the auto vectorizer, and plenty of other information, please see the vectorizer blog series.
The inner loop above is vectorized by the Visual Studio 2012 C++ compiler. The compiler is able to vectorize all of the transcendental functions in math.h, along with the standard arithmetic
operations (addition, multiplication, etc). The generated code loads four values of x, y, and z. Then it computes four values of cylX, cylY, cylZ at a time, computes curlX, curlY, curlZ at a time,
and stores the result into the vertex buffer for four vertices.
I know the code gets vectorized because I specified the /Qvec-report:1 option in my project settings, under Configuration Properties -> C/C++ -> Command Line, as per the following picture:
Then, after compiling, the output window shows which loops were vectorized, as per the following picture:
Eric’s editorial: we decided late during the product cycle to include the /Qvec-report:1 and /Qvec-report:2 switches, and we did not have time to include them in the proper menu location.
If you do not see a loop getting vectorized and wonder why, you can specify the /Qvec-report:2 option. We offer some guidance on handling loops that are not vectorized in a vectorizer blog post.
Because of the power of CPU vector units, the ‘i’ loop gets sped up by a factor of 1.75. In this instance, we are able to compute P[curl] (the combination of cone & cylinder) for four vertices at a
time. This frees up CPU time for other rendering tasks, such as shading the page.
To curl a single page, we need to calculate P[curl] for each vertex comprising a piece of paper. To my count, this involves 4 calls to sin, 3 calls to cos, 1 arcsin, 1 sqrt, and a dozen or so
multiplications, additions and subtractions – for each vertex in a piece of paper – for each frame that we are rendering!
We aim to render at 60fps, which means we have around 15 milliseconds to curl the pages vertices and render them — otherwise the app will feel sluggish. With this loop getting auto vectorized, we’re
able to free up CPU time for other rendering tasks, such as shading the page.
[1] L. Hong, S.K. Card, and J. Chen, “Turning Pages of 3D Electronic Books”, in Proc. 3DUI, 2006, pp.159-165.
Sandbox: Interactive OER for Dual Enrollment
127 Molarity
Paul Flowers; Edward J. Neth; William R. Robinson; Klaus Theopold; and Richard Langley
Learning Objectives
By the end of this section, you will be able to:
• Describe the fundamental properties of solutions
• Calculate solution concentrations using molarity
• Perform dilution calculations using the dilution equation
Preceding sections of this chapter focused on the composition of substances: samples of matter that contain only one type of element or compound. However, mixtures—samples of matter containing two or
more substances physically combined—are more commonly encountered in nature than are pure substances. Similar to a pure substance, the relative composition of a mixture plays an important role in
determining its properties. The relative amount of oxygen in a planet’s atmosphere determines its ability to sustain aerobic life. The relative amounts of iron, carbon, nickel, and other elements in
steel (a mixture known as an “alloy”) determine its physical strength and resistance to corrosion. The relative amount of the active ingredient in a medicine determines its effectiveness in achieving
the desired pharmacological effect. The relative amount of sugar in a beverage determines its sweetness (see (Figure)). This section will describe one of the most common ways in which the relative
compositions of mixtures may be quantified.
Solutions have previously been defined as homogeneous mixtures, meaning that the composition of the mixture (and therefore its properties) is uniform throughout its entire volume. Solutions occur
frequently in nature and have also been implemented in many forms of manmade technology. A more thorough treatment of solution properties is provided in the chapter on solutions and colloids, but
provided here is an introduction to some of the basic properties of solutions.
The relative amount of a given solution component is known as its concentration. Often, though not always, a solution contains one component with a concentration that is significantly greater than
that of all other components. This component is called the solvent and may be viewed as the medium in which the other components are dispersed, or dissolved. Solutions in which water is the solvent
are, of course, very common on our planet. A solution in which water is the solvent is called an aqueous solution.
A solute is a component of a solution that is typically present at a much lower concentration than the solvent. Solute concentrations are often described with qualitative terms such as dilute (of
relatively low concentration) and concentrated (of relatively high concentration).
Concentrations may be quantitatively assessed using a wide variety of measurement units, each convenient for particular applications. Molarity (M) is a useful concentration unit for many applications
in chemistry. Molarity is defined as the number of moles of solute in exactly 1 liter (1 L) of the solution:
\(M = \frac{\text{mol solute}}{\text{L solution}}\)
Calculating Molar Concentrations

A 355-mL soft drink sample contains 0.133 mol of sucrose (table sugar). What is the molar concentration of sucrose in the beverage?
Solution Since the molar amount of solute and the volume of solution are both given, the molarity can be calculated using the definition of molarity. Per this definition, the solution volume must be
converted from mL to L:
\(M = \frac{\text{mol solute}}{\text{L solution}} = \frac{0.133\ \text{mol}}{0.355\ \text{L}} = 0.375\ M\)
Check Your Learning A teaspoon of table sugar contains about 0.01 mol sucrose. What is the molarity of sucrose if a teaspoon of sugar has been dissolved in a cup of tea with a volume of 200 mL?
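The definition above can be checked with a tiny helper function (the function and variable names below are my own, not from the text):

```python
def molarity(moles_solute, volume_ml):
    """Molar concentration (mol/L) from moles of solute and solution volume in mL."""
    return moles_solute / (volume_ml / 1000.0)  # convert mL to L first

# Worked example: 0.133 mol sucrose in a 355-mL soft drink
print(round(molarity(0.133, 355), 3))  # 0.375 (M)

# Check Your Learning: about 0.01 mol sucrose in 200 mL of tea
print(round(molarity(0.01, 200), 2))   # 0.05 (M)
```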
Deriving Moles and Volumes from Molar Concentrations

How much sugar (mol) is contained in a modest sip (~10 mL) of the soft drink from (Figure)?
Solution Rearrange the definition of molarity to isolate the quantity sought, moles of sugar, then substitute the value for molarity derived in (Figure), 0.375 M:
\(M = \frac{\text{mol solute}}{\text{L solution}}\)

\(\text{mol solute} = M \times \text{L solution}\)

\(\text{mol solute} = 0.375\ \frac{\text{mol sugar}}{\text{L}} \times \left(10\ \text{mL} \times \frac{1\ \text{L}}{1000\ \text{mL}}\right) = 0.004\ \text{mol sugar}\)
Check Your Learning What volume (mL) of the sweetened tea described in (Figure) contains the same amount of sugar (mol) as 10 mL of the soft drink in this example?
Calculating Molar Concentrations from the Mass of Solute

Distilled white vinegar ((Figure)) is a solution of acetic acid, CH[3]CO[2]H, in water. A 0.500-L vinegar solution contains 25.2 g of acetic
acid. What is the concentration of the acetic acid solution in units of molarity?
Solution As in previous examples, the definition of molarity is the primary equation used to calculate the quantity sought. Since the mass of solute is provided instead of its molar amount, use the
solute’s molar mass to obtain the amount of solute in moles:
\(M = \frac{\text{mol solute}}{\text{L solution}} = \frac{25.2\ \text{g CH}_3\text{CO}_2\text{H} \times \frac{1\ \text{mol CH}_3\text{CO}_2\text{H}}{60.052\ \text{g CH}_3\text{CO}_2\text{H}}}{0.500\ \text{L solution}} = 0.839\ M\)
\(M = \frac{\text{mol solute}}{\text{L solution}} = 0.839\ M = \frac{0.839\ \text{mol solute}}{1.00\ \text{L solution}}\)
Check Your Learning Calculate the molarity of 6.52 g of CoCl[2] (128.9 g/mol) dissolved in an aqueous solution with a total volume of 75.0 mL.
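Since this mass-to-molarity conversion recurs in the exercises, here is a small sketch (the helper name is my own):

```python
def molarity_from_mass(mass_g, molar_mass_g_mol, volume_l):
    """Molarity (mol/L) from solute mass (g), molar mass (g/mol), and volume (L)."""
    moles = mass_g / molar_mass_g_mol
    return moles / volume_l

# Worked example: 25.2 g acetic acid (60.052 g/mol) in 0.500 L of vinegar
print(round(molarity_from_mass(25.2, 60.052, 0.500), 3))   # 0.839

# Check Your Learning: 6.52 g CoCl2 (128.9 g/mol) in 75.0 mL of solution
print(round(molarity_from_mass(6.52, 128.9, 0.0750), 3))   # 0.674
```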
Determining the Mass of Solute in a Given Volume of Solution

How many grams of NaCl are contained in 0.250 L of a 5.30-M solution?
Solution The volume and molarity of the solution are specified, so the amount (mol) of solute is easily computed as demonstrated in (Figure):
\(\text{mol solute} = M \times \text{L solution}\)

\(\text{mol solute} = 5.30\ \frac{\text{mol NaCl}}{\text{L}} \times 0.250\ \text{L} = 1.325\ \text{mol NaCl}\)
Finally, this molar amount is used to derive the mass of NaCl:
\(1.325\ \text{mol NaCl} \times \frac{58.44\ \text{g NaCl}}{\text{mol NaCl}} = 77.4\ \text{g NaCl}\)
Check Your Learning How many grams of CaCl[2] (110.98 g/mol) are contained in 250.0 mL of a 0.200-M solution of calcium chloride?
When performing calculations stepwise, as in (Figure), it is important to refrain from rounding any intermediate calculation results, which can lead to rounding errors in the final result. In
(Figure), the molar amount of NaCl computed in the first step, 1.325 mol, would be properly rounded to 1.32 mol if it were to be reported; however, although the last digit (5) is not significant, it
must be retained as a guard digit in the intermediate calculation. If the guard digit had not been retained, the final calculation for the mass of NaCl would have been 77.1 g, a difference of 0.3 g.
In addition to retaining a guard digit for intermediate calculations, rounding errors may also be avoided by performing computations in a single step (see (Figure)). This eliminates intermediate
steps so that only the final result is rounded.
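The guard-digit effect described above is easy to reproduce numerically; the snippet below contrasts keeping versus prematurely rounding the intermediate result:

```python
MOLAR_MASS_NACL = 58.44  # g/mol

moles = 5.30 * 0.250                              # 1.325 mol NaCl, guard digit retained
print(round(moles * MOLAR_MASS_NACL, 1))          # 77.4 g

rounded_moles = round(moles, 2)                   # 1.32 mol, guard digit dropped
print(round(rounded_moles * MOLAR_MASS_NACL, 1))  # 77.1 g
```

The 0.3-g discrepancy matches the one described in the text, which is why intermediate results should only be rounded when reported.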
Determining the Volume of Solution Containing a Given Mass of Solute

In (Figure), the concentration of acetic acid in white vinegar was determined to be 0.839 M. What volume of vinegar contains 75.6
g of acetic acid?
Solution First, use the molar mass to calculate moles of acetic acid from the given mass:
\(\text{g solute} \times \frac{\text{mol solute}}{\text{g solute}} = \text{mol solute}\)
Then, use the molarity of the solution to calculate the volume of solution containing this molar amount of solute:
\(\text{mol solute} \times \frac{\text{L solution}}{\text{mol solute}} = \text{L solution}\)
Combining these two steps into one yields:
\(\text{g solute} \times \frac{\text{mol solute}}{\text{g solute}} \times \frac{\text{L solution}}{\text{mol solute}} = \text{L solution}\)
\(75.6\ \text{g CH}_3\text{CO}_2\text{H} \times \frac{1\ \text{mol CH}_3\text{CO}_2\text{H}}{60.05\ \text{g}} \times \frac{\text{L solution}}{0.839\ \text{mol CH}_3\text{CO}_2\text{H}} = 1.50\ \text{L solution}\)
Check Your Learning What volume of a 1.50-M KBr solution contains 66.0 g KBr?
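The combined one-step conversion can be sketched as a function (the names are my own, and the KBr molar mass of ~119.0 g/mol is an added assumption, not from the text):

```python
def volume_for_mass(mass_g, molar_mass_g_mol, molarity_mol_l):
    """Liters of solution containing a given mass (g) of solute."""
    moles = mass_g / molar_mass_g_mol   # g solute -> mol solute
    return moles / molarity_mol_l       # mol solute -> L solution

# Worked example: 75.6 g acetic acid (60.05 g/mol) in 0.839 M vinegar
print(round(volume_for_mass(75.6, 60.05, 0.839), 2))   # 1.5 (L)

# Check Your Learning: 66.0 g KBr (~119.0 g/mol) in a 1.50-M solution
print(round(volume_for_mass(66.0, 119.0, 1.50), 3))    # 0.37 (L)
```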
Dilution of Solutions
Dilution is the process whereby the concentration of a solution is lessened by the addition of solvent. For example, a glass of iced tea becomes increasingly diluted as the ice melts. The water from
the melting ice increases the volume of the solvent (water) and the overall volume of the solution (iced tea), thereby reducing the relative concentrations of the solutes that give the beverage its
taste ((Figure)).
Dilution is also a common means of preparing solutions of a desired concentration. By adding solvent to a measured portion of a more concentrated stock solution, a solution of lesser concentration
may be prepared. For example, commercial pesticides are typically sold as solutions in which the active ingredients are far more concentrated than is appropriate for their application. Before they
can be used on crops, the pesticides must be diluted. This is also a very common practice for the preparation of a number of common laboratory reagents.
A simple mathematical relationship can be used to relate the volumes and concentrations of a solution before and after the dilution process. According to the definition of molarity, the molar amount
of solute in a solution (n) is equal to the product of the solution’s molarity (M) and its volume in liters (L):

\(n = ML\)
Expressions like these may be written for a solution before and after it is diluted:

\(n_1 = M_1 L_1\)

\(n_2 = M_2 L_2\)
where the subscripts “1” and “2” refer to the solution before and after the dilution, respectively. Since the dilution process does not change the amount of solute in the solution, n[1] = n[2]. Thus, these two equations may be set equal to one another:

\(M_1 L_1 = M_2 L_2\)
This relation is commonly referred to as the dilution equation. Although this equation uses molarity as the unit of concentration and liters as the unit of volume, other units of concentration and
volume may be used as long as the units properly cancel per the factor-label method. Reflecting this versatility, the dilution equation is often written in the more general form:

\(C_1 V_1 = C_2 V_2\)
where C and V are concentration and volume, respectively.
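The dilution equation lends itself to a single solver that isolates whichever quantity is unknown; this helper is a sketch of my own, not from the text:

```python
def dilute(c1=None, v1=None, c2=None, v2=None):
    """Solve C1*V1 = C2*V2 for the one argument left as None.
    Any consistent units of concentration and volume may be used."""
    if c1 is None: return c2 * v2 / v1
    if v1 is None: return c2 * v2 / c1
    if c2 is None: return c1 * v1 / v2
    return c1 * v1 / c2  # v2 is the unknown

# 0.850 L of 5.00 M Cu(NO3)2 diluted to 1.80 L -> diluted concentration
print(round(dilute(c1=5.00, v1=0.850, v2=1.80), 2))   # 2.36 (M)

# Volume of 1.59 M KOH stock needed to prepare 5.00 L of 0.100 M KOH
print(round(dilute(c1=1.59, c2=0.100, v2=5.00), 3))   # 0.314 (L)
```

These two calls reproduce the worked examples that follow in this section.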
Use the simulation to explore the relations between solute amount, solution volume, and concentration and to confirm the dilution equation.
Determining the Concentration of a Diluted Solution

If 0.850 L of a 5.00-M solution of copper nitrate, Cu(NO[3])[2], is diluted to a volume of 1.80 L by the addition of water, what is the molarity of
the diluted solution?
Solution The stock concentration, C[1], and volume, V[1], are provided as well as the volume of the diluted solution, V[2]. Rearrange the dilution equation to isolate the unknown property, the
concentration of the diluted solution, C[2]:
\(C_1 V_1 = C_2 V_2\)

\(C_2 = \frac{C_1 V_1}{V_2}\)
Since the stock solution is being diluted by more than two-fold (the volume is increased from 0.850 L to 1.80 L), the diluted solution’s concentration is expected to be less than one-half of 5 M. This
ballpark estimate will be compared to the calculated result to check for any gross errors in computation (for example, such as an improper substitution of the given quantities). Substituting the
given values for the terms on the right side of this equation yields:
\(C_2 = \frac{(5.00\ M)(0.850\ \text{L})}{1.80\ \text{L}} = 2.36\ M\)
This result compares well to our ballpark estimate (it’s a bit less than one-half the stock concentration, 5 M).
Check Your Learning What is the concentration of the solution that results from diluting 25.0 mL of a 2.04-M solution of CH[3]OH to 500.0 mL?
Volume of a Diluted Solution

What volume of 0.12 M HBr can be prepared from 11 mL (0.011 L) of 0.45 M HBr?
Solution Provided are the volume and concentration of a stock solution, V[1] and C[1], and the concentration of the resultant diluted solution, C[2]. Find the volume of the diluted solution, V[2] by
rearranging the dilution equation to isolate V[2]:
\(C_1 V_1 = C_2 V_2\)

\(V_2 = \frac{C_1 V_1}{C_2}\)
Since the diluted concentration (0.12 M) is slightly more than one-fourth the original concentration (0.45 M), the volume of the diluted solution is expected to be roughly four times the original
volume, or around 44 mL. Substituting the given values and solving for the unknown volume yields:
\(V_2 = \frac{(0.45\ M)(0.011\ \text{L})}{0.12\ M} = 0.041\ \text{L}\)
The volume of the 0.12-M solution is 0.041 L (41 mL). The result is reasonable and compares well with the rough estimate.
Check Your Learning A laboratory experiment calls for 0.125 M HNO[3]. What volume of 0.125 M HNO[3] can be prepared from 0.250 L of 1.88 M HNO[3]?
Volume of a Concentrated Solution Needed for Dilution

What volume of 1.59 M KOH is required to prepare 5.00 L of 0.100 M KOH?
Solution Given are the concentration of a stock solution, C[1], and the volume and concentration of the resultant diluted solution, V[2] and C[2]. Find the volume of the stock solution, V[1] by
rearranging the dilution equation to isolate V[1]:
\(C_1 V_1 = C_2 V_2\)

\(V_1 = \frac{C_2 V_2}{C_1}\)
Since the concentration of the diluted solution (0.100 M) is roughly one-sixteenth that of the stock solution (1.59 M), the volume of the stock solution is expected to be about one-sixteenth that of
the diluted solution, or around 0.3 liters. Substituting the given values and solving for the unknown volume yields:
\(V_1 = \frac{(0.100\ M)(5.00\ \text{L})}{1.59\ M} = 0.314\ \text{L}\)
Thus, 0.314 L of the 1.59-M solution is needed to prepare the desired solution. This result is consistent with the rough estimate.
Check Your Learning What volume of a 0.575-M solution of glucose, C[6]H[12]O[6], can be prepared from 50.00 mL of a 3.00-M glucose solution?
Key Concepts and Summary
Solutions are homogeneous mixtures. Many solutions contain one component, called the solvent, in which other components, called solutes, are dissolved. An aqueous solution is one for which the
solvent is water. The concentration of a solution is a measure of the relative amount of solute in a given amount of solution. Concentrations may be measured using various units, with one very useful
unit being molarity, defined as the number of moles of solute per liter of solution. The solute concentration of a solution may be decreased by adding solvent, a process referred to as dilution. The
dilution equation is a simple relation between concentrations and volumes of a solution before and after dilution.
Chemistry End of Chapter Exercises
Explain what changes and what stays the same when 1.00 L of a solution of NaCl is diluted to 1.80 L.
What information is needed to calculate the molarity of a sulfuric acid solution?
We need to know the number of moles of sulfuric acid dissolved in the solution and the volume of the solution.
A 200-mL sample and a 400-mL sample of a solution of salt have the same molarity. In what ways are the two samples identical? In what ways are these two samples different?
Determine the molarity for each of the following solutions:
(a) 0.444 mol of CoCl[2] in 0.654 L of solution
(b) 98.0 g of phosphoric acid, H[3]PO[4], in 1.00 L of solution
(c) 0.2074 g of calcium hydroxide, Ca(OH)[2], in 40.00 mL of solution
(d) 10.5 kg of Na[2]SO[4]·10H[2]O in 18.60 L of solution
(e) 7.0 × 10^−3 mol of I[2] in 100.0 mL of solution
(f) 1.8 × 10^4 mg of HCl in 0.075 L of solution
(a) 0.679 M; (b) 1.00 M; (c) 0.06998 M; (d) 1.75 M; (e) 0.070 M; (f) 6.6 M
Determine the molarity of each of the following solutions:
(a) 1.457 mol KCl in 1.500 L of solution
(b) 0.515 g of H[2]SO[4] in 1.00 L of solution
(c) 20.54 g of Al(NO[3])[3] in 1575 mL of solution
(d) 2.76 kg of CuSO[4]·5H[2]O in 1.45 L of solution
(e) 0.005653 mol of Br[2] in 10.00 mL of solution
(f) 0.000889 g of glycine, C[2]H[5]NO[2], in 1.05 mL of solution
Consider this question: What is the mass of the solute in 0.500 L of 0.30 M glucose, C[6]H[12]O[6], used for intravenous injection?
(a) Outline the steps necessary to answer the question.
(b) Answer the question.
(a) determine the number of moles of glucose in 0.500 L of solution; determine the molar mass of glucose; determine the mass of glucose from the number of moles and its molar mass; (b) 27 g
Consider this question: What is the mass of solute in 200.0 L of a 1.556-M solution of KBr?
(a) Outline the steps necessary to answer the question.
(b) Answer the question.
Calculate the number of moles and the mass of the solute in each of the following solutions:
(a) 2.00 L of 18.5 M H[2]SO[4], concentrated sulfuric acid
(b) 100.0 mL of 3.8 × 10^−5 M NaCN, the minimum lethal concentration of sodium cyanide in blood serum
(c) 5.50 L of 13.3 M H[2]CO, the formaldehyde used to “fix” tissue samples
(d) 325 mL of 1.8 × 10^−6 M FeSO[4], the minimum concentration of iron sulfate detectable by taste in drinking water
(a) 37.0 mol H[2]SO[4], 3.63 × 10^3 g H[2]SO[4]; (b) 3.8 × 10^−6 mol NaCN, 1.9 × 10^−4 g NaCN; (c) 73.2 mol H[2]CO, 2.20 kg H[2]CO; (d) 5.9 × 10^−7 mol FeSO[4], 8.9 × 10^−5 g FeSO[4]
Calculate the number of moles and the mass of the solute in each of the following solutions:
(a) 325 mL of 8.23 × 10^−5 M KI, a source of iodine in the diet
(b) 75.0 mL of 2.2 × 10^−5 M H[2]SO[4], a sample of acid rain
(c) 0.2500 L of 0.1135 M K[2]CrO[4], an analytical reagent used in iron assays
(d) 10.5 L of 3.716 M (NH[4])[2]SO[4], a liquid fertilizer
Consider this question: What is the molarity of KMnO[4] in a solution of 0.0908 g of KMnO[4] in 0.500 L of solution?
(a) Outline the steps necessary to answer the question.
(b) Answer the question.
(a) Determine the molar mass of KMnO[4]; determine the number of moles of KMnO[4] in the solution; from the number of moles and the volume of solution, determine the molarity; (b) 1.15 × 10^−3 M
Consider this question: What is the molarity of HCl if 35.23 mL of a solution of HCl contain 0.3366 g of HCl?
(a) Outline the steps necessary to answer the question.
(b) Answer the question.
Calculate the molarity of each of the following solutions:
(a) 0.195 g of cholesterol, C[27]H[46]O, in 0.100 L of serum, the average concentration of cholesterol in human serum
(b) 4.25 g of NH[3] in 0.500 L of solution, the concentration of NH[3] in household ammonia
(c) 1.49 kg of isopropyl alcohol, C[3]H[7]OH, in 2.50 L of solution, the concentration of isopropyl alcohol in rubbing alcohol
(d) 0.029 g of I[2] in 0.100 L of solution, the solubility of I[2] in water at 20 °C
(a) 5.04 × 10^−3 M; (b) 0.499 M; (c) 9.92 M; (d) 1.1 × 10^−3 M
Calculate the molarity of each of the following solutions:
(a) 293 g HCl in 666 mL of solution, a concentrated HCl solution
(b) 2.026 g FeCl[3] in 0.1250 L of a solution used as an unknown in general chemistry laboratories
(c) 0.001 mg Cd^2+ in 0.100 L, the maximum permissible concentration of cadmium in drinking water
(d) 0.0079 g C[7]H[5]SNO[3] in one ounce (29.6 mL), the concentration of saccharin in a diet soft drink.
There is about 1.0 g of calcium, as Ca^2+, in 1.0 L of milk. What is the molarity of Ca^2+ in milk?
What volume of a 1.00-M Fe(NO[3])[3] solution can be diluted to prepare 1.00 L of a solution with a concentration of 0.250 M?
If 0.1718 L of a 0.3556-M C[3]H[7]OH solution is diluted to a concentration of 0.1222 M, what is the volume of the resulting solution?
If 4.12 L of a 0.850-M H[3]PO[4] solution is diluted to a volume of 10.00 L, what is the concentration of the resulting solution?
What volume of a 0.33-M C[12]H[22]O[11] solution can be diluted to prepare 25 mL of a solution with a concentration of 0.025 M?
What is the concentration of the NaCl solution that results when 0.150 L of a 0.556-M solution is allowed to evaporate until the volume is reduced to 0.105 L?
What is the molarity of the diluted solution when each of the following solutions is diluted to the given final volume?
(a) 1.00 L of a 0.250-M solution of Fe(NO[3])[3] is diluted to a final volume of 2.00 L
(b) 0.5000 L of a 0.1222-M solution of C[3]H[7]OH is diluted to a final volume of 1.250 L
(c) 2.35 L of a 0.350-M solution of H[3]PO[4] is diluted to a final volume of 4.00 L
(d) 22.50 mL of a 0.025-M solution of C[12]H[22]O[11] is diluted to 100.0 mL
(a) 0.125 M; (b) 0.04888 M; (c) 0.206 M; (d) 0.0056 M
What is the final concentration of the solution produced when 225.5 mL of a 0.09988-M solution of Na[2]CO[3] is allowed to evaporate until the solution volume is reduced to 45.00 mL?
A 2.00-L bottle of a solution of concentrated HCl was purchased for the general chemistry laboratory. The solution contained 868.8 g of HCl. What is the molarity of the solution?
An experiment in a general chemistry laboratory calls for a 2.00-M solution of HCl. How many mL of 11.9 M HCl would be required to make 250 mL of 2.00 M HCl?
What volume of a 0.20-M K[2]SO[4] solution contains 57 g of K[2]SO[4]?
The US Environmental Protection Agency (EPA) places limits on the quantities of toxic substances that may be discharged into the sewer system. Limits have been established for a variety of
substances, including hexavalent chromium, which is limited to 0.50 mg/L. If an industry is discharging hexavalent chromium as potassium dichromate (K[2]Cr[2]O[7]), what is the maximum permissible
molarity of that substance?
Key Equations
• \(M = \frac{\text{mol solute}}{\text{L solution}}\)
• C[1]V[1] = C[2]V[2]
aqueous solution
solution for which water is the solvent
concentrated
qualitative term for a solution containing solute at a relatively high concentration
concentration
quantitative measure of the relative amounts of solute and solvent present in a solution
dilute
qualitative term for a solution containing solute at a relatively low concentration
dilution
process of adding solvent to a solution in order to lower the concentration of solutes
dissolved
describes the process by which solute components are dispersed in a solvent
molarity (M)
unit of concentration, defined as the number of moles of solute dissolved in 1 liter of solution
solute
solution component present in a concentration less than that of the solvent
solvent
solution component present in a concentration that is higher relative to other components
The response of an elastic half‐space under a momentary shear line impulse
Abstract:The response of an ideal elastic half‐space to a line‐concentrated impulsive vector shear force applied momentarily is obtained by an analytical–numerical computational method based on the
theory of characteristics in conjunction with kinematical relations derived across surfaces of strong discontinuities. The shear force is concentrated along an infinite line, drawn on the surface of
the half‐space, while being normal to that line as well as to the axis of symmetry of the half‐space. An exact loading model is introduced and built into the computational method for this shear
force. With this model, a compatibility exists among the prescribed applied force, the geometric decay of the shear stress component at the precursor shear wave, and the boundary conditions of the
half‐space; in this sense, the source configuration is exact. For the transient boundary‐value problem described above, a wave characteristics formulation is presented, where its differential
equations are extended to allow for strong discontinuities which occur in the material motion of the half‐space. A numerical integration of these extended differential equations is then carried out
in a three‐dimensional spatiotemporal wavegrid formed by the Cartesian bicharacteristic curves of the wave characteristics formulation. This work is devoted to the construction of the computational
method and to the concepts involved therein, whereas the interpretation of the resultant transient deformation of the half‐space is presented in a subsequent paper. Copyright © 2003 John Wiley &
Sons, Ltd.
Semi-topological K-theory of dg-algebras and the lattice conjecture
Andrei Konovalov || GER
Mon, 27/06/2022 - 11:45 - 12:15
I will discuss the problem of constructing a natural rational structure on periodic cyclic homology of dg-algebras and dg-categories. The promising candidate is A. Blanc’s topological K-theory. I
will show that it, indeed, provides a rational structure in a number of cases and I will discuss its structural properties and possible applications.
Product Life Cycle Management in Apparel Industry
Product Life Cycle Estimation
“Watch the product life cycle; but more important, watch the market life cycle” — Philip Kotler
A. Abstract
The product life cycle describes the period over which an item is developed, brought to market and eventually removed from the market.
This paper describes a simple method to estimate Life Cycle stages — Growth, Maturity, and Decline (as seen in the traditional definitions) of products that have historical data of at least one
complete life cycle.
Here, two different calculations have been done which helps the business to identify the number of weeks after which a product moves to a different stage and apply the PLC for improving demand
A log-growth model is fit using Cumulative Sell through and Product Age, which helps to identify the various stages of the product. A log-linear model is fit to determine the rate of change of product sales due to a shift in its stage, ceteris paribus.
The life span of a product and how fast it goes through the entire cycle depends on market demand and how marketing instruments are used and vary for different products. Products of fashion, by
definition, have a shorter life cycle, and they thus have a much shorter time in which to reap their reward.
B. An Introduction to Product Life Cycle (PLC)
Historically, PLC is a concept that has been researched as early as 1957 (refer Jones 1957, p.40). The traditional definitions mainly described 4 stages — Introduction, Growth, Maturity, and Decline.
This was used mainly from a marketing perspective — hence referred to as Marketing-PLC.
With the development of new types of products and additional research in the field, Life Cycle Costing (LCC) and Life Cycle Assessment (LCA) were added to the traditional definition to give the
Engineering PLC (or E-PLC). This definition considers the cost of using the product during its lifetime, services necessary for maintenance and decommissioning of the product.
According to Philip Kotler, ‘The product life cycle is an attempt to recognize distinct stages in sales history of the product’. In general, PLC has 4 stages — Introduction, Growth, Maturity, and
Decline. But for some industries that consist of fast-moving products, such as apparel, the PLC can be defined in 3 stages. The PLC helps to study the degree of product acceptance by the market over time, including major rises or falls in sales.
PLC also varies based on product type that can be broadly divided into
1. Seasonal — Products that are seasonal (e.g. mufflers, which are on shelves mostly in winter) have a steeper incline/decline due to the short growth and decline periods
2. Non-Seasonal — Products that are non-seasonal (e.g. jeans, which are promoted in all seasons) have longer maturity and decline periods, as sales tend to continue as long as stocks last
Definition of Various stages of PLC
Market Development & Introduction
This is when a new product is first brought to market, before there is a proved demand for it. To create demand, investments are made in consumer awareness and promotion of the new product to get sales going. Sales and profits are low, and there are only a few competitors at this stage.
Growth

In this stage, demand begins to accelerate and the size of the total market expands rapidly. Production costs decrease and high profits are generated.
Maturity

Sales growth reaches a point above which it will not grow. The number of competitors increases, and so market share decreases. Sales will be maintained for some period with a good profit.
Decline

Here, the market becomes saturated and the product is no longer sold, becoming unpopular. This stage can occur as a natural result, but can also be due to the introduction of new and innovative products with better features by competitors.
This paper deals with the traditional definition of PLC and the application in Fashion products.
C. Why do businesses need PLC and how does it help them?
Businesses have always invested significant amounts of resources to estimate PLC and demand. Estimating the life cycle of a new product accurately helps business take several key decisions, such as:
• Provide promotions and markdowns at the right time
• Plan inventory levels better by incorporating PLC in demand prediction
• Plan product launch dates/season
• Determine the optimal discount percentages based on a product’s PLC stage (as discussed later in this paper)
Businesses primarily rely on the business sense and experience of their executives to estimate a product’s life cycle. Any data driven method to easily estimate PLC can help reduce costs and improve
decision making.
D. How does the solution in this paper help?
The solution detailed in this paper can help businesses use data of previously launched products to predict the life cycles of similar new products. The age at which products transition from one life
cycle phase to another as well as the life cycle curves of products can be obtained through this process. This also helps to identify the current stage of the products and the rate of sales growth
during stage transition.
Below is an overview of the steps followed to achieve these benefits:
• To identify products similar to a newly released product, we clustered products based on the significant factors affecting sales. This gives us a chance to obtain a data based PLC trend
• Next, sales is used to plot the Cumulative Sell Through Rate vs Product Age (in weeks)
• A log-growth model fit across this plot will provide the Life Cycle trend of that product or cluster of products
• The second differential of this curve can be analyzed to identify shifts in PLC phases, to estimate the durations of each of the PLC phases
E. Detailed Approach to estimate PLC
The process followed to determine the different PLC stages is a generic one that can be incorporated into any model. However, in this paper, we have described how it was employed to help determine
the effect of different PLC stages on sales for the apparel industry.
The procedure followed has been described in detail in the steps below:
i. Product Segmentation
The first step in estimating PLC is to segment products based on the features that primarily influence sales.
To predict the life cycle factor in demand prediction of a new product, we need to find similar products among those launched previously. The life cycle of the new product can be assumed to be
similar to these.
ii. Identification of PLC stages
To identify various stages, factors like Cumulative Sell through rate and Age of product were considered. The number of weeks in each stage was calculated at category level which consists of a group
of products.
Cumulative sell through is defined as the cumulative sales over the period divided by the total inventory at the start of the period. Sales of products were aggregated at category level by summing sales at the same product age. For example, the sales of all products at an age of 1 week were aggregated to get the sales of that category in week 1.
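The aggregation described above can be sketched in plain Python (the records, column meanings, and starting-inventory figure are illustrative assumptions, not data from the paper):

```python
from collections import defaultdict

# Hypothetical weekly records for one category: (product, age_in_weeks, units_sold)
records = [("A", 1, 40), ("A", 2, 90), ("A", 3, 60),
           ("B", 1, 30), ("B", 2, 70), ("B", 3, 50)]
start_inventory = 400  # total category inventory at the start of the period

# Sum sales of all products at the same product age...
sales_by_age = defaultdict(int)
for _product, age, units in records:
    sales_by_age[age] += units

# ...then cumulate and divide by the starting inventory
cum_sell_through, running_total = {}, 0
for age in sorted(sales_by_age):
    running_total += sales_by_age[age]
    cum_sell_through[age] = running_total / start_inventory

print(cum_sell_through)   # {1: 0.175, 2: 0.575, 3: 0.85}
```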
After exploring multiple methods to determine the different stages, we have finally used a log-growth model to fit a curve between age and cumulative sell through. Assuming the standard three-parameter logistic form, its equation is:
y = Φ1 / (1 + exp((Φ2 − x) / Φ3))
Note: Φ1, Φ2 & Φ3 are parameters that control the asymptote and growth of the curve.
Using the inflexion points of the fitted curve, cut-offs for the different phases of the product life cycle were obtained.
The fitted curve had 2 inflexion points that made it easy to differentiate the PLC stages
The plot above shows the variation of Cumulative sell through rate (y-axis) vs Age (x-axis). The data points are colored based on the PLC life stage identified: Green for “Growth Stage”, Blue for
“Maturity Stage” and Red for “Decline Stage”.
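As a sketch of the fitting and inflexion-point steps above, using synthetic data and assuming the standard three-parameter logistic parameterization for the log-growth curve (the exact form used in the original analysis may differ):

```python
import numpy as np
from scipy.optimize import curve_fit

def log_growth(age, phi1, phi2, phi3):
    # Three-parameter logistic curve: phi1 is the asymptote, phi2 the age
    # at the midpoint, phi3 the growth scale.
    return phi1 / (1.0 + np.exp((phi2 - age) / phi3))

# Synthetic category-level data: cumulative sell through vs age in weeks
rng = np.random.default_rng(0)
age = np.arange(1, 27, dtype=float)
cum_str = log_growth(age, 0.9, 10.0, 3.0) + rng.normal(0.0, 0.01, age.size)

params, _ = curve_fit(log_growth, age, cum_str, p0=[1.0, 8.0, 2.0])
phi1, phi2, phi3 = params

# The fitted S-curve bends most sharply at the extrema of its second
# derivative; those two points serve as stage cut-offs.
grid = np.linspace(age.min(), age.max(), 1000)
d2 = np.gradient(np.gradient(log_growth(grid, *params), grid), grid)
growth_end = grid[np.argmax(d2)]      # Growth -> Maturity cut-off
decline_start = grid[np.argmin(d2)]   # Maturity -> Decline cut-off
print(round(growth_end, 1), round(decline_start, 1))
```

The two curvature extrema of the fitted curve bracket its midpoint, giving the boundaries of the maturity stage.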
Other Methods explored
Several other methods were explored before determining the approach discussed in the previous section. The decision was based on the advantages and drawbacks of each of the methods given below:
Method 1:
Identification of PLC stages by analyzing the variation in Sell through and Cumulative Sell through
Steps followed:
• Calculated (Daily Sales / Total Inventory) across Cumulative Sell through rate at a category level
• A curve between Cumulative Sell through rate (x-axis) and (Daily Sales / Total Inventory) in the y-axis was fitted using non-linear least square regression
• Using the inflexion points of the fitted curve, cut-offs for the different phases of the product life cycle are obtained
Advantages: The fitted curve followed a ‘bell-curve’ shape in many cases that made it easier to identify PLC stages visually
Drawbacks: There weren’t enough data points in several categories to fit a ‘bell-shaped’ curve, leading to issues in the identification of PLC stages
The plot above shows the variation of Total Sales (y-axis) vs Age (x-axis). The data points are colored based on the PLC life stage identified: Green for “Growth Stage”, Blue for “Maturity Stage” and
Red for “Decline Stage”.
Method 2:
Identification of PLC stages by analyzing the variation in cumulative sell through rates with age of a product (Logarithmic model)
Steps followed:
• Calculated cumulative sell through rate across age at a category level
• A curve between age and cumulative sell through rate was fitted using a log linear model
• Using the inflexion points of the fitted curve, cut-offs for the different phases of the product life cycle are obtained
Drawbacks:
1. Visual inspection of the fitted curve does not reveal any PLC stages
2. This method could not capture the trend as accurately as the log-growth models
The plot above shows the variation of Cumulative sell through rate (y-axis) vs Age (x-axis). The data points are colored based on the PLC life stage identified: Green for “Growth Stage”, Blue for
“Maturity Stage” and Red for “Decline Stage”.
F. Application of PLC stages in Demand prediction
After identifying the different PLC phases for each category, this information can be used directly to determine when promotions need to be provided to sustain product sales. It can also be
incorporated into a model as an independent categorical variable to understand the impact of the different PLC phases on predicting demand.
In the context of this paper, we used the PLC phases identified as a categorical variable in the price elasticity model to understand the effect of each phase separately. The process was as follows:
The final sales prediction model had data aggregated at a cluster and sales week level. PLC phase information was added to the sales forecasting model by classifying each week in the cluster-week
data into “Growth”, “Maturity” or “Decline”, based on the average age of the products in that cluster and week.
This PLC classification variable was treated as a factor variable so that we can obtain coefficients for each PLC stage.
The modeling equation obtained was:
In the above equation, “PLC_Phase” represents the PLC classification variable. The output of the regression exercise gave beta coefficients for the PLC stages “Growth” and “Maturity” with respect to
The “Growth” and “Maturity” coefficients were then treated so that they were always positive. This is because the “Growth” and “Maturity” coefficients were obtained with respect to “Decline”, and since “Decline” has a factor of 1, the other two factors had to be greater than 1.
The treated coefficients obtained for each cluster were used in the simulation tool in the following manner (more details given in tool documentation):
If there is a transition from “Growth” to “Maturity” stages in a product’s life cycle — then the PLC factor multiplied to sales is (“Maturity” coefficient / “Growth” coefficient)
If there is a transition from “Maturity” to “Decline” stages in a product’s life cycle — then the PLC factor multiplied to sales is (“Decline” coefficient / “Maturity” coefficient)
If there is no transition of stages in a product’s life cycle, then PLC factor is 1.
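The transition rules above can be captured in a few lines; the coefficient values here are hypothetical, for illustration only:

```python
def plc_factor(prev_stage, curr_stage, coef):
    # Multiplicative PLC adjustment applied to predicted sales on a stage
    # transition; no transition means a factor of 1.
    if prev_stage == curr_stage:
        return 1.0
    return coef[curr_stage] / coef[prev_stage]

# Hypothetical treated coefficients for one cluster ("Decline" is the baseline)
coef = {"Growth": 1.8, "Maturity": 1.3, "Decline": 1.0}

print(plc_factor("Growth", "Maturity", coef))    # 1.3 / 1.8
print(plc_factor("Maturity", "Decline", coef))   # 1.0 / 1.3
print(plc_factor("Maturity", "Maturity", coef))  # 1.0
```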
G. Conclusion
The method described in this paper enables identification of PLC stages for the apparel industry and demand prediction for old and new products. This is a generalized method and can be used for
different industries as well, where a product may exhibit 4 or 5 stages of life cycle.
One of the drawbacks of product life cycle is that it is not always a reliable indicator of the true lifespan of a product and adhering to the concept may lead to mistakes. For example, a dip in
sales during the growth stage can be temporary instead of a sign the product is reaching maturity. If the dip causes a company to reduce its promotional efforts too quickly, it may limit the
product’s growth and prevent it from becoming a true success.
Also, if a lot of promotional activities or discounts are applied, then it is difficult to identify the true life cycle.
Introduction to Multivariate Regression Analysis
Contributed by: Pooja Korwar
What is Multivariate Regression?
Multivariate regression shows the linear relationship between more than one predictor or independent variable and more than one output or dependent variable.
In comparison, simple regression refers to the linear relationship between one predictor and the output variable.
It is a supervised machine learning algorithm involving multiple data variables for analyzing and predicting the output based on various independent variables.
Multivariate regression tries to find a formula that can explain how variable factors respond simultaneously to changes in others. Follow along to learn more about it through some real-life examples.
Examples of Multivariate Regression
In today’s world, data is everywhere. Data is just facts and figures that provide meaningful information if analyzed properly.
Hence, data analysis is essential. Data analysis applies statistical analysis and logical techniques to describe, visualize, reduce, revise, summarize, and assess data into useful information that
provides a better context for the data.
Multivariate regression has numerous applications in various areas. Let’s look at some examples to understand it better.
1. Praneeta wants to estimate the price of a house. She will collect details such as the location, number of bedrooms, size of square feet, amenities available, etc. Based on these details, she can
predict the price of the house and how each variable is interrelated.
2. An agriculture scientist wants to predict the total crop yield expected for the summer. He collected details of the expected amount of rainfall, fertilizers to use, and soil conditions. By building a multivariate regression model, the scientist can predict crop yield. With the crop yield, the scientist also tries to understand the relationship among the variables.
3. Suppose an organization wants to know how much it has to pay a new hire. In that case, it will consider details such as education level, years of experience, job location, and whether the
employee has niche skills. Based on this information, you can predict an employee’s salary, and these variables help estimate the salary.
4. Economists can use Multivariate regression to predict the GDP growth of a state or a country based on parameters such as the total amount spent by consumers, import expenditures, total gains from
exports, and total savings.
5. A company wants to predict an apartment’s electricity bill. The details needed here are the number of flats, the number of appliances used, the number of people at home, etc. These variables can
help predict the electricity bill.
The above example uses multivariate regression with many independent and single-dependent variables.
Check out the Statistical Analysis course to learn the statistical methods involved in data analysis.
Why perform Multivariate Regression Analysis?
Data analysis plays a significant role in finding meaningful information to help businesses make better decisions based on the output.
Along with Data analysis, Data science also comes into the picture. Data science combines many scientific methodologies, processes, algorithms, and tools to extract information, particularly massive
datasets, for insights into structured and unstructured data.
A different range of terms related to data mining, cleaning, analyzing, and interpreting data are often used interchangeably in data science.
Let us look at one of the essential models of data science.
Regression analysis
Regression analysis is one of the most sought-after methods in data analysis. It is a supervised machine-learning algorithm. Regression analysis is an essential statistical method that allows us to
examine the relationship between two or more variables in the dataset.
Regression analysis is a mathematical way of differentiating variables that impact output. It answers the question: What are the critical variables that impact output? Which ones should we ignore?
How do they interact with each other?
We have a dependent variable—the main factor we try to understand or predict. Then, we have independent variables—the factors we believe impact the dependent variable.
Simple linear regression is a model that estimates the linear relationship between dependent and independent variables using a straight line.
On the other hand, Multiple linear regression estimates the relationship between two or more independent variables and one dependent variable. The difference between these two models is the number of
independent variables influencing the result.
Sometimes, the regression models mentioned above will fail to work. Here’s why.
As known, regression analysis is mainly used to understand the relationship between a dependent and independent variable. However, there are ample situations in the real world where multiple
independent variables influence the output.
Therefore, we have to look for other options rather than a single regression model that can only work with one independent variable.
With these setbacks in hand, we want a better model that will fill in the gaps of simple and multiple linear regression, and this is where Multivariate Linear Regression comes into the picture.
If you are a beginner in the field and wish to learn more such concepts to start your career in machine learning, you can head over to Great Learning Academy and learn the basics of machine learning,
such as linear regression. The course will cover all the basic concepts required to kick-start your machine-learning journey.
Looking to improve your skills in regression analysis?
This regression analysis using Excel course will teach you all the techniques you need to know to get the most out of your data. You’ll learn how to build models, interpret results, and use
regression analysis to make better decisions for your business.
Enroll today and get started on your path to becoming a data-driven decision-maker!
Mathematical equation
The simple linear regression model represents a straight line, meaning y is a function of x. When we have an extra dimension (z), the straight line becomes a plane.
Here, the plane is the function that expresses y as a function of x and z. The linear regression equation can now be expressed as:
y = m1.x + m2.z+ c
y is the dependent variable, that is, the variable that needs to be predicted.
x is the first independent variable. It is the first input.
m1 is the slope of x1. It lets us know the angle of the line (x).
z is the second independent variable. It is the second input.
m2 is the slope of z. It helps us to know the angle of the line (z).
c is the intercept. A constant that finds the value of y when x and z are 0.
The equation for a model with two input variables can be written as:
y = β0 + β1.x1 + β2.x2
What if there are three variables as inputs? Humans can only visualize up to three dimensions, but in the machine learning world there can be any number of dimensions. The equation for a model with three
input variables can be written as:
y = β0 + β1.x1 + β2.x2 + β3.x3
Below is the generalized equation for the multivariate regression model-
y = β0 + β1.x1 + β2.x2 +….. + βn.xn
Where n represents the number of independent variables, β0~ βn represents the coefficients, and x1~xn is the independent variable.
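A minimal sketch of fitting this generalized equation by ordinary least squares, using NumPy and synthetic data (the coefficient values are made up for illustration):

```python
import numpy as np

# Synthetic data following y = beta0 + beta1.x1 + beta2.x2 with a little noise;
# the true coefficients (2, 3, -1.5) are illustrative.
rng = np.random.default_rng(42)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 2.0 + 3.0 * x1 - 1.5 * x2 + rng.normal(0.0, 0.1, n)

# Design matrix: a column of ones for the intercept beta0, then x1 and x2
X = np.column_stack([np.ones(n), x1, x2])

# Ordinary least squares finds the beta vector minimizing the squared error
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 2))  # approximately [2.0, 3.0, -1.5]
```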
The multivariate model helps us in understanding and comparing coefficients across the output. Here, the small cost function makes Multivariate linear regression a better model.
Also Read: 100+ Machine Learning Interview Questions
What is Cost Function?
The cost function is a function that allows a cost to samples when the model differs from observed data. This equation is the sum of the square of the difference between the predicted value and the
actual value divided by twice the length of the dataset.
A smaller mean squared error implies better performance. Here, the cost is the sum of squared errors.
Cost of Multiple Linear regression:
J(β) = (1 / 2m) . Σ (ŷi − yi)², summed over all m samples, where ŷi is the predicted value and yi is the actual value.
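The cost described above, the sum of squared differences between predicted and actual values divided by twice the number of samples, can be computed directly:

```python
import numpy as np

def cost(X, y, beta):
    # Sum of squared differences between predicted (X @ beta) and actual (y)
    # values, divided by twice the number of samples.
    m = len(y)
    residuals = X @ beta - y
    return (residuals @ residuals) / (2 * m)

# Tiny made-up dataset: an intercept column plus one feature
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([2.0, 4.0, 6.0])

print(cost(X, y, np.array([0.0, 2.0])))  # 0.0 (perfect fit)
print(cost(X, y, np.array([0.0, 1.0])))  # 14 / 6, about 2.33
```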
Steps of Multivariate Regression analysis
Multivariate Regression solves all the problems dealing with multiple independent and dependent variables. However, to build an accurate multivariate regression model, we'll have to follow some steps:
• Feature selection-
The selection of features is an important step in multivariate regression. Feature selection, also known as variable selection, requires us to pick significant variables for better model building.
• Normalizing Features-
We need to scale the features as it maintains general distribution and ratios in data. This will lead to an efficient analysis. The value of each feature can also be changed.
• Select Loss function and Hypothesis-
The loss function measures the error whenever the hypothesis prediction deviates from the actual values. Here, the hypothesis is the predicted value from the feature/variable.
• Set Hypothesis Parameters-
The hypothesis parameter needs to be set in such a way that it reduces the loss function and predicts well.
• Minimize the Loss Function-
The loss function needs to be minimized by using a loss minimization algorithm on the dataset, which will help in adjusting hypothesis parameters. After the loss is minimized, it can be used for
further action. Gradient descent is one of the algorithms commonly used for loss minimization.
• Test the hypothesis function-
The hypothesis function needs to be checked on as well, as it is predicting values. Once this is done, it has to be tested on test data.
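The loss-minimization step can be sketched with plain gradient descent on the MSE cost (a toy example with synthetic, noise-free data):

```python
import numpy as np

def gradient_descent(X, y, lr=0.1, steps=2000):
    # Minimize the MSE cost J(beta) = ||X.beta - y||^2 / (2m) by stepping
    # against its gradient, X^T (X.beta - y) / m.
    m, n = X.shape
    beta = np.zeros(n)
    for _ in range(steps):
        beta -= lr * (X.T @ (X @ beta - y) / m)
    return beta

# Toy data generated from known coefficients [1, 2, -3] (intercept first)
rng = np.random.default_rng(0)
features = rng.uniform(-1, 1, size=(100, 2))
X = np.column_stack([np.ones(100), features])
y = X @ np.array([1.0, 2.0, -3.0])

beta = gradient_descent(X, y)
print(np.round(beta, 3))  # close to [1.0, 2.0, -3.0]
```

The learning rate and step count are hypothesis parameters in the sense of the steps above; too large a learning rate diverges, too small a one converges slowly.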
Advantages of Multivariate Regression
1. Improved Predictive Accuracy: Multivariate regression can provide a more accurate and nuanced model than simple linear regression by incorporating multiple predictors.
2. Handles Complex Relationships: It can capture the relationships between the dependent variable and multiple predictors, including interactions and combined effects, leading to a better
understanding of complex data structures.
3. Reduces Bias: Including several variables helps reduce bias by accounting for factors that might influence the dependent variable, leading to more reliable estimates.
4. Identifies Key Predictors: It helps determine which predictors significantly impact the outcome, aiding in feature selection and model refinement.
5. Improves Model Fit: Multivariate regression, by considering multiple variables, can often improve the fit of the model to the data, providing more detailed insights into the underlying relationships.
Disadvantages of Multivariate Regression
• Multivariate regression analysis is complex and requires a high level of mathematical calculation.
• The output produced by multivariate models is sometimes difficult to interpret because it includes loss and error outputs that are not identical.
• These models do not have much scope for smaller datasets. Hence, the same cannot be applied to them. The results are better for larger datasets.
Multivariate regression comes into the picture when we have more than one independent variable, and simple linear regression does not work. Real-world data involves multiple variables or features and
when these are present in data, we would require Multivariate regression for better analysis.
Free NPV Calculator
FreeCalculator.net’s sole focus is to provide fast, comprehensive, convenient, free online calculators in a plethora of areas. Currently, we have over 100 calculators to help you “do the math”
quickly in areas such as finance, fitness, health, math, and others, and we are still developing more. Our goal is to become the one-stop, go-to site for people who need to make quick calculations.
Additionally, we believe the internet should be a source of free information. Therefore, all of our tools and services are completely free, with no registration required.
We coded and developed each calculator individually and put each one through strict, comprehensive testing. However, please inform us if you notice even the slightest error – your input is extremely
valuable to us. While most calculators on FreeCalculator.net are designed to be universally applicable for worldwide usage, some are for specific countries only.
What is finite automata explain with example?
A finite automaton (FA) is a simple idealized machine used to recognize patterns within input taken from some character set (or alphabet) C. The job of an FA is to accept or reject an input depending
on whether the pattern defined by the FA occurs in the input. A finite automaton consists of: a finite set S of N states.
What is DFA in compiler design?
Deterministic finite automata (or DFA) are finite state machines that accept or reject strings of characters by parsing them through a sequence that is uniquely determined by each string. The term
“deterministic” refers to the fact that each string, and thus each state sequence, is unique.
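A DFA as defined above can be simulated in a few lines; this hypothetical example accepts binary strings containing an even number of 1s:

```python
def run_dfa(delta, start, accepting, s):
    # Walk the transition function over the input string; accept iff the
    # final state is an accepting state. Unknown symbols reject the input.
    state = start
    for ch in s:
        if (state, ch) not in delta:
            return False
        state = delta[(state, ch)]
    return state in accepting

# DFA over {0, 1} accepting strings with an even number of 1s (two states)
delta = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}

print(run_dfa(delta, "even", {"even"}, "1001"))  # True  (two 1s)
print(run_dfa(delta, "even", {"even"}, "1011"))  # False (three 1s)
```

Because each (state, symbol) pair maps to exactly one next state, the state sequence for any input string is unique, which is the "deterministic" property described above.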
How do you use finite state machine design?
The Finite State Machine is an abstract mathematical model of a sequential logic function. It has finite inputs, outputs and number of states. FSMs are implemented in real-life circuits through the
use of Flip Flops. The implementation procedure needs a specific order of steps (algorithm), in order to be carried out.
How do I make a state chart?
Steps to draw a state diagram –
1. Identify the initial state and the final terminating states.
2. Identify the possible states in which the object can exist (boundary values corresponding to different attributes guide us in identifying different states).
3. Label the events which trigger these transitions.
How do you build a state machine?
Creating a state machine
1. In the Project Explorer, right-click the model and select Add UML > Package.
2. Enter a name for the new package. For example, Finite State Machines.
3. Right-click the package and select Add UML > State Machine.
4. Enter a name for the new state machine. For example, FTM State Machine.
What are the types of finite automata?
• Finite Automata without output. Deterministic Finite Automata (DFA). Non-Deterministic Finite Automata (NFA or NDFA). Non-Deterministic Finite Automata with epsilon moves (e-NFA or e-NDFA).
• Finite Automata with Output. Moore machine. Mealy machine.
What is finite automata explain its types and with state space diagram?
Finite Automata Model: Finite automata can be represented by input tape and finite control. Input tape: It is a linear tape having some number of cells. Each input symbol is placed in each cell.
Finite control: The finite control decides the next state on receiving particular input from input tape.
Energy from Thin Air: Compressed Air Power Harvesting Systems
Zachary Sadler and Faculty Mentor: Matthew Jones, Mechanical Engineering
Energy is an important resource within the world we live. The demand for power requires new energy resources. Much of the power that is generated is eventually wasted in the form of waste heat. As
much as 435 GW of energy is transferred from virtually all energy conversion devices and processes to the atmosphere as wasted heat ^[1]. Converting as much as one percent of this waste heat into
electrical power would eliminate the need for 18 average size (236 MW) ^[2] coal-fired power plants. A significant portion of this waste heat production is due to air compression systems – a process
which converts 60-90% of input power to waste heat ^[3]. Thermoelectric generators (TEG) are solid state direct energy conversion devices that have the ability to reclaim this otherwise wasted heat
by converting thermal energy into electrical energy. While current TEGs have low energy conversion efficiencies, a significant amount of power can be produced with proper optimization. For this
project, a model was created and confirmed with experimental data. Creating an accurate model requires taking into account the many external variables which affect the system. With a proper model,
professionals will be able to optimize and implement the TEG systems on a broader scale.
A model was created by expanding the definition of heat flux ^[4]. Additionally, the Carnot efficiency and device heat conversion efficiency were taken into account to get the expected power output ^
[5]. These can be seen in equation 1.
The model created for this project outlines the thermal pathways as shown in figure 1. T[a] is the ambient temperature while T[S] is the source temperature.
Figure 1: Thermal Resistance Diagram
The thermal pathway diagram allows for thermal energy to bypass the TEG. This would be sources such as conduction through insulation, convection or radiation. Only considering the thermal energy that
goes through the TEG and adding an expression for device efficiency ^[6] yields the complete model which can be seen in equation 2.
A further relationship between the various temperatures was derived for a reversible process, giving the maximum heat transfer (Equation 3).
A more precise relationship was derived for real, non-reversible processes:
In either equation (equation 3 or 4), changing the temperatures requires changing the contact resistances, R[t,H] and R[t,L], between the TEG and source/heat sink.
The model was verified with experimental data. The maximum measured temperatures on the surface of the air compressor was found to be 200 °C. An experiment was run with a hot plate keeping the hot
side of the TEG at 200 °C while a heat sink cools the cool side. Additionally, contact resistance and electrical load resistance are varied to show the importance of optimizing the system.
According to the model, maximum power will be generated by the TEG when T[H] is as high as possible and T[L] is as low as possible. This would be when T[H] = T[s] and T[L] = T[a]. This can be seen in equation 1. This neglects the need for heat transfer from the source to the hot side of the TEG and from the cold side of the TEG to ambient. If T[H] = T[s] and T[L] = T[a], then no heat
transfer would occur through the system. This heat transfer rate is governed by the contact resistance between the heat sink/source and the respective sides of the TEG. The contact resistance should
be chosen so that it follows equation 3.
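Ignoring the bypass path, the thermal pathway of Figure 1 reduces to a one-dimensional series resistance network, which can be sketched as follows. The resistance values here are illustrative assumptions, not measured quantities:

```python
def series_network_temps(T_s, T_a, R_tH, R_teg, R_tL):
    # One-dimensional series version of the Figure 1 network (bypass path
    # neglected): the same heat rate Q crosses the hot-side contact, the
    # TEG, and the cold-side contact. Returns the TEG face temperatures.
    Q = (T_s - T_a) / (R_tH + R_teg + R_tL)
    T_H = T_s - Q * R_tH
    T_L = T_a + Q * R_tL
    return T_H, T_L, Q

# Source and ambient temperatures from the experiment; the resistances
# (in K/W) are illustrative assumptions, not measured values.
T_H, T_L, Q = series_network_temps(T_s=201.0, T_a=20.0, R_tH=1.0, R_teg=1.5, R_tL=0.5)
print(round(T_H, 1), round(T_L, 1))  # TEG hot- and cold-side temperatures
```

Varying R_tH and R_tL shows how the contact resistances set T_H and T_L between the source and ambient temperatures, which is the trade-off the model is meant to optimize.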
Experimental Data from one test yielded T[a] = 20 °C, T[L] = 57 °C, T[H] = 139 °C and T[s] = 201 °C. The experiment generated about 1.8 Watts from the TEG. The current implementation of the model
yields about 1.2 Watts. The model is not quite as accurate as preferred, and the research will be continued until the model and the experimental data agree within the range of uncertainty. It must be noted
that the bypass heat transfer was neglected for the model at the current state in order to reduce the number of places for error to be introduced by estimates. Many of the parameters such as thermal
resistance and figure of merit must be estimated because they change with temperature.
Currently, the model and the experimental data differ slightly. There could be several different reasons why the model and experiment disagree. One obvious place for there to be error introduced is
neglecting the bypass heat transfer.
One thing that was noticed in this experiment was that the model depends upon knowing not only the source and the ambient temperatures but also the TEG surface temperatures. The temperature of the
hot and cold sides of the TEG can be difficult to get, even in a lab setting. It is not realistic for a professional to be able to get these temperatures. These temperatures should be replaced with
other parameters.
Additionally, the figure of merit of a TEG (Z) is a difficult number to obtain as it requires specialized equipment. A reliable means of obtaining this number would help to simplify the calculations
for this model.
While the model is not yet finished, the results are promising that the amount of power from a TEG can be predicted and allow for optimization. More work is needed to refine the model as well as
simplifying the model. With this model, thermoelectric generator power harvesting systems offer a promising method of partially meeting the world’s demands for power by allowing for more efficient
use of the power that has already been generated.
1. U.S. Department of Energy, 2008. “Waste Heat Recovery: Technology and Opportunities in U.S. Industry”. p. 112.
2. U.S. Energy Information Administration, 2014. 2014 form eia-860 data – schedule 3, ‘generator data’ (operable coal units only).
3. Cerci, Y., Cengel, Y. A., and Turner, R. H., 1995. “Reducing the Cost of Compressed Air in Industrial Facilities”. Thermodynamics and the design, analysis, and the improvement of energy systems,
35, pp. 175-186.
4. Bergman, T. L., Lavine, A. S., Incropera, F. P., and Dewitt, D. P., 2011. Fundamentals of Heat and Mass Transfer. John Wiley & Sons, Inc.
5. Cengel, Y., and Boles, M., 2010. Thermodynamics: An Engineering Approach, 7th ed. McGraw Hill.
6. Lee, H., 2010. Thermal Design. John Wiley & Sons, Inc.
2.1: Slice This (5 minutes)
The purpose of this activity is for students to visualize what a cross section might look like and then test the prediction by observing the result of slicing through a solid. Cylindrical food items,
such as cheese or carrots, are convenient examples.
Arrange students in groups of 2. Tell students that a cross section is the intersection between a solid and a plane, or a two-dimensional figure that extends forever in all directions. Using a
cylindrical food item such as cheese or carrots, or another cylindrical object, demonstrate that slicing a cylinder parallel to its base produces a circular cross section.
Then, give students quiet work time and then time to share their work with a partner.
Student Facing
Imagine slicing a cylinder with a straight cut. The flat surface you sliced along is called a cross section. Try to sketch all the possible kinds of cross sections of a cylinder.
Anticipated Misconceptions
Students may not consider non-horizontal or non-vertical cross sections at first. Remind them that a cross section is the intersection of any plane with a solid—the plane doesn't have to be vertical
or horizontal.
Activity Synthesis
Ask students to share their predictions of what the cross sections will look like. Demonstrate slicing each cylindrical food item according to student instructions to see several examples.
2.2: Slice That (20 minutes)
In this activity, students continue to develop familiarity with three-dimensional solids and their cross sections. Students use spatial visualization to predict what cross sections might look like
and then test their predictions.
This activity works best when each student has access to devices that can run the applet because students will benefit from seeing the relationship in a dynamic way. If students don’t have individual
access, projecting the applet would be helpful during the synthesis.
Arrange students in groups of 3–4. Ask students to think about definitions of some geometric solids: spheres, prisms, pyramids, cones, and cylinders. Give students some quiet work time and then time
to share their work with a partner. Follow with a whole-class discussion.
A sphere is the set of points in three-dimensional space the same distance from some center. A prism has two congruent faces (or sides) that are called bases. The bases are connected by
quadrilaterals. A cylinder is like a prism except the bases are circles. A pyramid has one base. The remaining faces are triangles that all meet at a single vertex. A cone is like a pyramid except
the base is a circle.
Representing, Conversing: MLR7 Compare and Connect. Use this routine to help students develop the mathematical language of cross sections of geometric solids. After students explore the cross
sections of their solid, invite them to create a visual display of the cross sections they found. Then ask students to quietly circulate and observe at least two other visual displays in the room.
Give students quiet think time to consider what is the same and what is different about their cross sections. Next, ask students to find a partner to discuss what they noticed. Listen for and amplify
the language students use to compare and contrast various cross sections of a solid.
Design Principle(s): Cultivate conversation
Engagement: Develop Effort and Persistence. Connect a new concept to one with which students have experienced success. For example, remind students about the cross sections of a cylinder from the
previous activity. Ask students how they created various cross sections of a cylinder such as a circle, rectangle, and an ellipse. Then ask students how they can apply this method to create various
cross sections of their solid.
Supports accessibility for: Social-emotional skills; Conceptual processing
Student Facing
The triangle is a cross section formed when the plane slices through the cube.
1. Sketch predictions of all the kinds of cross sections that could be created as the plane moves through the cube.
2. The 3 red points control the movement of the plane. Click on them to move them up and down or side to side. You will see one of these movement arrows appear. Sketch any new cross sections you
find after slicing.
Student Facing
Are you ready for more?
Delete the cube and build another solid by following the directions in its Tooltip. Make predictions about the kinds of cross sections that could be created if the plane moves through the solid.
Move your plane to confirm.
Give each group clay or playdough formed into the shape of a three-dimensional solid (cube, sphere, cylinder, cone, or other solids), and dental floss to slice the clay. Tell students that to view
multiple cross sections, they will slice the shape, then re-form the shape and slice again.
An alternative is to find food items with interesting cross sections or three-dimensional foam solids from a craft store and providing plastic knives to slice the solids. In this case, provide each
group with several of the same solid so they can experiment with multiple slices.
Try to include a sphere, a cube, and a cone in the collection of solids.
Student Facing
Your teacher will give your group a three-dimensional solid to analyze.
1. Sketch predictions of all the kinds of cross sections that could be created from your solid.
2. Slice your solid to confirm your predictions. Sketch any new cross sections you find after slicing.
Anticipated Misconceptions
If using the paper and pencil version of this activity and students are stuck, suggest they slice their solids at different angles and locations to see if different cross sections are generated.
Activity Synthesis
Invite groups of students with different solids to share their list of cross sections with the class. Ask students:
• “Were there any cross sections that caught you by surprise?” (It was surprising that a cube can have cross sections that are triangles, quadrilaterals, pentagons, and hexagons.)
• “Compare and contrast the different cross sections of a sphere.” (All the cross sections were circles, but they were different sizes.)
• “How are a cube’s cross sections different from a sphere’s?” (The cube has many differently-shaped cross sections, while the sphere’s cross sections are all circles.)
2.3: Stack ‘Em Up (10 minutes)
In the last activity, students started with solids and identified various cross sections. In this activity, students view three-dimensional slabs of a solid between parallel cross sections and try to
determine what the original solid was. Being able to visualize the relationship between a solid and its cross sections is important to later work on Cavalieri’s Principle.
Ask students, “What solid would a stack of all the same coins create?” Display a stack of quarters and note that it creates the shape of a cylinder. Then display, in order, a quarter, a nickel, a
penny, and a dime. Ask, “What solid would a stack of coins decreasing in size create?” Make a stack with a few of each type of coin to make a solid that resembles a cone.
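The coin-stack picture can also be made quantitative: summing the volumes of thin disks whose radii shrink linearly toward a point approximates the volume of a cone, and the sum approaches the exact formula V = (1/3)πr²h as the disks get thinner. A small sketch, with arbitrary illustrative dimensions rather than actual coin sizes:

```python
import math

def stacked_disk_volume(radius, height, num_disks):
    """Approximate a cone's volume by stacking num_disks thin cylinders
    whose radii shrink linearly from `radius` at the base to 0 at the apex."""
    thickness = height / num_disks
    total = 0.0
    for i in range(num_disks):
        # radius of the disk at the middle of slab i (midpoint rule)
        r = radius * (1 - (i + 0.5) / num_disks)
        total += math.pi * r**2 * thickness
    return total

exact = math.pi * 3**2 * 12 / 3           # V = (1/3) * pi * r^2 * h
approx = stacked_disk_volume(3, 12, 1000)
print(round(exact, 3), round(approx, 3))  # the two values are nearly equal
```

This is the same idea behind Cavalieri's Principle, which students meet later: a solid's volume is determined by the areas of its parallel cross sections.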
Student Facing
Each question shows several parallel cross-sectional slabs of the same three-dimensional solid. Name each solid.
Student Facing
Are you ready for more?
3D-printers stack layers of material to make a three-dimensional shape. Computer software slices a digital model of an object into layers, and the printer stacks those layers one on top of another to
replicate the digital model in the real world.
1. Draw 3 different horizontal cross sections from the object in the image.
2. The layers can be printed in different thicknesses. How would the thickness of the layers affect the final appearance of the object?
3. Suppose we printed a rectangular prism. How would the thickness of the layers affect the final appearance of the prism?
Activity Synthesis
Ask students to share their predictions for what solids are formed. Then display these images for all to see.
Now focus students’ attention on cross sections that are taken parallel to a solid’s base (for those solids that have bases). Ask students how cross sections can be used to differentiate between
prisms and pyramids. (The cross sections of prisms taken parallel to the base are congruent to each other. The cross sections of pyramids taken parallel to the base are similar to each other.)
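The similarity relationship for pyramids can be checked numerically: cross sections taken parallel to the base scale linearly with the distance from the apex. A small sketch for a square pyramid (the base side and height are arbitrary illustrative values, not numbers from the lesson):

```python
def cross_section_side(base_side, height, distance_from_apex):
    """Side length of the square cross section of a square pyramid,
    sliced parallel to the base at a given distance below the apex.
    By similar triangles, the side scales linearly with that distance."""
    return base_side * distance_from_apex / height

# A pyramid with a 6-unit base and height 10 (illustrative numbers):
sides = [cross_section_side(6, 10, d) for d in (0, 2.5, 5, 7.5, 10)]
print(sides)  # [0.0, 1.5, 3.0, 4.5, 6.0] -- all similar squares, growing linearly
```

For a prism, the analogous function would be constant, which is exactly the distinction the discussion questions draw out.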
Speaking: MLR8 Discussion Supports. To help students respond to the questions for discussion, provide sentence frames such as: “The cross sections of _____ taken parallel to the base are _____
because.…” As students share their responses, press for details by asking students how they know that the cross sections of prisms taken parallel to the base are congruent, while the cross sections
of pyramids taken parallel to the base are similar. Ask students to distinguish the meanings of geometric congruence and similarity.
Design Principle(s): Support sense-making; Optimize output (for justification)
Engagement: Develop Effort and Persistence. Encourage and support opportunities for peer interactions. Prior to the whole-class discussion, invite students to share their work with a partner. Display
sentence frames to support student conversation such as: “I noticed _____ so I….”, “How do you know…?”, “That could/couldn’t be true because….”, and “I agree/disagree because….”
Supports accessibility for: Language; Social-emotional skills
Lesson Synthesis
In this lesson, students worked with three-dimensional solids and their cross sections. Here are questions for discussion:
• “How are the cross sections in this lesson different from the two-dimensional figures we looked at in the last lesson?” (In the last lesson, we rotated the two-dimensional figures to trace out a
solid. The two-dimensional figures were usually an outline of half of the figure, and they had to have a relationship to the axis of rotation of the solid. Here, our cross sections cut through
the entire solid, and they can come from anywhere in the solid.)
• “What kinds of applications of cross sections might we see in real life?” (There is a field of medicine called tomography that is about finding ways to get images of cross sections of people.
Technologies like the CAT scan, the MRI, and the PET scan allow doctors to examine cross sections of a brain, a lung, or an injury and visualize what the three-dimensional body part looks like.)
2.4: Cool-down - Sketch It (5 minutes)
Student Facing
In earlier grades, you learned some vocabulary terms about solid geometry: A sphere is the set of points in three-dimensional space the same distance from some center. A prism has two congruent faces
(or sides) that are called bases. The bases are connected by parallelograms. A cylinder is like a prism except the bases are circles. A pyramid has one base. The remaining faces are triangles that
all meet at a single vertex. A cone is like a pyramid except the base is a circle.
We often analyze cross sections of solids. A cross section is the intersection of a solid with a plane, or a two-dimensional figure that extends forever in all directions. For example, some cheese is
sold in cylindrical blocks. If you stand the cheese on end and slice vertically, you will get a rectangle, as shown. This rectangle is a cross section of the cylinder.
Here are 3 more examples of cross sections created by intersecting a plane and a cylinder.
If you wanted to serve your cylindrical cheese at a party, you might cut it into several pieces, like this. The pieces are thin cylinders. They are like cross sections, but they are
three-dimensional. All the cuts were made parallel to one another. By looking at the slices, or by stacking them up, you could figure out that the original shape of the cheese was a cylinder.
What if another cheese plate contained slices whose radii got bigger to a maximum size and then got smaller again? The cheese was probably in the shape of a sphere. A sphere has circular cross
sections. The size of the circular cross sections increases as you get closer to the center of the sphere, then decreases past the center.
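The sphere's behavior follows from the Pythagorean theorem: a plane at distance d from the center of a sphere of radius R cuts a circle of radius sqrt(R² − d²). A quick sketch (the radius and slice positions are illustrative choices):

```python
import math

def sphere_cross_section_radius(R, d):
    """Radius of the circular cross section of a sphere of radius R,
    cut by a plane at distance d from the center (Pythagorean theorem)."""
    if abs(d) > R:
        return 0.0  # the plane misses the sphere entirely
    return math.sqrt(R**2 - d**2)

# Slices through a radius-5 sphere: radii grow toward the center, then shrink.
radii = [round(sphere_cross_section_radius(5, d), 2) for d in (-5, -3, 0, 3, 5)]
print(radii)  # [0.0, 4.0, 5.0, 4.0, 0.0]
```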
Can I pay for assistance with understanding and solving problems related to computational methods in mechanical engineering?
Can I pay for assistance with understanding and solving problems related to computational methods in mechanical engineering? My answer is that you must first understand the problem. This simple but enlightening essay by an engineer surveys the most common problems that mechanical engineering professionals are facing in this field, and presents data about the worldwide problems the field has faced over the past year: problems relating to the structures that make up mechanical assemblies, tools, and machine interfaces (for example, the handling of workpieces). The main contribution here is the following exercise: Exercise 1. Find a specific construction and choose one that can be installed on the existing industrial parts. It could be said (in my opinion) that the goal of this
the situation was quite obvious: the work was done on these industrial parts a series of years ago, which already covered a lot of ground. Therefore, the task taken together with the object is to
find a solution that will enable you to solve the task first without giving up the tool that you are already carrying out in mind. Applying the structure diagram in this part to the previous problem
is very elementary but it helps you choose the right solution and it helps a lot. To make your start: The diagram of the existing industrial parts is shown in Fig. 1. According to Fig. 2 the
structure plan is taken into another, similar, drawing picture. According to Fig. 3 there is already the diagram on the left as shown on the diagram on the diagram on the diagram on the diagram There
will go right here a figure showing the schematic of the problem. How can one obtain a solution-a listCan I pay for assistance with understanding and solving problems related to computational methods
in mechanical engineering? From the mechanics of materials to materials structure, these examples will open up avenues to address this critical need. Abstract In this article, theoretical and practical approaches to the modeling of computational methods for manipulating electronic chips and dies are presented, with a particular focus
on analyzing the properties of circuits and circuit subsystems. This development of digital circuits is clearly reflected in the electronic circuitry of these structures. A schematic description of
the electronic circuit is presented, with a single individual electronic component presented in each case examples. The effect of the electronic circuitry is seen as well as the function of the
electronic circuit. The next leading paper will present the performance of one particular electronic chip design called the Heisenberg VCE.
Most software implementations of the electronic chips incorporate a means for programming their computational circuits, as well as the design of electronic chip structures and circuit subsystems
consisting of electronic circuits. Although these software implementations show the advantages of using code-controlled circuits, there are important limitations to their practical use. What is
needed is a toolkit to facilitate the use of a complex electronic circuit when programming code-controlled electronics. Submitted to ECE for a working paper by Huygens Feller, published 20 December
2012, Abstract Introduction: This paper sets out to address the problem of Continued and manipulating electronic circuits from mechanics. In this approach, the authors will look a great deal at the
biological process of inverting an electronic circuit. Initially, they will look at the biological response to a particular mechanical a fantastic read at a given speed. Then they will look at
properties of the circuit chip and determine how the electronic circuit may “switch” between different states depending on the sensor that contributes discover here force. The basis of their approach
is the Heisenberg VCE, published in the British Journal of index as an Internet-based modelling reference for materials chemistry. Each phase of the Verlet oscillator hasCan I pay for assistance with
understanding and solving problems related to computational methods in mechanical engineering? I remember this question had a lot of interesting answers. Basically, I’d designed an Excel file and ran
code, but I never understood it (even though I did answer it!) and looked through the comments until I read the raw description and see that “efficiently performing large/long/big numbers of numbers
is really not possible”. I certainly struggled to understand the context for this question. This is not necessarily a great question for someone who really loves the material and has a clue about
data. Regardless how the paper used concepts about computation, the question is probably the most important to me because I never understand the concept of “efficiently performing large/long/big
numbers of numbers”. How does the paper consider efficient ways to compute a value on large numbers? The paper talks about how to compute integer numbers on large numbers, but it certainly can’t claim to be efficient. How do we compute the number of atoms on a macroscopic scale in a short term that we can look
into and figure out how some of the complexity of those operations can be performed on a macroscopic scale? That paper did provide an example of some of the computational tasks that were done on the
macroscopic scale in experiments where an integer number represented macroscopic scales to micro-lobes/domains (i.e., by dimensions of macroscopic scales). An example was discussed by the authors of
this paper. Which is likely to be the way to do some of this work. Also, I used these numbers and created a simple system in which “what’s the value”? That is, how to determine which number of
molecules of water are in a cell? I searched on some of the “complexity” words in the paper and didn’t find a solution.
I suppose the complexity may be small that would make some people not be concerned with this problem and not want to struggle. The real question might be: Can we analyze complexity in a more
efficient way? I doubt that would be the right subject for this problem to be concrete. While this isn’t a great initial goal, I’m sure there are other abstract concepts and programming tips from
other people’s reading/writing problems that one could look at and what they can make more use of. I guess I still have a lot of time before I begin this, but I thought I’d elaborate a bit. It seems
to me that, in the previous sections, I represented the situation in a way that was simpler, and simpler can sometimes win. I just wanted to illustrate some common method that might work in this area. The
most simple method of doing computational work but it has to do more than that. It can pretty much be shown to be bad, it just have to do more. Well, technically it does not “do more”. It just comes
and goes in what is (the paper, a book, the examples
Inorganic Contamination of Groundwater from Copper Dump-Leaching - 911Metallurgist
What is the potential of copper leaching operations in the western states to increase human health risk from drinking water? Should new leach pads be designed for worst case scenarios or for the most
likely scenario? With limited funding, is money being spent on highest priority areas to reduce the greatest risk to human health and the environment? These are some of the questions that can be
quantitatively evaluated with the probabilistic methodology being developed in the Mine Site Risk Assessment (MSRA) project at the Minerals Availability Field Office (MAFO) in Denver, CO. The primary
purpose of this paper is to present a risk assessment methodology and preliminary results that provide more realistic information for decision makers. Owners, regulators, and designers can use this
methodology to determine the risk-benefit of different design options.
Risk, as defined by the Environmental Protection Agency (Fed. Reg., 1992), is the probability of deleterious health or environmental effects. The National Academy of Sciences (1991) defined risk
assessment as the qualitative or quantitative characterization of the potential health effects of particular substances on individuals or populations.
Copper dump-leaching methods are currently being used at several mine sites in Arizona and New Mexico to recover copper from low grade copper ore. Many of these leaching operations are unlined heaps
situated directly on bedrock. Generally the bedrock is a Pre-cambrian granite or a Tertiary volcanic, however some heaps are located on the Gila conglomerate. The Gila conglomerate consists of
deposits of sand, pebbles, cobbles, clay, and boulders cemented with calcite. In many parts of New Mexico and Arizona the Gila conglomerate forms the unsaturated or vadose zone (500 to 1500 feet
thick) above the water table. The Gila conglomerate also represents a source of groundwater in the region (Peterson, 1962).
Dump leaching is similar to truck end-dumping of waste rock over nearby terrain whereas heap leaching is usually done on engineered-pads. Figure 1 depicts a generic copper dump-leaching operation. As
noted in figure 1, the primary components of solution movement are 1) acid irrigation (and some precipitation), 2) evaporation, 3) infiltration, and 4) drainage to a collection pond.
The Mine Site Risk Assessment project pursues a course of action following recommendations made by the National Academy of Sciences (Nat. Acad. Sci., 1989) to develop further modeling of the
following: 1) groundwater flow in unsaturated and fractured media, 2) mass transport coupled with chemical reaction and 3) methods for identifying and presenting uncertainty. In addition to these
recommendations, the EPA’s Guidelines for Exposure Assessment (Fed. Reg., 1992) makes the following key statement regarding a modeling
strategy: “Often the most critical element of assessment is the estimation of pollutant concentrations at exposure points. This is usually carried out by a combination of field data and
mathematical-modeling results. In the absence of field data, this process often relies on the results of mathematical models.” The general framework for the course of action taken in this project was presented by Freeze et al. (1990).
Given the current, and possible new requirements of the Resource Conservation and Recovery Act (RCRA), Safe Drinking Water Act (SDWA), and Clean Water Act (CWA), the Mine Site Risk Assessment project
presents a relatively inexpensive and practical methodology for determining the potential risk of copper leaching operations to contaminate-drinking water. The project takes off the shelf computer
programs to examine solute movement in the vadose (unsaturated) zone, geochemical reactions, saturated flow, risk analysis, and human health risk assessment. The project uses probability methods to
quantify input-parameter uncertainties. These uncertainties include sources such as measurement errors, sampling errors, and spatial variability.
The planned course of action for this project is shown in figure 2. To model mathematically the different solute transport processes, chemical reactions, and quantify uncertainty, different computer
models are used to simulate each phase of the solution movement. The parameter uncertainty model, produces probability distribution functions for the input parameters that may contain uncertainty or
variability. At this stage in the project, the programs are manually coupled rather than automatically providing input for the next program. The probabilistic risk assessment model uses the
probability outputs to determine the probability of deleterious human health effect’s, from drinking water. This methodology when combined with another MAFO project (investigating the costs for
liners, closure caps, and financial assurance for copper leaching operations) provides a cost-risk-benefit approach to proposed copper leaching operations. This methodology could also be applied to
other leachate generators such as municipal landfills.
This report will generally follow the flowchart shown in figure 2. The report consists of five sections much like chapters of a book. These sections will cover, the following topics: 1) parameter
uncertainty analysis, 2) infiltration modeling, 3) fracture flow review, 4) geochemical modeling, 5) vadose zone modeling. This report does not cover saturated flow modeling at this time because the
preliminary results indicate the heavy metals may not reach the saturated zone. The modeling of fracture flow and the risk to human health
will be done at a later date. The last section is followed with references and a glossary.
Since this a preliminary report, the following assumptions and qualifiers are listed as caveats:
1. The data used in the models are for a generic mine site, because site specific data were not available.
2. The models have not been calibrated or verified with actual field data, because this is a preliminary effort and site specific data were not available.
3. Only a few selected heavy metals are modeled, because this was a preliminary and developmental effort.
4. The models assume that the Gila conglomerate is initially uncontaminated, because there were not actual field data.
5. Biochemical processes are not addressed, because this was a preliminary and developmental effort.
6. No 3-D modeling, therefore the effects of horizontal dispersion are not included. This can be addressed at a later date with site specific data and more complex models.
7. The modeling does not include ion exchange, adsorption, dilution, colloids, or organics, because this was a preliminary and developmental effort.
8. The vadose model assumes that there are no continuous interconnecting fractures. This assumption was made because models used in this developmental effort do not account for fracture flow.
Fracture flow will be addressed later.
Parameter Uncertainty Analysis
The EPA’s Guidelines for Exposure Assessment (Fed. Reg., 1992) describe the general concepts of exposure assessment including definitions and associated units and provides guidance on planning and
conducting exposure assessments. These guidelines list three types of uncertainty:
1. Uncertainty regarding missing or incomplete information needed to fully define the exposure and dose (scenario uncertainty or Type I).
2. Uncertainty regarding gaps in scientific theory required to make predictions of causal inferences (model uncertainty or Type II).
3. Uncertainty regarding some parameter (parameter uncertainty or Type III) (Faruqui, 1990).
Scenario uncertainty includes descriptive errors, aggregation errors, errors in professional judgment and incomplete analysis. Sources of parameter uncertainty include measurement errors, sampling
errors, variability, and use of generic or substitute data. Relationship errors and modeling errors are the primary sources of modeling uncertainty. Figure 3 shows the taxonomy of methods for dealing
with the three types of uncertainty. Note that variability is the receipt of different levels of exposure by different individuals, whereas uncertainty is the lack of knowledge about the correct
value for a specific parameter. Spatial variability will not be addressed in this paper.
The approaches for analyzing uncertainty listed in order of increasing complexity and data requirements are sensitivity analysis (deterministic), analytical uncertainty propagation, probabilistic
uncertainty analysis, and classical statistical methods. Sensitivity analysis is the process of changing one variable while leaving the others constant and determining the effect on the output. In
analytical uncertainty propagation, both the model sensitivities and input variances are evaluated. The Monte Carlo technique is the most common form of probabilistic uncertainty analysis. Classical
statistical methods are based on the evaluation of confidence intervals
and require a well-defined system and measure input-parameter data (Cox & Baybutt, 1981).
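The simplest of these approaches, deterministic sensitivity analysis, can be sketched in a few lines: perturb one input at a time while the others stay fixed, and record the effect on the output. The travel-time model below is a hypothetical stand-in, not one of the models used in this project, and the parameter values are invented for illustration:

```python
def travel_time(K, gradient, porosity, length):
    """Toy advective travel time through an aquifer column (an illustrative
    model only): t = L * n / (K * i)."""
    return length * porosity / (K * gradient)

base = dict(K=1e-5, gradient=0.01, porosity=0.3, length=100.0)
base_t = travel_time(**base)

# One-at-a-time sensitivity: bump each parameter +10% while the others
# stay at their base-case values, and record the relative output change.
for name in base:
    perturbed = dict(base)
    perturbed[name] *= 1.10
    change = (travel_time(**perturbed) - base_t) / base_t
    print(f"{name:>8}: {change:+.1%} change in travel time")
```

Parameters appearing in the numerator shift the output by +10%; those in the denominator shift it by about −9%, which is the kind of ranking a sensitivity study produces.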
In groundwater contamination modeling the dependent variables are the hydraulic head (which is the dependent variable in the flow equation) and the concentration (which is the dependent variable in
the transport equation)(Freeze et al, 1990). The model input-parameters that are subject to uncertainty are hydrogeologic parameters such as porosity, hydraulic conductivity, dispersivity, and
moisture content. These parameters may be homogeneous or heterogeneous and isotropic or anisotropic over space or time. The magnitude of the parameter uncertainty is a function of the spatial
heterogeneity and temporal variability of aquifer properties, boundary conditions, dependent variables, density of observation points, and the measurement techniques.
If parameters are known with certainty, then they can be modeled deterministically. In deterministic analysis, a base case is created using best estimates of the parameters and then a sensitivity
analysis can be done to determine the impact of varying different parameters. If not, then probabilistic uncertainty analysis is used to quantify uncertain quantities (random variables). Probability
analysis can be done using classical statistics or Bayesian updating. Classical statistics require measured data to create probability density functions (PDF). In the Bayesian approach a prior
estimate of the probability density function is made. Prior estimates may be based on limited data or subjective information. When additional data becomes available, it is used to update the prior
estimates of the statistics to posterior estimates using Bayes theorem (Aczel, 1989).
Bayesian statistics assume those population parameters such as the mean and variance are random variables rather than fixed quantities, as in the classical approach. An advantage of the Bayesian
approach is the possibility of carrying out a sequential analysis —continually updating the probability distribution function as additional information becomes available (Aczel, 1989).
Stochastic Simulation
A stochastic process is the concept of statistically characterizing a family of random variables that are a function of time. Stochastic simulation of uncertainty in the input parameters is addressed
with probability density functions. These uncertainties are propagated by three different methods (Freeze et al., 1990):
a. First-order moment analysis uses the first two moments (expected value and variance) of the input parameters. For a continuous random variable X with probability density function f(x), the expected value is given by the following equation:

E[X] = ∫ x f(x) dx
Variance is a measure of dispersion of the random variable around its mean. The smaller the variance, the more sharply concentrated is the probability density function around its mean value. The
square root of the variance is called the standard deviation of a random variable. The ratio of standard deviation to the mean is called the coefficient of variation denoted by CV. According to Harr
(1987), the CV is a fairly stable measure of variability for homogeneous conditions. Harr’s rule of thumb for CV: below 10% is low, 15-30% is moderate, and above 30% is high. Table 1 lists
sample coefficients of variation (CV) measured for different soil water and solute properties in unsaturated fields. First-order analysis is limited to linear or nearly linear systems where the CV is
less than one (Peck et al, 1988).
b. In perturbation analysis the output variable and the input parameters are defined in terms of a mean plus a perturbation about the mean. During the simulation an algorithm tracks what would have
happened if the parameter had not been “perturbed” or greatly disturbed. This method is best suited to analytical solutions with a small coefficient of variation in the input parameter (Freeze et
al., 1990).
c. Monte Carlo simulation requires that the complete PDF be known for each input parameter. There are three types of Monte Carlo techniques: 1) direct Monte Carlo, 2) Monte Carlo with variance
reduction, and 3) Latin Hypercube. The random sampling technique of the Monte Carlo simulation does not adequately sample the tails of the distribution of key model parameters and inputs. This
weakness is overcome with the Latin Hypercube sampling technique (Faruqui, 1990). Shields et al. (1989) point out the significance of using Monte Carlo simulation in risk assessment by noting that it not only provides the probability of occurrence but also prevents the use of unrealistic combinations of input, which can happen with worst-case analyses.
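As a sketch of how the first of these methods compares with the third, the fragment below propagates a lognormally distributed conductivity through a toy output function g(K) = 1/K (a stand-in for any nearly linear model response; all values are illustrative, not from this study) and checks the first-order moment estimates against a direct Monte Carlo sample:

```python
import math
import random

# First-order (method a) estimates of the mean and variance of a toy
# model output g(K) = 1/K, checked against direct Monte Carlo (method c).
# K is lognormal with a small CV, where first-order analysis is valid.
mean_k, cv = 0.1, 0.2                 # illustrative mean and CV of K
var_k = (cv * mean_k) ** 2

g = lambda k: 1.0 / k                 # toy model response
dg = lambda k: -1.0 / k ** 2          # its derivative

fosm_mean = g(mean_k)                 # E[g] ~ g(E[K])
fosm_var = dg(mean_k) ** 2 * var_k    # Var[g] ~ g'(E[K])^2 * Var[K]

# Direct Monte Carlo: sample K from its lognormal PDF.
sigma2 = math.log(1.0 + cv ** 2)
mu = math.log(mean_k) - 0.5 * sigma2
rng = random.Random(0)
out = [g(rng.lognormvariate(mu, math.sqrt(sigma2))) for _ in range(200_000)]
mc_mean = sum(out) / len(out)
mc_var = sum((x - mc_mean) ** 2 for x in out) / len(out)
```

At this small CV the two methods agree to within a few percent; at the 320% CV reported for saturated conductivity (Table 1) the first-order estimates break down, which is one reason Monte Carlo simulation was used in this work.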
Vadose Zone Hydraulic Conductivity
As Freeze et al. (1990) note, the input parameter with the largest uncertainty is the hydraulic conductivity. For this project parameter uncertainty analysis was applied to the hydraulic conductivity
used in the vadose zone model. The frequency distribution of hydraulic conductivity is assumed to be lognormal. This is a reasonable assumption because many physical variables in nature are best
represented by lognormal distributions (Davis, 1986; RISK, 1992). Work done by Woldt et al. (1992), Freeze (1975), and Hoeksema and Kitanidis (1985) also supports the use of the lognormal distribution
for the hydraulic conductivity.
The intent of this paper is to address parameter uncertainty by generating PDF’s for each uncertain parameter beginning with the hydraulic conductivity. Values from these PDF’s would be entered as
starting values for each iteration of the model. Each iteration of the model would produce values for the output PDF. Values from the output PDF would be used as input for the next model. Ideally,
all the models could be coupled through the computer so that the output from one model becomes the input for the next model with an overall Monte Carlo simulation occurring for each iteration of the
combined models. This would allow uncertainties in the hydrogeologic parameters to be automatically propagated through the models, thus providing a probabilistic risk assessment to human health. For
this preliminary work, the models are manually coupled.
For this paper, a simulation was only conducted for the hydraulic conductivity used in the vadose zone model. The mean unsaturated-hydraulic-conductivity value was generated by the RETC model (see the vadose modeling section). Using the mean value and the coefficient of variation (320%) for saturated hydraulic conductivity from Table 1, the standard deviation was calculated. (This approach was
taken because of very limited data on the unsaturated Gila conglomerate.) The lognormal distribution requires only the mean and the standard deviation to create a distribution. The PDF was generated
by the RISK program (RISK, 1992). RISK is an add-on to the Lotus 123 spreadsheet that does Monte Carlo simulations based on a selected probability distribution such as the lognormal distribution.
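The conversion RISK performs can be sketched as follows; the mean conductivity below is a hypothetical placeholder, since the RETC output for the site is not reproduced here:

```python
import math
import random

# Build the lognormal hydraulic-conductivity PDF from its arithmetic
# mean and CV = 3.2 (320%, Table 1), as the RISK add-in does internally.
mean_k = 1.0e-4                  # hypothetical mean K, cm/s (illustrative)
cv = 3.2
std_k = cv * mean_k              # standard deviation = CV x mean

# Parameters of the underlying normal distribution:
sigma2 = math.log(1.0 + cv ** 2)
mu = math.log(mean_k) - 0.5 * sigma2

rng = random.Random(42)
draws = [rng.lognormvariate(mu, math.sqrt(sigma2)) for _ in range(100_000)]
sample_mean = sum(draws) / len(draws)   # should recover mean_k approximately
```

Every draw is positive, as a hydraulic conductivity must be, which is one practical advantage of the lognormal assumption.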
Estimating Infiltration From Copper Leach Operations Using The Help Model
This section estimates the water balance associated with the movement of copper leaching solution at a copper leaching operation by using the Hydrologic Evaluation of Landfill Performance (HELP)
(Schroeder et al., 1992) computer model. Determining the potential of a copper leaching solution to infiltrate a geologic formation underlying a copper leaching operation is the first stage of
determining the potential of a copper leaching operation to pollute an aquifer. The leachate quantity entering the underlying formation, which could be a vadose zone, is required for modeling the
potential for leachate contamination of the underlying formation. Data used in this modeling effort was gathered from literature reviews, trip reports on Arizona and New Mexico copper mines, and
state agencies.
The HELP model was chosen for its appropriateness to the task, availability, and broad, in-depth user base. Hutchison and Ellison (1992) used HELP to predict the infiltration and percolation rates in mine waste piles. Williams (1993) used the HELP model to determine the performance of various tailings cover configurations in Australia. Peyton and Schroeder (1992) field-verified the HELP model for landfills at six different sites in California, Kentucky, and Wisconsin.
HELP Model
The HELP model evaluates the hydrologic performance of landfill designs by mathematically tracking water movement across, into, through, and out of landfills. Daily runoff, evapotranspiration,
percolation, and lateral drainage for a landfill are calculated by the model. HELP does not model solute transport through the vadose zone or the saturated zone. It is a quasi-two-dimensional,
deterministic, water-budget model adapted from the HSSWDS (Hydrologic Simulation Model for Estimating Percolation at Solid Waste Disposal Sites) model of the U.S. Environmental Protection Agency,
CREAMS (Chemical Runoff and Erosion from Agricultural Management Systems), and SWRRB (Simulator for Water Resources in Rural Basins) models of the U.S. Agricultural Research Service.
Model Input
Model input consists of three general data sets: soil data, site data, and climatological data. The soil data portion of the model contains two options for entering data: default values or manual
entry. The manual soil-data entry requires the following information:
a. number of soil layers (12 maximum)
b. layer types
c. thickness
d. wilting point
e. field capacity
f. porosity
g. saturated hydraulic conductivities
h. leakage fractions for synthetic membrane liners
i. runoff curve number
Figure 4 shows the various components of water in the soil.
Site data describes the landfill physical composition with descriptors such as layer types and soil textures. Landfill profile type refers to the different soil layers that the model uses. The model
uses three generic layers to categorize water movement. The user must identify the layers used in each modeling application. These layer types are listed below:
1. Vertical percolation layer — the flow is either downward due to gravity drainage or upward due to evapotranspiration. The rate of percolation is assumed to be independent of conditions in
adjacent layers. Waste layers and vegetation layers are examples of vertical percolation layers.
2. Lateral drainage layer — flow is in the vertical and lateral directions. A barrier soil is usually placed immediately below a lateral drainage layer. For lateral drainage layers, the model has
the following processing limitations for the lateral layer base: 0 to 30 percent slope and drainage length of 7.62 m to 762 m.
3. Barrier soil layer — restricts vertical flow. The model uses two types of barrier layers: those composed of soil alone and those composed of soil overlain by an impermeable synthetic membrane.
The climatological data is entered by one of three options: default, manual, or synthetic precipitation. For each of the options the daily temperature and solar radiation data are generated
stochastically. Synthetic daily temperature is created from the mean monthly temperature and daily rainfall. Synthetic daily solar radiation is generated from the latitude, daily rainfall, average dry-day solar radiation, and average wet-day solar radiation.
Model Assumptions
The developers of the HELP model made the following assumptions when creating the code for the model:
1. The entire landfill lies above the water table.
2. Surface runoff from adjacent areas does not run onto the landfill.
3. Physical characteristics of the landfill remain constant over the modeling period.
4. Darcy flow through the soil and waste layers.
5. Flow through fractures, root holes or animal burrows is not included.
6. Percolation always occurs over the entire barrier layer.
Estimating Infiltration
Entering Soil Data: To run the program, the input module is first run to manually input soil data and climatological data, or use the default values and synthetically generate climatological data.
Obtaining soil data for the ore and Gila conglomerate has been one of the most time consuming efforts of the project. Schlitt (1992) and Bartlett (1992) also noted the lack of heap permeability data
and vadose zone data. Data have been compiled from various U.S. Bureau of Mines (USBM) personnel data-gathering trips to Arizona and New Mexico. These data provide some but not all the information
needed to accurately profile the heap and Gila Conglomerate. Soil data characteristics used in this project are best estimates based on available information.
Dump leach operations may have difficulty lining the surface prior to placing ore for leaching because of slope and terrain features (see Table 5). However, for this project, the ore is assumed to be
placed directly on the Gila Conglomerate, which contains calcium carbonate and clays as a matrix material. As noted by Bartlett (1992), carbonates and soluble calcium bearing silicate minerals cause
precipitation of gypsum, which has a molecular volume much greater than the minerals it replaces. In addition, formations that contain clay or minerals that weather to clay quickly lose permeability.
A 30.5 cm thick clay layer was used to simulate the reduction in permeability at the interface between the heap and the foundation layer. Van Zyl (1993) also noted that as the solution comes in
contact with clay, gypsum will precipitate and decrease the permeability of the clay. The model requires a clay layer (barrier layer) in order to generate lateral drainage.
Entering Climatological Data: As mentioned previously, the input module also incorporates the climatological data. Three options are available as noted above. To simulate the irrigation of the heap
for 90 days each year over 5 years, rainfall data was manually entered for each day. This was accomplished by using the average monthly rainfall data from Miami, Arizona from 1913 to 1986 (Hydro Geo
Chem, 1989) combined with the solution application rate for typical copper leaching operations. Rainfall data used in the model is listed in Table 3. The model could be improved by entering actual daily rainfall amounts for five different years. However, for this modeling effort the rainfall data is identical for each year. Refer to Tables 4 and 5 for more detailed information on solution application rates. Note that the application rate has been converted to millimeters of rainfall over a 13,935 sq. meter area for a 90-day period. Currently available information suggests that the mining operations' cycle times limit the irrigation of a leach pad to roughly once a year. For modeling purposes the 90-day period was placed at the beginning of the calendar year; more extensive modeling could examine placing the 90-day period over a 12- to 18-month cycle.
The application rate is calculated by dividing the leach-solution volumetric flow rate by the surface area on which the solution is actually being applied. Regardless of the application rate, the
solution velocity through the heap is controlled by the hydrostatic head and the permeability of the heap. The permeability is a function only of the medium through which the fluid flows. It is
usually measured in darcies or sq. cm. Table 2 lists the hydraulic conductivity and permeability range of values for different rock types.
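The conversion from a volumetric application to an equivalent rainfall depth over the 13,935 sq. meter pad can be sketched as below; the flow rate is hypothetical, chosen only to illustrate the arithmetic:

```python
# Convert a hypothetical leach-solution flow into the equivalent
# rainfall depth (mm/day) entered into HELP for the 90-day cycle.
area_m2 = 13_935.0            # leach pad area from the report
flow_m3_per_day = 500.0       # hypothetical application, m^3/day

depth_m_per_day = flow_m3_per_day / area_m2   # application rate, m/day
depth_mm_per_day = depth_m_per_day * 1000.0   # mm/day for HELP input
total_mm_90_days = depth_mm_per_day * 90.0    # depth over one leach cycle
```

This is the same arithmetic regardless of units: a volumetric flow divided by the wetted area gives a depth per unit time.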
The irrigation rate is a measure of the intensity of leaching; Bartlett (1992) defines it as the volume of leach solution applied per unit area of heap surface per unit time.
The modeling of infiltration was done with a generic vertical profile of the ore heap and foundation material. The heap was simulated as a vertical percolation layer which would reflect the
unsaturated conditions needed for iron-oxidizing or sulfur-oxidizing bacteria such as Thiobacillus thiooxidans and Thiobacillus ferrooxidans to become active (Peters, 1991). As noted in Figure 5,
the heap overlays a 0.6-m thick drainage layer which is sitting on a 0.3-m compacted clay layer. Underlying the clay layer is the Gila conglomerate with thicknesses varying from hundreds to thousands
of meters. Only 15.2 meters of the Gila conglomerate were modeled. Table 6 shows the soil and site data used for modeling. The HELP model simulated leaching over five years and produced the following
water balance results: precipitation = +100%; evaporation = -1.6%; Gila conglomerate infiltration = -0.4%; and drainage to the collection pond = -98%. These distributions are shown in Figure 5. As
noted in the caveats of the introduction, this preliminary water balance is not based on site-specific data, nor has the model been calibrated. However, the infiltration amount is similar to Bartlett's (1992) estimate of one percent of the recirculating solution flow rate for a leach dump placed on unlined ground.
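As a quick consistency check, the reported components should close the water budget to within rounding; any residual would represent a change in moisture storage:

```python
# Water-balance closure check for the five-year HELP results,
# with all components expressed as percentages of precipitation.
precipitation = 100.0   # %
evaporation = 1.6       # %
infiltration = 0.4      # % into the Gila conglomerate
drainage = 98.0         # % lateral drainage to the collection pond

residual = precipitation - (evaporation + infiltration + drainage)
# residual is the implied storage change; here the budget closes
```
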
Modeling of Contaminant Transport in Fractured Rock
The issue of flow in fractured rock is important because most, if not all, geologic formations contain fractures whether they are very minute microfractures or major fault systems. Contaminant
transport in fractured formations is complicated because of the many unknowns associated with fractured formations and the cost in time and money to gather the necessary data. These unknowns are
compounded even more when solute transport is examined in fractured unsaturated versus fractured saturated medium. Representing these complex systems with computer models is a science that is in the
research and development stages.
Rock systems
Fractures are physical discontinuities within the rock medium. These discontinuities were formed during various development stages of the rock medium. The physical discontinuities may consist of
fractures, joints, or shear zones, all of which are affected by the internal and external pressures on the rock medium. The fracture apertures can be open, mineral-filled, deformed, or a combination
of these. Fracture permeability can also be reduced by gouge and low-permeability fracture skin — surfaces of the fracture that are exposed to the solute.
Fractured rock has primary and secondary porosity. Primary porosity relates to the pore space formed at the time of deposition and diagenesis of the rock medium. Secondary porosity is created during fracturing and weathering of the rock mass (EPA, 1989). With low primary permeability, the secondary porosity and permeability tend to increase with increasing fracturing and weathering. Metamorphic and igneous rocks have permeabilities in the range of 10⁻¹² to 10⁻¹⁶ cm² (Anderson & Woessner, 1992). Sedimentary rocks such as shale, claystone, and siltstone have high matrix porosities but low permeabilities, whereas sandstone has high matrix porosity with significant matrix permeability (van der Heijde and El-Kadi, 1989). In general, for a fractured rock system, secondary permeability can increase the effective hydraulic conductivity by five orders of magnitude (Anderson & Woessner, 1992).
Contaminant transport
Contaminant transport in fractured, saturated rock formations is controlled by the same transport processes that occur in porous saturated formations: advection, mechanical dispersion, diffusion,
sorption, and chemical and biological processes. Major factors affecting flow in fractured rock are the fracture density, orientation, effective aperture width, and the nature of the rock matrix. The
rate of contaminant movement into and out of the rock matrix depends on the low-permeability fracture skins, matrix permeability, and the matrix diffusion coefficient of the contaminant (Schmelling
and Ross, 1989). Mechanical dispersion within fractured rocks results from the following: 1) mixing at the fracture intersections; 2) variations in fracture aperture width; and 3) variations in
aperture width along stream lines. Van der Heijde (1989) believes the major contributor to dispersion in fractured media is the geometry of the network of interconnected discontinuous fractures.
Miller (1993) recommends that the following information is needed to accurately calculate fracture flow dispersion:
1. Directional components of groundwater flow.
2. Hydraulic conductivity.
3. Direction of hydraulic conductivity.
4. Number of fracture families.
5. Fracture spacing of each fracture family.
6. Strike of fracture family.
7. Frequency of fracture family occurrence.
8. Standard deviation of the spacing of individual fracture segments.
9. Average porosity of the fracture family.
Types of models
Presently, several different geometric concepts for modeling fracture flow have been proposed or tested. The three basic conceptual models are 1) equivalent porous media, 2) discrete fracture, and 3) dual porosity.
The equivalent porous media model treats the fractured rock system like an unconsolidated porous medium. The method replaces the various primary and secondary porosities and hydraulic conductivities
of different fractures with one set of effective hydraulic properties representing a continuous porous medium. This method is useful when the spacing of the fractures is small compared to the scale
of the system being studied. The equivalent porous media model provides a good representation of a regional flow system. However, it does not accurately represent local conditions.
Discrete fracture models idealize the fracture geometry, treating the fractures as flow channels with parallel sides. This idealization assumes that the parallel sides have a uniform separation equal
to the fracture aperture. This model assumes that water moves only through the fracture network. It is normally used to model fractured media with low permeability such as crystalline rocks. An
accurate description of the fracture network, including fracture apertures and geometry, is required. According to Schmelling and Ross (1989), for practical purposes it is impossible to define the
fracture system at a site in fine enough detail to apply this model.
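The parallel-plate idealization leads to the well-known "cubic law," in which flow through a single fracture scales with the cube of its aperture; the sketch below uses illustrative values for water at 25 °C:

```python
# Cubic-law flow through a single smooth parallel-plate fracture:
#   Q = (rho * g * b^3) / (12 * mu) * W * i
# so flow scales with the cube of the aperture b.
RHO = 998.0     # water density, kg/m^3
G = 9.81        # gravitational acceleration, m/s^2
MU = 1.0e-3     # dynamic viscosity of water, Pa*s

def fracture_flow(aperture_m, width_m, gradient):
    """Volumetric flow (m^3/s) through one fracture of the given
    aperture and width under a hydraulic gradient."""
    return RHO * G * aperture_m ** 3 / (12.0 * MU) * width_m * gradient

q_100um = fracture_flow(100e-6, 1.0, 0.01)
q_200um = fracture_flow(200e-6, 1.0, 0.01)
# doubling the aperture increases the flow eightfold
```

The cubic dependence is why small errors in measured apertures produce large errors in predicted fracture flow, one of the data problems noted above.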
If the rock formation has significant primary permeability, the dual porosity model may be used. The dual porosity model uses one porosity value for the fracture system and one porosity value for the
porous rock matrix. This model assumes that flow through the fractures is accompanied by movement of solute into and out of the surrounding porous rock matrix.
The review of current literature shows that modeling of solute transport in fractured rock formations is still in the research and development phase. Presently, the available models require
assumptions that vastly over-simplify the actual fractured system. The cost to reduce the oversimplification is prohibitive for most practical applications. To date no models have been
field-validated. Therefore, the mine site risk assessment project did not attempt to model fracture flow.
Geochemical Modeling
Conceptual Approach
Currently available codes for modeling subsurface contaminant transport combine both chemical and transport expressions to capture the complexity of the physicochemical processes occurring between
the contaminants and the reactive matrix through which the contaminants flow. Most of these models are limited to a constant ground water velocity moving in either one or two dimensions (Mangold and
Tsang, 1991). These models are based on either one of two available approaches: (1) substitute variables from several expressions to form one set of equations, or (2) solve the chemical relationships
and transport equations sequentially at each time or distance step.
It was decided to use the second approach in designing a manually coupled model to describe the contaminant transport by linking transport model SWMS 2D (Simunek et al., 1992) with the equilibrium
geochemical model MINTEQA2 (Allison et al., 1991). The modeling is accomplished by determining the geochemistry of segments of Gila conglomerate over a range of pH values using MINTEQA2. The
geochemical output for each segment is used to derive effective first-order rate constants for the precipitation of copper and aluminum.
Complexation and subsequent precipitation of dissolved metals can have a significant effect upon the fate and transport of these metals through porous unsaturated and saturated subsurface
environments containing carbonate minerals. The calcite acts as a base and will raise the pH of the solution. This increase in the pH will allow many of the metallic contaminants, including Fe, Cu,
and Al to form complexes in solution. In many cases the complexes formed are solids and will precipitate out of solution, which will retard the migration of the contaminants (Eychaner, 1989). The
rate of precipitation will not be constant, but will depend upon variables such as pH, metal concentration, redox potential, bacteria, etc. Thus, to model the transport of the metal contaminants, the
combined effect of these variables on the rate of precipitation must be taken into account. The approach chosen to model this combined effect of variables consisted of using MINTEQA2 to determine the
mass distribution of the chemical species of interest as the solution reacts with the calcite in the system for given volume increments of Gila conglomerate and pregnant leach solution (PLS). Thus,
by dividing the system into individual discrete increments, the effective first-order precipitation rate constants for each segment can be approximated. These rate constants will then be used in
future efforts to model the transport of the contaminants through the unsaturated zone using the variably saturated transport code SWMS 2D (Simunek et al., 1992). The sum of these increments should
then describe the overall behavior of the contaminant plume as it migrates through the Gila conglomerate.
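The manual coupling just described is a sequential, non-iterative operator-splitting scheme, which can be sketched as below; `transport_step` and `equilibrate` are hypothetical stand-ins for SWMS 2D and MINTEQA2 (which are run by hand in this work), and the solubility rule is a deliberately crude placeholder:

```python
# Sketch of the manual coupling loop: transport and equilibrium
# chemistry are solved sequentially for each volume increment.

def transport_step(conc, retardation=1.0):
    """Placeholder advection step: pass the solution to the next
    segment, attenuated by a simple retardation factor."""
    return {sp: c / retardation for sp, c in conc.items()}

def equilibrate(conc, ph):
    """Placeholder equilibrium step: precipitate a pH-dependent
    fraction of each metal (crude stand-in for MINTEQA2)."""
    frac_remaining = max(0.0, 1.0 - 0.1 * ph)   # toy solubility rule
    return {sp: c * frac_remaining for sp, c in conc.items()}

conc = {"Cu": 2.0, "Al": 1.5, "Fe": 1.0}        # g/L, illustrative PLS
ph = 3.5
for segment in range(5):                        # five volume increments
    conc = transport_step(conc)                 # move into next segment
    conc = equilibrate(conc, ph)                # react with the matrix
    ph = min(7.0, ph + 0.25)                    # calcite raises the pH
```

In a fully automated version the two placeholder functions would be replaced by calls that write and read the input and output files of the actual codes.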
PLS Characteristics
Copper leaching, solvent extraction, and electrowinning (L-SX-EW) have been traditionally used to recover copper and other metals from low grade ore. Since this work is not directed toward
determining the potential for contamination at a specific site, a generic site was chosen. This generic site combines the characteristics of several copper mining operations in the region. The PLS
that drains from the heap is heavily laden with dissolved metals (see Table 6).
Gila Conglomerate Composition
The Gila conglomerate primarily consists of deposits of sand, pebbles, cobbles, and boulders of local origin cemented with calcite. The Gila conglomerate is periodically interbedded with layers of
sand, tuff, and sheets of basalt. The Gila conglomerate overlies all older deposits. It rests upon erosional surfaces with considerable angular unconformities and relief. The features of the deposit
are typical of alluvial fans which are laid down by periodic floods and intermittent streams. The character and composition of the Gila conglomerate will differ from site to site depending upon the
source of the deposits and upon the amount of transportation and weathering they have undergone. The deposits range from completely unsorted and unconsolidated rubble of angular boulders up to 15
feet in diameter to well-stratified deposits of firmly cemented sand, silt and gravel containing well rounded pebbles and cobbles (Peterson, 1962).
The only uniform characteristic of the Gila conglomerate of geochemical importance is the calcite, which acts as a cementing agent. The amount of the calcite varies, but it has been nominally
reported to be 1.5% by weight (Eychaner, 1989). Besides the calcite, the only other characteristic which is consistent at each site is that the predominant nature of the geology in the region
consists of old crystalline schists, which have been highly deformed and intruded in various areas by igneous rocks. Therefore, in order to determine the mineral characteristics which are required
inputs into MINTEQA2, a typical composition was determined. This composition was assumed to be approximately the same as that of the Schultze granite reported by Peterson (1962) and, after comparison
with the minerals available in the MINTEQA2 database, was assumed to approximate the composition shown in Table 8.
For the geochemical modeling, the input of the mineral phases present requires a concentration in moles of mineral per liter of solute. These concentrations were determined based upon the nominal
chemical compositions of the minerals (Mottana et al., 1978; Pough, 1983) and are tabulated in Table 9.
After running MINTEQA2 several times with all or only a few of the minerals in Table 8 present, it was determined that besides calcite, only anorthite seems to provide significant buffering of the
PLS. While the silicate minerals will provide buffering, the kinetics of most of these reactions are very slow when compared to the calcite and carbonate reactions (Stollenwerk and Eychaner, 1989).
To further simplify the geochemical modeling, quartz, albite, microcline, muscovite, and magnetite were not included in the geochemical modeling.
Geochemical Modeling
The overall problem is very complex, with several separate reactions occurring for each ionic species as the PLS is introduced to the Gila conglomerate. It is not within the scope of any modeling
project to account for all of the possible reactions, only those which dominate the overall behavior. Thus, the two processes included in the geochemical modeling are precipitation and redox behavior. As was earlier stated, it has been shown that in sites like the one being characterized, the precipitation of metals as the pH increases is the primary cause of retardation of the contaminant migration.
Calcite: The dissolved metallic constituents in the acidic ground water will be retarded as they migrate through the Gila conglomerate. As the calcite dissolves, the pH of the PLS will rise, which will make the metallic species less soluble and they will precipitate. The total concentration of calcite is 1.5% by weight, but as earlier stated, the amount which will react with the PLS will be a fraction of the total. Thus, to simplify the modeling it will be assumed that 10% of the calcite will react with each passing volume increment of PLS.
The dissolution and precipitation kinetics of calcite have been reported to consist of four separate independent reactions depending upon the pH and Pco2. Thus, depending upon the system, the rate of the calcite dissolution will be dominated by the rate constants k1, k2, k3, or k4 in the Plummer et al. (1979) expression. For the purposes of this model, the dissolution of calcite will be assumed to consist of reactions with CO2(g) and H⁺. The reaction with CO2 is:
CaCO3 + CO2(g) + H2O = Ca²⁺ + 2HCO3⁻
where log Keq = -5.97 at 25 °C. The reaction can also be written in terms of H⁺:
CaCO3 + H⁺ = Ca²⁺ + HCO3⁻
where log Keq = 1.85 at 25 °C.
In this model, the Pco2 is being held constant at 0.01 bar, thus the overall dissolution of calcite in the model will be controlled by the pH of the PLS as it migrates through the Gila conglomerate.
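As a back-of-envelope check on these constants, the CO2(g) expression fixes the calcite-saturated calcium level at the modeled Pco2. Assuming ideal behavior (activity equal to concentration) and the simplified charge balance [HCO3⁻] = 2[Ca²⁺], neither of which MINTEQA2 assumes, the dissolved calcium works out to roughly 1.4 mmol/L:

```python
# Back-of-envelope calcite solubility at the modeled P_CO2, using
#   CaCO3 + CO2(g) + H2O = Ca2+ + 2 HCO3-,  log Keq = -5.97 (25 C).
# Assumes ideal behavior and [HCO3-] = 2[Ca2+] (dilute, no other ions).
log_keq = -5.97
p_co2 = 0.01                                   # bar, held constant

# Keq = [Ca][HCO3]^2 / P_CO2 = 4[Ca]^3 / P_CO2, so:
ca_molar = (10 ** log_keq * p_co2 / 4.0) ** (1.0 / 3.0)
ca_mg_per_liter = ca_molar * 40.08 * 1000.0    # as mg/L Ca
```

This simple estimate ignores the acidity of the PLS, which drives additional dissolution via the H⁺ reaction; the full speciation is left to MINTEQA2.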
Aluminum: The concentration of the aluminum is also dependent upon the pH of the solution. As the pH increases, the solubility of the aluminum will decrease until a pH of around 7, when the
solubility should then begin to increase (Langmuir, 1992). The solubility of aluminum in this system is assumed to be dependent upon the behavior of three aluminum minerals, a basic Al sulfate
mineral (AlOHSO4), basaluminite (Al4(OH)10SO4), and gibbsite (Al(OH)3). Other Al minerals could possibly form, and show positive saturation indices (SI) during the MINTEQA2 runs, but once again the
assumption is that the kinetics of such minerals forming are not favorable.
Iron: The solubility of iron is, like all of the other contaminants in this study, related to the pH. As the pH rises, the iron will become less soluble and will precipitate. The solubility is not controlled solely by the pH, however; it is also controlled by the redox potential of the ground water. There are a number of minerals which show positive saturation indices in MINTEQA2. Of these, only amorphous ferric hydroxide [Fe(OH)3] has been found to precipitate at the Globe, Arizona site (Stollenwerk and Eychaner, 1989). However, Stollenwerk and Eychaner (Smith, 1991) have determined
that there is some difficulty in matching the Keq for amorphous Fe(OH)3 with the ion activity product (IAP) because of the uncertainty presented with the measurement of the Eh.
To determine a more reasonable estimation of the Eh of the system, it was assumed that equilibrium existed between total dissolved Fe and amorphous Fe(OH)3. By maintaining pH and total dissolved Fe
at the values measured in the water samples and allowing Eh to vary until equilibrium with Fe(OH)3 was attained and the SI = 0, the Eh value was determined to average about 0.55 volt (Stollenwerk and
Eychaner, 1989).
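This inference can be reproduced approximately with the Nernst equation for the Fe(OH)3(am)/Fe²⁺ couple. The log K of 17.9 used below is a commonly quoted value for the amorphous phase, though it varies from solid to solid (exactly the difficulty noted above), and the pH and Fe²⁺ values are illustrative:

```python
import math

# Nernst-equation check of the inferred Eh, assuming equilibrium with
# amorphous ferric hydroxide:
#   Fe(OH)3(am) + 3 H+ + e- = Fe2+ + 3 H2O,  log K ~ 17.9 (assumed)
LOG_K = 17.9
SLOPE = 0.0592          # (RT/F) * ln(10) at 25 C, in volts

def eh_volts(ph, fe2_molar):
    """Eh (volts) at which dissolved Fe2+ is in equilibrium with
    amorphous Fe(OH)3 at the given pH."""
    return SLOPE * (LOG_K - 3.0 * ph - math.log10(fe2_molar))

eh = eh_volts(ph=4.0, fe2_molar=4.0e-4)   # illustrative acidic-plume values
```

With these inputs the computed Eh lands near the 0.55 volt average cited above, which suggests the equilibrium assumption is at least self-consistent.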
Copper: Copper, like aluminum, can precipitate as more than one mineral depending upon the pH of the PLS. The minerals identified as possible precipitates are chalcanthite [CuSO4·5H2O], antlerite [Cu3(SO4)(OH)4], brochantite [Cu4(SO4)(OH)6], and tenorite [CuO]. The sulfate minerals would be expected to precipitate first at the lower pH values. As more of the sulfate is depleted, the precipitation of the sulfate minerals becomes unfavorable and tenorite will begin to precipitate.
Manganese: Manganese, like iron, is controlled by both pH and Eh. Manganese has been observed to closely follow the acidic front at the Globe site (Lind, 1991; Haschenburger, 1991; Stollenwerk and Eychaner, 1989). Thus, the manganese stays as soluble Mn²⁺ and ion pairs over most of the pH range encountered in the system. This indicates that the Eh must be used to properly model the behavior of manganese. The mineral identified as a probable precipitate under the conditions of this system is birnessite (MnO2), which should not begin to precipitate until the solution is almost completely neutral; manganese therefore remains mobile, as was the case with the iron system.
MINTEQA2 (Allison, et al., 1991) is a geochemical equilibrium speciation model capable of computing equilibria among the dissolved, adsorbed, solid, and gas phases in an environmental setting.
MINTEQA2 includes an extensive database of reliable thermodynamic data which is also accessible to PRODEFA2. PRODEFA2 is an interactive program which is designed to be executed prior to MINTEQA2 for
the purpose of creating the required MINTEQA2 input file.
The data required for MINTEQA2 to predict the equilibrium composition consists of a chemical analysis of the sample to be modeled giving total dissolved concentrations for the components of interest
and any other relevant invariant measurements for the system of interest, including pH, pE, or the partial pressures of one or more gases. A measured value of pH and/or pE may be specified or
MINTEQA2 can calculate equilibrium values. Also, a mineral may be specified as presumed present at equilibrium, but subject to dissolution if equilibrium conditions warrant, or definitely present at
equilibrium and not subject to complete dissolution.
The primary advantage of MINTEQA2 is the flexibility it gives to the user. By using MINTEQA2, the user is allowed to solve a variety of environmental problems involving aqueous and solid phases,
redox, and adsorption. The only disadvantage with using MINTEQA2 is that since it is an equilibrium model it will not take kinetics into account. Thus, the user must carefully scrutinize the inputs
and use care when examining the results to take into account kinetic constraints.
To estimate the rate constants, the change in solution conditions over the expected range of pH values must be determined. Initially the solution composition is as shown in Table 6, then as the
solution reacts with the calcite and anorthite, the pH will rise. By setting the pH for each increment, the relative depletion and the mineral phases present were determined. The solution
composition after equilibration at each pH then becomes the influent concentration for the next iteration with an increase of 0.25 pH. By using this stepwise process, which is similar to using the
sweep function present in MINTEQA2, the solubility behavior of the solution can be determined. As can be seen in Figure 6, as the pH increases, the total dissolved concentrations of aluminum, iron,
manganese, and copper decreases. The iron shows the greatest change in solubility followed by aluminum, copper, and manganese.
Estimation of Precipitation Rate Constants: After determining the behavior of the species of interest over the range of pH values which could be observed, the next step was to determine what the pH
values would be in each of the segments which are to be modeled. This was accomplished by first entering the concentrations from Table 7 into MINTEQA2 along with the concentrations of calcite and
anorthite and letting the program determine the equilibrium pH for each segment.
Once again the results of each iteration contained the solution conditions for the next iteration, thus allowing the determination of rate constants for each segment. The first iteration of rate
constants will not be exactly accurate in that the time considered will only be the amount of time it takes water to filter through one volume increment of Gila conglomerate neglecting dispersion and
other attenuating variables. Each segment is then approximated to be 33.3 cm in depth. The 33.3 cm is determined by first assuming that the porosity of the Gila conglomerate is 0.30 (Stollenwerk and
Eychaner, 1989). It then takes 3.33 volumes of Gila conglomerate with a porosity of 30% to absorb one volume of PLS, or 3.33 liters of Gila conglomerate for every liter of PLS. The Gila conglomerate is treated in one-liter increments. Assume that each one-liter increment can be described by a cube with equal dimensions of 10 cm. Thus, if 3.33 liters of Gila conglomerate are used to accommodate one liter of PLS, the depth of infiltration of one unit volume of PLS is 3.33 x 10 cm = 33.3 cm. The 33.3 cm depth then becomes the initial thickness of each segment.
The hydraulic conductivity of the Gila conglomerate varies, depending upon the depth of the sample, and upon whether the sample is fractured or unfractured. The saturated horizontal hydraulic
conductivity ranges between 0.084 to 1.34 m/day (Bechtel, 1981) with the lower reading coming from a deep unfractured sample and the higher reading coming from a fractured surface sample. The
vertical hydraulic conductivity has been reported to be one or two orders of magnitude less that the horizontal value (Hydro Geo Chem, 1989). Thus, the initial value chosen for the hydraulic
conductivity is 0.1 m/day. Assuming that the change in piezometric head is equal to the travel distance, the hydraulic gradient can be taken as unity. It can be shown using Darcy’s
Law that the time it takes for the PLS to migrate through the Gila conglomerate is proportional to the hydraulic conductivity. Thus, it would take approximately 3.33 days for water to filter through
each segment.
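The travel-time estimate above can be sketched in a few lines; the segment thickness (33.3 cm), hydraulic conductivity (0.1 m/day), and unit gradient are the report's stated values, while the helper function is an illustrative construction of mine, not part of the original analysis.

```python
# Darcy's-law travel-time estimate for one segment of Gila conglomerate.
# With a unit hydraulic gradient, the Darcy flux equals the conductivity,
# so the time to traverse a segment is simply thickness / conductivity.

def travel_time_days(thickness_m, conductivity_m_per_day, gradient=1.0):
    """Time (days) for water to pass through one segment via Darcy flux."""
    darcy_flux = conductivity_m_per_day * gradient  # m/day
    return thickness_m / darcy_flux

t = travel_time_days(0.333, 0.1)  # 0.333 m / 0.1 m/day
print(round(t, 2))  # about 3.33 days per segment, as stated in the text
```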
The effective first-order precipitation rate constants can be determined using the simple expression for an effective first-order reaction:
where k(i) is the effective first-order rate constant for precipitation of species i. These values for each segment and species are tabulated in Table 10.
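The expression itself appears to have been an image that did not survive extraction. For a first-order reaction, concentrations decay as C_out = C_in · exp(−k·t), so the effective rate constant can be recovered as k(i) = ln(C_in/C_out)/t. A minimal sketch of that recovery follows; the concentration values are illustrative, not values from Table 10.

```python
import math

# Recover an effective first-order rate constant from influent and
# effluent concentrations over a known contact time. This is a standard
# first-order form, assumed here to match the report's missing equation.

def first_order_rate_constant(c_in, c_out, t_days):
    """Effective first-order precipitation rate constant, 1/day."""
    return math.log(c_in / c_out) / t_days

# Illustrative numbers: a fourfold concentration drop over one segment's
# 3.33-day residence time.
k = first_order_rate_constant(100.0, 25.0, 3.33)
print(round(k, 3))  # ln(4)/3.33 ≈ 0.416 per day
```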
The Gila conglomerate has been shown in column experiments to be a good neutralizing agent (Stollenwerk and Eychaner, 1987) and those results have been reproduced using MINTEQA2. The data produced by
the simulation favorably compares with data for acidic and neutralized ground water at the Globe site in Arizona (Eychaner, 1989). (The data presented by Eychaner (1989) is for both alluvium and Gila
conglomerate. The alluvium has different physical and chemical properties than the Gila conglomerate and thus the data is useful for a rough estimate only.)
By assuming only precipitation as the removing agent, each volume increment of PLS would need approximately 19 segments of Gila conglomerate to be neutralized as shown in Table 9. If you take the 19
segments, which contain 3.3 volume increments, the approximate volume of Gila conglomerate it would take to neutralize one volume increment of PLS can be shown to be 19 x 3.3 = 62.7 volume
increments. Thus, as a rough estimate, it would take approximately 63 volume increments to neutralize one volume increment of PLS. In other words, it would take approximately 63 liters of Gila
conglomerate to neutralize one liter of PLS. Using the rate constants in Table 10 and incorporating them into the transport code SWMS 2D, the concentration decrease of the dissolved metals will be
modeled in the next section.
Modeling Solute Transport in the Unsaturated Gila Conglomerate
Until recently, the study of the vadose zone by hydrogeologists has been very cursory in nature. Historically, soil scientists studied the vadose zone as they were concerned with the flow of solutes
and water to plant roots. Lately, hydrogeologists have shown more interest in the fate and transport of contaminants in the vadose zone. This is a result of increased interest in remediation efforts
and ground water protection. Only recently have multi-discipline scientific efforts taken into account the synergistic effects of the various physical, chemical, and biological processes occurring
within the vadose zone.
The Vadose Zone
In a vertical geologic cross-section, the vadose or unsaturated zone is that band bounded by a top-soil surface and the top of the water table (refer to figure 7). The vadose zone differs from the
saturated zone because the vadose zone contains air in some voids or pore spaces. Pore spaces are only capable of containing a fluid, air or water or both and possibly immiscible fluids. This
presence of pore space air has a direct impact on the hydraulic properties of a porous formation. The lower portion of the vadose zone consists of a band referred to as the capillary fringe. This
fringe is caused by liquid surface-tension pulling moisture up from the saturated zone. A review of the concepts relating pore-space air and moisture content to hydraulic conductivity is contained in
the following sections.
Review of Concepts
The driving force for groundwater flow in the saturated zone is the pore water pressure combined with the elevation head. In the vadose zone, the driving force consists of the matric potential and
the elevation head. The matric potential or capillary potential is negative pressure caused by surface tension. As stated by Hillel (1982), it is the physical affinity of water to soil particle
surfaces and capillary pores. Water is drawn from where thicker hydration surrounds the particles to where hydration is thinner, and from a zone where the capillary menisci are less curved to where they
are more highly curved. This driving force in the vadose zone is greatest at the wetting front — thousands of times greater than the gravitational force. In figure 7, the wetting front is the upper
edge of the capillary fringe. Another way to picture the wetting front is the leading edge of a horizontal contaminant plume where the liquid is beginning to replace the pore-space air.
Matric Potential: The matric potential or pressure head is a function of the volumetric moisture content of the soil. The volumetric moisture content varies between zero and the porosity value of the
soil. The lower the moisture content, the more negative the matric potential becomes. However, van Genuchten (1991) states that the moisture content should not be equated to the porosity of the soil
because the saturated moisture content of field soils is generally 5 to 10% smaller than the porosity due to entrapped or dissolved air. The volumetric moisture content is the volume of water per
total soil volume. Degree of saturation is the ratio of the water volume to the void volume. Porosity is equal to one minus the solid volume fraction.
Soil-Moisture Retention Curve: The relationship between the volumetric moisture content and the pressure head in the vadose zone is called the soil-moisture retention curve or the moisture
characteristic curve. These curves take on different shapes depending on whether the soil is being saturated (wetting curve) or desaturated (drying curve). This difference between wetting and drying
curves is called the hysteresis effect. It is due in part to entrapped air in the soil after wetting (Butters et al., 1989).
Hydraulic Conductivity: The most important difference between the saturated and vadose zones is in the hydraulic conductivity which is dependent on the moisture content. As soils desaturate, pores
become filled with air reducing the conductive water portion of the soil’s cross-sectional area. In addition, suction pulling on the water further reduces the water conducting pore area. This
increases tortuosity — the ratio of direct water-flow-length to actual water-path-length — which increases the distance that water must travel. The transition from saturation to unsaturation
generally involves a sharp drop in hydraulic conductivity, which may decrease by several orders of magnitude (sometimes down to 1/100,000 of its saturation value as suction increases from 0 to 1 bar)
(Hillel, 1982).
Modeling Mass Transport In The Vadose Zone
Modeling mass transport in the vadose zone is in many ways just beginning to evolve from the theoretical realm to field calibration and verification. Many different models have been proposed for
modeling mass transport in the vadose zone. Butters et al. (1989) have organized transport models into the following three groups: (a) convection-dispersion equation models with constant coefficients
(deterministic), (b) convection-dispersion equation models with random variable coefficients (stochastic), and (c) transfer function models.
Major assumptions in many of these models include the following (Nielsen, 1990):
1. Equilibrium versus nonequilibrium sorption.
2. Steady-state flow and hysteresis.
3. Magnitude of anisotropy in the hydraulic conductivity.
4. Use of adsorption isotherms.
The starting point for developing the mass transport equations to model the transport of solute is the conservation of mass equation, which in words is (Domenico and Schwartz, 1990):
mass inflow rate – mass outflow rate ± mass production rate = change in mass storage with time
There are three mechanisms, when incorporated into the mass-balance equation, that determine the solute transport in both saturated and unsaturated soil systems. These are (a) advective transport, in
which the dissolved solutes are transported with the flowing water, (b) hydrodynamic dispersion, in which the molecules are transported by molecular diffusion or through the effects of mechanical
dispersion, and (c) the effects of reactions, sources, or sinks, including the effects of dilution, radioactive decay, biological activity, sorption, as well as chemical reactions like precipitation
or dissolution reactions. These three mechanisms when incorporated into the mass balance give the advection-dispersion equation, which for saturated conditions can be written as (Domenico and
Schwartz, 1990):
Ci = concentration of chemical constituent, mg/l,
t = time, sec,
∇ = gradient operator, ∂/∂x + ∂/∂y + ∂/∂z, 1/cm,
Dh = hydrodynamic dispersion, cm²/sec,
ri = reaction terms for solute i, moles/l/sec,
v = fluid velocity vector, cm/sec,
∅ = porosity, unitless.
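The equation referenced above appears to have been an image that was lost in extraction. With the symbols just listed, one common textbook form of the saturated advection-dispersion equation (offered here as a plausible reconstruction, not necessarily the exact expression from Domenico and Schwartz) is:

```latex
\frac{\partial C_i}{\partial t}
  = \nabla \cdot \left( D_h \nabla C_i \right)
  - \nabla \cdot \left( \mathbf{v}\, C_i \right)
  + \frac{r_i}{\phi}
```

where the porosity term appears under the assumption that the reaction rate r_i is expressed per unit bulk volume rather than per unit pore volume.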
In the unsaturated zone, the controlling parameter is the moisture content θ. The moisture content is the ratio of the volume of water to the total volume in a representative soil sample. The
governing flow equation for describing water flow in unsaturated soil is the Richards’ equation, which relates the moisture content to the unsaturated hydraulic conductivity. The equation is
represented by the following equation after Fetter (1993):
Ψ = matric potential,
K = unsaturated hydraulic conductivity function, cm/sec,
z = elevation above reference plane
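The Richards equation itself also appears to have been lost in extraction. A standard mixed form consistent with the variables listed above (a reconstruction, not necessarily Fetter's exact notation) is:

```latex
\frac{\partial \theta}{\partial t}
  = \nabla \cdot \big[\, K(\theta)\, \nabla \left( \Psi + z \right) \big]
```

which states that the change in moisture content is driven by gradients in total potential, the sum of the matric potential and the elevation head, scaled by the moisture-dependent hydraulic conductivity.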
Combining the advection-dispersion equation and the Richards equation leads to an advective-dispersive mass balance that describes the flow of solute in the unsaturated zone (Healy, 1990).
These equations are the fundamental starting equations that are then solved depending upon the conditions of each simulated transport problem. Further detailed numerical derivation of the actual
mathematical expressions used to represent this problem and complete discussion of all the parameters are beyond the scope of this report, but can be found in the manuals for the computer codes
(Simunek, et al., 1992 and Healy, 1990) as well as in many text books (Domenico and Schwartz, 1990 and Bear, 1972).
Computer Codes
RETC: The RETC code (Van Genuchten et al., 1991) is a numerical code that can be used to quantify the hydraulic parameters for unsaturated soils. The program can be used to fit several analytical
models to observed water retention or unsaturated hydraulic conductivity data. Using models developed by van Genuchten (1980), Brooks and Corey (1966), Burdine (1953), and Mualem (1976), RETC can
generate water retention curves from soil water retention data and predict the unsaturated hydraulic conductivity from soil water retention data. As noted by Wosten and Van Genuchten (1988),
predictive models such as RETC can provide accurate estimates for characterizing the hydraulic properties at a generic site.
In order to generate a water retention curve similar to Figure 8, RETC used the saturated hydraulic conductivity and moisture content listed in Table 10. Since field data on the Gila conglomerate was
not available, data from the Las Cruces trench site in New Mexico (Wierenga et al., 1989) was used as starting values for the van Genuchten shape factors used by the RETC program. The program
generated values for the unsaturated hydraulic conductivity and the pressure head or matric potential for various moisture content values.
SWMS 2D: The SWMS 2D computer code simulates the flow of solute and water in two-dimensional variably saturated media. The program numerically solves Richards Equation for saturated and unsaturated
water flow and the advection dispersion equation for solute transport. The water flow equation includes provisions for internal and external sinks and sources, including root water uptake. The solute
transport equation also allows the use of zero and first order production or decay as well as sorption assuming a linear isotherm. The flow region itself can be composed of many different layers of
soil, each layer having unique parameters unrelated to the other layers. The water flow portion of the model will allow for the specification of either Dirichlet, Neumann, or Cauchy boundary
conditions as well as boundaries controlled by atmospheric conditions.
Modeling the Gila Conglomerate
Preliminary modeling was done using the SWMS 2D code to model a thin two dimensional column of simulated Gila Conglomerate using the results of the RETC simulation for unsaturated soil hydraulic
parameters, results of the MINTEQA2 simulations for first order precipitation rate constant, and previously reported hydraulic properties. Key parameters used in this model are listed in Table 11.
The major assumptions used in this modeling effort include the following:
1. The geologic profile is initially uncontaminated.
2. The leachate is applied in one continuous 90 day cycle.
3. No sorptive reactions.
4. Vertical hydraulic conductivity is two orders-of-magnitude less than the horizontal hydraulic conductivity.
5. Percent calcite for the Gila conglomerate is 1.5%.
6. Dispersivities are dependent upon the scale of the modeling effort.
The simulation was done by modeling the transport of the leachate components Al and Cu through a column of one cm thickness and 6-m depth. The sides of the column are set to be impermeable, thus
allowing only vertical transport of the contaminant of concern. The column was divided into 19 different layers of Gila conglomerate each having a different effective precipitation rate constant that
was derived using MINTEQA2. In addition, the hydraulic conductivity of each segment in the column decreases by a small increment to account for the potential blockage of pores by the precipitation of
gypsum and other minerals (Sanchez Copper, 1992, and Van Zyl, 1993).
The results indicate that the dissolved aluminum in the leachate, as predicted by the geochemical modeling, exhibits a dramatic initial concentration decrease, followed by a more gradual decrease as
the solution is buffered toward neutrality. The copper did not experience a major decrease in dissolved concentration until the plume was deeper into the Gila conglomerate.
Aluminum: The simulation was done for a specified-volumetric flux (figure 9). Figure 9 shows the preliminary results of the modeling effort by plotting the normalized concentration versus depth for
100 to 400 days after initial application of the 90 day pulse of leachate.
As the leachate solution infiltrates into the Gila conglomerate the maximum concentration decreases as a result of dispersion and precipitation. Thus by specifying the rate of the precipitation, it
can be observed that the majority of the aluminum will precipitate out of solution in the upper portion of the test column. After this initial large decrease, the decrease in the dissolved aluminum
concentration is much more gradual for the remainder of the simulation. The data indicate that the leachate infiltrates the Gila conglomerate at a very slow rate, about 0.5 to 1.0 cm per day.
Figure 10 represents a sensitivity analysis done on the leachate mass flux into the test column. The sensitivity analysis indicates that the controlling parameter is the hydraulic conductivity as
earlier stated — not the leachate mass flux. Therefore, if more of the leachate were to leak through the pad potentially contacting the Gila conglomerate, it would not further contaminate the Gila
conglomerate due to the low permeability of the Gila conglomerate.
Copper: The model was also run using the same parameters as indicated in Table 10, except for copper precipitation rate constants. Figure 11 shows that after the first four hundred days of contact
the concentration of copper is reduced only by the effects of dispersion. This behavior correlates with the behavior observed in the geochemical modeling portion of this project which predicted that
the copper would not be appreciably precipitated out of solution until the pH of the leachate was greater than 4.25, which does not occur until approximately 10 segments into the test column, or 333
cm. A sensitivity analysis was also done to demonstrate that the concentration of copper would not increase appreciably even if the mass flux were to increase by an order of magnitude (figure 12).
Monte Carlo Simulation of Copper: In order to demonstrate the methodology addressing parameter uncertainty as discussed in this paper, an abbreviated simulation was done in the following manner. A
Monte Carlo simulation (100 iterations) using a lognormal distribution generated a PDF for the unsaturated hydraulic conductivity. The PDF was generated using the RISK program (RISK, 1992) as
previously discussed in the parameter uncertainty analysis section of this paper. Fifteen random hydraulic-conductivity values were taken from the PDF curve and used for fifteen different runs of the
transport model. The outputs from the transport model runs were used to produce a cumulative distribution curve of contaminant concentration with depth. Figure 13 shows the cumulative frequency
distribution for the depth at which the solute copper concentrations drop below the proposed maximum contaminant level (MCL) for copper. The MCL for copper is 1,300 ug/l (Fetter, 1993). The expected
value generated for this simulation is 25 meters, suggesting that the copper concentration drops below the MCL at that depth. Further simulations will examine the other inorganic contaminants modeled
with the geochemical model.
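The Monte Carlo step described above can be sketched as follows. This is a toy stand-in only: the lognormal parameters and the depth relation are hypothetical placeholders, since the report used the RISK program to build the conductivity PDF and the SWMS 2D transport model to compute the depth at which copper drops below the MCL for each sampled value, neither of which is reproduced here.

```python
import random
import statistics

random.seed(42)

# Hypothetical ln-space parameters for the unsaturated hydraulic
# conductivity distribution (K in m/day); NOT values from the report.
MU, SIGMA = -2.3, 0.5

def sample_conductivities(n):
    """Draw n lognormally distributed conductivity values."""
    return [random.lognormvariate(MU, SIGMA) for _ in range(n)]

def depth_to_mcl(conductivity):
    # Placeholder for a full transport-model run: depth is simply
    # assumed proportional to conductivity for illustration.
    return 250.0 * conductivity  # meters

# 100 iterations, mirroring the simulation size described in the text.
ks = sample_conductivities(100)
depths = sorted(depth_to_mcl(k) for k in ks)

# Summaries analogous to the cumulative frequency distribution in
# Figure 13: an expected value and a median (50th percentile).
expected_depth = statistics.mean(depths)
median_depth = depths[len(depths) // 2]
print(f"expected depth to MCL: {expected_depth:.1f} m")
```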
This report is a trailblazing effort to lay out an integrated, inter-discipline methodology for determining the potential of copper dump-leaching operations (or any other type of leachate producing
operation such as landfills) to pollute an aquifer. Therefore the results should be considered preliminary in nature. Use of this methodology for site-specific evaluation requires actual field data,
model calibration, and model validation.
The methodology presented in this paper included a combination of models for simulating the movement of pregnant leach solution from the surface of a copper dump-leach operation into the unsaturated
base material and the geochemical processes occurring as the solution moves through the unsaturated zone. Only precipitation was modeled for this paper. Additional modeling could examine ion
exchange, oxidation-reduction, complexation, and sorption (van der Heijde and Elnawawy, 1993). The uncertainty of hydrogeologic parameters such as hydraulic conductivity was addressed by using
probability distribution functions. The spatial variability of hydrogeologic parameters was not discussed in this paper. Techniques such as geostatistics could be used to address spatial variability
(Journel, 1989; Freidel, 1993; Freeze, et al., 1990).
The geochemical model showed that the Gila conglomerate is a good neutralizer of the PLS. The model generated effective precipitation rate constants for use in the vadose zone model. The vadose zone
model indicated that the aluminum concentration decreased rapidly as a result of advection, dispersion, and precipitation within the first 500 cm. The copper concentration decreased initially as a
result of advection and dispersion and began to decrease after 350 cm as a result of precipitation.
The vadose zone model did not reflect the potential blocking of pores by gypsum deposition (product of calcite in the Gila conglomerate reacting with the acidic PLS) over the life of the property
which reduces the porosity and hydraulic conductivity of the Gila conglomerate. In addition, the modeling effort did not account for scale effect on the horizontal dispersion coefficient where the
dispersion coefficient may increase 4-6 orders of magnitude from the lab scale to the field scale (EPA, 1989) nor does it account for any horizontal movement within the aquifer to a compliance point,
all of which would further reduce the heavy metal concentrations.
The overall intent of this paper was to demonstrate a scientific method for determining the risk to human health from copper-dump leach operations. The methodology presented provides means for
presenting more realistic data for decision-makers as opposed to worst-case data.
By reducing the variance (uncertainty) of each parameter, the decision-maker can produce modeling results that are more realistic and useful. This will allow decision-makers the flexibility to
compensate for the potential application of overly conservative estimates of risk using standard Environmental Protection Agency methods. Mine operators may consider using this methodology in the
permitting process to evaluate various design alternatives or in the environmental auditing process to evaluate potential risks.
We want to thank Dr. Dirk Van Zyl of Golder Associates, Inc., Dr. Helen Dawson of the Colorado School of Mines, John Davis of the USBM Branch of Minerals Availability, Len Rothfeld of the USBM
Minerals Availability Field Office, and Mike Friedel of the USBM Twin Cities Research Center for their guidance and technical assistance.
The FLOOR formula rounds a given number down to the nearest multiple of a specified factor. It is commonly used when dealing with financial data or when working with time values. The function takes a
value and an optional factor as arguments and returns the rounded down value.
Use the FLOOR formula with the syntax shown below, it has 1 required parameter and 1 optional parameter:
=FLOOR(value, [factor])
1. value (required):
The number or reference to the cell containing the number to be rounded down.
2. factor (optional):
Optional parameter that specifies the multiple to which to round the value down. If omitted, the function rounds to the nearest integer.
Here are a few example use cases that explain how to use the FLOOR formula in Google Sheets.
Rounding down to nearest integer
The FLOOR formula can be used to round a number down to the nearest integer. This is useful when working with data that requires whole numbers, such as counting or indexing.
Rounding down to nearest multiple of a factor
By providing a factor to the FLOOR formula, a number can be rounded down to the nearest multiple of that factor. This is often used when working with financial data, where amounts are denominated in
specific increments.
Rounding down to nearest hour
The FLOOR formula can be used to round down a time value to the nearest hour. This is helpful when working with time-based data and needing to aggregate data by hour.
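For concreteness, the behavior behind these three use cases can be sketched in Python. The `sheets_floor` helper below is a hypothetical construction of mine that mirrors `=FLOOR(value, factor)` for positive inputs, with times handled in the Sheets convention of day fractions; it is not part of the Sheets API.

```python
import math

def sheets_floor(value, factor=1):
    # Largest multiple of `factor` less than or equal to `value`
    # (positive inputs assumed, mirroring =FLOOR(value, factor)).
    return math.floor(value / factor) * factor

print(sheets_floor(7.8))       # nearest integer below 7.8 -> 7
print(sheets_floor(23.25, 5))  # nearest multiple of 5 below 23.25 -> 20

# A time of 14:30 stored as a fraction of a day, rounded down to the
# start of the hour (factor 1/24 is one hour in day fractions).
hour_start = sheets_floor(14.5 / 24, 1 / 24)
```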
Common Mistakes
FLOOR not working? Here are some common mistakes people make when using the FLOOR Google Sheets Formula:
Not providing a value to the function
If you forget to provide a value argument to the FLOOR function, it will return an error. Make sure to provide a value to the function.
Providing a non-numeric value to the function
The FLOOR function only works with numeric values, so if you provide a non-numeric value, it will return an error. Make sure to provide a numeric value to the function.
Providing a factor with the wrong sign
The factor argument in the FLOOR function must be a positive number when the value is positive (it does not need to be an integer). If the signs of the value and factor conflict, the function returns an error. Make sure the factor has the same sign as the value.
Misunderstanding how the factor argument works
The factor argument in the FLOOR function is optional and is used to round the value down to the nearest multiple of the factor. If the factor is larger than a positive value, the result is 0; if the factor does not evenly divide the value, the result is the largest multiple of the factor that does not exceed the value. Make sure to understand how the factor argument works before using it.
Forgetting to use the FLOOR function when rounding down
If you want to round a value down to the nearest integer or to the nearest multiple of a factor, make sure to use the FLOOR function. If you use a different function or method, you may get unexpected
Related Formulas
The following functions are similar to FLOOR or are often used with it in a formula:
• CEILING
The CEILING function returns a number rounded up to the nearest multiple of a specified factor. It is commonly used to round up prices to the nearest dollar or to adjust numbers to fit into
specific increments.
• ROUND
The ROUND formula rounds a number to a specified number of decimal places. It is commonly used to simplify large numbers or to make a number more readable. The formula can round both positive and
negative numbers. If the places parameter is not specified, the formula rounds to the nearest integer.
• MROUND
The MROUND function rounds a number to the nearest multiple of a specified factor. It is commonly used when dealing with financial data, such as currency or interest rate calculations.
• INT
The INT formula rounds a given value down to the nearest integer. This formula is often used to simplify large numbers or to convert decimal values to integers. The formula takes a single
parameter, the value to be rounded down. If the value is already an integer, the formula will return the same value. If the value is a decimal, the formula will round down to the nearest integer.
Learn More
You can learn more about the FLOOR Google Sheets function on Google Support.
Age Calculator - California Daily Review
In a world driven by technology, calculators have become indispensable tools for individuals across various professions and educational levels. A calculator is a device designed to perform
mathematical calculations quickly and accurately. This article aims to provide a comprehensive overview of a calculator and the different types that cater to diverse needs. What is a … Read more
Power of Roth IRA Calculator: A Comprehensive Guide
In the realm of financial planning, securing a stable and prosperous future is a common goal for many individuals. One powerful tool that aids in this pursuit is the Roth IRA (Individual Retirement
Account). However, understanding how to maximize its benefits can be challenging. Enter the Roth IRA Calculator, a valuable resource that empowers investors … Read more
The Age Calculator: Unraveling the Mystery of Age
Age is a universal concept that plays a significant role in shaping our lives. From celebrating birthdays to legal rights and responsibilities, age serves as a crucial marker in various aspects of
society. In the digital age, the age calculator has become a handy tool, allowing individuals to effortlessly determine their age or that of … Read more
10 Vital Python Concepts for Data Science
Let's talk about Python concepts used for data science. This is a valuable and growing field in 2024, and there are many things you'll need to know if you want to use this programming language to
evaluate data.
Below, I'll share 10 Python concepts I wish I knew earlier in my data science career. I included detailed explanations for each, including code examples. This will help introduce and reinforce Python
concepts that you'll use again and again.
1. Boolean Indexing & Multi-Indexing
When it comes to data science and Python, Pandas is the name of the game! And one of the things that sets Pandas apart is its powerful indexing capabilities.
Sure, basic slicing is intuitive for Pandas users, but there’s much more you can do with advanced indexing methods, like boolean indexing and multi-indexing.
What is boolean indexing, though? Well, this is an elegant way to filter data based on criteria.
So rather than explicitly specifying index or column values, you pass a condition, and Pandas returns rows and columns that meet it.
Cool, but what is multi-indexing? Sometimes known as hierarchical indexing, this is especially useful for working with higher-dimensional data.
This lets you work with data in a tabular format (which is 2D by nature) while preserving the dataset’s multi-dimensional nature.
I bet you’re already itching to add these ideas to your Python projects!
The real benefit of these methods is the flexibility they bring to data extraction and manipulation. After all, this is one of the major activities of data science!
Hackr.io: 10 Python Concepts I Wish I Knew Earlier
Advanced Indexing & Slicing: General Syntax
# Boolean Indexing
df[df['column'] > value]

# Multi-Indexing (setting)
df.set_index(['level_1', 'level_2'])
Let’s dive into an example to see these concepts in action.
Consider a dataset of students with individual scores in multiple subjects. Now, let’s say you want to extract the records of students who scored more than 90 in Mathematics.
Importantly, you want a hierarchical view based on Class, then Student names.
No problem, just use boolean indexing to find the students, then multi-indexing to set the indexing hierarchy, as shown below.
What I really like about this approach is that it not only streamlines data extraction, but it also helps me to organize data in a structured and intuitive manner. Win-win!
Once you get the hang of advanced indexing, you'll find data extraction and manipulation much quicker and more efficient.
Advanced Indexing & Slicing: Example
import pandas as pd
# Sample dataset
data = {
'Class': ['10th', '10th', '10th', '11th', '11th'],
'Student': ['Alice', 'Bob', 'Charlie', 'David', 'Eva'],
'Mathematics': [85, 93, 87, 90, 95],
'Physics': [91, 88, 79, 94, 88]
}
df = pd.DataFrame(data)
# Boolean Indexing: Extract records where Mathematics score > 90
high_scorers = df[df['Mathematics'] > 90]
# Multi-Indexing: Setting a hierarchical index on Class and then Student
df_multi_index = df.set_index(['Class', 'Student'])
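Once the hierarchical index is in place, selecting slices of it with `.loc` is straightforward. Here is a minimal sketch using the same hypothetical student data as above:

```python
import pandas as pd

data = {
    'Class': ['10th', '10th', '10th', '11th', '11th'],
    'Student': ['Alice', 'Bob', 'Charlie', 'David', 'Eva'],
    'Mathematics': [85, 93, 87, 90, 95],
    'Physics': [91, 88, 79, 94, 88]
}
df_multi = pd.DataFrame(data).set_index(['Class', 'Student'])

# Select every student in the 10th class (outer level of the index)
tenth = df_multi.loc['10th']

# Drill down to a single (Class, Student) pair
bob_math = df_multi.loc[('10th', 'Bob'), 'Mathematics']
```

Selecting on the outer level returns a smaller DataFrame indexed by the remaining level, which is exactly the "hierarchical view" described above.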
2. Regular Expressions
Ask any data scientist; they’ll probably all have a tale about challenges with messy or unstructured data.
This is where the magical power of those cryptic-looking regular expressions comes into play!
Regex is an invaluable tool for text processing, as we can use it to find, extract, and even replace patterns in strings.
And yes, I know that learning regular expressions can seem daunting at first, given the cryptic-looking patterns that they use.
But trust me, when you understand the basic building blocks and rules, it becomes an extremely powerful tool in your toolkit. It’s almost like you’ve learned to read The Matrix!
That said, it always helps to have a regex cheat sheet handy if you can’t quite remember how to formulate an expression.
When it comes to Python, the re module provides the interface you need to harness regular expressions.
You can match and manipulate string data in diverse and complex ways by defining specific patterns.
Regular Expressions: General Syntax
import re
# Basic match
re.match(pattern, string)
# Search throughout a string
re.search(pattern, string)
# Find all matches
re.findall(pattern, string)
# Replace patterns
re.sub(pattern, replacement, string)
As a practical example, consider a scenario where you need to extract email addresses from text. Regular expressions to the rescue!
These provide a straightforward approach to capturing these patterns, as shown below.
Regular Expressions Example
import re
text = "Contact Alice at alice@example.com and Bob at bob@example.org for more details."
email_pattern = r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,7}\b'
emails = re.findall(email_pattern, text)
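The same pattern also works for replacement. Since the text above mentions replacing patterns with `re.sub`, here is a short sketch that redacts the matched addresses (same sample text and pattern as the extraction example):

```python
import re

text = "Contact Alice at alice@example.com and Bob at bob@example.org for more details."
email_pattern = r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,7}\b'

# Replace every matched email address with a placeholder
redacted = re.sub(email_pattern, '[email removed]', text)
```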
3. String Methods
Whether you're working with text data, filenames, or data cleaning tasks, String processing is ubiquitous in data science.
In fact, if you’ve taken a Python course, you probably found yourself working with Strings a lot!
Thankfully, Python strings come with a host of built-in methods that make these tasks significantly simpler.
So whether you want to change case, check prefixes/suffixes, split, join, and more, there’s a built-in method that does just that. Awesome!
Generally speaking, String methods are straightforward, but their real power shines when you learn how and when to combine them effectively.
And, because Python's string methods are part of the string object, you can easily chain them together, resulting in concise and readable code. Pythonic indeed!
String Methods: Commonly Used Methods
# Change case
s.upper(), s.lower(), s.capitalize(), s.title()

# Check conditions
s.startswith('prefix'), s.endswith('suffix'), s.isdigit()

# Splitting and joining
s.split(','), ', '.join(items)
Let’s dive into an example to show the efficacy of these methods, focusing on a common use case when we need to process user input to ensure it's in a standard format.
So, imagine that you want to capture the names of people, ensuring they start with a capital letter, regardless of how the user enters them.
Let’s use String methods to take care of it!
You’ll see that we’ve combined the lower() and capitalize() methods within a list comprehension to process the list of names quickly and Pythonically.
Of course, this is a simple example, but you get the picture!
String Methods Example
# User input
raw_names = ["ALICE", "bOB", "Charlie", "DANIEL"]
# Process names to have the first letter capitalized
processed_names = [name.lower().capitalize() for name in raw_names]
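Chaining also shines for cleanup tasks. Here is a small sketch (my own illustrative example, not from the dataset above) that collapses messy whitespace by splitting and re-joining:

```python
messy = "  data   science  with   Python "

# split() with no arguments drops all runs of whitespace,
# and ' '.join() stitches the words back together with single spaces
clean = ' '.join(messy.split())
```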
4. Lambda Functions
Python lambda functions are one of those techniques that you need to have in your toolkit when it comes to data science!
The TL-DR is that they provide a quick and concise way to declare small functions on the fly. Yep, no need for the def keyword or a function name here!
And, when you pair these with functions like map() and filter(), lambda functions really shine for data science. Pick up any good Python book, and you’ll see this in action!
If you’re not quite sure why, no problem! Let’s take a quick detour.
With map() you can apply a function to all items in an input sequence (like a list or tuple).
The filter() function also operates on sequences, but it constructs an iterator from the input sequence elements that return True for a given function.
The TL-DR: it filters elements based on a function that returns True or False.
Put both of those tidbits in your back pocket as you never know when they might come in handy for a Python interview!
That said, the best way to show the power of lambda functions with map() and filter() is with a practical example.
So, let’s look at a simple scenario where we want to double the numbers in a list before filtering out those that are not divisible by 3.
Sure, we could do this with list comprehensions or traditional for-loops, but combining lambda functions with map() and filter() offers a neat and Pythonic alternative.
I think you’ll agree that the beauty of this approach lies in its brevity.
It is worth noting that while lambda functions are powerful, they're really best for short and simple operations.
For complex operations, stick to traditional functions.
Lambda with map() and filter() Example
# Original list of numbers
numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
# Double each number using map() and lambda
doubled_numbers = list(map(lambda x: x*2, numbers))
# Filter numbers not divisible by 3 using filter() and lambda
filtered_numbers = list(filter(lambda x: x % 3 == 0, doubled_numbers))
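For comparison, here is the list-comprehension version mentioned above; it produces the same result, so the choice between the two styles is largely one of readability and taste:

```python
numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

# One comprehension doing both the doubling and the divisibility filter
filtered_numbers = [x * 2 for x in numbers if (x * 2) % 3 == 0]
```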
5. Pandas Method Chaining
If you’re using Python for data science, you’re using Pandas! Take any data science course, and it will include Pandas!
And without a doubt, one of the best things about Pandas is the huge range of methods to process data.
When it comes to using Pandas methods, two common styles include method chaining and employing intermediate Dataframes.
Each approach has pros and cons, and understanding them can be crucial for code readability and efficiency.
But what is method chaining? Simple really, it’s just when we call multiple methods sequentially in a single line or statement.
This eliminates the need for temporary variables, which is always nice!
The net result can be concise code, but you need to make sure your code doesn’t compromise readability by overusing chained method calls.
By all means, feel free to continue using intermediate Dataframes, as they can be helpful for storing the results of each step into separate variables, not to mention debugging.
But when possible, it can be cleaner to chain Pandas methods. Let’s take a look at a practical example by firing up our Python IDE.
Suppose we want to read a CSV file, rename a column, and then compute the mean of that column. We have two ways to do this: with chained methods and intermediate dataframes.
As you can see, both approaches achieve the same outcome, but I think the chained method approach feels more Pythonic when it doesn’t sacrifice readability.
Pandas Method Chaining Example
import pandas as pd
# Using Method Chaining
mean_value = (pd.read_csv('data.csv')
              .rename(columns={'column2': 'new_column'})
              .new_column.mean())
# Using Intermediate DataFrames
df = pd.read_csv('data.csv')
renamed_df = df.rename(columns={'column2': 'new_column'})
mean_value = renamed_df.new_column.mean()
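Because `data.csv` is not included here, below is a self-contained sketch of the same chaining idea on an in-memory DataFrame (the column names are made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({'column2': [10, 20, 30]})

# Rename and aggregate in one chained expression
mean_value = (df
              .rename(columns={'column2': 'new_column'})
              .new_column
              .mean())
```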
6. Pandas Missing Data Functions
Handling missing data is an essential skill for data scientists, and thankfully, the Pandas library offers simple but powerful tools to manage missing data effectively.
The two most commonly used functions for handling missing data are fillna() and dropna().
I have a feeling that you can work out what they both do, but let’s explore the basic syntax and functionalities of these two methods, starting with fillna().
The TL-DR here is that it’s used to fill NA/NaN values with a specified method or value. If you’re not sure what I mean by NaN, this is just shorthand for Not a Number!
fillna(): General Syntax
df.fillna(value=None, method=None, axis=None, inplace=False)
Now, let’s consider a simple use case when we have a dataset with missing values. Our goal is to replace all NaNs with the mean value of the column.
Pandas makes this really easy, as you can see below!
fillna() Example
import pandas as pd
import numpy as np

data = {'A': [1, 2, np.nan, 4], 'B': [5, np.nan, 7, 8]}
df = pd.DataFrame(data)
df['A'] = df['A'].fillna(df['A'].mean())
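Besides filling with a statistic, you can propagate the last valid observation forward. Here is a hedged sketch using the dedicated `ffill()` method:

```python
import pandas as pd
import numpy as np

s = pd.Series([1.0, np.nan, np.nan, 4.0])

# Forward-fill: each NaN takes the previous non-missing value
filled = s.ffill()
```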
Now, let’s take a look at dropna(), which is used to remove missing values. Depending on how you use this function, you can drop entire rows or columns.
dropna(): General Syntax
df.dropna(axis=0, how='any', thresh=None, subset=None, inplace=False)
Let’s look at a simple example where we want to drop any row in our dataset that contains at least one NaN value.
dropna() Example
data = {'A': [1, 2, pd.NA, 4], 'B': [5, pd.NA, 7, 8]}
df = pd.DataFrame(data)
df = df.dropna()
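dropna() also accepts parameters to fine-tune what gets dropped. A brief sketch of two common ones: `subset` restricts the missing-value check to certain columns, and `thresh` keeps rows with at least that many non-missing values.

```python
import pandas as pd

data = {'A': [1, 2, pd.NA, 4],
        'B': [5, pd.NA, 7, 8],
        'C': [pd.NA, pd.NA, pd.NA, 9]}
df = pd.DataFrame(data)

# Only drop rows where column 'A' is missing
by_subset = df.dropna(subset=['A'])

# Keep rows that have at least 2 non-missing values
by_thresh = df.dropna(thresh=2)
```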
Overall, when it comes to working with real-world data, missing values are generally a given.
And unless you know how to handle these, you might encounter errors or even produce unreliable analyses.
By understanding how to manage and handle these missing values efficiently, we can ensure our analysis remains robust and insightful. Win!
7. Pandas Data Visualization
Sure, data scientists need to spend a lot of time (A LOT!) manipulating data, but the ability to produce data visualizations is perhaps just as important, if not more so!
After all, data science is about storytelling, and what better way to do that than with pictures?
Yes, you might need to produce beautiful plots to share with stakeholders and customers, but it’s also super helpful to create quick visualizations to better understand your data.
From experience, there have been a ton of occasions when I spotted an underlying trend, pattern, or characteristic of a dataset that I would not have been able to see without a plot.
Once again, Pandas comes to the rescue here, as it makes it super easy to visualize data with the integrated plot() function.
Don’t worry, this uses Matplotlib under the hood, so you’re in safe hands!
Let's delve into the basic mechanics of this function.
The most important thing to remember is that plot() is highly versatile (just see the docs to get a feel for how much you can do with it!).
By default, it generates a line plot, but you can easily change the type, along with a host of other formatting features.
In fact, if you’ve spent any time working with Matplotlib, you’ll know just how much you can control, tweak, and customize plots.
Let’s take a look at a concrete example where we have a dataset with monthly sales figures. Our goal is to plot a bar graph to visualize monthly trends.
As you can see, it doesn’t get much easier than calling the plot() function and passing in some basic parameters to tweak the output.
Pandas plot() Example
import pandas as pd
import matplotlib.pyplot as plt
# Sample data: Monthly sales figures
data = {'Month': ['Jan', 'Feb', 'Mar', 'Apr'], 'Sales': [200, 220, 250, 275]}
df = pd.DataFrame(data)
# Bar plot using Pandas plot()
df.plot(x='Month', y='Sales', kind='bar', title='Monthly Sales Data', grid=True, legend=False)
plt.show()
8. Numpy Broadcasting
When it comes to data science with Python, Pandas and NumPy are the two pillars that have helped propel Python’s popularity.
When the time comes to work with arrays in NumPy, we can often find ourselves needing to perform operations between arrays of different shapes. No bueno!
On the surface, this seems problematic, and you might have even found yourself implementing manual reshaping and looping with various Python operators.
But there is a simpler way! By using NumPy’s broadcasting feature, these operations become incredibly streamlined.
But what is broadcasting? Great question!
This is a powerful NumPy concept that allows you to perform arithmetic operations on arrays of different shapes without explicit looping or reshaping. I know, what a dream!
In simple terms, you can think of this as NumPy's method of implicitly handling element-wise binary operations with input arrays of different shapes. That’s a mouthful!
But to understand broadcasting, it's important to grasp the rules that NumPy uses to decide if two arrays are compatible for broadcasting.
Rule 1: If the two arrays have different shapes, the array with fewer dimensions is padded with 1s on its left side.
For example: Shape of A: (5, 4), Shape of B: (4,) = Broadcasted shape of B: (1, 4)
Rule 2: If the two arrays differ in some dimension and one of them has size 1 in that dimension, the array with size 1 is stretched to match the other.
For example: Shape of A: (5, 4), Shape of B: (1, 4) = Broadcasted shape of both A and B: (5, 4)
Rule 3: If any dimension sizes disagree and neither is equal to 1, an error is raised.
For example: Shape of A: (5, 4), Shape of B: (6, 4) = This will raise an error.
So, as you can see, if two arrays are compatible, they can be broadcasted.
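To make the rules concrete, here is a small sketch that checks them with actual arrays (the shapes mirror the examples above):

```python
import numpy as np

A = np.ones((5, 4))
B = np.arange(4)           # shape (4,)

# Rules 1 and 2: B is treated as (1, 4), then stretched to (5, 4)
result = A + B

# Rule 3: shapes (5, 4) and (6, 4) are incompatible and raise an error
C = np.ones((6, 4))
try:
    A + C
    compatible = True
except ValueError:
    compatible = False
```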
Let's look at a classic example to grasp this idea.
Imagine you have an array of data, and you want to normalize it by subtracting the mean and then dividing by the standard deviation. Simple stuff, right?
Well, for starters, you need to remember that the mean and standard deviation are scalar values while the data is a 3x3 array.
But, thanks to broadcasting, NumPy allows us to subtract a scalar from an array and divide an array by a scalar. This is the magic of broadcasting!
NumPy Broadcasting Example
import numpy as np
# Data array
data = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
# Compute mean and standard deviation
mean = np.mean(data)
std_dev = np.std(data)
# Normalize the data
normalized_data = (data - mean) / std_dev
9. Pandas groupby()
You’ve probably spotted a heavy Pandas theme in this article, but trust me, it really is the backbone of data science with Python!
That said, one of the most powerful tools you can use with Pandas is the groupby() method.
This allows you to split data into groups based on criteria and then apply a function to each group, such as aggregation, transformation, or filtering.
If you’ve spent any time working with SQL commands, this Python concept should be somewhat familiar to you as it’s inspired by the SQL grouping syntax and the split-apply-combine strategy.
Just remember the clue is in the name here! You're grouping data by some criterion, and then you're able to apply various operations to each group.
Let’s take a look at the basic approach.
• Split: Divide data into groups.
• Apply: Perform an operation on each group, such as aggregation (sum or average), transformation (filling NAs), or filtration (discarding data based on group properties).
• Combine: Put the results back together into a new data structure.
As always, the best way to understand this Python concept is to look at an example.
So, suppose you have a dataset of sales in a store and want to find out the total sales for each product. Seems reasonable enough!
As you can see, we call the groupby() method on the dataframe column containing Products.
We then use the dot notation to access the Sales column, and we apply the sum() method to get the total sales per product.
The resultant series contains products as indices with their respective total sales as values.
The more I’ve used the groupby() method, the more I’ve come to appreciate how powerful it is for producing concise representations of aggregated data with minimal code.
Pandas groupby() Example
import pandas as pd
# Sample data
data = {
    'Product': ['A', 'B', 'A', 'C', 'B', 'C', 'A'],
    'Sales': [100, 150, 200, 50, 300, 25, 75]
}
df = pd.DataFrame(data)
# Group by product and sum up sales
total_sales_per_product = df.groupby('Product').Sales.sum()
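The text above also mentions transformation and filtration; here is a brief sketch of both, plus multi-function aggregation with `.agg()`, using the same sales data as the example:

```python
import pandas as pd

data = {
    'Product': ['A', 'B', 'A', 'C', 'B', 'C', 'A'],
    'Sales': [100, 150, 200, 50, 300, 25, 75]
}
df = pd.DataFrame(data)
g = df.groupby('Product')['Sales']

# Aggregation with several functions at once
summary = g.agg(['sum', 'mean', 'max'])

# Transformation: broadcast each group's total back onto the original rows
df['GroupTotal'] = g.transform('sum')

# Filtration: keep only rows for products whose total sales exceed 300
big_sellers = df.groupby('Product').filter(lambda grp: grp['Sales'].sum() > 300)
```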
10. Vectorization Vs. Iteration
Anyone who's worked with large datasets in Python will have stumbled upon the dilemma of performance, especially when you need to traverse the data. Yep, we’re on the subject of Big-O!
Well, allow me to introduce you to something special called vectorization!
But what is that, I hear you ask?
No problem. Vectorization leverages low-level optimizations to allow operations to be applied on whole arrays rather than individual elements.
Libraries like NumPy in Python have perfected this.
But why does this matter, and how does it differ from traditional iteration?
Well, you probably know that iteration involves going through elements one by one.
And sure, this is super intuitive for us programmers, but it can be much slower and thus more computationally expensive with bigger datasets.
To make this clearer, let’s look at the general syntax for the two approaches.
Vectorization vs Iteration: Syntax
# Iteration
result = []
for item in data:
    result.append(some_function(item))

# Vectorization (NumPy)
import numpy as np
result = np.some_function(data)
And yes, I do love how concise the NumPy code is, but the real gains are hidden away from us.
The whole point of using vectorization is to boost time performance, so let’s look at a simple example to illustrate this.
To start with, we’ve populated a list with 100,000 elements before creating two simple functions.
The iterative function uses a list comprehension to iterate over each item in the list and compute its square.
The vectorized function converts the list to a NumPy array to take advantage of NumPy's vectorized operations to compute the square of each number in the array all at once.
We’ve then used the timeit module to run these functions ten times and compute the average run time.
If you run this example on your own machine, the actual time in seconds will vary, but you should see that the vectorized operation is significantly faster!
On my machine, the average time over 10 runs is nearly 7.5x faster for the vectorized function than it is with the iterative function.
And remember, this gain becomes even more pronounced as your data grows in size.
So, when you're working with huge datasets or doing extensive computations, vectorization can save not only valuable coding time but also computational time.
Vectorization vs Iteration Example
import numpy as np
import timeit
# Sample data
data = list(range(1, 100001))
# Timing function for Iteration
def iterative_approach():
return [item**2 for item in data]
# Timing function for Vectorization
def vectorized_approach():
data_np = np.array(data)
return data_np**2
# Using Iteration
iterative_time = timeit.timeit(iterative_approach, number=10) / 10
print(f"Iterative Approach Time: {iterative_time:.5f} seconds")
# Output on my machine: Iterative Approach Time: 0.03872 seconds
# Using Vectorization
vectorized_time = timeit.timeit(vectorized_approach, number=10) / 10
print(f"Vectorized Approach Time: {vectorized_time:.5f} seconds")
# Output on my machine: Vectorized Approach Time: 0.00514 seconds
As we move nearer to the end of 2024, Python is still a top 3 language with huge demand in data science.
And with the Bureau of Labor Statistics reporting an average salary of over $115K for data scientists, learning essential Python concepts to land a job can be highly rewarding.
Even if you’re new to the data science job market, learning these Python concepts can help you succeed and stand out from the crowd.
And there you have it, the 10 Python concepts I wish I knew earlier for data science, including explanations and code examples for the Python concepts.
Whether you’re new to data science and looking to land your first job or fresh off a Python course and looking to learn data science, mastering these 10 Python concepts can help you stand out from the crowd!
Frequently Asked Questions
1. What Python Concepts Are Required For Data Science?
In general, you should be familiar with Python essentials like data structures, control structures, functions, exception handling, and key Python libraries like NumPy, Pandas, and Matplotlib. I’d also recommend checking out the various concepts we’ve outlined above.
2. How Long Does It Take To Learn Python Data Science?
This depends on your current skill and education level. If you’re a beginner, learning data manipulation may take 1-3 months. You can then aim for an intermediate level by adding statistics and machine learning skills over 3-6 months. Advanced proficiency, including skills like deep learning, will likely require 12+ months.
|
{"url":"https://hackr.io/blog/python-concepts-for-data-science","timestamp":"2024-11-05T09:28:41Z","content_type":"text/html","content_length":"161186","record_id":"<urn:uuid:b1843fb5-068b-4a76-9378-a52ab76c1f74>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00247.warc.gz"}
|
Tools 4 NC Teachers | Math Science Partnership Grant Website
Dinner Party

In this lesson, students explore finding the number of tables and forks needed at 2 dinner parties to develop the concept of multiplication and division.

NC Mathematics Standard(s): Operations and Algebraic Thinking
3.OA.3 Use multiplication and division within 100 to solve word problems in situations involving equal groups, arrays, and measurement quantities, e.g., by using drawings and equations with a symbol for the unknown number to represent the problem.

Standards for Mathematical Practice:
1. Make sense of problems and persevere in solving them.
4. Model with mathematics.
6. Attend to precision.

Student Outcomes: I can find the products, quotients, or sums of whole numbers by counting in groups.

Math Language: What words or phrases do I expect students to talk about during this lesson? product, quotient, sum, repeated addition

Materials: Printed activity sheet and concrete manipulatives if needed

Warm-Up: Counting around the Room (10 minutes)
Review skip counting by counting around the room. "Today we are going to count around the room by multiples of 5. Everyone will have a chance to say a number. Each person in the room will say a number only once. Before we begin, what number do you think the last person will say?" Collect responses from students, being careful not to allow too much time for them to count children in the room. "Let's check and see." This activity is conducted quickly but in a non-threatening way. Students should be allowed to use a visual hand signal, like a thumb at the chest, to indicate their readiness to share the next multiple. While students are counting, the teacher records the multiples that are stated aloud.

Suggested Questions:
- What number do you think the last student will say?
- How many people have counted if the number that was just said was 25? 50? (be sure to stop the counting to ask this question)
- How do we know?

Launch: Introduce Problem (5 minutes)
"Today we are going to be thinking about having a big dinner with friends. When we plan for a dinner, what are some things we might need to plan for?" Allow a minute or so for children to turn and talk about this and share ideas out loud:
- Number of people
- Amount of food
- Number of plates, forks, and spoons
Give one activity sheet to each student. Read parts A-D aloud.

Explore: Solving the Problem (15-20 minutes)
Allow students time to work individually and then work with partners or in groups to solve the task. As students work, observe students to see how they are solving the task. Encourage students to share their strategies with one another and describe how they are answering each question. Carefully select students to present to the class. Look for students who modeled the problem and kept track of their thinking. Also look for strategies that will generate discussion to help others move toward a deeper understanding of the mathematical goal.

Observe:
- How students organize and represent their thinking.
- How students make sense of the story.
- What vocabulary terms students use as they solve the task.

Suggested questions to ask while students work:

If students struggle to get started:
- What is happening in this problem?
- Could you answer a "how many" question with details from this problem?
- What were you thinking of drawing first to help you visualize the problem?
- Do you think you can use a math tool to help you think about these problems?

If a student is not making thinking visible:
- Where are the guests in your representation?
- Where are the forks in the representation?
- Can you show me in your picture what the 8 stands for from your equation?
- Are we looking for an answer that is larger or smaller? Are there more or fewer forks than people? (for Part B or Part D)
- If you can't think of a multiplication equation, can you rewrite this as addition?
- Tell me about your answer; what do the numbers mean?
- How do you know?
- Does that number seem reasonable? Why or why not?

Look for use of appropriate strategies:
- Part A: multiplication strategies such as drawing 7 circles with 6 dots each, skip counting, repeated addition, or use of related facts.
- Part B: adding 42 and 42 by breaking them apart, or multiplying 42 by 2 using the distributive property.
- Part C: division strategies such as drawing a picture, skip counting by 6 up to 66, or turning the division into a multiplication problem with a missing factor.
- Part D: multiplication and addition strategies similar to Part B.

Discuss: Discussion of Solutions (15-25 minutes)
Bring the group back together and have selected students share their strategies for solving the task. Note: Before beginning the lesson, prepare a list of possible strategies you will see during the lesson. Use these observations to determine the order for sharing strategies during discussion. As the discussion is happening in the class, each group should be showing their work on the board so that the other groups can see their thought process. This will also help the class to see how each group's strategy relates to the others. The first group to present could be the one who used only pictures to find their answer. This will give the class a visual representation of the thought process to use while finding the answer. The next group to present could be the group who used repeated addition. This is the beginning of understanding multiplication. The last group could be the one who used a multiplication problem to find the answer to the question. This group understands which operation is the most efficient of the strategies.

Evaluation of Student Understanding
Informal Evaluation: The teacher circulates through the class, observing students' written work, choice of strategies, and verbal explanations. Make note of any artifacts to share with the class and the order in which to share them.
Formal Evaluation/Exit Ticket: Choose from the following exit ticket questions:
- Task extension: Each table at the Smith family's dinner party can seat 6 guests. Each guest will need a plate for salad, a plate for the main course, and a plate for dessert. How many plates will each table need? Represent your solution with an equation and show your strategy using pictures, words, or other equations.
- Written response: Did you use the same strategy as the first Smith family dinner party task? Why or why not? Multiplication is the counting of groups of things to come up with a total number. What are some other real-world situations in which it might be useful to count groups of things?

Meeting the Needs of the Range of Learners
Intervention:
- Provide students with manipulatives to help them find the number of guests at the dinner party.
- Change the numbers to 5 tables with 4 guests in Part A and 30 guests for Part C for students who may struggle with the larger numbers presented in this lesson.
- Practice skip counting to help students find how many guests there are.
Extension:
- For the Smith and Jones dinner parties, ask students to find how many more forks there are at one dinner party than the other.
- If each dinner party guest had 2 forks, 1 knife, and 1 spoon, how many utensils would there be at each dinner party?
- Ask students to write additional story problems that could be solved with this scenario.

Possible Misconceptions/Suggestions:
- Students may multiply the 2 dinner forks by 6 or 7 instead of 42. Suggestion: Ask students to make meaning of the problem by rewording what it is asking and also by explaining what each number in the equation represents.
- If using repeated addition, students may miscount their answer when adding 7 six times. Suggestion: Ask students to make meaning of the problem by rewording what it is asking and by explaining what each number in the equation represents. Ask students to relate this back to their picture representation.
- Students may try to multiply or divide 66 by 7 or another number instead of dividing 66 by 6 to find the number of tables needed for the Jones family.

Special Notes:
- Provide manipulatives for students to use if needed.
- Provide graph paper if students want to draw arrays to solve multiplication problems.

Possible Solutions: Activity Sheet
A. The Smith family is hosting a dinner party. They set up 7 tables, and each table has 6 seats. How many guests could there be? Draw a picture representation of the table setup. Write an equation to represent the problem. Explain your strategy using words, pictures, or computations.
B. Each guest needs a salad fork and a dinner fork. How many forks will the Smith family dinner party need altogether? Draw a representation of the forks. Write an equation to represent the problem. Explain your strategy using words, pictures, or computations.
C. The Jones family is having a dinner party next week. They will have 66 guests. How many tables will they need? Draw a picture representation of the table setup. Write an equation to represent the problem. Explain the first two steps you used to create a representation using words. Explain your strategy using words, pictures, or computations.
D. If the salad and dinner forks are needed, how many forks will the Jones family dinner party need altogether? Write an equation to represent the problem. Explain your strategy using words, pictures, or computations.
|
{"url":"https://info.5y1.org/dinner-with-friends-ideas_1_75ca77.html","timestamp":"2024-11-03T12:34:31Z","content_type":"text/html","content_length":"19563","record_id":"<urn:uuid:e2e4081c-c4f7-42f5-a86b-422fc94aa0ae>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00228.warc.gz"}
|
For what values of x is f(x)=4x^3-3x+5 concave or convex? | HIX Tutor
For what values of x is f(x) = 4x^3 - 3x + 5 concave or convex?
Answer 1
$f \left(x\right)$ is convex on $\left(0 , + \infty\right)$.
$f \left(x\right)$ is concave on $\left(- \infty , 0\right)$.
Convexity and concavity are determined by the sign of the second derivative.
Find the second derivative of the function.
f(x) = 4x^3 - 3x + 5
f'(x) = 12x^2 - 3
f''(x) = 24x

Analyze the sign of the second derivative, 24x.
Answer 2
To determine where the function ( f(x) = 4x^3 - 3x + 5 ) is concave or convex, we need to analyze its second derivative, ( f''(x) ).
[ f'(x) = 12x^2 - 3 ] [ f''(x) = 24x ]
The function will be concave upward (convex) where ( f''(x) > 0 ), and concave downward (concave) where ( f''(x) < 0 ).
Setting ( f''(x) > 0 ): [ 24x > 0 ] [ x > 0 ]
Setting ( f''(x) < 0 ): [ 24x < 0 ] [ x < 0 ]
Therefore, ( f(x) = 4x^3 - 3x + 5 ) is concave upward (convex) for ( x > 0 ) and concave downward (concave) for ( x < 0 ).
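As a quick numerical sanity check (not part of the original answer), a central finite difference approximates \( f''(x) \) and confirms the sign change at \( x = 0 \):

```python
def f(x):
    return 4 * x**3 - 3 * x + 5

def second_derivative(func, x, h=1e-4):
    # Central second-order finite difference: (f(x+h) - 2f(x) + f(x-h)) / h^2.
    # For a cubic this is exact up to floating-point rounding, because the
    # truncation error term involves the fourth derivative, which is zero.
    return (func(x + h) - 2 * func(x) + func(x - h)) / h**2

# f''(x) = 24x: positive (convex) for x > 0, negative (concave) for x < 0.
print(second_derivative(f, 1.0))   # close to 24
print(second_derivative(f, -1.0))  # close to -24
```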
Answer from HIX Tutor
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
|
{"url":"https://tutor.hix.ai/question/for-what-values-of-x-is-f-x-4x-3-3x-5-concave-or-convex-8f9af9fe42","timestamp":"2024-11-10T12:36:53Z","content_type":"text/html","content_length":"580050","record_id":"<urn:uuid:7617e43c-bbc8-493f-bab4-736dc0a3a70b>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00648.warc.gz"}
|
Invest with Confidence: How Finology's SIP Calculator Can Help You - Age calculator
Invest with Confidence: How Finology’s SIP calculator Can Help You
Investing in the stock market can be both exciting and daunting. While there’s a lot of money to be made, there’s also a lot of uncertainty and risk involved. It can be tough to know where to start,
or how best to invest your hard-earned money. Fortunately, there are a number of tools available to help you make informed decisions about your investments. One such tool is the SIP calculator
offered by Finology.
Finology is a financial education company that aims to help individuals better understand personal finance and investing. Their SIP calculator is an online tool that can calculate the returns on your
investments in a systematic investment plan (SIP).
What is an SIP?
Before we delve into the details of the SIP calculator, let’s define what an SIP is. An SIP is a type of investment plan where an investor regularly invests a fixed amount of money at regular
intervals (usually monthly) in a mutual fund scheme. By doing this, the investor doesn’t need to time the market or make lump-sum investments. Instead, they can use the power of compounding to grow
their money over time.
How does the Finology SIP calculator work?
The Finology SIP calculator is an easy-to-use tool that can help you estimate the returns on your investment in an SIP. Here’s how it works:
1. Input the amount you want to invest
The first step is to input the amount you want to invest. This could be any amount, but for the purpose of this example, let’s say it’s Rs. 5,000 per month.
2. Select the tenure
Next, you can select the tenure – or the length of time – you want to invest for. Again, this could be any amount of time, but let’s say you want to invest for 5 years.
3. Choose the expected rate of return
Now, you can choose the expected rate of return you hope to receive on your investment. It’s important to note that the rate of return is subject to market risks and can fluctuate frequently. For
this example, let’s say you’re comfortable with an expected rate of return of 12%.
4. Hit “Calculate”
Once you’ve input all the necessary information, you can hit “Calculate.” The SIP calculator will then tell you the total amount invested, the expected return on investment, and the total wealth
accumulated at the end of the tenure.
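The article does not state the formula the calculator uses internally, but the standard SIP future-value formula (monthly contributions at the start of each month, compounded monthly, a common convention) reproduces this kind of estimate. The sketch below uses the example inputs from the article; the actual calculator's output may differ slightly depending on its compounding assumptions:

```python
def sip_future_value(monthly_amount, annual_rate_pct, years):
    """Future value of a SIP with contributions at the start of each month,
    compounded monthly (a common convention; calculators may differ)."""
    i = annual_rate_pct / 100 / 12          # monthly rate
    n = years * 12                          # number of contributions
    return monthly_amount * ((1 + i)**n - 1) / i * (1 + i)

invested = 5000 * 12 * 5                    # Rs. 300,000 invested in total
fv = sip_future_value(5000, 12, 5)
print(f"Total invested: Rs. {invested:,.0f}")
print(f"Estimated value after 5 years: Rs. {fv:,.0f}")
print(f"Estimated gain: Rs. {fv - invested:,.0f}")
```

With these inputs the estimated corpus comes out a little above Rs. 4 lakh, of which Rs. 3 lakh is the invested principal.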
Why use the Finology SIP calculator?
Using the Finology SIP calculator can help you make informed decisions about your investments. Here are some benefits of using this calculator:
1. It’s simple and easy to use
The calculator is designed to be user-friendly and easy to understand. You don’t need to be a financial expert to use it.
2. It helps you plan your investments
By using the calculator, you can plan your investments in advance based on expected returns. This can help you make better decisions about how much to invest and for how long.
3. It’s a realistic tool
The calculator uses realistic assumptions to calculate your expected returns. It takes market risks and fluctuations into account, so you can get a more accurate idea of what to expect from your investment.
4. It saves you time
Calculating your expected returns manually can be time-consuming, but the Finology SIP calculator can do it for you in seconds.
Q: Is the SIP calculator accurate?
A: While the calculator uses realistic assumptions and takes market risks into account, it’s important to remember that returns on investments can never be guaranteed. The calculator gives you an
estimate based on your inputs, but the actual returns may vary.
Q: Can I use the SIP calculator for any type of investment?
A: The SIP calculator is designed specifically for systematic investment plans in mutual fund schemes. It may not be suitable for other types of investments.
Q: Is investing in SIPs a guaranteed way to make money?
A: No investment is a guaranteed way to make money. SIPs are subject to market risks and returns can fluctuate.
Q: Can I change my inputs after hitting “Calculate”?
A: Yes, you can change your inputs at any time and recalculate your expected returns.
Q: Is the Finology SIP calculator free to use?
A: Yes, the calculator is completely free to use on the Finology website.
In conclusion
Investing in the stock market can be a great way to grow your wealth over time, but it’s important to do it wisely. Using tools like the Finology SIP calculator can help you make informed decisions
and invest with confidence. While the calculator can give you an estimate of your expected returns, it’s important to remember that investing always involves some degree of risk. As with any
investment, it’s important to do your research and make informed decisions.
|
{"url":"https://age.calculator-seo.com/invest-with-confidence-how-finologys-sip-calculator-can-help-you/","timestamp":"2024-11-04T05:31:15Z","content_type":"text/html","content_length":"304840","record_id":"<urn:uuid:c081ecbc-3485-484e-bd0c-e6d8acd05c14>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00244.warc.gz"}
|
Assessing the effect of COVID-19 vaccines on mortality: a story of confounding factors and their role in COVID-19 misinformation - Science Feedback
Assessing the effect of COVID-19 vaccines on mortality: a story of confounding factors and their role in COVID-19 misinformation
COVID-19 vaccines have been instrumental in our fight against the pandemic and our return to a normal life, thanks to their ability to reduce the number of cases, hospitalizations, and mortality.
Their effectiveness against severe COVID-19 and death was proven in randomized controlled trials (RCT) that involved tens of thousands of people^[1,2].
While RCTs are critical to our understanding of the vaccines’ efficacy, as they offer standardized and well-controlled test conditions and are therefore considered the gold standard for clinical
assessment of a treatment, they don’t reflect the diversity and complexity in the real world.
Therefore, monitoring the impact of COVID-19 vaccination on mortality, even after the vaccines are approved or authorized for use in the public, remains an important part of vaccine surveillance.
Consequently, official statistics on the number of COVID-19 deaths among vaccinated and unvaccinated individuals are publicly available in several countries, to allow researchers to study the
vaccines’ real-world effectiveness.
Comparing COVID-19 mortality rates between vaccinated and unvaccinated people may seem like a simple matter of comparing the COVID-19 death counts between the two. But it’s actually more complicated
than that. As we will explain below, vaccinated and unvaccinated populations can differ from each other in several important ways. Therefore, it’s critical to account for these differences when
comparing mortality rates between the two.
COVID-19 death counts must be adjusted for vaccine coverage
One of the starkest differences between the vaccinated and unvaccinated population is their size. Failing to account for population size is a fundamental mistake that can introduce important
statistical bias that invalidates a research finding.
To illustrate this, we can run a small mathematical thought experiment. Let’s imagine a population of 1,000 drivers with a 10% risk of dying in a car accident within a year. At the end of the year we
would thus expect 100 deaths. We know that wearing a seat belt significantly reduces the risk of dying in a car crash, albeit not entirely. Let’s suppose that wearing the seat belt reduces the risk
of death by half, down to 5%. If the entire population were wearing one, we would expect 50 deaths instead of 100.
Now, let’s consider a mixed situation where some people use a seat belt and some don’t. If 90% of the population—900 people—don’t wear a seat belt, the number of deaths in this group would be 10%, or
90 deaths. The remaining 100 people, who are using a seat belt, have a 5% death risk, so we would expect five deaths. In that case, deaths from non-seat belt wearers represent the majority of deaths
in the total population.
Now, let’s invert the situation and consider that 90% of the population, 900 people, wear a seat belt, and the remaining 100 never do. The 100 individuals not wearing seat belts are still at a 10%
risk of dying so we’d expect 10 deaths. The 900 people wearing a seat belt are at 5% risk of dying, which would translate into 45 deaths. In that case, we would see more deaths among seat belt
wearers. Should we therefore conclude that seat belts suddenly stopped being effective, or worse, increased one’s risk of dying? Of course not. This is a mere mathematical consequence of the
overrepresentation of seat belt wearers in the total population.
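The thought experiment above is easy to reproduce in a few lines. The code simply applies the stated risks (10% without a seat belt, 5% with one) to the two population splits:

```python
def expected_deaths(n_no_belt, n_belt, risk_no_belt=0.10, risk_belt=0.05):
    # Expected deaths in each group, given each person's annual risk of death.
    return n_no_belt * risk_no_belt, n_belt * risk_belt

# Scenario 1: 90% of the 1,000 drivers do NOT wear a seat belt.
no_belt_deaths, belt_deaths = expected_deaths(900, 100)
print(no_belt_deaths, belt_deaths)    # 90 deaths vs 5 deaths

# Scenario 2: 90% of the drivers DO wear a seat belt.
no_belt_deaths2, belt_deaths2 = expected_deaths(100, 900)
print(no_belt_deaths2, belt_deaths2)  # 10 deaths vs 45 deaths

# Seat-belt wearers now account for most deaths, even though each
# wearer's individual risk is still half that of a non-wearer.
```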
We can apply the same reasoning to vaccines. Even though pre-clinical and clinical data demonstrated the efficacy of COVID-19 vaccines in reducing the risk of severe disease, they don’t offer 100%
protection, which is common to most vaccines against other diseases. So, COVID-19 deaths are still expected among a small proportion of vaccinated individuals.
As the COVID-19 vaccination campaign progresses, we expect the overall number of vaccinated people to increase. In other words, the proportion of vaccinated people becomes larger than that of unvaccinated people. Figure 1 illustrates that vaccine coverage in the U.S. rose to around 65% over time.
Figure 1. COVID-19 vaccine coverage of the U.S. population over time. Cumulative numbers of people who received at least two vaccine doses were obtained from the Centers for Disease Control and
Prevention and expressed as a percentage of the total U.S. population
As the vaccine coverage increases, the number of COVID-19 deaths among vaccinated people naturally increases. As our seat belt example illustrated, this phenomenon is unrelated to the effectiveness
of the safety intervention, specifically the vaccine in this case. Instead, it is a direct, mathematical consequence of the higher proportion of vaccinated individuals in the population.
Therefore, it is always necessary to ensure that we compare the proportion of COVID-19 deaths in each group, as opposed to the absolute number of COVID-19 deaths. One way to do so is by expressing
the number of deaths per million vaccinated or unvaccinated persons.
Challenges in comparing COVID-19 mortality rates: the risk of confounding factors
There are also many other factors that influence the risk of death apart from vaccination status that researchers need to account for. These are known as confounding factors, which researchers need
to control for. Confounding factors are variables that affect the outcome of an experiment, but aren’t the variables being studied in the experiment. If scientists don’t factor in the influence of
confounding factors in their study, they may draw erroneous conclusions about causality.
One example of a potential confounding factor is access to healthcare, with one population that faces greater difficulty in accessing hospitals. It is more likely that the population without access
difficulties to healthcare will experience a higher proportion of COVID-19 deaths than the other due to the lack of medical care, regardless of any possible effect of vaccines. In this case, access
to healthcare is a confounding factor: it will influence the outcome of interest—death from COVID-19—independently from the vaccine’s effect, and will bias the observers’ conclusions if not taken
into account.
In the following sections, we will explore some of the recurring confounding factors that have led to misinformation about vaccine effectiveness.
COVID-19 death counts must be adjusted for age
We know that the chances of surviving COVID-19 significantly decreases with age. As of 2 March 2022, people above 75 represented more than half of the COVID-19 deaths from COVID-19, according to the
U.S. Centers for Disease Control and Prevention (CDC). Therefore, age is a possible confounding factor: if one population is significantly older than the other, it will alter that group’s risk of
dying from COVID-19, independently of vaccination status.
Many countries initiated their vaccine rollout with vulnerable populations such as healthcare workers or older people. For instance, the U.S. Advisory Committee on Immunization Practice from the CDC
recommended focusing the initial vaccine rollout on older people in nursing homes and assisted living facilities, followed by people above 75, people aged 64 to 75, and eventually everyone above 16.
Later on, vaccination was extended to children aged 12 to 15, and finally, children aged 5 to 11.
As a result, vaccine coverage in older populations is higher than in younger populations. Consequently, the vaccinated population comprises a larger proportion of elderly individuals compared to the
unvaccinated population. Although this difference tends to decrease as the vaccine coverage grows, data from the CDC shown in Figure 2 illustrate that this age group is still overrepresented.
Figure 2. Age distribution of the vaccinated population and of the total U.S. population as reported by the CDC as of 10 March 2022.
The dark red bars represent the percentage of people by age group who received at least one dose of the vaccine as of 6 March 2022. The gray bars represent the percentage of this age group in the
total U.S. population. If the age distribution of the vaccinated population was similar to the age distribution of the total population, the gray and red bars of the same age group should be of
roughly equal lengths. When the red bar from a given age group is longer, this means that this age group is overrepresented in the vaccinated population.
Therefore, if one were to observe an equal (or even higher) number of deaths from COVID-19 in the vaccinated population compared to the unvaccinated population, it would be erroneous to conclude that
vaccines are ineffective. This is because the conclusion doesn’t account for the fact that vaccinated people are, on average, older and thus more likely to die from COVID-19, regardless of the
vaccines’ effectiveness.
One way to mitigate confounding factors is by performing age adjustment. This statistical process adjusts the number of deaths by age group by a coefficient representing the proportion of this age
group in the total country’s population. This would yield an age-adjusted total number of COVID-19 deaths in the vaccinated and unvaccinated populations, making comparisons more meaningful.
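To make the mechanics of age adjustment concrete, here is a small illustration with made-up numbers: two age groups with identical age-specific death rates in both populations. The crude rates differ only because the age mixes differ; direct standardization to a common reference population removes that artifact:

```python
# Hypothetical age-specific death rates (per 100,000), identical in both groups.
rates = {"under_65": 10, "over_65": 200}

# Age mix of each population (fractions summing to 1). The "vaccinated"
# group skews older, mirroring early vaccine rollouts.
mix_vaccinated   = {"under_65": 0.60, "over_65": 0.40}
mix_unvaccinated = {"under_65": 0.90, "over_65": 0.10}

# Reference ("standard") population used for the adjustment.
mix_standard     = {"under_65": 0.80, "over_65": 0.20}

def crude_rate(rates, mix):
    # Overall rate: age-specific rates weighted by the group's OWN age mix.
    return sum(rates[a] * mix[a] for a in rates)

def adjusted_rate(rates, mix_standard):
    # Direct standardization: weight age-specific rates by the STANDARD mix.
    return sum(rates[a] * mix_standard[a] for a in rates)

print(crude_rate(rates, mix_vaccinated))    # 86.0  -- looks much worse...
print(crude_rate(rates, mix_unvaccinated))  # 29.0  -- ...than this group
print(adjusted_rate(rates, mix_standard))   # 48.0  -- identical for both once age-adjusted
```

Because the age-specific risks were chosen to be identical, the age-adjusted rate is the same for both groups; the apparent three-fold difference in the crude rates was entirely an age artifact.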
Data from the CDC illustrate clearly the impact of age adjustment (Figure 3). These data were collected from April 2021 to January 2022.
Figure 3. Effect of age adjustment on the incidence rate ratio (IRR) of COVID-19 mortality. Data were obtained from a CDC survey of COVID-19 deaths by vaccination status from April 2021 to January
2022. Weekly incidence rates are calculated as the number of deaths from COVID-19 among the vaccinated or unvaccinated population within a given week, expressed as the number of deaths per 100,000
individuals. Therefore, these numbers take into account the size difference of each population. The incidence rate ratio (IRR) is the ratio of the average incidence rate among unvaccinated divided by
the average incidence rate among vaccinated. A IRR higher than one indicates that deaths occur at a higher frequency among unvaccinated. The crude IRR is obtained directly from the weekly incidence
rates, whereas the age-adjusted IRR is obtained after a step of adjustment of the age distribution of both populations using the 2000 U.S. census standard population as a reference.
The incidence rate ratio (IRR) represents the excess occurrence of COVID-19 deaths among unvaccinated versus vaccinated. If the IRR is equal to one, there is no difference in COVID-19 mortality
between the two groups. If the IRR is greater than one, it indicates that COVID-19 deaths occur at a higher rate among unvaccinated individuals.
Here it is important to note that data are also normalized by the size of the vaccinated and unvaccinated population. Therefore, there is no risk of confounding effects due to vaccine coverage, as
explained earlier.
The gray column shows what the IRR would be without age adjustment. Even without this correction, the IRR is higher than one, showing that vaccinated people are dying from COVID-19 less than
unvaccinated people.
When applying an age adjustment (blue column), we can see that the IRR increases further. This indicates two things: first, a crude observation, without taking into consideration that the vaccinated
population tends to be older, underestimates the protective effect of vaccination. Second, vaccinated people are at lower risk of dying from COVID-19 than unvaccinated individuals, in fact 14 times
lower in December 2021, according to the CDC.
Behavioral differences between vaccinated and unvaccinated create additional confounding factors
While the aforementioned confounding factors, namely magnitude of vaccine coverage and age distribution, are easier to account for in studies, the influence of other confounding factors can be harder
to mitigate.
For instance, it is possible that vaccinated people may become more complacent and less observant of physical distancing and hygiene measures^[3,4], because of the immunity provided by vaccination,
whereas unvaccinated people would not.
This would introduce a difference in behavior between both groups that has a direct bearing on a person's risk of getting COVID-19. Specifically, it would lead to a higher than expected number of COVID-19 cases in the vaccinated group, not because the vaccine is ineffective, but because the vaccinated group's risk exposure changes as a result of their behavior^[5,6].
Another type of behavioral bias is the potential difference in healthcare-seeking behavior between groups. We could hypothesize that people who chose to get vaccinated –and those who chose not to–
have different views on health and different healthcare-seeking behavior. This might directly impact their risk of infection or death from the infection.
Not accounting for that bias can lead to ill-founded conclusions about vaccines, as Health Feedback previously explained. Again, it is difficult to fully address that confounding factor. However,
certain specific clinical study designs such as the case-negative design, help reduce their impact of the study’s conclusions^[7,8].
How confounding factors fuel vaccine misinformation
As we explained earlier, a direct comparison of absolute COVID-19 deaths between vaccinated and unvaccinated groups is likely to be meaningless, due to the influence of multiple confounding factors.
Indeed, the failure to understand how confounding factors could result in seemingly paradoxical numbers repeatedly produced inaccurate claims that COVID-19 vaccines were ineffective at preventing
COVID-19 deaths, as Health Feedback documented on several occasions.
In each case, the claims suggested vaccine failure based on a higher or equal number of hospitalizations or deaths in vaccinated individuals compared to unvaccinated individuals. However, the authors
of the claims didn’t take into account the extent of vaccine coverage in the population and the age discrepancy between vaccinated and unvaccinated groups.
As of 9 March 2022, vaccinated individuals are the majority in many countries. For instance, vaccinated people represent 65% of the total population in the U.S. and 85% in the U.K. It is thus
possible that vaccinated people account for a high number of deaths regardless of vaccine effectiveness, as explained earlier.
In March 2022, another article committed the same mistake. It claimed that vaccinated individuals accounted for nine out of ten COVID-19 deaths in the U.K., according to official sources.
Specifically, a report by the U.K. Health Security Agency from 24 February 2022 indicated that deaths among unvaccinated people only accounted for 10% of the total number of deaths occurring within
60 days of a positive COVID-19 test. The fact that partially and fully vaccinated individuals accounted for the other 90% led the authors to conclude that vaccines didn’t work.
This reasoning is again flawed as the authors didn’t consider the two confounding factors we previously mentioned: age and vaccine coverage. The U.K. Health Security Agency even warned in bold type
on page 37 of their report: “this raw data should not be used to estimate vaccine effectiveness”, adding that “The case rates in the vaccinated and unvaccinated populations are crude rates that do
not take into account underlying statistical biases in the data”.
In November 2022, many posts implied that COVID-19 vaccines were either dangerous or ineffective at preventing deaths by sharing a headline from the Washington Post, stating that “vaccinated people
now make up a majority of covid deaths”. The Washington Post reported that vaccinated people represented 58% of the COVID-19 deaths in August 2022. Indeed, the CDC registered a total of 6,512
COVID-19 deaths that month, of which 2,719 occurred among unvaccinated people and 3,793 occurred among vaccinated people.
However, these data aren’t corrected for the many confounding factors that we detailed earlier. Mainly, they aren’t corrected for the age and the size of the vaccinated and unvaccinated population.
In fact, the CDC reported that vaccinated people are less likely to die from COVID-19 in August 2022, once these confounding factors are taken into account. And the Washington Post article actually
clarified that “vaccinated groups are at a lower risk of dying from a covid-19 infection than the unvaccinated when the data is adjusted for age”. But this detail is notably absent from many social
media posts.
Therefore, contrary to the impression that certain social media posts gave, the Washington Post article actually didn’t contradict the CDC.
Assessing vaccine effectiveness
Instead of directly comparing the absolute number of COVID-19 deaths between vaccinated and unvaccinated groups, which would be affected by several biases as we explained, scientists usually
calculate the vaccine effectiveness (VE), which measures the level of protection from a given outcome, such as death, compared to unvaccinated individuals. A VE of 60% indicates a 60% reduction in
the risk of COVID-19 death in vaccinated individuals, compared with unvaccinated individuals.
There are many methods to calculate VE depending on the availability of data and the need to adjust for confounding factors. A simple method uses the IRR presented in figure 3. Other techniques use a
regression model^[9] to estimate the effect of vaccination on the likelihood of dying from a disease.
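As an illustration of the simple IRR-based method (using made-up incidence rates, not the CDC's actual figures), vaccine effectiveness can be read directly off the incidence rate ratio:

```python
def irr(rate_unvaccinated, rate_vaccinated):
    # Incidence rate ratio: how many times more often deaths occur
    # among unvaccinated than among vaccinated (rates per 100,000).
    return rate_unvaccinated / rate_vaccinated

def vaccine_effectiveness(rate_unvaccinated, rate_vaccinated):
    # VE = relative risk reduction, expressed as a percentage.
    return (1 - rate_vaccinated / rate_unvaccinated) * 100

# Illustrative weekly death rates per 100,000 (hypothetical values).
unvacc, vacc = 9.8, 0.7
print(f"IRR: {irr(unvacc, vacc):.1f}")                                  # 14.0
print(f"VE against death: {vaccine_effectiveness(unvacc, vacc):.1f}%")  # 92.9%
```

Note the two quantities are tied together: an IRR of 14 corresponds to VE = (1 - 1/14) x 100, or about 93%.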
The scientific literature reports that COVID-19 vaccines are highly effective against death, although the figure varies depending on the virus variant and the amount of time that has passed after vaccination^[10-13]. The report by the U.K. Health Security Agency cited earlier indicated a VE from 59% to 95% against death from Omicron variant infection and 90% against the Alpha and Delta variants.
Comparing COVID-19 mortality between vaccinated and unvaccinated populations provides us with important information to guide public health decisions. However, a crude comparison of death counts can
lead to ill-founded conclusions, as many confounding factors may bias the analysis. Among them, the level of vaccine coverage and the age distribution of the vaccinated and unvaccinated groups are
the most common confounders but are easy to account for.
Flawed analyses disregarding these confounding factors have been used to build the false narrative that COVID-19 vaccines are ineffective at protecting against death from COVID-19. These analyses
commonly compared crude death counts without factoring in the extent of vaccine coverage in the population. In contrast, more rigorous scientific analyses that account for the effect of confounding
factors showed that COVID-19 vaccines significantly reduce the odds of vaccinated people dying from the disease compared to unvaccinated individuals.
|
{"url":"https://sf.test-preprod.com/assessing-the-effect-of-covid-19-vaccines-on-mortality-a-story-of-confounding-factors-and-their-role-in-covid-19-misinformation/","timestamp":"2024-11-12T16:40:31Z","content_type":"text/html","content_length":"180958","record_id":"<urn:uuid:341670fd-4309-4725-a9b6-d9a40065445f>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00690.warc.gz"}
|
An internet-based organization that allows individuals and groups to raise funds for a project. In April 2020, the mean total for pledges per project, m, was $3165.00 US dollars.
A random sample of 14 projects from April 2020 has the following characteristics:
n = 14
x̄ = $2459.30
s = $887.62
All of the projects were in the category “Comics.”
(a) Specify the hypotheses for a t-test to determine whether the mean funding per project for “Comics” projects is lower than the overall mean for April 2020. Use a one-sample t-test with α = 0.05. Assume that the sample data are more or less normally distributed.
(b) Conduct a hypothesis test with α = 0.05, estimating the t-statistic and the p-value.
(c) Report:
the level of significance,
the type and value of the test statistic (including degrees of freedom if appropriate),
the p-value (exact values if possible, or a range of possible values), and
the implications for the null hypothesis.
What do the results of the hypothesis test tell us about the difference between the mean for the “Comics” category and the mean for all types of projects, m = $3165.00 US dollars?
(d) Compute and interpret a 95% confidence interval for the population mean of the “Comics” category. Does the confidence interval include the population mean for all projects, m = $3165.00 US
dollars? What does this imply about the funds raised for a typical “Comics” project compared to the mean for all types of projects?
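One possible worked sketch of parts (b) and (d) in plain Python. The critical values 1.771 (one-sided, α = 0.05) and 2.160 (two-sided 95%) for df = 13 are taken from a standard t-table; an exact p-value would need a t-distribution CDF (e.g. scipy.stats.t):

```python
import math

# Summary statistics from the problem statement.
n, xbar, s = 14, 2459.30, 887.62
mu0 = 3165.00                       # overall mean for April 2020 (null value)

se = s / math.sqrt(n)               # standard error of the mean
t_stat = (xbar - mu0) / se          # one-sample t statistic, df = n - 1 = 13
print(f"t = {t_stat:.3f}")          # about -2.975

# One-sided test at alpha = 0.05: reject H0 if t < -t_crit.
# t-table values for df = 13: one-sided 1.771, two-sided 95% 2.160.
reject = t_stat < -1.771
print("Reject H0 (Comics mean is lower)?", reject)

# Part (d): two-sided 95% confidence interval for the Comics mean.
margin = 2.160 * se
lo, hi = xbar - margin, xbar + margin
print(f"95% CI: ({lo:.2f}, {hi:.2f})")       # roughly (1946.9, 2971.7)
print("Does the CI contain mu0?", lo <= mu0 <= hi)
```

Since the interval lies entirely below $3165.00, the confidence interval agrees with the test: a typical “Comics” project raises less than the overall mean.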
|
{"url":"https://justaaa.com/statistics-and-probability/617334-an-internet-based-organization-that-allows","timestamp":"2024-11-12T07:09:56Z","content_type":"text/html","content_length":"44846","record_id":"<urn:uuid:13b96113-c050-4568-93d0-a8030a508435>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00566.warc.gz"}
|
Valuation problems are gaining more and more popularity in the ML community. Typical examples are feature valuation, i.e. which feature contributes the most to a certain prediction, data valuation,
i.e. which data points are more important in model training, or model valuation in ensembles.
Valuation scores are typically formulated using the formalism of cooperative games in game theory. Given a set of players (which could be features or data points in ML), collaborating towards a given
task (the training of the model), we want to assign to each a score that represents its overall contribution (how they affect the performance of the model).
The paper [Xu22G] shows that such scores can be calculated by maximising the decoupling of players (i.e. effectively breaking their correlations) in a mean-field, energy based model. In the ML
setting, the payoff could be the accuracy of a model, and a coalition is any subset of the data or features that we train the model on.
The authors show that their new approach can recover classical criteria of valuation, such as Shapley and Banzhaf values, through a one-step minimisation of the evidence lower bound (ELBO) starting
from different initial configurations (different initial weights given to the “players”). More importantly, the authors show that (if initialised uniformly) throughout the minimisation of the ELBO
the variational scores satisfy important mathematical properties, such as null value and symmetry, which are key requirements for good valuation scores.
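For reference, the classical Shapley value that the paper recovers can be computed exactly for small games by averaging each player's marginal contribution over all join orders. This is the textbook definition, not the paper's variational procedure; the 3-player game below is a toy example where player 3 is a null player, so the null-value and symmetry properties are visible directly:

```python
from itertools import permutations

def shapley_values(players, payoff):
    """Exact Shapley values: average marginal contribution of each player
    over all orderings in which the coalition can be assembled."""
    values = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = frozenset()
        for p in order:
            values[p] += payoff(coalition | {p}) - payoff(coalition)
            coalition = coalition | {p}
    return {p: v / len(orderings) for p, v in values.items()}

# Toy game: payoff is 1 only if the coalition contains players 1 AND 2;
# player 3 is a "null player" who never changes the payoff.
def payoff(coalition):
    return 1.0 if {1, 2} <= coalition else 0.0

sv = shapley_values([1, 2, 3], payoff)
print(sv)  # players 1 and 2 each get 0.5; the null player 3 gets 0.0
```

The output also satisfies efficiency: the scores sum to the grand-coalition payoff of 1.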
Despite not improving inference time or breaking new state of the art in data valuation, the paper presents a nice theoretical framework that extends the definition of data valuation scores while at
the same time drawing the connection to game theory and energy based models.
|
{"url":"https://transferlab.ai/pills/2022/energy-based-learning-for-cooperative-games/","timestamp":"2024-11-13T15:50:18Z","content_type":"text/html","content_length":"21841","record_id":"<urn:uuid:7adf89ea-243f-4071-81b7-b789de0ff808>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00321.warc.gz"}
|
Graph Transformation - Dr Loo's Math Tuition
• y = f(x) – Original Graph
• y = f(x) + a – Vertical translation of a units
• y = f(x + a) – Horizontal translation of a units
• y = f(ax) – Horizontal stretches
• y = af(x) – Vertical stretches
• y = f(-x) – Reflection about y-axis
• y = -f(x) – Reflection about x-axis
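These rules can be checked numerically with any sample function; here f(x) = (x - 1)² and a = 2 are arbitrary choices for illustration. For instance, y = f(x + a) takes at x the value the original graph had at x + a, i.e. a shift of a units to the left when a > 0:

```python
f = lambda x: (x - 1)**2      # sample function; a = 2 is a sample constant
a = 2

# Each transformed graph, defined from the rules above:
g_up    = lambda x: f(x) + a  # vertical translation up by a units
g_left  = lambda x: f(x + a)  # horizontal translation a units to the LEFT
g_comp  = lambda x: f(a * x)  # horizontal compression by factor a
g_str   = lambda x: a * f(x)  # vertical stretch by factor a
g_refy  = lambda x: f(-x)     # reflection about the y-axis
g_refx  = lambda x: -f(x)     # reflection about the x-axis

assert g_up(3) == f(3) + 2    # the point (3, 4) moves up to (3, 6)
assert g_left(1) == f(3)      # the value once at x = 3 now appears at x = 1
assert g_comp(1.5) == f(3)    # ...and at x = 1.5 after compressing by 2
assert g_str(3) == 2 * f(3)   # y-values are doubled
assert g_refy(-3) == f(3)     # the point at x = 3 is mirrored to x = -3
assert g_refx(3) == -f(3)     # the point at y = 4 is mirrored to y = -4
print("all transformation checks pass")
```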
About the Singapore Math Tutor - Dr Loo
I am a PhD holder with 9 years of experience teaching quantitative subjects at secondary school and university levels. My areas of expertise are mathematics, statistics, econometrics, finance, and machine learning.
Currently I provide consultation and math tuition in Singapore. If you are looking for a secondary math tutor in Singapore, a JC H2 math tutor, or a statistics tutor in Singapore, please feel free to contact me at +65-85483705 (SMS/WhatsApp).
The Singapore math tuition I provide includes secondary math tuition (E-math tuition and A-math tuition) and JC math tuition (H1 math tuition and H2 math tuition). My statistics tuition in Singapore mostly covers pre-university level through postgraduate level. For those looking for online tutoring, online O-level math tuition and online A-level math tuition are also available.
|
{"url":"https://drloomaths.sg/graph-transformation","timestamp":"2024-11-11T04:33:06Z","content_type":"text/html","content_length":"138593","record_id":"<urn:uuid:1ac16d84-39d1-4896-a477-485655de13ca>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00130.warc.gz"}
|
Let P be the point on the parabola, y2=8x, which is at a minimu... | Filo
Let P be the point on the parabola y² = 8x which is at a minimum distance from the centre C of the circle x² + (y + 6)² = 1. Then, the equation of the circle, passing through C and having its centre at P is
Exp. (a)
Centre of the circle x² + (y + 6)² = 1 is C(0, −6).
Let the coordinates of point P on the parabola y² = 8x be (2t², 4t).
Now, let D = CP = √((2t²)² + (4t + 6)²).
Squaring on both sides, D² = 4t⁴ + 16t² + 48t + 36.
For minimum, d(D²)/dt = 16t³ + 32t + 48 = 0 ⇒ t³ + 2t + 3 = 0 ⇒ (t + 1)(t² − t + 3) = 0 ⇒ t = −1.
Thus, the coordinates of point P are (2, −4).
Hence, the required equation of the circle with centre P(2, −4) and passing through C(0, −6) is (x − 2)² + (y + 4)² = 8, i.e. x² + y² − 4x + 8y + 12 = 0.
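The minimisation can also be checked numerically. A small plain-Python sketch (assuming, as in the question, the parabola y² = 8x parametrised by P(2t², 4t) and the circle centre C(0, −6)):

```python
import math

def dist_to_C(t):
    # P = (2t^2, 4t) lies on y^2 = 8x; C = (0, -6).
    x, y = 2 * t * t, 4 * t
    return math.hypot(x - 0, y + 6)

# Scan a fine grid of t values for the closest point on the parabola.
ts = [i / 1000 for i in range(-5000, 5001)]
t_best = min(ts, key=dist_to_C)
assert abs(t_best - (-1.0)) < 1e-9          # minimum at t = -1

P = (2 * t_best ** 2, 4 * t_best)           # the closest point
assert P == (2.0, -4.0)

r2 = dist_to_C(t_best) ** 2                 # squared radius of the circle
assert abs(r2 - 8) < 1e-9
# Circle with centre P through C: (x - 2)^2 + (y + 4)^2 = 8,
# i.e. x^2 + y^2 - 4x + 8y + 12 = 0.
```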
Questions from JEE Mains 2016 - PYQs
Question Text Let P be the point on the parabola y² = 8x which is at a minimum distance from the centre C of the circle x² + (y + 6)² = 1. Then, the equation of the circle, passing through C and having its centre at P is
Updated On Jul 23, 2023
Topic Conic Sections
Subject Mathematics
Class Class 11
Answer Type Text solution:1 Video solution: 7
Upvotes 663
Avg. Video Duration 10 min
|
{"url":"https://askfilo.com/math-question-answers/let-p-be-the-point-on-the-parabola-y28-x-which-is-at-a-minimum-distance-from-the","timestamp":"2024-11-09T07:10:25Z","content_type":"text/html","content_length":"872902","record_id":"<urn:uuid:1908437a-3e15-4550-adb7-53030ce0df43>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00485.warc.gz"}
|
TensorFlow Metrics | Complete Guide on TensorFlow metrics
Introduction to TensorFlow Metrics
The following article provides an outline for TensorFlow Metrics. Keras is a deep learning library (deep learning being a subfield of machine learning) that can run on top of backends such as TensorFlow and Theano. These tools are used in artificial intelligence and robotics, since they rely on algorithms modelled on patterns in which the human brain works and are capable of self-learning. TensorFlow Keras is one of the most popular and fastest-progressing technologies right now, as it possesses the potential to change the future of technology.
In this article, we will look at the metrics of Keras TensorFlow, classes, and functions available in TensorFlow and learn about the classification metrics along with the implementation of metrics
TensorFlow with an example.
What are TensorFlow metrics?
Tensorflow metrics are the functions and classes that help in calculating and analyzing the estimated performance of your TensorFlow model. The output of the metric functions is not used for training the model; other than that, the behavior of metric functions is quite similar to that of loss functions, and a loss function can even be used as a metric. The tf.keras.metrics namespace is the public application programming interface for these metrics.
Tensorflow metrics
There is a list of functions and classes available in TensorFlow that can be used to judge the performance of your application. These metrics help in monitoring how you train your model. You specify the metrics you need, by name or alias, when you call the compile() function on your model, along with the optimizer and loss function; the values of those metrics are then reported while the fit() function trains the model.
TensorFlow metrics Classes
The list of all the available classes in tensorflow metrics are listed below –
• Class Poisson – The calculation of the metrics of Poisson between the range of y_pred and y_true can be made using this class.
• Class Metric – All the metrics’ states and logic are encapsulated within the class metric.
• Class MeanTensor – We can calculate the weighted or regular means element-wise by using this class for the specified tensor models.
• Class Recall – Computes the recall of the predictions with respect to the given labels.
• Class Precision – Computes the precision of the predictions with respect to the given labels.
• Class PrecisionAtRecall – Computes the best precision subject to the recall being greater than or equal to the specified value.
• Class SparseTopKCategoricalAccuracy – Computes how often the integer targets appear in the top K predictions.
• Class SensitivityAtSpecificity – Computes the best sensitivity subject to the specificity being greater than or equal to the specified value.
• Class RootMeanSquaredError – Computes the root mean squared error metric between y_true and y_pred.
• Class SparseCategoricalAccuracy – This class helps calculate the probability of getting an integer label that matches the predictions.
• Class SparseCategoricalCrossentropy – This helps compute the cross-entropy metric between the range of labels and predictions mentioned.
• Class Sum – When the list of values is specified, this class helps you calculate the weighted sum.
• Class TrueNegatives – This class calculates the count of the true negatives present.
• Class SpecificityAtSensitivity – Computation of the specificity can be done by using this class where the condition that sensitivity should be greater than or equal to the specified value.
• Class SquaredHinge – The value of the squared hinge can be calculated using this metric class that considers the range between y_true and y_pred values.
• Class TruePositives – This class calculates the count of several true positives.
TensorFlow Metrics Functions
The list functions available in Tensorflow are as listed below in table –
Function Name Description
MSE (…) The computation of mean square error while considering the range of labels to the specified predictions.
MAE (…) Mean Absolute Error can be calculated between the specified range of labels and the predictions.
KLD (…) The Kullback Leibler divergence loss value can be estimated by using this function which considers the range between y
_true and y_pred.
MAPE (…) Mean Absolute Percentage error can be calculated using this function that considers the y_pred and y_true range for calculation.
MSLE (…) Mean Squared Logarithmic error can be estimated by using this function which considers the range between y
_true and y_pred.
Categorical_accuracy (…) The probability of calculating how often the value of predictions matches with the one-hot labels can be calculated using this function.
Binary_accuracy (…) The probability of matching the value of predictions with binary labels can be calculated using this function.
Categorical_crossentropy (…) The loss of categorical cross-entropy can be calculated by using this function.
Binary_crossentropy (…) The computation of loss of binary cross-entropy can be done by using this function.
Get (…) Using this function, we can retrieve the value of keras metrics such as an instance of Function/ Metric class.
kl_divergence (…) This function is used for calculating the kullback Leibler loss of divergence while considering the range between y_true and y_pred.
Deserialize (…) The process of deserializing a function or class into its serialized version can be done using this function.
Hinge (…) The hinge loss can be calculated using this function that considers the range of y_true to y_pred.
Kld (…) This function is used for calculating the kullback Leibler loss of divergence while considering the range between y_true and y_pred.
Besides the functions mentioned above, there are many other functions for calculating mean and logging-related functionalities. But, again, you can refer to this official link for complete guidance.
Classification Metrics
TensorFlow’s most important classification metrics include precision, recall, accuracy, and the F1 score. Precision and recall differ in what they penalise: precision is hurt by false positives, recall by false negatives. The ROC curve stands for Receiver Operating Characteristic, and the decision threshold also plays a key role in classification metrics. Evaluating true and false negatives and true and false positives is also important.
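These quantities are easy to compute by hand, which makes clear what classes such as Precision and Recall accumulate internally. A minimal, dependency-free sketch over binary labels and thresholded predictions (the sample data is illustrative):

```python
def binary_metrics(y_true, y_pred):
    """Precision, recall and F1 from binary labels and predictions."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# 2 true positives, 1 false positive, 1 false negative.
p, r, f1 = binary_metrics([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
assert (p, r) == (2 / 3, 2 / 3)
assert abs(f1 - 2 / 3) < 1e-12
```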
TensorFlow Metrics Examples
Let us consider one example –
We will follow the steps of sequence preparation, creating the model, training it, plotting its various metrics values, and displaying all of them. Our program will be –
from numpy import array
from keras.models import Sequential
from keras.layers import Dense
from matplotlib import pyplot
# A toy sequence: the model learns the identity mapping.
sampleEducbaSequence = array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
educba_Model = Sequential()
educba_Model.add(Dense(1, input_dim=1))
# Track several regression metrics alongside the MSE loss.
educba_Model.compile(loss='mse', optimizer='adam', metrics=['mse', 'mae', 'mape'])
model_history = educba_Model.fit(sampleEducbaSequence, sampleEducbaSequence, epochs=500, batch_size=len(sampleEducbaSequence), verbose=2)
# Plot how each tracked metric evolves over the training epochs.
for metric_name, values in model_history.history.items():
    pyplot.plot(values, label=metric_name)
pyplot.legend()
pyplot.show()
Executing the above program trains the model for 500 epochs, printing the loss and metric values for each epoch, and then plots how each metric decreases during training.
Various functions and classes are available for calculating and estimating the tensorflow metrics. This article discusses some key classification metrics that affect the application’s performance.
Recommended Articles
This is a guide to TensorFlow Metrics. Here we discuss the Introduction, What are TensorFlow metrics? Examples with code implementation. You may also have a look at the following articles to learn
more –
|
{"url":"https://www.educba.com/tensorflow-metrics/","timestamp":"2024-11-05T15:35:57Z","content_type":"text/html","content_length":"311775","record_id":"<urn:uuid:cf279710-b553-4b9f-9610-7b993ea434d1>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00797.warc.gz"}
|
Consider the signal s(t) with Fourier transform S(ω) = 10/(1 + ω²) - pronursingstudy
Consider the signal s(t) with Fourier transform S(ω) = 10/(1 + ω²).
a) (5 pts) In the figure below, we impulse sample s(t) at a frequency ω_s rad/second, resulting in the signal s_s(t). Can you find a finite sampling frequency ω_s such that you can perfectly recover s(t) from s_s(t)? If so, find it. If not, explain why not.
s(t) → [Impulse sample at rate ω_s] → s_s(t)
b) (5 pts) Now suppose we filter the signal s(t) with an ideal low pass filter with frequency response H(ω) and bandwidth B as shown in the figure below to produce the signal y(t), and then we impulse sample y(t) to produce y_s(t). Find the filter bandwidth B so that the Fourier transform Y(ω) of the filter output y(t) is 0 for all frequencies ω where the value of S(ω) is below S(0)/10, where S(0) denotes the DC value of S(ω).
s(t) → [H(ω), bandwidth B] → y(t) → [Impulse sample at rate ω_s] → y_s(t)
c) (5 pts) Using your value of B from part b, what is the minimum value of the sampling rate ω_s that will allow the filter output y(t) to be perfectly recovered from its impulse sampled version y_s(t)?
d) (5 pts) What is the purpose of the filter H(ω)? (One sentence answer please.)
e) (10 pts) Suppose the sampling rate ω_s is double your answer in part (c). (This is called 2x over-sampling.) Carefully sketch the Fourier transform Y_s(ω) of the impulse sampled output y_s(t), showing all significant amplitudes and frequencies.
f) (5 pts) Now suppose that the sampled signal y_s(t) from part e) is the input to a recovery filter with frequency response H_r(ω). Carefully sketch H_r(ω) so that the output of the recovery filter is the signal y(t), showing all significant amplitudes and frequencies.
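Taking S(ω) = 10/(1 + ω²), as the statement suggests, parts (b) and (c) reduce to short arithmetic; a plain-Python sketch (frequencies in rad/s, and the numbers below are this sketch's computation rather than an official solution):

```python
import math

def S(w):
    # Assumed spectrum S(w) = 10 / (1 + w^2); S(0) = 10 is the DC value.
    return 10.0 / (1.0 + w * w)

S0 = S(0.0)

# Part (b): S(B) = S0/10  ->  10/(1 + B^2) = 1  ->  B^2 = 9  ->  B = 3.
B = math.sqrt(S0 / (S0 / 10.0) - 1.0)
assert B == 3.0

# Part (c): a signal bandlimited to B rad/s needs w_s >= 2B (Nyquist).
w_nyquist = 2 * B              # 6 rad/s
# Part (e): 2x over-sampling doubles that rate.
w_oversampled = 2 * w_nyquist  # 12 rad/s
assert (w_nyquist, w_oversampled) == (6.0, 12.0)
```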
|
{"url":"https://pronursingstudy.com/points-consider-the-signal-st-with-fourier-transform-10-1-sa-figure-below-we-impulse-sample/","timestamp":"2024-11-06T16:54:36Z","content_type":"text/html","content_length":"42260","record_id":"<urn:uuid:7c2ec34f-e012-4c4e-9b45-0cc6e3d23c49>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00150.warc.gz"}
|
How to get a solution vector
I need to find a vector v=(v_1, v_2,...) such that F(v_i) - p_i = 0 using the root function. Is there a simple way to do this? Thanks.
It's tedious having to retype what you show, so in future questions you should always attach your worksheet.
You can't use the root function by providing a range of v-values, because you would have to ensure that the function values at both ends of the range have opposite signs. This would mean that at least one of those ends must be different for different values of p.
You may use a solve block (turned into a function) or the root command, but with a guess value:
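The Mathcad screenshot referred to above is not reproduced here, but the same idea, a root finder applied element-wise with a guess value, can be sketched outside Mathcad too. A minimal Python version using Newton's method (the function F(v) = v² and the starting guess are illustrative assumptions, not from the original worksheet):

```python
def newton_root(F, dF, p, guess=1.0, tol=1e-12, max_iter=100):
    """Solve F(v) - p = 0 for a single p by Newton's method."""
    v = guess
    for _ in range(max_iter):
        step = (F(v) - p) / dF(v)
        v -= step
        if abs(step) < tol:
            break
    return v

F  = lambda v: v * v        # illustrative F; here roots are sqrt(p)
dF = lambda v: 2 * v        # its derivative, needed by Newton's method

p_values = [1.0, 4.0, 9.0]
# Element-wise: one root-finding call per p_i, same guess for each.
v = [newton_root(F, dF, p) for p in p_values]
assert all(abs(vi * vi - pi) < 1e-9 for vi, pi in zip(v, p_values))
```

Supplying a guess (rather than a bracketing range) sidesteps the sign-change requirement described above, exactly as with Mathcad's root with a guess value.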
|
{"url":"https://community.ptc.com/t5/Mathcad/How-to-get-a-solution-vector/td-p/690934","timestamp":"2024-11-05T04:04:06Z","content_type":"text/html","content_length":"238542","record_id":"<urn:uuid:eaf92174-c317-4b6a-91f8-7a67636d52ac>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00190.warc.gz"}
|
ST342 Mathematics of Random Events
Please note that all lectures for Statistics modules taught in the 2022-23 academic year will be delivered on campus, and that the information below relates only to the hybrid teaching methods
utilised in 2021-22 as a response to Coronavirus. We will update the Additional Information (linked on the right side of this page) prior to the start of the 2022/23 academic year.
Throughout the 2021-22 academic year, we will be adapting the way we teach and assess your modules in line with government guidance on social distancing and other protective measures in response to
Coronavirus. Teaching will vary between online and on-campus delivery through the year, and you should read the additional information linked on the right hand side of this page for details of how
this will work for this module. The contact hours shown in the module information below are superseded by the additional information. You can find out more about the University’s overall response to
Coronavirus at: https://warwick.ac.uk/coronavirus.
All dates for assessments for Statistics modules, including coursework and examinations, can be found in the Statistics Assessment Handbook at http://go.warwick.ac.uk/STassessmenthandbook
ST342-15 Mathematics of Random Events
Introductory description
This module runs in Term 1 and aims to provide an introduction to measure theory, concentrating on examples and applications. This module would be particularly useful for students wishing to learn more about probability theory, analysis, mathematical finance, and theoretical statistics.
This module is available for students on a course where it is an optional core module or listed option and as an Unusual Option to students who have completed the prerequisite modules.
Statistics Students: ST218 Mathematical Statistics A AND ST219 Mathematical Statistics B
Non-Statistics Students: ST220 Introduction to Mathematical Statistics
Leads to: ST318 Probability Theory.
Module aims
To introduce the concepts of measurable spaces, integral with respect to the Lebesgue measure, independence and modes of convergence, and provide a basis for further studies in Probability,
Statistics and Applied Mathematics. Imagine picking a real number x between 0 and 1 "at random" and with perfect accuracy, so that the probability that this number belongs to any interval within
[0,1] is equal to the length of the interval. Can we compute the probability of x belonging to any subset to [0,1]?
To answer this question rigorously we need to develop a mathematical framework in which we can model the notion of picking a real number "at random". The mathematics we need, called measure theory,
permeates through much of modern mathematics, probability and statistics.
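The claim that the probability of x landing in an interval equals the interval's length can be illustrated with a quick Monte Carlo experiment; the following Python sketch (illustrative only, not part of the module material) estimates P(x ∈ [0.25, 0.7]):

```python
import random

random.seed(0)

# For x uniform on [0, 1], P(a <= x <= b) should approximate b - a.
a, b = 0.25, 0.7
n = 200_000
hits = sum(a <= random.random() <= b for _ in range(n))
estimate = hits / n
assert abs(estimate - (b - a)) < 0.01   # close to 0.45
```

Making this intuition rigorous for arbitrary subsets of [0, 1], rather than intervals, is precisely what requires the sigma-algebras and measures developed in the syllabus.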
Outline syllabus
This is an indicative module outline only to give an indication of the sort of topics that may be covered. Actual sessions held may differ.
I. Algebras, sigma-algebras and measures
Algebra and contents, sigma-algebra and measures, pi-systems, examples of random events and measurable sets.
II. Lebesgue integration
Simple functions, standard representations, measurable functions, Lebesgue integral, properties of integrals, integration of Borel functions.
III. Product measures, 2 lectures
Sections, product sigma-algebras, product measures, Fubini theorem.
IV. Independence and conditional expectation 3 lectures
Independence of sigma-algebras, independence of random variables, conditional expectation with respect to a simple algebra.
V. Convergence and modes of convergence
Borel-Cantelli lemma, Fatou lemma, dominated convergence theorem, modes of convergence of random variables, Markov inequality and application, weak and strong laws of large numbers.
Learning outcomes
By the end of the module, students should be able to:
• Explain the properties of the probability spaces one can use for building models for simple experiments.
• Compute the probabilities of complicated events using countable additivity.
• Properly formulate the notion of statistical independence.
• Describe the basic theory of integration, particularly as applied to the expectation of random variables, and be able to compute expectations from first principles.
• Identify convergence in probability and almost sure convergence of sequences of random variables, and use and justify convergence in the computation of integrals and expectations.
Indicative reading list
View reading list on Talis Aspire
Subject specific skills
Transferable skills
Study time
Type Required
Lectures 30 sessions of 1 hour (20%)
Tutorials 5 sessions of 1 hour (3%)
Private study 115 hours (77%)
Total 150 hours
Private study description
Weekly revision of lecture notes and materials, wider reading, practice exercises and preparing for examination.
No further costs have been identified for this module.
You must pass all assessment components to pass the module.
Assessment group B2
Weighting Study time Eligible for self-certification
On-campus Examination 100% No
The examination paper will contain four questions, of which the best marks of THREE questions will be used to calculate your grade.
~Platforms - Moodle
• Answerbook Pink (12 page)
Assessment group R1
Weighting Study time Eligible for self-certification
In-person Examination - Resit 100% No
The examination paper will contain four questions, of which the best marks of THREE questions will be used to calculate your grade.
~Platforms - Moodle
• Answerbook Pink (12 page)
Feedback on assessment
Solutions and cohort level feedback will be provided for the examination. The results of the January examination will be available by the end of week 10 of term 2.
Anti-requisite modules
If you take this module, you cannot also take:
This module is Core optional for:
• Year 3 of USTA-G300 Undergraduate Master of Mathematics,Operational Research,Statistics and Economics
• USTA-G301 Undergraduate Master of Mathematics,Operational Research,Statistics and Economics (with Intercalated
□ Year 3 of G30E Master of Maths, Op.Res, Stats & Economics (Actuarial and Financial Mathematics Stream) Int
□ Year 4 of G30E Master of Maths, Op.Res, Stats & Economics (Actuarial and Financial Mathematics Stream) Int
This module is Optional for:
• Year 3 of UCSA-G4G1 Undergraduate Discrete Mathematics
• Year 3 of UCSA-G4G3 Undergraduate Discrete Mathematics
• Year 4 of UCSA-G4G2 Undergraduate Discrete Mathematics with Intercalated Year
• USTA-G300 Undergraduate Master of Mathematics,Operational Research,Statistics and Economics
□ Year 3 of G300 Mathematics, Operational Research, Statistics and Economics
□ Year 4 of G300 Mathematics, Operational Research, Statistics and Economics
This module is Core option list B for:
• Year 3 of USTA-G300 Undergraduate Master of Mathematics,Operational Research,Statistics and Economics
• USTA-G301 Undergraduate Master of Mathematics,Operational Research,Statistics and Economics (with Intercalated
□ Year 3 of G30H Master of Maths, Op.Res, Stats & Economics (Statistics with Mathematics Stream)
□ Year 4 of G30H Master of Maths, Op.Res, Stats & Economics (Statistics with Mathematics Stream)
• Year 3 of USTA-G1G3 Undergraduate Mathematics and Statistics (BSc MMathStat)
This module is Option list A for:
• Year 3 of USTA-GG14 Undergraduate Mathematics and Statistics (BSc)
• Year 4 of USTA-GG17 Undergraduate Mathematics and Statistics (with Intercalated Year)
• Year 3 of USTA-Y602 Undergraduate Mathematics,Operational Research,Statistics and Economics
This module is Option list B for:
• Year 3 of USTA-G304 Undergraduate Data Science (MSci)
• Year 4 of USTA-G303 Undergraduate Data Science (with Intercalated Year)
|
{"url":"https://warwick.ac.uk/fac/sci/statistics/currentstudents/courseinfo/modules/st3/st342/","timestamp":"2024-11-14T23:27:08Z","content_type":"text/html","content_length":"55001","record_id":"<urn:uuid:2efd5ef5-fa29-47c7-88ee-ce19bd75e145>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00878.warc.gz"}
|
Conference on Mathematics of Wave Phenomena, February 24-28, 2025, in Karlsruhe
Mathematics of Acoustic Nonlinearity Tomography
Organizers of this minisymposium are
Nonlinear-ultrasound tomography (NUST), which generates an image of the nonlinearity parameter B/A, has – because of its clinical potential – been a topic of interest for 40 years, but like
ultrasound tomography a few years ago (with first attempts going back to the 1950’s) and for similar reasons, the time is now right for a game-changing breakthrough. To take NUST from a promising
idea to a fully-fledged imaging modality will require a thorough understanding of how to model the forward wave problem of nonlinear acoustic propagation, leading to efficient numerical codes and a
better understanding of the mechanisms that can be exploited to obtain useful data, as well as an in-depth study of the inverse problem, which will inform experimental design and the development of
image reconstruction algorithms. This minisymposium aims at bringing together researchers working on this topic in the areas of mathematical analysis, numerics of partial differential equations,
inverse problems, scientific computing and biomedical imaging, in order to encourage new ideas and trends in this direction, and initiate cooperation.
|
{"url":"https://conference25.waves.kit.edu/?page_id=106","timestamp":"2024-11-08T02:07:11Z","content_type":"text/html","content_length":"26086","record_id":"<urn:uuid:424b2e2a-7b30-4579-9cd6-bd508e6c004e>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00410.warc.gz"}
|
Petaflop to Exaflop Converter (Pflop to Eflop) | Kody Tools
1 Petaflop = 0.001 Exaflops
One Petaflop is Equal to How Many Exaflops?
The answer is one Petaflop is equal to 0.001 Exaflops and that means we can also write it as 1 Petaflop = 0.001 Exaflops. Feel free to use our online unit conversion calculator to convert the unit
from Petaflop to Exaflop. Just simply enter value 1 in Petaflop and see the result in Exaflop.
Manually converting Petaflop to Exaflop can be time-consuming, especially when you don't have enough knowledge about Computer Speed units conversion. Since there is a lot of complexity and some sort of learning curve is involved, most users end up using an online Petaflop to Exaflop converter tool to get the job done as soon as possible.
We have so many online tools available to convert Petaflop to Exaflop, but not every online tool gives an accurate result and that is why we have created this online Petaflop to Exaflop converter
tool. It is a very simple and easy-to-use tool. Most important thing is that it is beginner-friendly.
How to Convert Petaflop to Exaflop (Pflop to Eflop)
By using our Petaflop to Exaflop conversion tool, you know that one Petaflop is equivalent to 0.001 Exaflop. Hence, to convert Petaflop to Exaflop, we just need to multiply the number by 0.001. We are going to use a very simple Petaflop to Exaflop conversion formula for that. Please see the calculation example given below.
\(\text{1 Petaflop} = 1 \times 0.001 = \text{0.001 Exaflops}\)
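The same multiply-by-a-power-of-1000 rule covers every pair of flop multiples. A small helper function (the dictionary keys and function name are illustrative) makes this explicit:

```python
# Powers of 1000 relative to one flop.
SCALE = {"flop": 0, "kiloflop": 1, "megaflop": 2, "gigaflop": 3,
         "teraflop": 4, "petaflop": 5, "exaflop": 6}

def convert_flops(value, src, dst):
    """Convert between flop multiples, e.g. petaflop -> exaflop."""
    return value * 1000 ** (SCALE[src] - SCALE[dst])

assert convert_flops(1, "petaflop", "exaflop") == 0.001
assert convert_flops(1000, "petaflop", "exaflop") == 1.0
assert convert_flops(1, "petaflop", "gigaflop") == 1_000_000
```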
What Unit of Measure is Petaflop?
Petaflop is a unit of measurement for computer performance. Petaflop is a multiple of computer performance unit flop. One petaflop is equal to 1e15 flops.
What is the Symbol of Petaflop?
The symbol of Petaflop is Pflop. This means you can also write one Petaflop as 1 Pflop.
What Unit of Measure is Exaflop?
Exaflop is a unit of measurement for computer performance. Exaflop is a multiple of computer performance unit flop. One exaflop is equal to 1e18 flops.
What is the Symbol of Exaflop?
The symbol of Exaflop is Eflop. This means you can also write one Exaflop as 1 Eflop.
How to Use Petaflop to Exaflop Converter Tool
• As you can see, we have 2 input fields and 2 dropdowns.
• From the first dropdown, select Petaflop and in the first input field, enter a value.
• From the second dropdown, select Exaflop.
• Instantly, the tool will convert the value from Petaflop to Exaflop and display the result in the second input field.
Example of Petaflop to Exaflop Converter Tool
Petaflop to Exaflop Conversion Table
Petaflop [Pflop] Exaflop [Eflop] Description
1 Petaflop 0.001 Exaflop 1 Petaflop = 0.001 Exaflop
2 Petaflop 0.002 Exaflop 2 Petaflop = 0.002 Exaflop
3 Petaflop 0.003 Exaflop 3 Petaflop = 0.003 Exaflop
4 Petaflop 0.004 Exaflop 4 Petaflop = 0.004 Exaflop
5 Petaflop 0.005 Exaflop 5 Petaflop = 0.005 Exaflop
6 Petaflop 0.006 Exaflop 6 Petaflop = 0.006 Exaflop
7 Petaflop 0.007 Exaflop 7 Petaflop = 0.007 Exaflop
8 Petaflop 0.008 Exaflop 8 Petaflop = 0.008 Exaflop
9 Petaflop 0.009 Exaflop 9 Petaflop = 0.009 Exaflop
10 Petaflop 0.01 Exaflop 10 Petaflop = 0.01 Exaflop
100 Petaflop 0.1 Exaflop 100 Petaflop = 0.1 Exaflop
1000 Petaflop 1 Exaflop 1000 Petaflop = 1 Exaflop
Petaflop to Other Units Conversion Table
Conversion Description
1 Petaflop = 1000000000000000 Flop 1 Petaflop in Flop is equal to 1000000000000000
1 Petaflop = 1000000000000 Kiloflop 1 Petaflop in Kiloflop is equal to 1000000000000
1 Petaflop = 1000000000 Megaflop 1 Petaflop in Megaflop is equal to 1000000000
1 Petaflop = 1000000 Gigaflop 1 Petaflop in Gigaflop is equal to 1000000
1 Petaflop = 1000 Teraflop 1 Petaflop in Teraflop is equal to 1000
1 Petaflop = 0.001 Exaflop 1 Petaflop in Exaflop is equal to 0.001
|
{"url":"https://www.kodytools.com/units/cspeed/from/petaflop/to/exaflop","timestamp":"2024-11-02T12:14:09Z","content_type":"text/html","content_length":"75517","record_id":"<urn:uuid:309879f5-6c3d-4797-992c-d83770b926a3>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00668.warc.gz"}
|
Parallelogram ABCD lies in the xy plane
The easiest way to solve this problem is to draw a rectangle around the parallelogram, find its area, and subtract the areas of the triangles that emerge around the parallelogram, within the rectangle (but that are not part of the parallelogram).
Since ABCD is a parallelogram, line segments AB and CD have the same length and the same slope. Therefore, in the diagram above, point A is at (-4,3). The bounding rectangle here is a square with area 7*7=49. By drawing carefully and exploiting similar triangles created by various parallel lines, you can label the height of each triangle 3 and each base 7. Each triangle has area (1/2)hb = (1/2)*3*7 = 21/2. Therefore, the area of the parallelogram ABCD equals 49 - 4*(21/2) = 49 - 42 = 7.
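The bounding-box argument can be cross-checked with the shoelace formula. The Python sketch below uses hypothetical vertices of a base-7, height-1 parallelogram with area 7, since the actual coordinates of B, C and D are not fully given in the text:

```python
def shoelace_area(pts):
    """Polygon area from vertices listed in order (shoelace formula)."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2

# Illustrative parallelogram with base 7 and height 1 -> area 7.
quad = [(0, 0), (7, 0), (8, 1), (1, 1)]
assert shoelace_area(quad) == 7.0
```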
01561 LNL PUNE Emu Train Time Table
If we look closely at the 01561 train time table, travelling by the 01561 LNL PUNE Emu gives us a chance to see the following stations as they come along the route.
1. Lonavala
It is the 1st station in the train route of 01561 LNL PUNE Emu. The station code of Lonavala is LNL. The departure time of train 01561 from Lonavala is 14:50. The next stopping station is Malavli at
a distance of 8km.
2. Malavli
It is the 2nd station in the train route of 01561 LNL PUNE Emu at a distance of 8 Km from the source station Lonavala. The station code of Malavli is MVL. The arrival time of 01561 at Malavli is
14:57. The departure time of train 01561 from Malavli is 14:58. The total halt time of train 01561 at Malavli is 1 minutes. The previous stopping station, Lonavala is 8km away. The next stopping
station is Kamshet at a distance of 8km.
3. Kamshet
It is the 3rd station in the train route of 01561 LNL PUNE Emu at a distance of 16 Km from the source station Lonavala. The station code of Kamshet is KMST. The arrival time of 01561 at Kamshet is
15:04. The departure time of train 01561 from Kamshet is 15:05. The total halt time of train 01561 at Kamshet is 1 minutes. The previous stopping station, Malavli is 8km away. The next stopping
station is Kanhe at a distance of 4km.
4. Kanhe
It is the 4th station in the train route of 01561 LNL PUNE Emu at a distance of 20 Km from the source station Lonavala. The station code of Kanhe is KNHE. The arrival time of 01561 at Kanhe is 15:08.
The departure time of train 01561 from Kanhe is 15:09. The total halt time of train 01561 at Kanhe is 1 minutes. The previous stopping station, Kamshet is 4km away. The next stopping station is
Vadgaon at a distance of 6km.
5. Vadgaon
It is the 5th station in the train route of 01561 LNL PUNE Emu at a distance of 26 Km from the source station Lonavala. The station code of Vadgaon is VDN. The arrival time of 01561 at Vadgaon is
15:12. The departure time of train 01561 from Vadgaon is 15:13. The total halt time of train 01561 at Vadgaon is 1 minutes. The previous stopping station, Kanhe is 6km away. The next stopping station
is Talegaon at a distance of 4km.
6. Talegaon
It is the 6th station in the train route of 01561 LNL PUNE Emu at a distance of 30 Km from the source station Lonavala. The station code of Talegaon is TGN. The arrival time of 01561 at Talegaon is
15:17. The departure time of train 01561 from Talegaon is 15:18. The total halt time of train 01561 at Talegaon is 1 minutes. The previous stopping station, Vadgaon is 4km away. The next stopping
station is Ghorawadi at a distance of 3km.
7. Ghorawadi
It is the 7th station in the train route of 01561 LNL PUNE Emu at a distance of 33 Km from the source station Lonavala. The station code of Ghorawadi is GRWD. The arrival time of 01561 at Ghorawadi
is 15:21. The departure time of train 01561 from Ghorawadi is 15:22. The total halt time of train 01561 at Ghorawadi is 1 minutes. The previous stopping station, Talegaon is 3km away. The next
stopping station is Begdewadi at a distance of 3km.
8. Begdewadi
It is the 8th station in the train route of 01561 LNL PUNE Emu at a distance of 36 Km from the source station Lonavala. The station code of Begdewadi is BGWI. The arrival time of 01561 at Begdewadi
is 15:24. The departure time of train 01561 from Begdewadi is 15:25. The total halt time of train 01561 at Begdewadi is 1 minutes. The previous stopping station, Ghorawadi is 3km away. The next
stopping station is Dehu Road at a distance of 3km.
9. Dehu Road
It is the 9th station in the train route of 01561 LNL PUNE Emu at a distance of 39 Km from the source station Lonavala. The station code of Dehu Road is DEHR. The arrival time of 01561 at Dehu Road
is 15:28. The departure time of train 01561 from Dehu Road is 15:29. The total halt time of train 01561 at Dehu Road is 1 minutes. The previous stopping station, Begdewadi is 3km away. The next
stopping station is Akurdi at a distance of 5km.
10. Akurdi
It is the 10th station in the train route of 01561 LNL PUNE Emu at a distance of 44 Km from the source station Lonavala. The station code of Akurdi is AKRD. The arrival time of 01561 at Akurdi is
15:33. The departure time of train 01561 from Akurdi is 15:34. The total halt time of train 01561 at Akurdi is 1 minutes. The previous stopping station, Dehu Road is 5km away. The next stopping
station is Chinchvad at a distance of 3km.
11. Chinchvad
It is the 11th station in the train route of 01561 LNL PUNE Emu at a distance of 47 Km from the source station Lonavala. The station code of Chinchvad is CCH. The arrival time of 01561 at Chinchvad
is 15:37. The departure time of train 01561 from Chinchvad is 15:38. The total halt time of train 01561 at Chinchvad is 1 minutes. The previous stopping station, Akurdi is 3km away. The next stopping
station is Pimpri at a distance of 3km.
12. Pimpri
It is the 12th station in the train route of 01561 LNL PUNE Emu at a distance of 50 Km from the source station Lonavala. The station code of Pimpri is PMP. The arrival time of 01561 at Pimpri is
15:41. The departure time of train 01561 from Pimpri is 15:42. The total halt time of train 01561 at Pimpri is 1 minutes. The previous stopping station, Chinchvad is 3km away. The next stopping
station is Kasar Wadi at a distance of 2km.
13. Kasar Wadi
It is the 13th station in the train route of 01561 LNL PUNE Emu at a distance of 52 Km from the source station Lonavala. The station code of Kasar Wadi is KSWD. The arrival time of 01561 at Kasar
Wadi is 15:45. The departure time of train 01561 from Kasar Wadi is 15:46. The total halt time of train 01561 at Kasar Wadi is 1 minutes. The previous stopping station, Pimpri is 2km away. The next
stopping station is Dapodi at a distance of 4km.
14. Dapodi
It is the 14th station in the train route of 01561 LNL PUNE Emu at a distance of 56 Km from the source station Lonavala. The station code of Dapodi is DAPD. The arrival time of 01561 at Dapodi is
15:49. The departure time of train 01561 from Dapodi is 15:50. The total halt time of train 01561 at Dapodi is 1 minutes. The previous stopping station, Kasar Wadi is 4km away. The next stopping
station is Khadki at a distance of 2km.
15. Khadki
It is the 15th station in the train route of 01561 LNL PUNE Emu at a distance of 58 Km from the source station Lonavala. The station code of Khadki is KK. The arrival time of 01561 at Khadki is
15:54. The departure time of train 01561 from Khadki is 15:55. The total halt time of train 01561 at Khadki is 1 minutes. The previous stopping station, Dapodi is 2km away. The next stopping station
is Shivaji Nagar at a distance of 3km.
16. Shivaji Nagar
It is the 16th station in the train route of 01561 LNL PUNE Emu at a distance of 61 Km from the source station Lonavala. The station code of Shivaji Nagar is SVJR. The arrival time of 01561 at
Shivaji Nagar is 16:00. The departure time of train 01561 from Shivaji Nagar is 16:01. The total halt time of train 01561 at Shivaji Nagar is 1 minutes. The previous stopping station, Khadki is 3km
away. The next stopping station is Pune Jn at a distance of 3km.
17. Pune Jn
It is the 17th station in the train route of 01561 LNL PUNE Emu at a distance of 64 Km from the source station Lonavala. The station code of Pune Jn is PUNE. The arrival time of 01561 at Pune Jn is
16:10. The previous stopping station, Shivaji Nagar is 3km away.
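From the schedule above, the overall journey figures follow directly: departure from Lonavala at 14:50, arrival at Pune Jn at 16:10, over 64 km. A quick sketch of that arithmetic (function names are ours):

```python
# Overall journey statistics for train 01561, taken from the schedule above.
def minutes(hhmm):
    """Convert an HH:MM string to minutes since midnight."""
    h, m = map(int, hhmm.split(":"))
    return 60 * h + m

depart_lonavala = minutes("14:50")
arrive_pune = minutes("16:10")
distance_km = 64

duration_min = arrive_pune - depart_lonavala
avg_speed_kmh = distance_km * 60 / duration_min

print(duration_min)   # 80
print(avg_speed_kmh)  # 48.0
```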
The train number of the LNL PUNE Emu is 01561. The entire LNL PUNE Emu train schedule above lists important details such as arrival and departure times.
Check out CNC Machinist Calculator on iTunes & Google Play
We're working to become the best machining app on the market!
Try our new online calculator at: www.cncMachinistCalculatorUltra.com
CNC Machinist Calculator Pro
Possibly the Best machinist app on the market. Designed and engineered for easy operation at any level.
All functions are programmed for both inch & metric calculations.
CNC Lathe Taper
Calculate chamfers, taper, & radii. Then export the g-code for your CNC lathe. Export corner rounding g-code for internal & external chamfers, taper, & radii with tool nose compensation.
Bolt Circle Calculator
Calculate bolt circle coordinates based on bolt circle for up to 36 holes.
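Bolt-circle coordinates are just evenly spaced points on a circle: hole k of n sits at angle start + k·360/n, at a radius of half the bolt-circle diameter. A minimal sketch (function and parameter names are ours, not the app's):

```python
import math

def bolt_circle(n_holes, bolt_circle_dia, start_deg=0.0, cx=0.0, cy=0.0):
    """(x, y) coordinates of n equally spaced holes on a bolt circle."""
    r = bolt_circle_dia / 2
    coords = []
    for k in range(n_holes):
        theta = math.radians(start_deg + k * 360.0 / n_holes)
        coords.append((cx + r * math.cos(theta), cy + r * math.sin(theta)))
    return coords

# Four holes on a 10-unit bolt circle: roughly (5,0), (0,5), (-5,0), (0,-5).
for x, y in bolt_circle(4, 10.0):
    print(round(x, 4), round(y, 4))
```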
Unit Conversion
Convert degrees, minutes, & seconds to decimal degrees and vice versa. Convert fraction to decimal. Convert inch to metric. Convert surface meters to surface footage.
Turning Calculators
Groove & cut off cycle time calculator, Roughing cycle time calculator, Turning SFM calculator, SFM to RPM conversion tool, IPR to IPM conversion tool, IPM to IPR conversion tool
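The SFM↔RPM conversions in this group follow from surface speed = π × diameter × RPM; in inch units, RPM = 12·SFM/(π·D). A sketch under those units (our own function names, not the app's):

```python
import math

def sfm_to_rpm(sfm, diameter_in):
    """Spindle RPM for a given surface speed (ft/min) and diameter (inches)."""
    return (12.0 * sfm) / (math.pi * diameter_in)

def rpm_to_sfm(rpm, diameter_in):
    """Surface speed (ft/min) for a given spindle RPM and diameter (inches)."""
    return (math.pi * diameter_in * rpm) / 12.0

rpm = sfm_to_rpm(100, 1.0)
print(round(rpm))  # 382 rpm for 100 SFM on a 1-inch diameter
```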
True Position Calculators
Calculate true position of a feature based on the distance from zero in the X & Y axis
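By the usual ASME Y14.5 convention, true position is twice the radial deviation of the actual feature from nominal: TP = 2·√(Δx² + Δy²). Sketch (our own function name):

```python
import math

def true_position(dx, dy):
    """Diametral true position from X and Y deviations from nominal."""
    return 2.0 * math.hypot(dx, dy)

print(true_position(0.003, 0.004))  # ~0.010 for a 3-4-5 deviation
```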
Triangle Solver
Calculates angles and sides of both right & oblique triangles. Just enter any two values for a right triangle or any three values for an oblique triangle.
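For the three-sides input case, the angles follow from the law of cosines, cos A = (b² + c² − a²)/(2bc). A minimal sketch of that one combination (the app handles many more; function name is ours):

```python
import math

def angles_from_sides(a, b, c):
    """Angles (degrees) opposite sides a, b, c, via the law of cosines."""
    A = math.degrees(math.acos((b*b + c*c - a*a) / (2*b*c)))
    B = math.degrees(math.acos((a*a + c*c - b*b) / (2*a*c)))
    return A, B, 180.0 - A - B

# The 3-4-5 right triangle: roughly (36.87, 53.13, 90.0).
print(angles_from_sides(3, 4, 5))
```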
Thread Calculator
Calculate thread passes in turning. Also calculate helix angle, thread height, & more..
Thread Data
Get thread pitch, major, minor, root radius, helix angle, and more for 60° threads. Shows min and max wire sizes for checking each thread and shows the correct measurement over pins for whatever wire
size you choose. Contains UNC, UNF, UNEF, UNS, UN, UNM, UNJ, & M profile threads. UN threads up to Ø4" & M profile threads up to Ø600mm
Drill & Tap Calculator
Tap drill calculator for both roll form and cut taps based on your required percentage of thread.
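For metric cut taps, a common shop rule of thumb is drill ≈ major diameter − pitch, which gives roughly 75% thread engagement; the app refines this by the percentage of thread you specify. A sketch of the rule of thumb only (not the app's exact formula):

```python
def metric_tap_drill(major_mm, pitch_mm):
    """Rule-of-thumb cut-tap drill size: major diameter minus pitch (~75% thread)."""
    return major_mm - pitch_mm

print(metric_tap_drill(6.0, 1.0))   # M6 x 1.0  -> 5.0 mm drill
print(metric_tap_drill(10.0, 1.5))  # M10 x 1.5 -> 8.5 mm drill
```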
Surface Finish Calculators
Calculates lathe surface finish based on tool radius and feed. Calculate mill side wall finish based on tool diameter, cutting edges, and feed rate. Calculate step over distance of ball end mill
based on your required surface finish. Convert Ra finish to other common surface finishes such as RMS, Rp, & Rt.
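The geometric ideal behind these calculators is the cusp left by a round-nosed tool: peak-to-valley height h ≈ f²/(8r) for feed f per revolution and nose radius r, so a smaller feed or larger radius gives a finer finish. Sketch (geometric ideal only; real finishes run higher, and the function name is ours):

```python
def theoretical_finish(feed, nose_radius):
    """Ideal peak-to-valley cusp height for turning: f^2 / (8r)."""
    return feed ** 2 / (8.0 * nose_radius)

# 0.010 in/rev feed with a 1/32" nose radius:
h = theoretical_finish(0.010, 1.0 / 32.0)
print(h)  # ~0.0004 in peak-to-valley (400 microinches)
```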
M-code & G-code lists
Full listing & descriptions of m-codes & g-codes
Gun Drilling Calculator
Calculate speed, feed, cycle time & required coolant pressure for gun drilling based on material and hardness.
Geometry Calculator
Calculate Sine bar block height. Calculate circular segment features. Calculate area, radius, & circumference of a circle. Calculate the diameter over a hex & diameter over the points of a square.
Drilling Calculator
Calculate cutting speed, feed, & cut time based on drill diameter, tool material, and depth. Also calculate drill point length based on tool diameter and drill point angle.
Circle Segment Calculator
Calculate angle, radius, arc length, chord length, tangent length, middle ordinate, segment area, and sector area of a circle segment.
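All of those segment quantities follow from the radius r and central angle θ: arc length = rθ, chord = 2r·sin(θ/2), segment area = r²(θ − sin θ)/2. Sketch of those three (our own function name; angle in radians):

```python
import math

def circle_segment(r, theta):
    """Arc length, chord length, and segment area for central angle theta (radians)."""
    arc = r * theta
    chord = 2.0 * r * math.sin(theta / 2.0)
    seg_area = 0.5 * r * r * (theta - math.sin(theta))
    return arc, chord, seg_area

arc, chord, area = circle_segment(1.0, math.pi)  # a half circle
print(arc, chord, area)  # roughly pi, 2, pi/2
```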
Center Drill Dimensions
Center drill dimensions for both standard and bell type center drills size 00 thru 8.
CNC Keyway Cycle Time Calculator
Custom Macro Cheat Sheet
Full variable list, arithmetic symbols and examples, macro variable arguments, branches & loops, and other reference data with examples (press and hold to see examples)
Hardness Conversions
Convert hardness from Brinell, to Rockwell, to Tensile strength.
Drill & Tap Chart
Tap and drill chart for sizes up to Ø1" (Ø25.4mm).
Material Cutting Speed Data
Calculate cutting speed for turning, milling, & drilling. Choose from over 180 different types of material. Choose cutter material type (Carbide or HSS) & fine tune by setting the material hardness.
(App shows hardness limits for each material)
Milling Calculators
Cusp height calculator, Radial chip thinning calculator, Effective diameter for ball end mills, IPT to IPM conversion, Cutting speed calculator, SFM to RPM calculator
Machinist Test
Test the new folks before they run the 5-axis
Material Data
Material Properties & machinability data for over 200 materials
GD&T Test
One of the many test questions within the app
GD&T Descriptions
GD&T symbols & Definitions
Material Weight Calculator
Calculate material weight for 10 shapes & 180 materials
CNC Machinist Calculator Pro is a machining calculator designed to make your life easier. Our app is perfect for CNC programmers or operators who need to make quick calculations on the fly. It
includes the following features:
1. Turning Calculators
2. Milling Calculators
3. Drilling Calculators
4. Gun drilling Calculator
5. Thread Calculators
6. True Position Calculator
7. Bolt Pattern Calculator
8. Surface Finish Calculators
9. Drill & Tap charts
10. Thread Pitch charts
11. Center drill data
12. Unit converters
13. Hardness conversions
14. G-codes
15. M-codes
16. Blue Print GD&T
17. Machinability & cutting speed data for over 180 materials
18. Tap Drill Calculator for roll form & cut taps
19. Geometric calculators
20. Triangle Solver
21. Angle conversions
22. Circular segment calculator
23. Fraction Converter
24. Custom Macro info.
25. CNC Keyway cut time calculator
26. Thread Pitch over 3 wires calculator
27. Export lathe G-code for chamfers, tapers, & radii with tool nose compensation

Download it today on iTunes or Google Play. See our Privacy Policy.
: "Correct Answer" reduction
I have a simple problem regarding a reduction for the correct answer display.
The code below asks the student to take the derivative of the natural log of a root. The student's result is entered in fully reduced form and is marked correct with green fields, but the "Correct Answer" display, against which the student compares their answer, looks quite different and is not reduced.
I have tried using reduce and changing the context to Fractions, but that resulted in more errors.
Does anyone have a helpful suggestion?
I have attached the out put in a jpeg. and the code of the problem is below.
Thanks, Tim
# Webwork Workshop 2015 for Payer, Homework 1, Practice:
# Exercises to practice derivative of log reductions.
$a = Real(random(3,8,1));
$b =Real(random(2,9,1));
$c =Real(random(2,9,1));
$f = Compute(" ln(x^(($b)*(1/$a)))");
$g = $f ->D;
$h =Formula("$g")->reduce;
###Find the derivative of [``f(x) = \ln\left(\root [$a] \of {x^[$b]}\right)``] ###
[``f'(x) = ``] [______________]{$h}
I tried playing around with this, and I do not see how to fix. Even a simple product of powers of x does not get reduced.
For example, I tried this:
$uno = Formula("x**2");
$due = Formula("x**3");
$tre = Compute("$uno*$due")->reduce();
If I then write [$tre], it writes x^3*x^2 instead of writing x^5.
As I mentioned last Monday, MathObjects is not a computer algebra system, and it doesn't do much in the way of simplification. The main purpose of the reduce() method is to get rid of coefficients
like 1x, to remove terms like 0x, and to make things like x + (-2) into x-2. The only algebraic change it makes is to factor out minus signs so that it can cancel them, and it will rewrite -2x+3 as
3-2x. other than that, it leaves your expression alone.
The complete list of reductions is available on the Wiki, and you can see that it is not extensive. That is what I mean by "MathObject is not a CAS".
If you want to have the derivative formula in a particular form that is not what the D() method produces, you will need to enter that as a Formula() yourself.
Thanks Davide,
And yes I do recall you stating during the slide presentation that "MathObject is not a CAS".
But could I follow up with another reduction question?
I changed the code in the problem to show a completely reduced answer for for the derivative in the Correct answer display.
But now my problem is that a students answer does not have to be reduced.
Is there a way that I can require students to enter their answers in a reduced form and perhaps prompt them to reduce such an answer?
The LimitedFraction Context works great in this regard, but it seems that this context only works with numeric values and not when variables are included in the answer.
Please see the code and images below.
# Webwork Workshop 2015 for Payer, Homework 1, Practice:
# Exercises to practice derivative of log reductions.
$a = Real(random(3,8,1));
$b =Real(random(2,9,1));
$c =Real(random(2,9,1));
$f = Compute(" ln(x**(($b)(1/$a)))");
($br,$ar) = reduce($b,$a);
$g =Formula("($br/($ar x))");
###Find the derivative of [``f(x) = \ln\left(\root [$a] \of {x^{[$b]}}\right)``] ###
[``f'(x) = ``] [______________]{$g}
Is there a way that I can require students to enter their answers in a reduced form and perhaps prompt them to reduce such an answer?
No, there is no context that enforces that at the moment. WeBWorK's philosophy in general is to allow any equation that is equivalent to the answer, rather than enforcing particular formats for the
problems. For example, there is nothing stopping a student from entering x+1-1 for an answer that expects x. We have found that trying to enforce too much structure on the answer is counter
productive in general.
Would you disallow 5/6x + 2/3x for example? Both fractions are reduced, and the answer is equivalent. Once you allow some computation (as you do in a Formula), it becomes much more difficult to tell
students which operations are allow and which are not. For many equations, it is not clear what the "most reduced" form should be, and students get quite frustrated if they have the "right" answer
but WeBWorK doesn't accept it because the form is different from what the instructor wants. For example, you have not indicated that the answer has to be in any particular reduced form, so it would
come as a surprise if their answer were marked wrong when it is numerically correct.
A note about fomat: you are using a heading for your formula, which is a bad idea, in general, since your question is not a heading. This may not seem like much of an issue, but you should not misuse
the semantics of the layout that you are creating. For example, students with visual challenges who need screen readers will get the wrong idea from your problem, since they will be informed that
your question is a 3rd-level heading, and this will be extra confusing because there is no 1st or 2nd-level headings. If you want bold lettering, use bold, not headings (but I'd still recommend you
not use bold either, as there is no need for it to be highlighted).
Do not over specify layout. You may think you are making things look nicer, but you are just making things harder for people in the long run.
On this topic, I am currently using WeBWorK for a developmental math class, where, for example, the skill of reducing a fraction from 8x^3y/(2xy^3) to 4x^2/y^2 is exactly what is tested in a problem,
or where the student is asked to derive the expression x^2-2xy+y^2, given the expression (x-y)^2. Years ago we had some custom made macros written for these and similar problems, but I do not have
access to those files now. I would be quite interested in learning how to handle problems like that.
I didn't mean to say that this isn't something that people want to test, only that WeBWorK isn't designed for it, and that in general it is a hard thing to test well.
WeBWorK does have a few contexts for certain well-defined situations. One is the LimitedPolynomial context defined in contextLimitedPolynomial.pl. It provides a means of requiring polynomials in expanded
form, so would allow x^2-2xy+y^2 but not (x-y)^2. So that could be used for the second version.
For the first, there is a RationalFunction context defined in contextRationalFunction.pl that forces the student answer to be a rational function (a quotient of polynomials), but it doesn't require
the quotient to be reduced. You could use this to ensure that the result is a rational function equal to the right answer, and then use a custom checker to take apart the fraction and test the
numerator to see if it matches the numerator of the correct answer. That would ensure that it is reduced (provided the correct answer is).
This approach could be used for Tim Payer's 9/6x as well, but I would not want to use the RationalFunction context there, as that would give away something about the form of the answer if the student
entered certain incorrect answers. But a custom checker that checked if the answer is correct, is a fraction, and has numerator equal to what you expect could be used to check for the reduced
I'll see if I can put together an example before class today.
OK, here is a custom checker that does what I outlined above.
$f = Formula("4x^2/y^2");
$cmp = $f->cmp(checker => sub {
my ($correct,$student,$ans) = @_;
return 0 unless $correct == $student;
return 0 unless $student->{tree}->class eq "BOP" && $student->{tree}{bop} eq "/";
my $cnum = Formula($correct->{tree}{lop});
my $snum = Formula($student->{tree}{lop});
Value->Error("Your answer isn't reduced") unless $cnum == $snum;
return 1;
});
[:8x^3/(2xy^2):] reduces to [_____________]{$cmp}
The custom checker tests whether the student and correct answers are equal (and returns 0 if not), then checks whether the student answer is a fraction (so that we know we can remove its numerator for the
next test), and if so, it takes the numerators of both the correct and student answers and makes them in to Formula objects on their own, and compares them. If they are not equal, we produce a
message for the student telling her that the answer isn't reduced (we already know it is correct). Otherwise, we return 1 for full credit.
Tim could use essentially the same custom checker for his situation, but in the Numeric context. Note, however, that this would not prevent answers like 3/(1+1)x or 3/(2x-x) in his case, since we are
not enforcing polynomial format in the Numeric context.
Thank you very much Davide!
I appreciate the detailed answer. I will use this!
Thank you Valerio. It is nice to know that it is possible. I am torn between holding the same standard in WeBWorK as I do on exams and quizzes, and letting the looser standard hold.
17.9 Inches to Meters
17.9 in to m conversion result above is displayed in three different forms: as a decimal (which could be rounded), in scientific notation (scientific form, standard index form or standard form in the
United Kingdom) and as a fraction (exact result). Every display form has its own advantages and in different situations particular form is more convenient than another. For example usage of
scientific notation when working with big numbers is recommended due to easier reading and comprehension. Usage of fractions is recommended when more precision is needed.
If we want to calculate how many Meters are 17.9 Inches we have to multiply 17.9 by 127 and divide the product by 5000. So for 17.9 we have: (17.9 × 127) ÷ 5000 = 2273.3 ÷ 5000 = 0.45466 Meters
So finally 17.9 in = 0.45466 m
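Because an inch is exactly 127/5000 m (0.0254 m), the computation is a one-liner:

```python
# Inches to meters: 1 in = 127/5000 m = 0.0254 m exactly.
def inches_to_meters(inches):
    return inches * 127 / 5000

print(inches_to_meters(17.9))  # ~0.45466
```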
Analytical design of end-of-life disposal manoeuvres for highly elliptical orbits under the influence of the third body's attraction and planet's oblateness
Politecnico di Milano
SCHOOL OF INDUSTRIAL AND INFORMATION ENGINEERING
Department of Aerospace Science and Technology (DAER)
Master of Science in SPACE ENGINEERING
Analytical design of end-of-life disposal manoeuvres
for Highly Elliptical Orbits under the influence of
the third body’s attraction and planet’s oblateness.
Master Thesis
Francesca Scala
Matricola 874013
Supervisor: Prof. Camilla Colombo
Co-supervisor: Dr Ioannis Gkolias
Copyright© December 2018 by Francesca Scala. All rights reserved.
This content is original, written by the author, Francesca Scala. All non-original information, taken from previous works, is specified and recorded in the Bibliography.
When referring to this work, full bibliographic details must be given, i.e.
Scala Francesca, “Semi-analytical design of end-of-life disposal manoeuvres for Highly Elliptical Orbits under the influence of the third body’s attraction and planet’s oblateness”. 2018, Politecnico
di Milano, Faculty of Industrial Engineering, Department of Aerospace Science and Technologies, Master in Space Engineering, Supervisor: Camilla Colombo, Co-supervisor: Ioannis Gkolias
Remember to look up at the stars and not down at your feet. Try to make sense of what you see, and wonder about what makes the universe exist. Be curious.
— Stephen Hawking
First, I would like to acknowledge and express my gratitude to my thesis supervisor, Professor Camilla Colombo, of the Department of Aerospace Science and Technology at Politecnico di Milano, for giving me the opportunity to work on such stimulating topics and for her very professional guidance. I am also sincerely grateful to my co-supervisor, Dr Ioannis Gkolias, for the precious advice and support he gave me. I want to thank them for sharing invaluable scientific guidance and innovative ideas, helping me choose the right direction for completing the work, until the last day.
I would also like to acknowledge Professor Michèle Lavagna for giving me the opportunity to participate in many events at Politecnico for the Aerospace department, such as the MeetMeTonight and many others. It
was exciting and stimulating to work with her team.
I would like to thank all my colleagues; they taught me how to work in a team, sharing their knowledge and expertise. A special thanks to my colleagues and friends Marzia Z., Margherita P. and Laura P.; you were the light of my days in Bovisa. Thank you for making me feel at home and for all your valuable advice. Thank you for all the conversations and experiences we shared together. I want
to thank Paolo V., Lorenzo T. and Giovanni B. for supporting and encouraging me during these months of thesis. And finally thank-you to Marco G., Cristiano C., Matteo Q., Luca M., Sebastiano A.,
Claudio F., Emanuele B., Marco M., Hermes S., Alessandro S., Francesco A., Francesco R., Andrea P., Andrea G. and Andrea S. With each of you, I shared unforgettable and precious moments of my university experience. I owe you all my achievements; working and living together is the best experience I could have ever imagined.
I want to thank my friends from high school, Federica V., Francesca P., Matteo M., Roberto G., Luca C. and Martina T.; you were always by my side and always supported me, no matter what.
A big thank-you to my best friend Erica C., you were always my polar star. Thank you for your wonderful and splendid friendship.
A special thank-you goes to my parents; they always supported me during my studies, encouraging me through difficulties and successes. Without you, none of this would have been possible.
Finally, I wish to thank my boyfriend Mattia; you have shared more with me than anyone else. These years were not simple, but all our experiences made us even closer. Thank you for your patience, presence and
love in any moment of my life. I’m nothing without you and you make me happy as nobody else can do.
The increasing number of satellites orbiting the Earth gives rise to the need for investigating disposal strategies for space vehicles, to keep operative orbits safe for future space missions. In
the last years, several studies have been conducted focused on designing end-of-life trajectories. The aim of this thesis is the feasibility analysis for a fully-analytical method for end-of-life
de-orbit strategy of spacecraft in Highly Elliptical Orbits. The main perturbations to the Keplerian motion are planet’s oblateness and third bodies’ gravitational attraction. Following the classical
theory, the analytical expression of the double-averaged potential due to the third body's perturbations and the zonal J2 effect is derived in the planet-centred equatorial frame. This allows for a
simplified formulation of the system’s long-term dynamics. This thesis aims to introduce an innovative approach for the manoeuvre design, relying on a fully-analytical method to reduce the
computational cost. Some studies were already developed but using a semi-analytical approach: the resulting algorithm is time consuming for manoeuvres’ optimisation. The analysis is done considering
the third body and J2 contributions and the re-entry is modelled using the two-dimensional Hamiltonian
phase space. The model is used to estimate the eccentricity variations of the large set of orbits required during the optimisation process. The disposal manoeuvre is selected through a multi-criteria
optimisation. As real-case scenarios, disposal manoeuvres for Earth and Venus satellite missions are designed, considering the limitation on the propellant available on board. It is
demonstrated that the third-body gravitational perturbation provides a suitable environment for manoeuvre design. These results could serve as initial conditions for more accurate analyses with a
high-fidelity model, and confirm the potential efficiency of exploiting orbital perturbations for satellite navigation. This thesis was part of the COMPASS project: "Control for orbit manoeuvring by surfing through orbit perturbations" (Grant agreement No 679086). This project is a European Research Council (ERC) funded project under the European Union's Horizon 2020 research programme.
Keywords: Orbital perturbations; Long-term Evolution; End-of-life disposal; Optimal manoeuvres
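As an indicative sketch of the kind of expression involved (this is the classical Kozai-Lidov quadrupole form from the literature, shown here under our own normalisation assumptions; the thesis's derivation may differ in detail), the double-averaged third-body and J2 disturbing functions read:

```latex
% Classical double-averaged (quadrupole) third-body disturbing function and
% secular J2 disturbing function; a, e, i, omega are the satellite's elements,
% mu and R_p the planet's gravitational parameter and equatorial radius, and
% mu_3, a_3, e_3 refer to the third body. The normalisation is an assumption.
\overline{\overline{R}}_{3B} =
  \frac{\mu_3\, a^2}{8\, a_3^3 \left(1 - e_3^2\right)^{3/2}}
  \left[\left(2 + 3e^2\right)\left(3\cos^2 i - 1\right)
      + 15\, e^2 \sin^2 i \cos 2\omega\right],
\qquad
\overline{R}_{J_2} =
  \frac{\mu\, J_2\, R_p^2}{4\, a^3 \left(1 - e^2\right)^{3/2}}
  \left(3\cos^2 i - 1\right)
```

The ω-dependence of the first term is what drives the long-term eccentricity oscillations that the disposal design exploits.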
L’incremento del numero di satelliti in orbita terrestre fa nascere la necessità di studiare metodi di rimozione dei veicoli spaziali, per mantenere gli standard di sicurezza necessari in orbita per
le missioni future. Nel corso degli anni, diversi studi hanno analizzato strategie di rimozione passiva. Lo scopo di questa tesi è lo studio e la determinazione di metodi analitici per il rientro e
la rimozione di satelliti in orbite fortemente ellittiche. Seguendo la teoria classica, è stata sviluppata una formulazione analitica del potenziale dovuto al disturbo del terzo corpo e di J[2], nel
sistema di riferimento equatoriale, eliminando i contributi ad alta frequenza. Lo scopo di questa tesi è l’introduzione di un approccio innovativo, basato su algoritmi analitici, con lo scopo di
ridurre il costo computazionale. In passato, diversi studi si sono basati su un approccio semi analitico: l’agoritmo risulta computazionalmente inefficiente per l’ottimizzazione delle manovre. Si è
sviluppata quindi un’analisi semplificata, considerando l’effetto della perturbazione del terzo corpo e di J2, sviluppando una descrizione
dell’Hamiltoniana bidimensionale. La manovra è stata modellata tramite un processo di ottimizzazione su diversi parametri, e il risultato è stato ottenuto tramite le equazioni di Gauss per una
variazione dei parametri orbitali. Come casi studio, le possibili traiettorie a fine vita sono state valutate sia per missioni terrestri che intorno a Venere, considerando anche il limite dovuto alla
quantità di combustibile disponibile a bordo. Dai risultati, è stato dimostrato che la perturbazione orbitale dovuta all’attrazione del terzo corpo può essere sfruttato per il design di manovre nello
spazio delle fasi, e questo può essere considerato come il punto di partenza per ulteriori analisi, che comprendano modelli più accurati, dimostrando la potenziale efficienza dell’utilizzo di codici
analitici che sfruttino le perturbazioni orbitali per il controllo dei satelliti. Questa tesi è parte del progetto COMPASS: “Control for orbit manoeuvring by surfing through orbit perturbations”
(Grant agreement No 679086). Questo progetto è un progetto finanziato dall’European Research Council (ERC) nell’ambito della ricerca European Unions Horizon 2020.
Abstract
List of Figures
List of Tables
List of symbols and data
1 Introduction
1.1 State of the Art
1.1.1 Analytical modelling of orbit perturbations
1.2 Aim of the thesis
1.3 Thesis outline
2 Review of the perturbed two body problem
2.1 Historical Background
2.2 Earth Centred Inertial Frame
2.3 Perifocal Reference Frame
2.4 The two-body problem dynamics
2.5 Perturbed Equations of Motion
2.5.1 Lagrange Planetary Equations
2.5.2 Long-Term Lagrange Planetary Equations
2.5.3 Gauss planetary equation
2.6 Hamiltonian formulation
2.6.1 Hamiltonian principle
2.6.2 Hamiltonian of the perturbed two-body problem
3 Orbital perturbations
3.1 Historical Background
3.3 Zonal Harmonic Potential
3.3.1 Analytical expression
3.3.2 Long term effect via closed-form single-averaging
3.4 The third body disturbing function
3.4.1 Analytical expression of the disturbing potential
3.5 Third body secular effect via double averaging technique
3.5.1 Single-averaged third-body disturbing function
3.5.2 Double averaged disturbing function
3.5.3 Moon Disturbing Function
3.5.4 Sun disturbing Function
3.5.5 Moon double averaged potential
3.5.6 Sun double-averaged potential
3.6 Validation of the secular and long-term evolution of the orbital parameters
4 Hamiltonian Formulation: Phase Space Maps
4.1 Earth-Moon-Sun system: Hamiltonian representation
4.2 Venus-Sun system: Hamiltonian representation
4.3 Two-Dimensional Hamiltonian phase space
4.3.1 Elimination of the Node
4.3.2 Kozai Parameter
4.4 Phase space maps
4.5 Accuracy analysis
4.6 Problem of the node elimination
5 Disposal Manoeuvre Strategy
5.1 Disposal manoeuvre design
5.2 First case: impulse in normal direction (α = 90°)
5.3 Second case: impulse in a generic direction
5.4 Optimisation procedure
5.4.1 Cost function for the optimal altitude: Earth's or Venus' re-entry
5.4.2 Cost function for the optimal graveyard trajectory
5.4.3 Cost function for the optimal ∆v cost
5.4.4 Cost function for the optimisation procedure
5.4.5 Disposal constraint
5.4.6 Optimal single impulse design
5.4.7 Disposal Algorithm
6 Numerical results for the study case missions
6.1 Venus' Orbiter disposal design
6.1.1 Mission Definition
6.1.2 Mission strategy and constraint
6.1.3 Numerical Results
6.2 INTEGRAL disposal design
6.2.1 Mission Definition
6.2.2 Disposal strategy and constraint
6.2.3 Numerical Results
6.3 XMM-Newton disposal design
6.3.1 Mission Definition
6.3.2 Disposal strategy and constraint
6.3.3 Numerical Results
7 Conclusions and further developments
A Fundamental Relations
A.1 Trigonometric relation for true, mean and eccentric anomaly
A.2 Latitude and argument of latitude
B Planetary Fact Sheet
C Third body disturbing potential: Moon
C.1 A and B coefficient
C.2 Double averaged Moon disturbing function: Circular case
C.3 Double averaged Moon disturbing function: Elliptical case
D Third body disturbing potential: Sun
D.1 A and B coefficient
D.2 Double averaged Sun disturbing function
Acronyms
List of Figures
1.1 No. of launched objects per year, in the period 1957-2017, from Spacecraft Encyclopedia.
2.1 Earth centred inertial frame and perifocal reference system.
3.1 Schematic procedure for the analytical recovery.
3.2 Spherical Harmonics bands of latitude for the J2 to J6 terms in the Zonal contribution (Vallado, [39]).
3.3 J2 effect on Ω and ω: rate of change in time.
3.4 Three body system geometry.
3.5 Third body position geometry.
3.6 Comparison between the time evolution of the exact, single- and double-averaged approximations.
3.7 Focus on the first year of time evolution.
3.8 Different levels of approximation of the orbital dynamics, considering first only the Moon influence, then adding the J2 zonal contribution, and finally also the Sun effect.
3.9 Satellite orbit propagation in case of circular Moon orbit (green) and elliptical Moon orbit (blue).
4.1 Time variation of the Moon orbital plane inclination.
4.2 Orbital parameter variation, in the triple-averaged approach, considering the influence of the Moon ephemerides in time (blue line) and constant Moon orbital parameters in time (red line).
4.3 Phase space comparison between different approximations.
4.4 Earth's satellite: comparison of different models to describe the phase space maps.
4.5 Earth's satellite: comparison of different models to describe the orbital parameters time evolution.
4.6 Venus' orbiter: comparison of different models to describe the orbital parameters time evolution.
4.7 3D Hamiltonian phase space for ω = 3π/2.
5.1 Impulse representation of the in-plane and out-of-plane angles α, β in the t, n, h reference frame.
5.2 Representation of different strategies for the disposal.
5.3 Flow chart of the optimisation procedure algorithm used in the manoeuvre design.
6.1 Venus' orbiter two-dimensional phase space. In blue the spacecraft trajectory, in red the target eccentricity.
6.2 Representation of minimum (M1) and maximum (M2) eccentricity conditions for the manoeuvre.
6.3 Representation of optimal disposal options for the Venus' orbiter.
6.4 INTEGRAL satellite, image retrieved from the ESA website [66].
6.5 INTEGRAL phase space, two-dimensional representation.
6.6 Representation of different strategies for INTEGRAL disposal.
6.7 Representation of the new phase space for both minimum and maximum eccentricity conditions.
6.8 Time evolution of an INTEGRAL-like orbit. Orbital propagation after the delta-v impulse. In blue the original trajectory. The change in the perigee altitude shows that the manoeuvre reaches the target value for the re-entry.
6.9 XMM-Newton satellite, image retrieved from the ESA website [69].
6.10 XMM-Newton phase space, two-dimensional representation.
6.11 Representation of the new phase space for both minimum and maximum eccentricity conditions: the critical condition for the re-entry is not reached in either case.
6.12 XMM-Newton: representation of the new phase space for both minimum and maximum eccentricity conditions.
6.13 XMM-Newton: representation of the perigee altitude for different initial conditions of the manoeuvre (different epochs).
A.1 Spherical geometry in the equatorial frame I, J, K with I aligned with the γ direction to the equinox.
List of Tables
1.1 Number of space objects of different size, from the ESA website.
2.1 Magnitude of forces for an INTEGRAL-like mission (with area-to-mass ratio 1 and a semi-major axis of 8.7 × 10⁴ km).
3.1 Earth's Zonal Harmonics coefficients (from EGM-2008 [45]) and Venus' Zonal Harmonics coefficients (from Mottinger et al. [46]).
3.2 Legendre Polynomials P(l,m)[cos S] for m = 0.
3.3 Orbital parameters of an INTEGRAL-like orbit.
5.1 Difference in computational time between a numerical and a semi-analytical approach.
5.2 Results obtained with the MultiStart and the Genetic Algorithm methods for an INTEGRAL-like orbit, using a semi-analytical method. Notice the difference in the computational time.
6.1 Orbital parameters of a fictitious Venus' orbiter.
6.2 Results for maximum and minimum eccentricity conditions of the Venus' orbiter.
6.3 Orbital parameters of a fictitious Venus' orbiter.
6.4 Venus' probe optimal solution from the fully-analytical approach using the Hamiltonian triple-averaged model: starting date 22/03/2013.
6.5 Most important data of the INTEGRAL mission.
6.6 Results for maximum and minimum eccentricity of the INTEGRAL mission.
6.7 INTEGRAL solution from the fully-analytical approach using the Hamiltonian triple-averaged model and the semi-analytical propagation using the double-averaged potential function. The results from the semi-analytical approach are comparable with the literature results of Colombo et al. [24].
6.8 Most important data of the XMM-Newton mission.
6.9 Results for the minimum and the maximum eccentricity condition for the INTEGRAL mission.
6.10 XMM-Newton solution from the fully-analytical approach using the Hamiltonian triple-averaged model.
B.1 Planetary Fact Sheet in metric units.
List of symbols and data
G 6.67 × 10⁻²⁰ km³/kg/s² Universal gravitational constant
µ 398 600 km³/s² Earth's gravitational parameter
R⊕ 6378.16 km Earth's mean equatorial radius
J2 1082.628 × 10⁻⁶ Coefficient of the Earth's oblateness
a (km) Semi-major axis
hp (km) Altitude of perigee
e (-) Eccentricity
i (rad) or (deg) Inclination
ω (rad) or (deg) Argument of perigee
Ω (rad) or (deg) Right ascension of the ascending node
f (rad) or (deg) True anomaly
E (rad) or (deg) Eccentric anomaly
M (rad) or (deg) Mean anomaly
δ (rad) or (deg) Geocentric latitude
r (km) Satellite position vector
v (km/s) Satellite velocity vector
R (km²/s²) Disturbing function
H (km²/s²) Hamiltonian Function
Θkozai (-) Kozai parameter
∆v (km/s) Velocity variation of a manoeuvre
α (rad) In-plane ∆v angle
β (rad) Out-of-plane ∆v angle
☾ - Symbol for Moon parameters
⊙ - Symbol for Sun parameters
Chapter 1
You sort of start thinking anything’s possible if you’ve got enough nerve.
—J. K. Rowling, Harry Potter and the Half-Blood Prince
Since the first artificial Earth satellite, Sputnik 1, was launched in October 1957, the number of space vehicles launched per year has increased exponentially. An average of 143 spacecraft per year was launched between 1957 and 2017, for a total of 8'593 satellites. Figure 1.1 shows the number of launches since 1972, grouped by proprietary country and mission scope. In the last decade, the overall number of private satellites grew significantly, reaching over 350 satellites in orbit in 2017. As a matter of fact, the continuously increasing space activity results in a systematic congestion of particular orbital regions around the Earth. Any spacecraft that remains in orbit uncontrolled after the end of operations is considered space debris.
During the first few decades of human space exploration, no strategy for the end-of-life was studied or planned, resulting in an increasing number of dead satellites still in orbit. While space vehicles in Low Earth Orbit (LEO) achieve a natural re-entry within a short time due to the atmospheric drag, in Geostationary Earth Orbit (GEO), Medium Earth Orbit (MEO) or Highly Elliptical Orbit (HEO) they can stay on their orbits even for centuries, becoming for all practical purposes space debris. Moreover, the proliferation of debris is triggered by collisions between dead satellites still in orbit or by explosions of malfunctioning spacecraft at the end of mission. As Kessler and Cour-Palais described in
https://www.nasa.gov/multimedia/imagegallery/image_feature_924.html, last visited 24/10/2018
Master Thesis 1. INTRODUCTION
Figure 1.1 No. of launched objects per year, in the period 1957-2017, from Spacecraft Encyclopedia.
Table 1.1 Number of space objects of different size, from the ESA website

Diameter          No. of objects
> 10 cm           29'000
1 cm to 10 cm     750'000
1 mm to 1 cm      166 million
1978, the rising number of artificial satellites would further increase the impact probability. In particular, they pointed out that, in the absence of any additional measure, the debris flux could overcome the natural meteoroid flux, and they developed a mathematical model to predict the rate of formation of such a debris belt (Kessler and Cour-Palais, [1]). This problem is commonly referred to as the "Kessler Syndrome": in an uncontrolled environment, the debris flux will increase exponentially in time (Kessler et al., [2]). The main danger revealed by Kessler's work is reaching a critical debris population density, such that a collisional cascade process is triggered. To prevent such a scenario, the need arises to introduce strict regulations for future missions: both post-mission disposal and active debris removal should be introduced to reduce the effective number of objects in orbit.
Spacecraft Encyclopedia - C. Lafleur - http://claudelafleur.qc.ca/Spacecrafts-index.html, retrieved on 24/10/2018
From the latest report provided by ESA's Space Debris Office, the number of debris objects regularly tracked by Space Surveillance Networks is about 21'000. If space debris of smaller size are also considered, as in Table 1.1, the situation looks far more alarming and action should be taken. Since its foundation in 1993, the Inter-Agency Space Debris Coordination Committee (IADC) has defined the recommended guidelines for the mitigation of space debris. There are two protected regions around the Earth, one for low-Earth orbits and one at geosynchronous altitude. The removal of any object in LEO is required within 25 years after the end of mission, while for GEO the guideline is to move to a graveyard orbit 250 km above (IADC Space Debris Mitigation Guidelines, [3]). For the Highly Elliptical Orbit (HEO) there is no regulation yet but, since many current and future missions target that region (e.g. Proba-3, INTEGRAL, XMM-Newton, Cluster II, Image, Themis, Chandra, IBEX), the implementation of a disposal strategy is highly recommended. In this work the end-of-life is considered subject to the same regulation as for the LEO region: the residence time in LEO must be within 25 years in case of atmospheric re-entry, but the total de-orbit trajectory could last longer.
The dynamics of HEOs, with their high apogee altitudes, is strongly affected by the Moon's gravitational attraction, which is more important than the second-degree zonal harmonic J2 term. A correct approximation of the orbit evolution in time requires a model including at least J2 and the third-body disturbing function, the latter expanded up to the fourth order in the parallax factor (Colombo et al., [4]). To model the end-of-life disposal, the short-period effects are negligible, and they are removed by using a double-averaged model of the potential function. In fact, the third-body attraction causes secular and long-term variations in all the orbital parameters except the semi-major axis. Especially important for the understanding of the problem is the secular evolution in the eccentricity-argument of perigee plane.
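For the J2 part of such a model, the first-order secular rates of Ω and ω (the quantities plotted in Figure 3.3) have a well-known closed form. A minimal sketch in Python, using the constants from the symbol list; this is the standard textbook expression, not necessarily the exact implementation used in the thesis:

```python
import math

MU_EARTH = 398600.0   # km^3/s^2, Earth's gravitational parameter
R_EARTH = 6378.16     # km, Earth's mean equatorial radius
J2 = 1082.628e-6      # coefficient of the Earth's oblateness

def j2_secular_rates(a, e, i):
    """First-order secular rates of RAAN and argument of perigee due to J2.

    a: semi-major axis [km], e: eccentricity, i: inclination [rad].
    Returns (dOmega/dt, domega/dt) in rad/s.
    """
    n = math.sqrt(MU_EARTH / a**3)       # mean motion
    p = a * (1.0 - e**2)                 # semi-latus rectum
    k = 1.5 * n * J2 * (R_EARTH / p)**2
    raan_dot = -k * math.cos(i)
    argp_dot = 0.5 * k * (5.0 * math.cos(i)**2 - 1.0)
    return raan_dot, argp_dot
```

At the critical inclination (cos² i = 1/5, about 63.43°) the apsidal rate vanishes, which is the behaviour the figure illustrates.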
In this work, the natural evolution of the satellites' orbital elements is exploited to lower the perigee altitude. In particular, taking an approach different from previous works (see Section 1.1), the aim is to design manoeuvres with a fully-analytical approach, thereby reducing the computational time. The disposal manoeuvres are designed for satellites in HEO, by studying the long-term and secular variations in eccentricity and inclination caused by the third-body attraction. The end-of-life strategy can be either an atmospheric re-entry or a graveyard orbit.
ESA website, Space Debris by the numbers: https://www.esa.int/Our_Activities/Operations/ Space_Debris/Space_debris_by_the_numbers, last visited on 24/10/2018
1.1 State of the Art
This section provides a historical overview of the models used to describe the orbital dynamics under the influence of a third-body perturbation external to the classical Keplerian description. In addition, it presents the current state of the art for HEO disposal design.
1.1.1 Analytical modelling of orbit perturbations
Secular perturbations caused by a third body have been widely studied in the past. The third-body effect was first addressed in the '60s by Lidov (1926-1993), a Russian scientist and expert in physical and mathematical sciences, who was also part of the Soviet orbital design team during the space race. He made a fundamental contribution to the determination of mathematical models describing the three-body environment for the Earth-Moon and the Sun-Moon systems. In one of his most important works (Lidov, [5]), he described the mathematical model for the orbital evolution of satellites under the effect of a third body, either the Moon or the Sun. The resulting oscillation of the orbital parameters was computed for Earth satellites for a wide class of orbits, in terms of eccentricity values, with semi-major axes of the order of 30-40 × 10³ km.
In the same years, Kozai (1925-2018), a Japanese scientist and astronomer at the Tokyo Astronomical Observatory, developed a model to describe the dynamics of the orbit of an asteroid in the Jupiter-Sun system (Kozai, [6]). The model was developed in Delaunay's variables, exploiting Hamiltonian perturbation theory, and he was able to assess the secular variation under the assumption of a circular Jupiter orbit. For the first time, the results were expressed in the Hamiltonian phase space to study the stationary and libration points of the orbit.
The results of these studies are today referred to as the Lidov-Kozai effect. The long-term evolution was obtained using an averaging technique on the Hamiltonian system, under the assumption that the characteristic time of variation of the orbital parameters is much longer than the characteristic period of the third body. Both Kozai and Lidov found that the oscillations of the orbiting particle depend on the initial eccentricity and inclination of the orbit (Shevchenko, [7]). In particular, the effect is more evident for highly inclined orbits: Highly Elliptical Orbits have a typical inclination of about 60°. This was an unexpected behaviour, since classical theories had been applied to low-inclination solar system bodies, for which this effect is not present.
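The coupling between eccentricity and inclination that Lidov and Kozai identified is governed, at quadrupole order, by the conserved quantity Θ = (1 − e²) cos² i, the Kozai parameter of the symbol list. A minimal sketch, assuming this standard definition (conventions vary by a square root in the literature):

```python
import math

def kozai_parameter(e, i):
    """Conserved quantity Theta = (1 - e^2) cos(i)^2 of the double-averaged
    quadrupole third-body problem (standard definition, assumed here)."""
    return (1.0 - e**2) * math.cos(i)**2

def eccentricity_from_kozai(theta, i):
    """Eccentricity reached when the inclination evolves to i, for fixed Theta."""
    return math.sqrt(max(0.0, 1.0 - theta / math.cos(i)**2))

# Libration of (e, i) only occurs above the critical inclination,
# where 3 cos^2(i) = 5 * Theta-independent threshold: cos^2(i) < 3/5.
I_CRITICAL = math.acos(math.sqrt(3.0 / 5.0))   # about 39.23 deg
```

The conservation of Θ is what allows the inclination to be eliminated from the one-degree-of-freedom Hamiltonian description used later in this work.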
In the '70s, Kaufman developed the expressions for the orbital disturbing potential. In his works (Kaufman, [8]; Kaufman and Dasenbrock, [9]), he expanded the third-body potential using Legendre polynomials, following the classical approach of Laplace-Lagrange. The single averaging, on the other hand, was performed in closed form by considering that the period of the satellite is much shorter than the secular characteristic time. He also described the analytical expressions of the orbital variations in terms of the Earth's oblateness coefficients J2 and J22, starting from the works of Kozai [10] and Kaula [11].
In more recent years, many studies have built upon the Lidov-Kozai mechanism for different astrophysical applications. The most interesting results were produced by studying the application to highly inclined orbits. Recently, some studies (Folta and Quinn, [12]; Abad et al., [13]) used the Hamiltonian formulation to describe the natural evolution of lunar orbiters under the third-body influence. These works relied on the development of the Earth's perturbation as a third-body disturbance, and the potential was expanded up to the second-order term in the Hamiltonian formulation to study the frozen conditions for the orbit.
All the previous works considered the third-body perturbation only. Nevertheless, especially for Earth satellites, other external effects are present: the J2 zonal contribution is very important to describe the actual long-term evolution of a space probe. In particular, for HEOs, the coupling between the Moon and the J2 contribution generates a particular behaviour in the orbital dynamics. Delsate et al. used the Hamiltonian system to describe the time evolution of a Mercury orbiter [14]. They developed a simplified model combining the zonal effect of J2 with the third-body disturbance of the Sun, and discovered that the coupling effect due to the J2 coefficient acts against the increase of eccentricity predicted by the Lidov-Kozai effect. Moreover, their model assumed that the perturbing body's orbit lies on the planet's equatorial plane. This is a good assumption when the angle between the equator and the ecliptic is relatively small (for Mercury, the obliquity is 0.034°).
In 2005, due to the increasing interest in exploring Europa, Lara and San Juan studied the possible implications of Jupiter's attraction on the dynamical behaviour of an orbiter around Europa. They studied the stability regions for different inclined and eccentric families of orbits. Their model considers the perturbing effect of both J2 and the third-body attraction of Jupiter [15]. A similar study was done by Carvalho et al. in 2012, who studied the stationary conditions for space vehicles orbiting Europa [16]. In particular, they developed a model considering the most relevant terms of the gravitational attraction, J2, J3 and J22, together with the third-body effect expanded up to the second order; still, the perturbing body was considered on the same orbital plane as the parent body. An important contribution was given by Tremaine et al. [17], who described and studied satellite dynamics in the Laplace plane. They focused on the stability and instability conditions of
the classical Laplace surface, for highly inclined orbits. They considered the potential of an oblate planet in the quadrupole approximation, including the coupling with the third-body influence. Depending on the obliquity with respect to the equator, they defined stable, coplanar Laplace equilibria for both circular and eccentric solutions.
In 2012, Colombo et al. [18] developed a model to study the orbital dynamics of spacecraft with a high area-to-mass ratio. Their model considers the influence of the zonal harmonic J2 and the Solar Radiation Pressure (SRP). They developed an analytical and a numerical model to investigate the feasibility of using SRP for passive de-orbiting of MEO satellites. Similarly, Tresaco et al. developed a complete analytical model to study the feasibility of solar sails for orbit navigation around Mercury ([19], [20]); in addition, they considered the third body under an elliptical-orbit approximation. They explored the effect of SRP on satellites to enable different future mission concepts using solar sails: deep space exploration, space debris removal and long-term missions in the solar system. The dynamical model was developed in the Mercury equatorial frame rotating with the Sun node, considering the Sun orbiting on an elliptic inclined orbit. They considered the influence of the J2 and J3 zonal harmonic terms, the third-body perturbation of the Sun expanded up to the second order, and the SRP. The resulting Hamiltonian system was used to study the frozen-orbit conditions for probes around Mercury; they focused their study on frozen conditions for low-eccentricity orbits (eccentricity less than 0.1). Recently, Naoz [21] studied the hierarchical three-body approximation applied to astronomical systems, such as planetary or stellar systems and supermassive black holes. He recovered the dominant behaviour of a system under the eccentric Lidov-Kozai effect, studying both the chaotic regime and the frozen conditions for a particle in the perturbed environment.
In all these works, the perturbation model was averaged to study the secular and long-term dynamics; in addition, the averaging procedure reduces the degrees of freedom of the system, resulting in a Hamiltonian representation dependent only on the eccentricity and the argument of perigee, while the semi-major axis and the inclination are treated as parameters. The approximation was found to be suitable for the description of the frozen conditions and the secular evolution. Some two-dimensional maps were provided to describe the oscillation of eccentricity and inclination as a function of the argument of perigee. Moreover, in all those works the simplification that the third body lies on the equatorial plane of the main attractor, or on the same plane as the satellite, was adopted. This simplification was necessary to drop the dependence of the Hamiltonian on the argument of the node, resulting in a one-degree-of-freedom expression, suitable for describing analytically the phase space in terms of eccentricity and argument of perigee. This description is not suitable when a second disturbing body is considered: the model becomes very complex and the argument of the node is no longer cancelled out.
In successive works (Colombo et al. [4]; Breiter [22]; Rosengren et al. [23]) the single- and the double-averaged potentials were used to describe the resonance effects of a third body for Earth satellites. The Hamiltonian description was carried out in the perturbing-body (Moon) plane centred at the Earth. In particular, the results were used to describe the two-dimensional Hamiltonian phase space for HEOs under the effect of the third-body perturbation only. From those maps, a strategy for the end-of-life condition was designed, targeting the disposal trajectory in the phase space. In 2014, Colombo et al. [24], [25] increased the precision of the model by including numerically the SRP, the zonal harmonics and the Sun effect, dropping the simplification that the Sun and the Moon lie on the same plane. In addition, they determined that a description of the third-body potential is essential for a correct prediction of HEO evolution, using the double-averaged expression to develop the long-term evolution through the Lagrangian equations. Nevertheless, in the analytical Hamiltonian derivation, the third-body orbit was approximated as circular to simplify the expressions; in fact, the Moon is on a low-eccentricity orbit (Colombo, [25]; Colombo et al., [26]).
These works developed for the first time the single- and the double-averaged method to compute orbital manoeuvres for disposal purposes. They exploited the manoeuvre to navigate in the phase space, computing the optimal manoeuvre through an optimisation procedure. The delta-v was computed using the Gauss equations in finite difference (Colombo, [27]); then the dynamics of the final orbit was computed by integrating the semi-analytical model, with single- or double-averaged dynamics. The optimisation was based on a global optimiser to evaluate the best direction of the manoeuvre in the phase space. In addition, the manoeuvre optimisation was done numerically, integrating the equations of motion at each time step: this results in a very expensive code in terms of computational time, since it needs to be integrated over a period of 20 to 30 years for each function evaluation, even when benefiting from the speed of the semi-analytical technique.
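The effect of an impulsive delta-v on the orbital elements, evaluated "using the Gauss equations in finite difference", can be sketched with the standard tangential-normal form of the Gauss variational equations; this is the textbook first-order form, not necessarily the exact expressions of [27]:

```python
import math

def gauss_finite_difference(a, e, f, dv_t, dv_n, mu=398600.0):
    """First-order (finite-difference) change in semi-major axis and
    eccentricity for an impulsive manoeuvre with tangential (dv_t) and
    in-plane normal (dv_n) components, applied at true anomaly f [rad].
    Units: km, km/s; mu defaults to the Earth's gravitational parameter."""
    p = a * (1.0 - e**2)
    r = p / (1.0 + e * math.cos(f))            # orbit radius at f
    v = math.sqrt(mu * (2.0 / r - 1.0 / a))    # speed from the vis-viva equation
    da = 2.0 * a**2 * v / mu * dv_t
    de = (1.0 / v) * (2.0 * (e + math.cos(f)) * dv_t
                      - (r / a) * math.sin(f) * dv_n)
    return da, de
```

A tangential burn at perigee (f = 0) raises both a and e, while at perigee the normal component has no first-order effect on the eccentricity, which is why the burn position is one of the optimisation parameters.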
In this thesis, the analytical expression of the disturbing potential function is computed for the third-body effect, in the case of an elliptical inclined orbit of the disturbing body (both for the Sun and the Moon), and for the zonal harmonic contribution. The former is expanded up to the fourth order to obtain a high-fidelity model, while for the latter only the J2 term is considered, since the other terms are negligible for Highly Elliptical Orbits due to their large semi-major axes (of the order of 10⁴-10⁵ km).
Analytical theories with Hamiltonian formulation
The Hamiltonian formalism has been widely used in past works. Classically, the dynamics of a system can be described by the energy approach developed by Lagrange (1736-1813) [28]. This formulation is based on the knowledge of the kinetic energy T and the potential energy V of the system. Hamilton (1805-1865) [29] then developed the Hamiltonian principle to describe the behaviour of a system in time, in a way equivalent to the Lagrangian representation. The orbital evolution of solar system bodies was one of the main driving forces for the advancement of Hamiltonian system theory, and many techniques were developed specifically to address orbital mechanics problems (Shevchenko, [7]). In recent years, Valtonen and Karttunen applied those principles to orbital mechanics: they showed that the Hamiltonian function for the perturbed two-body problem is a time-invariant relation, typically described in Delaunay's variables (Valtonen and Karttunen, [30]). The Hamiltonian formulation is very effective for describing satellite dynamics in the phase-space representation. In many works, it was used to develop an analytical approximation to the real dynamics of the system (Colombo et al. [24]; Colombo et al. [26]; Delsate et al. [14]; Tresaco et al. [20]). The complete reduction of the orbital dynamics in a particular body configuration is not always feasible, but the strength of this approach resides in the possibility of describing the evolution of the real system using one single equation. Maps in terms of the argument of perigee were used to understand and analyse the natural libration in inclination and eccentricity. This kind of representation highlights the natural oscillation of an orbit in a perturbed environment, identifying also stable and unstable conditions for space probes.
In this work, the Hamiltonian formulation is used to find a one-degree-of-freedom representation. This approach has the potential of describing the orbit dynamics using two-dimensional maps for the design of disposal manoeuvres. The two-dimensional maps are recovered from the analytical Hamiltonian which, in the case of no dependence on the argument of the node, can be expressed through the Lidov-Kozai parameter. The latter is a constant of motion for the system, defined by the initial conditions, and allows describing the oscillation in inclination as a function of the eccentricity. In this way, since the semi-major axis is constant in the double-averaged representation, the Hamiltonian of the system has only one degree of freedom: the solutions can be represented as level curves in the eccentricity-argument of perigee plane. This model can be used to produce the eccentricity-argument of perigee maps (Colombo et al. [24]; Colombo [25]).
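A minimal sketch of such a one-degree-of-freedom map, assuming the standard quadrupole-order double-averaged third-body Hamiltonian (the fourth-order expansion used in this work adds further terms), with the inclination eliminated through the Kozai parameter:

```python
import math

def h_quadrupole(e, omega, theta):
    """Double-averaged quadrupole third-body Hamiltonian, up to a constant
    factor, with cos^2(i) eliminated through Theta = (1 - e^2) cos^2(i).
    Level curves of this function in the (omega, e) plane are the
    phase-space maps described in the text."""
    one_me2 = 1.0 - e**2
    cos2i = theta / one_me2        # from the Kozai-parameter constraint
    sin2i = 1.0 - cos2i
    return ((2.0 + 3.0 * e**2) * (3.0 * cos2i - 1.0)
            + 15.0 * e**2 * sin2i * math.cos(2.0 * omega))

# To draw a map: evaluate h_quadrupole on a grid of (omega, e) values for a
# fixed Theta and contour the result; each contour is a natural trajectory.
```

Since H is conserved in the averaged model, each level curve traces the coupled (e, ω) evolution of one orbit, which is exactly what the disposal manoeuvre exploits to navigate in the phase space.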
End-of-life strategies
As already explained, the need for mitigation and disposal strategy for Earth’s satellites is becoming more and more important. For HEO no guidelines currently exist, but the imple-mentation of
removal manoeuvres and the mitigation of risk collision is highly recommended for current and future missions (Ailor [31]; Colombo et al. [26]). For the Highly Elliptical Orbit very few studies were
performed in the past since the major interests were on the LEO, MEO and GEO spacecraft. Current and future space missions target the HEO for scientific purposes, like INTErnational Gamma-Ray
Astrophysics Laboratory (INTEGRAL), the first space observatory that can simultaneously observe objects in gamma rays, X-rays and visible light, X-ray Multi-Mirror Mission-Newton (XMM-Newton), the
biggest European satellite with a telescope for X-ray detection. Typically, the end-of-life removal could consist of an active removal of the satellite from the protected region, or a passive
approach exploiting the orbit energy. The former can be performed after the mission completes and the satellite becomes space debris, while the latter must be planned before the operative life of the space vehicle ends.
Active space debris removal The European Space Agency (ESA) defines that Active
Debris Removal (ADR) is essential to stabilise the growth of space debris. In fact, the removal of the current dead space vehicle still in orbit is mandatory to reduce the probability of an orbit
impact, which could even cause the duplication of space debris. This approach is the only feasible solution to compensate for satellites not compliant with disposal regulations. ADR is very efficient in reducing the probability of future on-orbit collisions. ESA Clean Space⁶ is studying an active debris removal mission called e.deorbit to tackle such a problem. There are two concepts under consideration: one using a net and the other a robotic arm.
An overview of ADR options is given in Shan et al., [32]. Net debris capture was studied also in other works at Politecnico of Milan (Benvenuto et al., [33]), while the robotic arm is mainly
developed by Airbus and OHB (Airbus, [34]; Forshaw et al., [35]). These methods can be very effective for the removal of already existing debris but do not exploit any prevention or mitigation
strategy for avoiding the creation of more future dead satellites in orbit. To face the problem of preventing more space debris in orbit, the passivation technique can be studied. Some studies were
already done in this field.
⁶ ESA Clean Space website: http://www.esa.int/Our_Activities/Space_Engineering_Technology/Clean_Space, last visited 11/2018
Master Thesis 1. INTRODUCTION
Passive space debris removal From ESA statistics⁷, only a few very large objects, such as heavy scientific satellites, re-enter Earth's atmosphere in a year. In total, about 75% of all the largest objects ever launched have already re-entered. Objects of moderate size, i.e. 1 m or
above, re-enter about once a week, while on average two small tracked debris objects re-enter per day. The controlled or uncontrolled re-entry of spacecraft and space systems is, however, associated
with several legal and safety aspects that must be considered.
Passive de-orbiting systems, such as drag augmentation devices and tethers, can be used for de-orbiting and re-entry (uncontrolled) of small satellites in LEO. In the same way, for space vehicles in
orbits affected by the SRP, solar sails can be used for orbit navigation. The SRP was investigated for changing the operative orbit at the end-of-life of the satellite by exploiting the effect of the
cross-sectional area. In particular solar sails were already studied for navigation purposes, by enhancing the SRP effect. On the other hand, this contribution was considered for the de-orbiting by
increasing the area-to-mass ratio at the end of mission so that the re-entry in the atmosphere is feasible (Lücking et al. [36]; Lücking et al., [37]). It was used by Colombo et al. [38] to study the
orbit evolution and maintenance of a constellation of very small satellites, the Space Chips. The Hamiltonian approach was used to study the possible frozen orbits in the phase space representation.
In a following work, Tresaco et al. [20] explored the effect of SRP upon satellite to enhance different future mission concept using Solar Sails: deep space exploration, space debris removal and
long-term mission in the solar system. For the HEOs, the effect due to the attraction of the Moon was defined by Kozai and Lidov, as reported previously. Colombo et al. in [24] investigated how those
long-term effects can be used to identify stability and instability conditions. The former, quasi-frozen orbits, are addressed for graveyard orbit determination, while the latter was used to target
an Earth re-entry. This approach was developed also in subsequent works (Colombo et al., [26]; Colombo, [25]), and all of them study the orbital evolution in the phase space. The model here developed
was used to optimise the manoeuvre to achieve the re-entry or the graveyard condition. Global optimisation methods were used to find the best manoeuvre direction, evaluating the dynamics with a
semi-analytical approach at each step. The Hamiltonian formulation for the phase space determination was computed analytically in terms of the third-body disturbing function only. The consideration
of other perturbations effect, as the coupled luni-solar perturbation and J2 effect, was done by
numerically integrating the single-averaged equation of motions to simplify the Hamiltonian formulation and the phase space representation.
⁷ ESA Clean Space website: http://www.esa.int/Our_Activities/Space_Engineering_Technology/Clean_Space
1.2 Aim of the thesis
The aim of the present thesis is to investigate the feasibility of producing a fully analytical approach for optimal disposal manoeuvre computation. An analytical model for orbital perturbations of
both Earth’s and Venus’ satellite was developed, including the potential due to the J2 zonal harmonics and the third body disturbance: the Sun for Venus’ orbiter
and Moon and Sun for Earth’s case. Since the study focuses on the secular and long-term prediction, the single and double averaged expression of each contribution is analytically evaluated and then
substituted in the Hamiltonian system. From the latter, the two-dimensional phase space representation is recovered in terms of eccentricity and argument of perigee values, once the dependence on the
Right Ascension of the Ascending Node (RAAN) is dropped. The analytical representation of the reduced Hamiltonian system has two advantages:
• the two-dimensional phase space describing the time evolution and oscillation of orbital elements is derived algebraically and not through a numerical integration.
• the Hamiltonian can be solved for the initial condition of the satellite to recover the maximum or the target eccentricity condition: this yields the minimum perigee that the orbit under study can reach in time due to natural evolution. In previous works (Colombo et al. [24]; Colombo [25]) this operation was done by numerically integrating the single-averaged dynamics until the maximum eccentricity, corresponding to the minimum perigee, was reached.
Hence, computing analytically the natural evolution of the satellite orbit can reduce the computational time and costs for the selection of the disposal manoeuvre. In fact, contrary to previous
analysis, the optimal manoeuvre is not evaluated performing the integration of the Lagrange planetary equations in time but working completely in the reduced phase space. The aim of the thesis is to
study the Hamiltonian phase space for developing the optimal manoeuvre for the end-of-life strategy. The dissertation develops the argument of perigee ω-eccentricity e maps in the Hamiltonian
formalism. The study presents how the phase space changes depending on the orbital elements. Only the secular dynamics is represented: the double averaging technique is quite effective for describing
the long-term evolution of satellite orbit since it reduces significantly the computational time for orbit propagation. In this work, the two-dimensional maps are obtained by eliminating the node
effect, hence this model would be addressed as a triple averaged one. It results that the third body effect is essential for HEO time evolution, in combination with planet gravitational
field: J2 tends to reduce the beneficial effect of the third-body attraction for the re-entry purpose, because as the satellite gets closer to the planet surface (lower altitude), it feels a smaller influence of the third body. For this reason, only highly elliptical orbits are considered in this thesis.
The ’surfing’8 among the natural orbital perturbations (Colombo, [25]) results in the minimisation of the velocity required for different missions, and its feasibility was already demonstrated by
former studies (Colombo et al., [24]). This is a revolutionary concept and can reduce the cost and the mass of a spacecraft: it can simply navigate in the space around a planet following the natural
evolution of the orbit.
In addition, the novelty of this work is the feasibility study to develop a very light analytical code capable of computing the best manoeuvre condition starting from the current ephemeris of the
satellite. The model used could be further improved in the subsequent works, with the aim of producing a new generation of onboard software for the disposal strategy design, since it requires very
low computational effort.
1.3 Thesis outline
This thesis provides an in-depth presentation of the end-of-life strategy design. The work is organised to provide the reader with the knowledge necessary to understand how the perturbations can be modelled and which methods can be used to perform optimal manoeuvres for satellite removal in HEOs.
Chapter 2 is an outline of the perturbed two-body problem, introducing the classical equations of the two body system in the planetocentric Inertial Reference Frame. Then the Gauss and Lagrange
planetary equations are reported in order to introduce the long-term dynamics in orbital elements. Finally, a brief overview of the Hamiltonian formulation is provided.
Chapter 3 covers the analytical recovery of the disturbing function and the double-averaged Hamiltonian expression for the description of the secular effects. Only the J2 term of the zonal harmonic contribution will be considered, while for the luni-solar perturbation the potential is expanded up to the fourth order and then averaged twice, both over the satellite's mean anomaly and the perturbing body's mean anomaly.
Chapter 4 shows the procedure used to reduce the Hamiltonian formulation obtained with the second average potentials, to one degree of freedom. It describes the triple averaging technique used for
the node elimination and discusses the level of accuracy of the new
⁸ COMPASS website: www.compass.polimi.it
reduced model.
Chapter 5 describes the disposal manoeuvre strategy, focusing on the optimisation procedure using an analytical or semi-analytical orbital propagation to recover the best delta-v solution. The
disposal algorithm scheme is reported for a better understanding of the approach.
Chapter 6 presents the numerical results for the case-study missions: a Venus orbiter, INTEGRAL and XMM-Newton. The optimal manoeuvre considering different levels of approximation is reported, first without considering the Sun's influence and then adding it. Moreover, different approaches are compared in terms of orbital evolution and cost of the manoeuvre.
Chapter 7 concludes the thesis work, describing the main achievements and possible future developments of semi-analytical approaches for disposal manoeuvre design.
Chapter 2
Review of the perturbed two body problem
2.1 Historical Background
Since ancient times, the motion of celestial bodies has been of great interest to humanity. Copernicus (1473-1543) was the first to develop a model of the Solar System which places the Sun at the centre, with the planets orbiting on trajectories around it. This model was the starting point of all subsequent theories.
Galileo Galilei (1564-1642) picked up the work done by Copernicus. His work was fundamental for the work of Johann Kepler and Isaac Newton.
Kepler's Laws Johannes Kepler (1571-1630), based on astronomical observations, developed the three fundamental laws of orbital mechanics to describe the kinematics of planetary motion.
1. the orbit of each planet is an ellipse with the Sun at one focus,
2. the line joining the planet to the Sun sweeps out equal areas in equal times,
3. the square of the period of a planet is proportional to the cube of its mean distance to the Sun.
Newton's Laws Isaac Newton (1642-1727) was able to describe the dynamics of planetary motion. In his book Philosophiae Naturalis Principia Mathematica he introduced the three laws of motion:
Master Thesis 2. REVIEW OF THE PERTURBED TWO BODY PROBLEM
1. Each body continues its state of rest, or of uniform motion in a straight line, unless it is compelled to change that state by forces exerted on it.
2. The change of motion is proportional to the force and is made in the direction in which the force is exerted.
3. To each action, there is always opposed an equal reaction.
Classically, the motion of a satellite was simply characterised by Newton's gravitational law, proportional to the inverse of the squared distance between the two bodies. Combined with the second law of motion, it describes the equations of motion of a space object under the influence of the central main attractor only.
2.2 Earth Centred Inertial Frame
The first important requirement is to define a suitable reference system for describing the orbit of the satellite. For Earth's satellites, an Earth-based system is typically used (Vallado, [39]). This system uses the equatorial plane as reference plane. It originates at the centre of the Earth, and it is defined by three orthogonal unit vectors I, J, K. The I axis points towards the vernal equinox, the K axis is in the North Pole direction, and the J axis completes the orthogonal triad. It is an inertial frame fixed with the vernal equinox direction. This Earth-centred equatorial frame is typically referred to as the Earth-Centred-Inertial (ECI) frame, as represented in Figure 2.1a.
(a) Earth Centred Inertial Frame (I, J, K) (Vallado, [39]). (b) Perifocal reference system (Narumi and Hanada, [40]).
Figure 2.1 Earth centred inertial frame and perifocal reference system.
2.3 Perifocal Reference Frame
A second important reference system is the perifocal frame, which is commonly used for processing satellite observations. It is defined in the spacecraft orbit plane, centred at the main attractor. The x-axis is defined in the direction of the orbit perigee, described by the eccentricity direction vector. The z-axis is in the direction of the orbital angular momentum unit vector ĥ, and finally, the y-axis is defined to complete the triad in the direction of the semi-latus rectum. The resulting orthogonal unit vector triad is called Q̂, P̂, ĥ, as reported in Figure 2.1b.
2.4 The two-body problem dynamics
As shown in several texts (Curtis [41]; Vallado [39]; Chao [42], and many others), starting from Newton's law of gravitation and the second law of motion, the spacecraft equations of motion can be derived under the influence of a central force field.
The ideal two-body problem, with no external perturbation at all, consists of:
• one main attractor: a central planet with mass m0,
• one orbiting object (satellite/asteroid/moon) with mass m1, under the following assumptions:
– m1 ≪ m0 (its mass is negligible),
– orbital parameters: semi-major axis a, eccentricity e, inclination i, argument of perigee ω, Right Ascension of the Ascending Node Ω,
– position vector r and velocity vector v.
The equations of motion are written in terms of relative vector r, the position vector from the centre of the main attractor to the satellite:
\[ \frac{\mathrm{d}^{2}\mathbf{r}}{\mathrm{d}t^{2}} = -\frac{\mu}{r^{3}}\,\mathbf{r}. \tag{2.1} \]
The gravitational attraction of the main body is considered through the planetary constant µ, related to the masses of the main attractor m0 and of the orbiting object m1 and to the universal gravitational constant G = 6.67 × 10⁻²⁰ km³/(kg s²):
\[ \mu = G(m_0 + m_1) \;\xrightarrow{\;m_1 \ll m_0\;}\; \mu \simeq G m_0. \tag{2.2} \]
Master Thesis 2. REVIEW OF THE PERTURBED TWO BODY PROBLEM
On the other hand, it is typically convenient to pass to the orbital elements {a, e, i, ω, Ω, f }: the semi-major axis a, the eccentricity e, the inclination i, the argument of perigee ω, the right
ascension of the ascending node Ω, the true anomaly f . In this new representation, the celestial body trajectory can be described by the following relation:
\[ r = \frac{a(1-e^{2})}{1+e\cos f}, \tag{2.3} \]
where the true anomaly f along the orbit varies in time in the interval [0, 2π] through the eccentric and mean anomaly relations, as better explained in Appendix A. In fact, the mean anomaly is directly connected with the time t elapsed since the perigee passage through the mean motion n:
\[ \tan\frac{f}{2} = \sqrt{\frac{1+e}{1-e}}\,\tan\frac{E}{2}, \tag{2.4} \]
\[ M = E - e\sin E, \tag{2.5} \]
\[ M = M_0 + n(t - t_0), \tag{2.6} \]
where M₀ and t₀ are the mean anomaly and the time at the perigee passage, n = √(µ/a³) is the mean motion, E the eccentric anomaly and e the eccentricity.
2.5 Perturbed Equations of Motion
In the real world, the Keplerian equations of motion in Equation (2.1) are an ideal representation of the actual motion of a space object. In fact, several external sources of perturbation to the ideal motion are present. In the following sections, a detailed formulation and derivation are reported for the most relevant sources for planetary orbiters: the planet gravitational harmonics, in particular the zonal ones, and the third body attraction, which for the Earth is due to both Sun and Moon, and for Venus to the Sun only.
All the external sources of disturbance act on the orbiting object as a perturbing acceleration, which affects the ideal Keplerian motion as follows:
\[ \frac{\mathrm{d}^{2}\mathbf{r}}{\mathrm{d}t^{2}} = -\frac{\mu}{r^{3}}\,\mathbf{r} + \mathbf{a}_{\mathrm{perturbing}}. \tag{2.7} \]
Although this approach is very simple to derive, to better understand the effect of disturbances on the real motion, it is better to describe the orbit in terms of the Keplerian elements a, e, i, Ω, ω, f. This is a more intuitive approach. In this way, the variation from the ideal motion is presented in terms of variations of the orbital elements instead of position vector components, making the overall behaviour more understandable.
2.5.1 Lagrange Planetary Equations
If only conservative forces due to external perturbations are considered, the equations of variations of the Keplerian elements in terms of the disturbing function, R, can be written in the
Lagrangian form (see Valtonen and Karttunen [30]; Vallado, [39]; El’iasberg, [43] for theoretical reference): da dt = 2 na ∂R ∂M de dt = 1 na2[e] (1 − e2)∂R ∂M − p 1 − e2∂R ∂ω di dt = 1 na2[sin i]√[1
− e]2 cos i∂R ∂ω − ∂R ∂Ω dΩ dt = 1 na2[sin i]√[1 − e]2 ∂R ∂i dω d = − 1 na2[sin i]√[1 − e]2 cos i ∂R ∂i + √ 1 − e2 na2[e] ∂R ∂e dM dt = n − 1 − e2 na2[e] ∂R ∂e − 2 na ∂R ∂a, (2.8)
where M is the mean anomaly and n is the mean motion of the satellite. The disturbing function R is defined as the opposite of the force potential function V : R = −V , and it is represented by the
sum of the contribution of each external source. For an Earth’s satellite it is defined as:
\[ R = R_{\mathrm{gravity}} + R_{\mathrm{Moon}} + R_{\mathrm{Sun}}, \tag{2.9} \]
where R_gravity is due to the gravitational attraction of the main body, R_Moon is due to the third body gravitational attraction of the Moon, and R_Sun is due to the Sun disturbance. On the other hand, for a Venus probe it depends only on the planet's oblateness and the Sun attraction:
\[ R = R_{\mathrm{gravity}} + R_{\mathrm{Sun}}. \tag{2.10} \]
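As a concrete illustration of Equation (2.8), inserting the well-known singly averaged J2 potential yields the classical first-order secular drifts of the node and of the perigee. The snippet below is a hedged sketch, not taken from the thesis; the physical constants and sample orbits are assumed illustrative values.

```python
import math

MU = 398600.4418      # km^3/s^2, assumed Earth gravitational parameter
RE = 6378.137         # km, assumed Earth equatorial radius
J2 = 1.08262668e-3    # assumed Earth J2 coefficient

def j2_secular_rates(a, e, i_deg):
    """Classical first-order secular rates obtained by inserting the
    averaged J2 potential into the Lagrange planetary equations (2.8).
    Returns (dOmega/dt, domega/dt) in rad/s."""
    n = math.sqrt(MU / a**3)
    p = a * (1.0 - e**2)
    i = math.radians(i_deg)
    k = n * J2 * (RE / p)**2
    dOmega = -1.5 * k * math.cos(i)                      # node regression
    domega = 0.75 * k * (5.0 * math.cos(i)**2 - 1.0)     # apsidal rotation
    return dOmega, domega

# Illustrative Sun-synchronous case: above 90 deg the node drifts eastward
dO, dw = j2_secular_rates(7178.0, 0.001, 98.6)
```

At the critical inclination, cos²i = 1/5, the secular perigee drift vanishes, which is the J2 counterpart of the frozen-orbit conditions exploited later in the phase-space maps.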
2.5.2 Long-Term Lagrange Planetary Equations
A common practice to study the long-term behaviour of the equations of motion is to average the disturbing potential function. Through the averaging technique (see Chao [42] for details), the short-period terms average out, retaining only secular and long-periodic effects.
Master Thesis 2. REVIEW OF THE PERTURBED TWO BODY PROBLEM
The averaging technique consists of a first integration over one period (one revolution) of the orbiting object, and after that, a second averaging is performed over one period of the third
disturbing body. The single and the double-averaged potential must be replaced in Lagrangian equations of motion (Equation (2.8)). The general expression is described by:
\[ \frac{\mathrm{d}\bar{\boldsymbol{\alpha}}}{\mathrm{d}t} = \frac{\mathrm{d}\bar{\boldsymbol{\alpha}}}{\mathrm{d}t}\left(\boldsymbol{\alpha}, \bar{R}\right), \tag{2.11} \qquad \frac{\mathrm{d}\bar{\bar{\boldsymbol{\alpha}}}}{\mathrm{d}t} = \frac{\mathrm{d}\bar{\bar{\boldsymbol{\alpha}}}}{\mathrm{d}t}\left(\boldsymbol{\alpha}, \bar{\bar{R}}\right), \tag{2.12} \]
where α is the vector of the orbital elements, R̄ is the single-averaged potential and R̿ is the double-averaged potential. This approach is very useful for the case under study since it greatly reduces the computational time of integration. Furthermore, it is a correct approach to describe the long-term behaviour of a satellite, as required for the design of the end-of-life strategy.
2.5.3 Gauss planetary equation
When not only conservative forces are present, but also non-conservative ones, the Lagrange’s planetary equations (Equation (2.8)) cannot be used any more. This happens, for example, when the
satellite's dynamics is subject to drag disturbance or impulsive firings. In this case, the equations of variation of the Keplerian elements are expressed in terms of the disturbing acceleration a_perturbing in the velocity frame (t̂, n̂, ĥ), with t̂ aligned with the direction of motion, ĥ in the direction of the orbital angular momentum and n̂ completing the orthogonal triad:
\[ \hat{\mathbf{t}} = \frac{\mathbf{v}}{\lVert\mathbf{v}\rVert}, \qquad \hat{\mathbf{h}} = \frac{\mathbf{r}\times\mathbf{v}}{\lVert\mathbf{r}\times\mathbf{v}\rVert}, \qquad \hat{\mathbf{n}} = \hat{\mathbf{h}}\times\hat{\mathbf{t}}, \tag{2.13} \]
where v is the velocity vector and r is the position vector. The acceleration can be written in terms of the variation of velocity in the t̂, n̂, ĥ directions:
\[ \mathbf{a}_{\mathrm{perturbing}} = [a_t, a_n, a_h] = \left[\frac{\mathrm{d}v_t}{\mathrm{d}t}, \frac{\mathrm{d}v_n}{\mathrm{d}t}, \frac{\mathrm{d}v_h}{\mathrm{d}t}\right]. \tag{2.14} \]
In case an impulsive manoeuvre is given, the Gauss planetary equations can be written, in first approximation, in terms of the impulsive variation of the velocity vector δv = [δv_t, δv_n, δv_h]ᵀ (Colombo, [27]), where r_d and v_d are respectively the radius and the velocity at the point where the instantaneous change is provided, h is the angular momentum, p is the semi-latus rectum, u_d = ω + f is the argument of latitude, b is the semi-minor axis, and δM takes into account only the instantaneous change in the mean anomaly:
\[
\begin{aligned}
\delta a &= \frac{2a^{2}v_d}{\mu}\,\delta v_t,\\
\delta e &= \frac{1}{v_d}\left[2(e+\cos f)\,\delta v_t - \frac{r_d}{a}\sin f\,\delta v_n\right],\\
\delta i &= \frac{r_d\cos u_d}{h}\,\delta v_h,\\
\delta \omega &= \frac{1}{e\,v_d}\left[2\sin f\,\delta v_t + \left(2e + \frac{r_d}{a}\cos f\right)\delta v_n\right] - \frac{r_d\sin u_d\cos i}{h\sin i}\,\delta v_h,\\
\delta \Omega &= \frac{r_d\sin u_d}{h\sin i}\,\delta v_h,\\
\delta M &= -\frac{b}{e\,a\,v_d}\left[2\left(1 + \frac{e^{2}r_d}{p}\right)\sin f\,\delta v_t + \frac{r_d}{a}\cos f\,\delta v_n\right].
\end{aligned}
\tag{2.15}
\]
2.6 Hamiltonian formulation
The Hamiltonian formulation is now introduced for a later derivation of the planar phase space since it is essential for the study of the disposal manoeuvre at the end-of-life of a mission. In
particular, the Hamiltonian formalism can fully describe the dynamical properties of the system. In this section, the Hamiltonian is derived for a two-body system, starting from the general problem.
The formulation here presented is based on Valtonen and Karttunen [30].
2.6.1 Hamiltonian principle
In general, any dynamical system can be described by means of the Lagrangian function L. This function is written in terms of the generalised coordinates q_i, their derivatives q̇_i and time t. From the formulation in Lagrange [28], the Lagrangian function is defined as:
\[ L = L(q_1, \dots, q_m, \dot q_1, \dots, \dot q_m, t) = T - V, \tag{2.16} \]
where T and V are respectively the kinetic and the potential energy of the system, assumed to have m degrees of freedom. In addition to the generalised coordinates q_i, it is possible to
Master Thesis 2. REVIEW OF THE PERTURBED TWO BODY PROBLEM
define the generalised momenta p_i:
\[ p_i = \frac{\partial L}{\partial \dot q_i}. \tag{2.17} \]
Therefore, the Lagrangian function becomes L(p_i, q_i, t). Similarly, using a Legendre transformation, the Hamiltonian H of the system can be recovered from L:
\[ H = H(q_i, p_i, t) = \sum_{i=1}^{m} \dot q_i\, p_i - L(q, \dot q, t), \tag{2.18} \]
leading to the following equations of motion:
\[ \frac{\mathrm{d}q_i}{\mathrm{d}t} = \frac{\partial H}{\partial p_i}, \qquad \frac{\mathrm{d}p_i}{\mathrm{d}t} = -\frac{\partial H}{\partial q_i}, \qquad (i = 1, 2, \dots, m). \tag{2.19} \]
At this point, it is possible to demonstrate that the Hamiltonian does not depend explicitly on time; hence it is a constant of motion of the system and will not vary in time (see Valtonen and Karttunen [30]). Furthermore, this allows one to demonstrate that the Hamiltonian is equal to the total energy of the system and thus, using Euler's theorem for the two-body problem:
\[ H = \sum_{i=1}^{m} \dot q_i\, \frac{\partial T}{\partial \dot q_i} - L = 2T - L = T + V. \tag{2.20} \]
Now, using canonical coordinates, the Hamiltonian of the two-body system vanishes, and the variables become constants of motion. The new Hamiltonian is reported in Equation (2.22), while the orbit of
the satellite is described using the simplified canonical elements, also called Delaunay’s elements in Equation (2.21). This representation allows to write the Hamiltonian formulation in terms of the
semi-major axis of the orbit, a.
\[ l = M, \quad g = \omega, \quad h = \Omega, \quad L = \sqrt{a\mu}, \quad G = L\sqrt{1-e^{2}}, \quad H = G\cos i. \tag{2.21} \]
\[ \mathcal{H}_{\mathrm{new}} = K = -\frac{\mu^{2}}{2L^{2}} = -\frac{\mu}{2a}. \tag{2.22} \]
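The change to the Delaunay momenta and the resulting Keplerian Hamiltonian can be verified numerically. The snippet below is a simple illustration, not part of the thesis, with an assumed standard Earth gravitational parameter and illustrative orbital elements.

```python
import math

MU = 398600.4418  # km^3/s^2, assumed Earth gravitational parameter

def delaunay(a, e, i):
    """Delaunay momenta of Eq. (2.21): L = sqrt(a*mu),
    G = L*sqrt(1 - e^2), H = G*cos(i)."""
    L = math.sqrt(a * MU)
    G = L * math.sqrt(1.0 - e**2)
    H = G * math.cos(i)
    return L, G, H

# Illustrative INTEGRAL-like elements
a, e, i = 8.7e4, 0.9, math.radians(52.0)
L, G, H = delaunay(a, e, i)
K = -MU**2 / (2.0 * L**2)   # Eq. (2.22): equals -mu/(2a)
```

Note that G is the magnitude of the orbital angular momentum and H its polar component, so the ratio H/G directly encodes cos i, the quantity entering the Lidov-Kozai integral used later.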
From now on, the new Hamiltonian will be simply referred to as H and it is expressed in the Keplerian orbital elements, rather than the Delaunay variables, due to their more physical meaning.
2.6.2 Hamiltonian of the perturbed two-body problem
The representation in Equation (2.22) is valid only for the ideal two-body problem, with no perturbation at all. Therefore, to consider all the external sources that are present in the real motion, a
perturbing potential must be introduced in the Hamiltonian. This is simply the one recovered in Equation (2.11), which considers all the possible external disturbances. The perturbed Hamiltonian of
the Earth-satellite system is reported in Equation (2.23). But the relation is general and can be written for any planet-satellite system.
\[ \mathcal{H} = -\frac{\mu}{2a} - R = -\frac{\mu}{2a} - R_{\mathrm{zonal}} - R_{\mathrm{Moon}} - R_{\mathrm{Sun}} - R_{\mathrm{SRP}}. \tag{2.23} \]
In this dissertation, the influence of the Sun will be considered only as a second approximation, while the Solar Radiation Pressure effect will not be considered. This approximation is valid under the assumption that for HEOs the most significant perturbations are the lunar and the J2 effects.
Sensitivity analysis for Earth’s satellite disturbing function In this thesis, only HEOs
are investigated. As an example case, an orbit like the INTEGRAL one can be considered. In addition, the satellite is assumed to have a small area-to-mass ratio. This is a reasonable assumption for most of the Earth's satellites, since typically they do not have solar sails on board or a very big cross-sectional area. Obviously, if the mission under study has a significant area exposed to the Sun, the contribution of the SRP must be retained in the analysis.
The magnitude of different perturbations acting on a HEO is reported in Table 2.1.
Table 2.1 Magnitude of forces for an INTEGRAL-like mission (with area-to-mass ratio 1 and a semi-major axis of 8.7 × 10⁴ km)

Source                    Disturbing force R
Earth oblateness (J2)     10⁻⁵
Moon attraction           10⁻⁴
Sun attraction            10⁻⁶
Merohedral twins revisited: quinary twins and beyond
^aLaboratoire de Métallurgie de l'UMR 8247, IRCP Chimie-ParisTech, 11 rue Pierre et Marie Curie, F-75005 Paris, France
^*Correspondence e-mail: denis.gratias@chimie-paristech.fr
Edited by V. Petříček, Academy of Sciences, Czech Republic (Received 26 June 2015; accepted 29 September 2015)
A twin is defined as being an external operation between two identical crystals that share a fraction of the atomic structure with no discontinuity from one crystal to the other. This includes merohedral twins, twins by reticular merohedry as well as coherent twins by contact where only the habit plane is shared by the two adjacent crystals (epitaxy). Interesting and original cases appear when the invariant substructure is built with positions belonging to the same Z-module, as in the pentagonal structures drawn by Albrecht Dürer in The Painter's Manual: a Manual of Measurement of Lines, Areas and Solids by Means of Compass and Ruler [facsimile edition (1977), translated with commentary by W. L. Strauss; New York: Abaris Books]. This paper will show that the Dürer twins, once defined in five-dimensional space, are simple merohedral twins, in the sense of Georges Friedel, leaving the five-dimensional lattice invariant. This analysis will be generalized to some other higher-order twins.
1. Introduction
From an historical point of view, twins have played a special role in mineralogy and crystallography as they are an aggregate of identical crystals oriented with respect to each other in a very specific and characteristic manner (see, for instance, Groth, 1906; Putnis, 1992). Several twin laws have been proposed that can all finally be summarized by the basic idea that the specific relative orientation of a twinned crystal is a special isometry that keeps invariant – either exactly or approximately – a part of the atomic structure or of a specific property between the two twins. The idea is that the larger the common part is, the more stable is the twin and the more frequently it occurs in nature. Using this kind of intellectual guide, Friedel (1904, 1926, 1933) proposed a classification of twins based on the geometry of the sole crystal lattice (Bravais, 1851; Mallard, 1885; Donnay, 1940):
(i) Merohedral twins where the crystals share the same lattice (this can happen only for non-holohedral structures).
(ii) Twins by reticular merohedry where the crystals share only a fraction, a sublattice, of the crystal lattice; this corresponds, in metals and alloys, to the so-called special boundaries between
grains like the famous mirror twin along the (111) direction often observed in the f.c.c. (face-centred cubic) metals.
(iii) Pseudo-merohedral twins or twins by reticular pseudo-merohedry where the previous definitions are satisfied only approximately.
In the present paper, we choose a general definition of twinning as being an operation between two identical crystals that share a fraction of the atomic structure or of a specific property with no
discontinuity from one crystal to the other, in the spirit developed by Nespolo & Ferraris (2004 ), Grimmer & Nespolo (2006 ), Marzouki et al. (2014 and references therein):
(i) Twinned crystals in mutual orientation by reticular merohedry in three dimensions (two dimensions or one dimension) that share a common three-dimensional (two-dimensional or one-dimensional) sublattice.
(ii) A twin by contact where only the habit plane is shared by the two adjacent crystals (epitaxy).
(iii) More generally, any twin operation keeping invariant a fraction of the Wyckoff positions of the structure.
2. Formalism
2.1. Symmetry operations and space groups
A symmetry operation in isometry of g and a translation part t, and noted r in
Designating by space group
Considering now the subset twin operation (see Fig. 1 ), its symmetry group
It is generally different to subgroup of it.
Thus, the fundamental group–subgroup relation defining the geometry of the two processes implies the intersection group
The group scheme is shown in Fig. 2 . It defines two integers n and m that are the indices of
Their meaning is the following:
(i) n-1 is the number of different possible twinned crystals around one given crystal and all share with the central crystal the same subset
(ii) m is the number of equivalent subsets space group
Each coset element ^1 i.e. an operation that relates two twinned crystals sharing the same coset
As a simple example, let us consider the standard so-called sublattice of unit cell U), with the intersection group unit cell U, as shown in Fig. 3 on the right. This leads to n = 2, meaning that the
twin operation connects two individuals and m = 4 ×4 ×3/4 = 12 different crystals – corresponding to the 4 ×3 families of (1, 1, 1) planes, A, B or C – can be formed around one single crystal.
Finally, the twin index, which corresponds to the indices of the lattices only, is Σ = 3.
This same twin can be as well defined through its epitaxy property and using point groups. The two adjacent crystals share the same (1, 1, 1) plane; thus, n = 2 variants and m = 4 different
individuals – the four orientation families of (1, 1, 1) planes – that can be formed around one given variant.
All known types of twins enter the general group–subgroup tree of Fig. 2 .
For instance, merohedral twins are characterized by merohedry with a grain boundary index being the order of the lattice of
3. Generalization
As we will show here, there are cases where the scheme of Fig. 2 leads to original results such as the twin structures first drawn by Albrecht Dürer and reproduced here in Fig. 4 (a) from the
original work De symmetria partium in rectis formis humanorum corporum (1532) and Underweysung der Messung (1538) (available on CD-ROM, Octavo Editions, CA, 2003).
3.1. The Dürer structure
The basic structure invented by Albrecht Dürer is shown in Fig. 5 (a). It is built with six adjacent regular pentagons and has the crystallographic two-dimensional space group c2mm. Taking the radius
of the elementary pentagon as the unit length (see Fig. 5 a on the left), we find the lattice parameters τ is the golden mean .
Twins of the Dürer structure can be generated in a very symmetrical tenfold symmetry according to various equivalent modes, either radiant central as in Dürer's original drawing, or spiral-like twins
made of ten two-dimensional crystals along the ten directions of a regular decagon as seen in Fig. 4 (b).
3.2. The hidden symmetries of the Dürer structure
The very specific feature of the Dürer structure is that the atomic positions x[j] are all vertices of interconnected identical regular pentagons so that they can all be defined as integer sums of
the five vectors relating the centre to the five vertices of the elementary pentagon:
This structure is thus a periodic decoration of a ^2 of rank 5 (4 in fact, because the sum of the five unit vectors gives the zero vector) generated by the five vectors defined by the regular
pentagon. In a more geometric view, the Dürer structure is a two-dimensional projection of a five-dimensional periodic structure in a four-dimensional hyperplane perpendicular to the five-dimensional
main diagonal (1, 1, 1, 1, 1).
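The rank reduction noted above (five pentagon generators whose sum vanishes, leaving rank 4) can be checked numerically. The following sketch is ours, not part of the original article; the function names are illustrative.

```python
import math

def pentagon_vectors():
    """Five unit vectors from the centre of a regular pentagon to its vertices."""
    return [(math.cos(2 * math.pi * k / 5), math.sin(2 * math.pi * k / 5))
            for k in range(5)]

def vector_sum(vecs):
    """Componentwise sum of a list of 2D vectors."""
    return (sum(v[0] for v in vecs), sum(v[1] for v in vecs))

# The five vectors sum to the zero vector, so only four of the five
# generators of the Z-module are independent.
sx, sy = vector_sum(pentagon_vectors())
```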
Embedding the Dürer structure in five-dimensional space is easily performed. We choose the origin in Ω as shown in Fig. 5 (b) and find the unit cell defined by (1,1,1,1,1) in five-dimensional space.
The two Wyckoff positions are located at the nodes w[1] = (0,1,0,0,0) for the blue one and w[2] = (0,0,1,0,0) for the green one. The point symmetry operations are 5 ×5 signed permutation matrices
given by
and the corresponding space operations that generate the space group c2mm are
These operations together with the translation group generated by (A,B) form a faithful representation of the group c2mm in five-dimensional Euclidean space.
3.3. Twin operation
Now, we choose the underlying five-dimensional lattice generated by the five mutually orthogonal vectors E[i], i = 1,5 whose projections are the five basic vectors of the pentagon, as the geometric
object that should be left invariant. The group p10 mm generated by
Thus, the general group–subgroup tree (Fig. 2 ) is built with the point groups subgroup of pure twin by merohedry because
The coset decomposition leads to five (20/4) different possible twins that are the five individuals drawn in Fig. 4 (b). These are the five different ways of constructing the Dürer structure using
the same pentagon (and its inverse). For example, one among the possible cosets of equivalent twin operations is given by
and is the glide mirror shown in Fig. 6 .
It can be easily verified that this twin is perfectly coherent although it has no two-dimensional coincidence lattice. The boundary is defined by a sinuous row of adjacent pentagons that belong to
both structures. Moreover, irrespective of the centring c, the lattice of the Dürer structure is the set of five-dimensional points V = p A + q B = (2q, q-p, -2q, -2q, p+q) where p and q are
integers. This lattice transforms into the set 2q = i.e. for the direction (1,1). This is the habit direction of the twin: we have a perfect epitaxy with no (two-dimensional) coincident lattice.
4. Beyond the Dürer twin
Dürer-like structures can easily be found using identical regular polygons of order n (later on, designated as n-gons) connected by edges. All these structures have the basic property of being
defined by Wyckoff positions that are all on the same n-dimensional structures.
We discuss here some of the simplest of these kinds of polygonal tilings where the n-gons in the unit cell are all crystallographically equivalent. We shall designate these patterns as monogeneous n
-gon patterns.
An efficient way of characterizing these patterns consists of reporting in a vector the sequence of the number of free edges between each connected edge around an n-gon as exemplified in Fig. 7 for n
= 9. We call it the vector of free edges, the length of which is equal to the coordination of the n-gon. Under these notations, the Dürer structure of the previous section with n = 5 has coordination
p = 3 and is characterized, up to a circular permutation, by the vector (0,1,1).
The search for possible periodic solutions is significantly simplified by observing that, for an n-gon surrounded by p identical n-gons, the vector of free edges
Also, the maximum possible number P[n] of non-overlapping n-gons sharing an edge of a central identical n-gon is given by
For the simple case p = 3, monogeneous non-overlapping n-gon periodic patterns are generated only if the centre of the central n-gon is inside the triangle formed by the centres of the three adjacent
n-gons, in which case the triangle characterizes the unit cell of the structure (see Fig. 7 ).
The vector of free edges has the form n, the centres of the three n-gons are located at V[1] = (1, 1, 0, …), V[2] = (0, 0, …, V[3] = (0, 0, …, A = V[2]-V[1] = B = V[3]-V[1] =
All twins in these structures are merohedral twins (built with the same n-gon). They are characterized by the symmetry elements of the n-dimensional lattice that leave the projected two-dimensional
space invariant and that do not belong to the symmetry group of the structure as dictated by the general group tree of Fig. 2 . In two-dimensional space, these twin operations are symmetry operations
of the central n-gon that are not symmetry elements of the two-dimensional periodic structure and that leave invariant a row of the structure to form a perfect plane of epitaxy. For p = 3, these
elements are signed permutations of the n basic vectors generating the n-gon that transform into each other two of the other adjacent n-gons and put the third one in a new position. This translates
in the vector of free edges in exchanging two symbols while keeping the third constant. For example, the vector of free edges (1,2,3) of the case n = 9 generates three possible coherent twinned
crystals: (1,3,2), (3,2,1) and (2,1,3) as exemplified in Fig. 7 , whereas the vector of free edges (1,2,2) of the case n = 8 generates only two, (2,2,1) and (2,1,2). The interface operations are
glide mirrors oriented along the common row of n-gons shared between the two adjacent crystals, as shown in Figs. 8 and 9 .
5. Conclusion
We propose here a formal extension of the notion of twin operation as an isometry between two identical crystals that preserves part of the atomic structure. Its internal symmetry group can possibly
contain hidden symmetries issued from high-dimensional space when the Wyckoff positions of the preserved part of the structure belong to a sublattice can still be labelled as merohedral twins when
they share the same
Beyond this n-dimensional generalization, the interest of the present approach is the simplicity of its basic group–subgroup tree shown in Fig. 2 that can be used in all cases of actually known
twins, once
^1As discussed long ago (Guymont et al., 1976 ; Gratias et al., 1979 ; Portier & Gratias, 1982 ), interfaces (twins or grain boundaries) in homogeneous crystals are described by cosets of space
groups. Indeed, consider two identical crystals related by the operation x of the first crystal to an equivalent twin operation is equivalently described by any operation that results in the product
of any symmetry element of symmetry element of coset of the form
^2A p in x defined by e[i] are arithmetically independent (no non-zero p integers lead to a sum giving the null vector); it is isomorphic to an irrational projection in d-dimensional space of an N
-dimensional lattice, with d = p the
The authors are grateful to the French ANR for financial support of project ANR METADIS 13-BS04-0005.
Bendersky, L. A. & Cahn, J. W. (2006). J. Mater. Sci. 41, 7683–7690.
Bendersky, L. A., Cahn, J. W. & Gratias, D. (1989). Philos. Mag. 60, 837–854.
Black, P. J. (1955a). Acta Cryst. 8, 43–48.
Black, P. J. (1955b). Acta Cryst. 8, 175–182.
Bravais, M. (1851). J. Ecole Polytechn. XX (XXXIV), 248–276.
Donnay, G. (1940). Am. Mineral. 25, 578–586.
Dürer, A. (1525). The Painter's Manual: a Manual of Measurement of Lines, Areas and Solids by Means of Compass and Ruler. [Facsimile Edition (1977), translated with commentary by W. L. Strauss. New York: Abaris Books.]
Ellner, M. (1995). Acta Cryst. B51, 31–36.
Ellner, M. & Burkhardt, U. (1993). J. Alloys Compd. 198, 91–100.
Friedel, G. (1904). Bulletin de la Société de l'Industrie Minérale, Quatrième Série, Tomes III et IV. Saint-Etienne: Société de l'Imprimerie Theolier J. Thomas et C.
Friedel, G. (1926). Leçons de Cristallographie. Nancy, Paris, Strasbourg: Berger-Levrault.
Friedel, G. (1933). Bull. Soc. Fr. Minéral. 56, 262–274.
Gratias, D., Portier, R., Fayard, M. & Guymont, M. (1979). Acta Cryst. A35, 885–894.
Groth, P. (1906). Chemische Krystallographie, Erster Teil. Leipzig: Verlag von Wilhelm Engelmann.
Guymont, M., Gratias, D., Portier, R. & Fayard, M. (1976). Phys. Status Solidi, 38, 629–636.
Grimmer, H. & Nespolo, M. (2006). Z. Kristallogr. 221, 28–50.
Mallard, E. (1885). Bull. Soc. Fr. Minéral. 8, 452–469.
Marzouki, M. A., Souvignier, B. & Nespolo, M. (2014). IUCrJ, 1, 39–48.
Nespolo, M. & Ferraris, G. (2004). Acta Cryst. A60, 89–95.
Portier, R. & Gratias, D. (1982). J. Phys. Colloq. 43, C4-17–C4-34.
Putnis, A. (1992). Introduction to Mineral Sciences. Cambridge, New York, Port Chester, Melbourne, Sydney: Cambridge University Press.
This is an open-access article distributed under the terms of the Creative Commons Attribution (CC-BY) Licence, which permits unrestricted use, distribution, and reproduction in any medium, provided
the original authors and source are cited.
NCERT Solutions Class 6 Maths Chapter 2 Lines and Angles Exercise 2.10 PDF
Class 6 Maths NCERT Solutions for Chapter 2 Lines and Angles Exercise 2.10 - FREE PDF Download
NCERT Solutions for Class 6 Maths Chapter 2, Exercise 2.10, "Lines and Angles," helps students learn how to draw angles accurately in geometry. This exercise focuses on using a compass and protractor
to create various types of angles, which is essential for understanding geometric concepts. It aligns with the CBSE Class 6 Maths Syllabus and provides clear, step-by-step guidance to assist in
learning and applying the concept of drawing angles.
Students can download these NCERT Solutions as a FREE PDF for easy practice and better understanding. NCERT Solutions for Class 6 include helpful tips and simple methods, making learning engaging and
effective. This approach helps students form a strong foundation for more advanced topics in geometry.
Glance on NCERT Solutions Maths Chapter 2 Exercise 2.10 Class 6 | Vedantu
• This exercise focuses on the practical skill of drawing angles using a compass and protractor.
• Students learn to create different types of angles, including acute, obtuse, and right angles, with clear step-by-step instructions.
• The solutions provide visual aids to help students understand the process of measuring and drawing angles accurately.
• Real-life applications are included to show how angles are used in everyday situations, enhancing the relevance of the topic.
• Various practice problems allow students to apply what they’ve learned, reinforcing their understanding of angle construction effectively.
FAQs on NCERT Solutions for Class 6 Maths Chapter 2 - Lines and Angles Exercise 2.10
1. What is Exercise 2.10 about?
Exercise 2.10 focuses on drawing angles using a compass and a protractor. It teaches students how to create different types of angles accurately. This skill is essential for understanding geometry.
2. How do the NCERT Solutions help with Exercise 2.10?
The NCERT Solutions provide clear steps and explanations for drawing angles. They guide students through each part of the process, making it easier to learn and practice.
3. Can I download the solutions for Exercise 2.10?
Yes, you can download the NCERT Solutions for Class 6 Maths Chapter 2, Exercise 2.10 as a FREE PDF from the Vedantu website. This makes it easy to study and practice at home.
4. What types of angles will I learn to draw?
In this exercise, students learn to draw various angles, including acute, obtuse, and right angles. Each type of angle is explained with specific steps to follow.
5. Are there practice problems included?
Yes, Exercise 2.10 includes practice problems for students to apply what they have learned. These problems help reinforce their skills in drawing angles accurately.
6. How do the solutions support learning?
The solutions offer step-by-step guidance on how to solve angle-drawing problems. They help students understand how to use a protractor and compass correctly.
7. What should I do if I find a problem difficult?
If a problem is difficult, refer to the NCERT Solutions for Exercise 2.10. Clear explanations can help clarify the steps and guide you in solving the problem.
8. How can I practice drawing angles?
Use the FREE PDF of NCERT Solutions for Class 6 Maths Chapter 2, Exercise 2.10 to practice. Following the solutions helps you work through different types of angle problems.
9. How do I know if I’ve drawn an angle correctly?
Check your angles against the NCERT Solutions for Exercise 2.10. The solutions provide correct answers and methods, allowing you to verify your work.
10. What concepts are reinforced through Exercise 2.10?
Exercise 2.10 reinforces concepts such as measuring angles, using a protractor, and understanding different types of angles. This knowledge is essential for geometry.
11. How can I use these solutions to prepare for tests?
Review the NCERT Solutions for Class 6 Maths Chapter 2, Exercise 2.10 to understand how to draw angles. Practice using the solutions to prepare for tests effectively.
12. Are there any tips for drawing angles accurately?
Yes, always read each step carefully and use a protractor for precise measurement. Refer to the solutions for helpful tips on drawing angles correctly.
13. How often should I review Exercise 2.10 NCERT solutions?
Regularly review the NCERT Solutions for Class 6 Maths Chapter 2, Exercise 2.10 to strengthen your understanding of angles. This will help you become more confident in your skills.
Error when using nested IF
Apr 1, 2011
I am having a problem using nested IFs.
The problem occurs on the last IF where the MATCH is.
I have tested each if separately and they all work as expected but when i try to group them, i get the error "The formula you typed contains an error:" and it points to this.
IF(AND($G4="2D",$F4="Qtr"),sum((VLOOKUP($C4,'Economy Variance'!$D$4:$P$10,
MATCH($F4,'Economy Variance'!$E$3:$P$3)+1,FALSE))*$E4)
Is there something missing that would cause the error?
any help appreciated
=IF(AND($G4="ND",$F4="Full"),VLOOKUP($C4,'Next Day Variance'!$D$4:$P$10,MATCH($E4,'Next Day Variance'!$E$3:$P$3)+1,FALSE),
IF(AND($G4="ND",$F4="Half"),sum((VLOOKUP($C4,'Next Day Variance'!$D$4:$P$10,MATCH($F4,'Next Day Variance'!$E$3:$P$3)+1,FALSE))*$E4),
IF(AND($G4="ND",$F4="Qtr"),sum((VLOOKUP($C4,'Next Day Variance'!$D$4:$P$10,MATCH($F4,'Next Day Variance'!$E$3:$P$3)+1,FALSE))*$E4),
IF(AND($G4="2D",$F4="Full"),VLOOKUP($C4,'Economy Variance'!$D$4:$P$10,MATCH($E4,'Economy Variance'!$E$3:$P$3)+1,FALSE),
IF(AND($G4="2D",$F4="Half"),sum((VLOOKUP($C4,'Economy Variance'!$D$4:$P$10,MATCH($F4,'Economy Variance'!$E$3:$P$3)+1,FALSE))*$E4),
IF(AND($G4="2D",$F4="Qtr"),sum((VLOOKUP($C4,'Economy Variance'!$D$4:$P$10,MATCH($F4,'Economy Variance'!$E$3:$P$3)+1,FALSE))*$E4),"NOT FOUND"))))))
Mar 26, 2007
There's a number of small problems with this formula
IF(AND($G4="2D",$F4="Qtr"),sum((VLOOKUP($C4,'Economy Variance'!$D$4:$P$10,
MATCH($F4,'Economy Variance'!$E$3:$P$3)+1,FALSE))*$E4)
1) The Sum part serves no useful purpose as far as I can tell, and can be left out.
2) It is probably better (but not essential) to include a third argument for the MATCH function, the "match type", for example 0 for an exact match.
3) The formula seems to be missing at least one closing bracket ")" character.
4) The formula seems to not include a FALSE argument for the IF function. Is there more to this formula that has been missed off your post ?
Apr 1, 2011
I've been working on this some more and I'm getting closer.
I need the sum on the vlookup because I'm multiplying the result by the value in another cell to give a correct figure.
I've tried this function and it works for any cells in column G that match ND no matter if column F is "Full", "Half" or "Qtr"
=IF(AND($G43="ND",$F43="Full"),VLOOKUP($C43,'Next Day Variance'!$D$4:$P$10,MATCH($E43,'Next Day Variance'!$E$3:$P$3,0)+1),IF(AND($G43="ND",OR($F43="Qtr",$F43="Half")),SUM(VLOOKUP($C43,'Next Day Variance'!$D$4:$F$10,MATCH($F43,'Next Day Variance'!$E$3:$F$3,0)+1))*E43))
If I try to nest that function, creating 4 nested IFs, I get "You've entered too many arguments for this function".
This is my complete function
=IF(AND($G43="ND",$F43="Full"),VLOOKUP($C43,'Next Day Variance'!$D$4:$P$10,MATCH($E43,'Next Day Variance'!$E$3:$P$3,0)+1),IF(AND($G43="ND",OR($F43="Qtr",$F43="Half")),SUM(VLOOKUP($C43,'Next Day Variance'!$D$4:$F$10,MATCH($F43,'Next Day Variance'!$E$3:$F$3,0)+1))*E43),IF(AND($G43="2D",$F43="Full"),VLOOKUP($C43,'Economy Variance'!$D$4:$P$10,MATCH($E43,'Economy Variance'!$E$3:$P$3,0)+1),IF(AND($G43="2D",OR($F43="Qtr",$F43="Half")),SUM(VLOOKUP($C43,'Economy Variance'!$D$4:$F$10,MATCH($F43,'Economy Variance'!$E$3:$F$3,0)+1))*E43))))
Basically it looks up 2 values in my table and based on what's in my table it will perform the correct 2D VLOOKUP.
My combinations are:
ND Full
ND Half
ND Qtr
2D Full
2D Half
2D Qtr
Mar 26, 2007
Try this.
I don't know if it does what you want, because I don't know what your data is like, but it is a valid formula.
I've shuffled a few brackets, and deleted all the SUMs, I really don't think you need them. I've also added the FALSE argument for all your Vlookups, to search for an exact match - maybe you don't
want that.
I've also added the false argument for your main IF statement, to show "???" if none of the conditions are met. Again, you might not want that, if it's possible that this argument will be required,
you probably want to specify something different. On the other hand, if there is no possibility that this argument will be required, you can remove your final IF statement, and include its arguments
as the FALSE argument for the previous IF statement.
=IF(AND($G43="ND",$F43="Full"),
VLOOKUP($C43,'Next Day Variance'!$D$4:$P$10,MATCH($E43,
'Next Day Variance'!$E$3:$P$3,0)+1,FALSE),
IF(AND($G43="ND",OR($F43="Qtr",$F43="Half")),
(VLOOKUP($C43,'Next Day Variance'!$D$4:$F$10,MATCH($F43,
'Next Day Variance'!$E$3:$F$3,0)+1,FALSE))*E43,
IF(AND($G43="2D",$F43="Full"),
VLOOKUP($C43,'Economy Variance'!$D$4:$P$10,MATCH($E43,
'Economy Variance'!$E$3:$P$3,0)+1,FALSE),
IF(AND($G43="2D",OR($F43="Qtr",$F43="Half")),
(VLOOKUP($C43,'Economy Variance'!$D$4:$F$10,MATCH($F43,
'Economy Variance'!$E$3:$F$3,0)+1))*E43,"???"))))
Computing summaries over distributed data - Rare & Special e-Zone
HKUST Electronic Theses
Computing summaries over distributed data
by Zengfeng Huang
THESIS 2013
Ph.D. Computer Science and Engineering
ix, 97 pages : illustrations ; 30 cm
Consider a distributed system with k nodes, where each node holds a part of the data. The goal is to extract some useful information from the entire data set or to compute some functions over the
data. We are interested in designing communication-efficient algorithms and also characterizing the communication complexity for various problems. We consider both a flat network structure and more
complicated tree networks.
In this thesis, we study some most important statistical summaries of the underlying data, in particular item frequencies, heavy hitters, quantiles, and ε-approximations, which are extensively
studied in database, machine learning, computational geometry and data mining. We provide general techniques for both designing efficient algorithms and proving communication lower bounds, with which
we get almost tight bounds for these problems.
We also study graph problems in the distributed setting, where the edges of the input graph is partitioned across k nodes. We show how to compute an approximate maximum matching, one of most
important graph problems, communication-efficiently and prove a tight lower bound for this problem. To prove this lower bound, we develop new techniques, which we believe will have a wide
applicability to prove distributed communication complexity for other graph problems.
Copyrighted to the author. Reproduction is prohibited without the author’s prior written consent.
• HKUST Electronic Theses
• Ph.D.
• Computer Science and Engineering
• Yi, Ke
• Huang, Zengfeng
• Electronic data processing
• Distributed processing
• Mathematical models
• Big data
• Data processing
• English
Call number
• Thesis CSED 2013 Huang
• 10.14711/thesis-b1250313
Permanent URL for this record: https://lbezone.hkust.edu.hk/bib/b1250313
Latency and Alphabet Size in the Context of Multicast Network Coding
We study the relation between latency and alphabet size in the context of multicast network coding. Given a graph G = (V, E) representing a communication network, a subset S ⊆ V of sources, each of which initially holds a set of information messages, and a set T ⊆ V of terminals, we consider the problem in which one wishes to design a communication scheme that eventually allows all terminals to obtain all the messages held by the sources. In this study we assume that communication is performed in rounds, where in each round each network node may transmit a single (possibly encoded) information packet on any of its outgoing edges. The objective is to minimize the communication latency, i.e., the number of communication rounds needed until all terminals have all the messages of the source nodes.
For sufficiently large alphabet sizes (i.e., large block length, packet sizes), it is known that traditional linear multicast network coding techniques (such as random linear network coding) minimize latency. In this work we seek to study the task of minimizing latency in the setting of limited alphabet sizes (i.e., finite block length), and alternatively, the task of minimizing the alphabet size in the setting of bounded latency. Through reductive arguments, we prove that it is NP-hard to (i) approximate (and in particular to determine) the minimum alphabet size given a latency constraint; (ii) approximate (and in particular to determine) the minimum latency of communication schemes in the setting of limited alphabet sizes.
Publication series
Name: 2018 56th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2018
Conference: 56th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2018
Country/Territory: United States
City: Monticello
Period: 2/10/18 → 5/10/18
• Codes (symbols)
• Data communication systems
• Linear networks
• Multicasting
• Communication latency
• Communication rounds
• Communication schemes
• Finite block length
• Information messages
• Information packets
• Latency constraints
• Random Linear Network Coding
• Network coding
Testing (What are the experimental results?)
Most of these categories are quite basic, but some are not so clearly defined. All of these areas will be discussed in the subsequent sections.
36.3 General Requirements
The general requirements of a path planner are discussed in this section. Some of the factors involved in the application, and for evaluating their performance, are described. These factors do not
include all the essential parts of a path planner, nor all of the possible requirements, but they provide a good idea of what the current requirements are.
36.3.1 Problem Dimensionality
A problem may be represented in one of a number of different dimensions. The fewer the dimensions in the problem, the simpler the solution. A 2D problem is relatively simple, and good solutions
already exist for finding paths in this representation. When height is added to the 2D obstacles, it becomes a 2.5D problem. This 2.5D problem is also within the grasp of current problem solving
routines. When we attempt a 3D problem, it starts to push the current limits of researched techniques and computational ability.
Figure 36.3 Figure 2.1 Dimensionality of Obstacles
36.3.2 2D Mobility Problem
When a simple mobile robot has to navigate across a factory floor, it must solve the classic ’piano movers’ problem. This representation is easily done with convex polygons, and it runs quickly. This
problem is referred to as the piano movers problem, because it involves having to move a very large object (like a piano) through a cluttered environment, without picking it up. The perspective is
that, the obstacles can be seen from directly above, but they are assumed infinite in height. This method may be adapted for a robotic manipulator, if it is working in a clear workspace, and is
performing pick and place operations. The use of this method will save time, in all applicable cases.
As a result of the speed benefit of the 2D path finding solutions, they may be used as analytical tools. A special property can make the 2D methods applicable to 3D problems. If a 2D view of a 3D
work space shows a path, then the same path will also exist in the 3D workspace. This has been used in some path planning methods, and can provide a ‘trick’ to avoid extensive 3D calculations.
Another trick which may be used is to represent the moving object with a box or a circle. This results in a simple technique for maintaining a minimum obstacle clearance for collision avoidance.
Figure 36.4 Figure 2.2 Simplification of 3D Problem
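The circle trick can be sketched as a clearance test: treat the robot as a disc of radius r and accept a position only if the disc keeps r (plus an optional safety margin) away from every obstacle edge. The function names below are our own, and for brevity the check assumes the disc centre lies outside the obstacle polygon.

```python
import math

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to the segment ab."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:                       # degenerate segment
        return math.hypot(px - ax, py - ay)
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))                     # clamp to the segment
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def disc_clear_of_polygon(centre, radius, polygon, margin=0.0):
    """True if a disc robot keeps radius + margin clearance from the
    polygon boundary (polygon given as an ordered list of vertices).

    Illustrative only: assumes the disc centre is outside the obstacle.
    """
    edges = zip(polygon, polygon[1:] + polygon[:1])
    return all(point_segment_distance(centre, a, b) >= radius + margin
               for a, b in edges)
```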
36.3.2.1 - 2.5D Height Problem
When height is added to 2D objects, it is no longer necessary to assume that they have an infinite height. It is equivalent to changing the view from looking straight down on the workspace to looking
at it from an angle. This can allow some very simple path plans in which the manipulator is now allowed to move over and around objects. With pick and place tasks this can be the basis for a method
which guarantees that the payload and manipulator links are above collision height. This is a very practical approximation that is similar to the manipulations which humans tend to favor (i.e., gravity-fed obstacles and objects).
This method is faster than 3D solutions, while still allowing depth in obstacles. Any object in the real world that does not have a vertical orientation will be very difficult to model. Despite this
problem, this is still a very good representation for solving most pick and place problems (in an uncluttered workspace).
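A minimal sketch of the 2.5D guarantee, under the assumption that obstacle heights are stored on a grid: a straight pick-and-place move at a fixed carry height is collision-free if every cell it crosses is lower than the carried payload. The names and the simple line-sampling scheme here are ours, not from the text.

```python
def carry_move_clear(height_map, start, goal, carry_height, samples=100):
    """Check a straight-line move at carry_height over a 2.5D height map.

    height_map[r][c] is the obstacle height of cell (r, c); start and goal
    are (row, col) cells. Returns True if no crossed cell reaches
    carry_height. Sampling density is a crude stand-in for exact
    line-cell traversal.
    """
    r0, c0 = start
    r1, c1 = goal
    for i in range(samples + 1):
        t = i / samples
        r = int(round(r0 + t * (r1 - r0)))
        c = int(round(c0 + t * (c1 - c0)))
        if height_map[r][c] >= carry_height:
            return False               # an obstacle reaches the payload
    return True
```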
36.3.2.2 - 3D Space Problem
When we have a cluttered 3D space problem, it may not be possible to resort to the use of 2D and 2.5D representations, if obstacles have irregular surfaces which may not be represented in 2D or 2.5D.
This is the most frustrating problem. This problem has fewer simplifying restrictions, and as a result the dimensions grow quickly for every new factor.
The most devastating setback to the 3D methods occurs when a multilink manipulator is added, without any limiting assumptions. The addition of each manipulator link will increase the problem
complexity geometrically. The problem may be simplified when spheres are used to represent moving obstacles.
The most successful attempts at solving this problem have been the optimization attempts. Optimization methods are typically slow, and thus the three dimensional path planning problem will have to be
solved off line for now. If a good fast solution is discovered for this problem, it will eliminate the need for the 2D and 2.5D problems.
36.3.3 Collision Avoidance
Collision detection is the most important factor of Path Planning. Without automatic collision avoidance, the robotic workcell must be engineered to be collision free, or sub-optimal paths must be
chosen by a human programmer. Local Collision Detection is important when moving through an unknown or uncertain environment; it allows feedback to the planner, for halting paths which contain collisions. Global Collision Avoidance may be used when planning paths which should avoid objects by a certain margin of safety. The actual details of the methods may vary, but moving close to obstacles is avoided by these methods.
Figure 36.5 Figure 2.3 Collision Avoidance
36.3.4 Multilink
One problem that tends to paralyze most methods is the expansion to multilink systems. The first implementation of most techniques is made with a simple mobile robot. When the problem is extended by adding oddly sized links, and then a payload, the complexity grows at a more than exponential rate.
The number of degrees of freedom also plays a role in the applications of the robot. If a manipulator has 6 degrees of freedom, then it can obtain any position and orientation in space. Some specific cases of problems require only 3 or 4 degrees of freedom. This can be a great time saver. When an environment becomes cluttered, it may be desirable to have more than six degrees of freedom, so that the redundancy of the robot can be used to move through the environment. The complexity of most routines increases exponentially with the number of degrees of freedom, thus it is best to match the
manipulator degrees of freedom to the complexity of the task to be done.
One assumption that helps reduce the problem complexity is the approximation of motion in a single plane. The net result of this effort is that the robot is reduced to 2 or 3 degrees of freedom. The
payload may also be neglected, or fixed, and thus the degrees of freedom are reduced. A second approach is to approximate the volume swept out by the links as they move through space. This volume is then checked against obstacles for collisions. A payload on a manipulator may sometimes be approximated as part of the robot if it is small, or if it is symmetrical. This means that the number of degrees of freedom for a manipulator may be reduced, and thus the problem simplified in some cases.
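The sphere approximation mentioned above can be sketched as a pairwise test, assuming both the links and the obstacles are covered by spheres (an assumption made for illustration, not a requirement stated in the text):

```python
import math

def spheres_collide(link_spheres, obstacle_spheres):
    """Swept-volume sketch: each link and obstacle is approximated by
    (center, radius) spheres; a collision is any overlapping pair."""
    for (c1, r1) in link_spheres:
        for (c2, r2) in obstacle_spheres:
            # Two spheres overlap when the centre distance is less
            # than the sum of the radii.
            if math.dist(c1, c2) < r1 + r2:
                return True
    return False
```

The test is conservative: it may report collisions where the true link geometry is clear, but it never misses one, which is the property path planners need.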
Figure 36.6 Figure 2.4 Multi-Link Approaches
Multilink manipulators also come in a variety of configurations. These configurations lend themselves to different simplifications, which may sometime provide good fast solutions in path planning.
Cartesian (i.e. X, Y, Z motions)
Spherical (Stanford Manipulator)
The various robot configurations are fundamentally different. Many approaches have tried to create general solutions for all configurations, or alternate solutions for different specific
manipulators. The fastest solutions are the ones which have been made manipulator specific. With a manipulator it is also possible to describe motions in both Joint Space (Manipulator Space), and
Cartesian Space (Task Space). There are approaches which use one, or both of these.
36.3.5 Rotations
Rotations are another problem for some path planners. It can be difficult to rotate during motion; thus some planners will not rotate, some will rotate only at certain 'safe' points, and some will rotate
along a complete path. The best scenario is when rotations may be performed to avoid collisions, and not just to meet the orientation of the goal state.
Figure 36.7 Figure 2.5 Payload Rotation
36.3.6 Obstacle Motion Problem
Motion of obstacles can cause significant path planning problems. Motion occurs in the form of rotation, and translation. In most path planners the only motions considered are for the payload and
manipulator. In most cases an obstacle in the environment will experience both rotation and translation. This has devastating effects on all of the path planning methods, because some tough new
requirements are added. The path planning systems must now incorporate a time scale factor, and keep an object description which includes a position, a velocity vector, and rotation vector. The
method must also do the calculations to detect collisions and status at every instant of time as the system changes. With simplifications the problem may be reduced to a more manageable level.
Obstacles may be categorized into motion categories; Static (un-moving), Deterministic (has predictable occurrence and positions), and Random (Freely moving, with no regular occurrence). All of these
are of interest because most parts fixed in a workcell are Static, workpieces from feeders and conveyors are Deterministic, and human intruders are Random. Random obstacle motion usually occurs so
quickly that path planning may only be able to escape the path of the obstacle, not compensate for it. These motion strategies are often considered in the static and cyclic states, but the Random
solutions are not implemented widely, except as safety measures.
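The motion categories can be captured in a small record holding the position and velocity vector described above; a sketch (all names illustrative):

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    """Sketch of an obstacle record carrying the motion category
    described in the text (names are illustrative)."""
    position: tuple      # (x, y)
    velocity: tuple      # (vx, vy); (0, 0) for static obstacles
    category: str        # "static", "deterministic", or "random"

    def predict(self, t):
        """Predicted position at time t; only meaningful for static
        and deterministic obstacles."""
        if self.category == "random":
            raise ValueError("random motion cannot be predicted")
        return (self.position[0] + self.velocity[0] * t,
                self.position[1] + self.velocity[1] * t)
```

A deterministic conveyor part can then be checked for collisions at every planned time step, while a random intruder can only trigger an escape reaction.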
Figure 36.8 Figure 2.6 Obstacle Motion Types
36.3.7 Robot Coordination
If only a single robot may be used in a workcell, then bottlenecks will soon arise. When two robots are placed in close proximity, they can produce more bottlenecks, thus they must be given some
scheme for avoiding each other. If these robots are to cooperate then a complete and integrated cooperation strategy is required. These strategies require some sort of status link between both
manipulators. To make sense of the status of the other robot, it is also necessary to have algorithms for path planning in the Deterministic environment. This algorithm relies upon a proper moving
obstacle path planning scheme being developed first. It is possible to use assumptions, or an engineered environment, to allow robot coordination without the extensiveness of a complete coordination strategy.
Figure 36.9 Figure 2.7 Multi-Robot Work Spaces
36.3.8 Interactive Programming
One factor that is now available for all commercial robots is the ability for the human user to plan the path. This may be done with a special graphical program, or by moving the robot and setting
points, or by entering sets of points. These methods are proven, and do work, but they are very inefficient, inflexible, and prone to errors. Thus the thrust of this paper will be to ignore the
interactive approach, because automated path planning is much more desirable. The interactive approach almost guarantees that an operator will not find an optimal path, and finding a good path is not possible without stopping the robot and losing production time.
36.4 Setup Evaluation Criteria
The type of environment information required by a path planner is critical to its operation. Some simple path planners may work in a simple collision detection mode. Some path planners require
pre-processed information about the environment, and they are much more difficult to deal with.
36.4.1 Information Source
The sources of information which a path planner uses will have a fundamental effect upon how a path is planned. The key question is how the information is collected: data may be collected before or during the execution of a path. This means data may be presented in the form of a known obstacle (and position), or by a simple indication of contact. These tend to draw distinctions between
methods which have Collision Detection (Local or Trajectory Path Planners) and those which have obstacle information (Global Path Planners).
36.4.1.1 - Knowledge Based Planning (A Priori)
It is much easier to solve a problem if all the information is available at the beginning of the solution. In robotics we may plan paths before their execution, if we have some knowledge of the
environment. This is strictly a ‘blindly generate’ type of strategy that trusts the knowledge of the environment provided. Planning paths before execution allows efforts to get a shorter path time,
more efficient dynamics, and absolute collision avoidance. When working in this mode a priori knowledge (i.e. Known before) is used. Techniques are available to solve a variety of problems, when
given the a priori information. Some of the knowledge which we use for a priori path planning may come from vision systems, engineering specifications, or CAD programs.
A Priori Knowledge may be applicable to moving objects if they have a predictable frequency or motion. It may not be used for unpredictable or random motion, since by definition no detection method is available.
A Priori Knowledge may be derived from modeling packages or High Level Sensors. These sensors are slow and typically drive a World modeler in an Off-line programmer. The best example of this type of sensor is the vision system. This is the most desired information collector for robotics in the future, as it would provide a complete means of world modeling for the path planner. Another common detection system currently in use is the Range Imaging System, which uses stripes of LASER light to determine the geometry of objects. One example is the use of such a system to recognize objects and use the geometrical information to determine how to grasp the object [K.Rao, G.Medioni, H.Liu, G.A.Bekey, 1989]. Some of these sensors require knowledge from the world modeler for object recognition purposes. In general these sensors are slower because of their need to interpret low level data before providing high level descriptions.
Figure 36.10 Figure 3.1 A Priori Path Planning
36.4.1.2 - Sensor Based Planning (A Posteriori)
Sometimes information is not available when we begin to solve a problem, thus we must solve the problem in stages as the A Posteriori information becomes available. Sensor based planning is an indispensable function when environments change with time, are unknown, or there are inaccuracies in the robotic equipment. A Posteriori knowledge may be used to find the next trajectory in a path (by collecting information about the outcome of the previous trajectory), or even be used strictly to guide the robot in a random sense when exploring an environment. These techniques correspond to an execute and evaluate strategy.
This information feedback is acquired through a set of different sensors. The sensors used may range from vision systems to contact switches. Low level sensors are not very sophisticated, but their low cost makes them very popular. These sensors will typically detect various expected conditions; good examples are Position Encoders and Contact Switches. The sensors can return a signal when contact is made with obstacles, or measure a force being applied. When used in a feedback loop, they may provide actual joint position for a position control algorithm. High level sensors also have the ability to provide low level data, and may be used to detect events. Such low level information could also be used to check for collisions while in motion, and to detect moving objects. Quite naturally, the extent to which this information is collected determines how the path planner will work.
The ultimate robot would use these sensors to gather information about the environment, and then plan paths and verify information during execution. This raises the point that the a priori and a posteriori methods must be mixed to make a more dynamic planner. This is especially critical when dealing with motion, either to coordinate with regular motion, or to detect and deal with unpredicted motion. Some good papers have already been written on path planning in the A Posteriori mode.
Figure 36.11 Figure 3.2 A Posteriori Path Planning
36.4.2 World Modeling
How the world is modeled can make a big difference to the path planning strategy. Some of the assumptions about the world are that all obstacles are solid and rigid. Solid is assumed so that
collisions will occur on contact. Rigid is assumed so that deformations do not occur on contact. The objects must be represented with some sort of method. Some of the various methods are Polygons,
Polyhedra (constructed with 3D polygons), Ellipsoids, sets of points, analytic surfaces, Arrays, Oct-trees, Quad-trees, Constructive Solid Geometry (CSG), and Balanced Trees. The method chosen can
limit the use of complex shapes. Some methods are very receptive to data acquired through sensors and CAD systems.
The most common method of representing objects (in all dimensions) is with convex polygons. These are ideal when working with flat surfaces in the real world. Curved surfaces use flat polygons to
approximate their surfaces. One factor that makes the polygons an excellent representation is that if a point is found to lie outside one wall of a polygon, then it may be declared to be outside the
entire polygon. Most methods do not allow for concave polygons, because they are much more difficult to deal with computationally. The way to overcome this is to use overlapping convex polygons to
represent a concave polygon. These types of representations can typically be derived from most CAD systems. This form allows easy use of existing facilities.
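The half-plane property of convex polygons described above can be sketched directly, assuming the vertices are listed counter-clockwise:

```python
def inside_convex(point, vertices):
    """Point-in-convex-polygon test using the property in the text:
    being outside any one edge's half-plane proves the point is
    outside the whole polygon. Vertices must be counter-clockwise."""
    px, py = point
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # Cross product < 0 means the point lies to the right of this
        # edge, i.e. outside the polygon -- we can stop immediately.
        if (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1) < 0:
            return False
    return True
```

The early exit on the first failing edge is exactly why convex polygons are such an efficient representation.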
Arrays are good when fast recall of information from a map is required. The set up time for an array is long, the memory required is large, and algorithms are slow. This is a more intuitive approach,
but it is also not very practical with present equipment. Quad-trees (for 2D) and Oct-trees (for 3D) are excellent representations for the work space. These allow the workspace resolution to vary, so
that empty space in the work cell does not waste space in the representation. The disadvantage to these techniques is that their complexity can slow down access times. An enhancement to the Quad-tree and Oct-tree structures, which represent space with squares and cubes, is a balanced tree which will use non-square rectangles to represent space. This could potentially save even more memory than the
other methods, but the routines would again make the access time even slower.
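A minimal quad-tree construction over a square occupancy grid, sketching how uniform regions collapse into single leaves (illustrative, not an optimized implementation):

```python
def build_quadtree(grid, x, y, size):
    """Quad-tree sketch over a square occupancy grid (0 = free,
    1 = occupied).  A uniform region becomes one leaf; a mixed
    region splits into four children (NW, NE, SW, SE)."""
    cells = [grid[y + j][x + i] for j in range(size) for i in range(size)]
    if all(c == cells[0] for c in cells):
        return cells[0]                     # leaf: entirely free or occupied
    h = size // 2
    return [build_quadtree(grid, x,     y,     h),
            build_quadtree(grid, x + h, y,     h),
            build_quadtree(grid, x,     y + h, h),
            build_quadtree(grid, x + h, y + h, h)]
```

A mostly empty workspace collapses to a handful of leaves, which is the memory saving the text describes; the price is the recursive lookups.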
The most powerful method of representation available is CSG (Constructive Solid Geometry). This allows objects to be created by performing boolean operations with geometrical primitives. The original
design is done quickly, the object is very space efficient, and it represents complex surfaces easily, but it is quite complicated to use. One method discussed is the use of bounding boxes for the
different levels of an object’s design tree. A discussion was given by A.P.Ambler [1985] about using Solids modeling with robotics. The thrust of this paper was the different operations,
communications, and information which a solids modeler would have to handle to drive a robotic system. This paper proposes a good setup for an Off-Line Programming Package.
It should be noted that sometimes information is given to the world modeler in an awkward form; it may then be converted to another representation, or interpreted, to make sense of it. Spatial Planes can be used to establish spatial orientation. Bounding Boxes and Bounding Polyhedra may be used to approximate complex surfaces so that they may be stored in a smaller space, and be easy to use by most algorithms.
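The bounding-box approximation supports a cheap rejection test: two axis-aligned boxes overlap only if they overlap on every axis, so a failed box test rules out any collision between the detailed models inside. A sketch:

```python
def boxes_overlap(a, b):
    """Axis-aligned bounding-box test: each box is a (min_corner,
    max_corner) pair of coordinate tuples.  Overlap requires overlap
    on every axis; if this cheap test says 'no', the detailed models
    inside the boxes cannot collide."""
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i]
               for i in range(len(amin)))
```

Only when the box test reports an overlap does a planner need to fall back to the expensive exact-surface check.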
Figure 36.12 Figure 3.3 World modeling Techniques
Figure 36.13 Figure 3.4 Modeling Approximation Techniques
36.5 Method Evaluation Criteria
The major variation between the path planning methods arises in the approach to the solution. The methods range from simple mathematical techniques, to very sophisticated multi-component systems with
heuristic rules.
As a result of the numerous approaches to the path planning problem that have arisen, a basic knowledge is critical to an overview of the field. The best way to start this section is with a brief
definition of strategies, and then a brief explanation of the popular approaches (some specific methods are detailed in the appendices). Even though the system design strategies are not a direct part
of path planning, they have a profound impact on the operation of the path planner.
36.5.1 Path Planning Strategies
A path may be planned and executed in a number of different ways. The most obvious is the direct method of planning a path and then executing it. This section will attempt to introduce some of the concepts behind the strategies of path planners.
36.5.1.1 - Basic Path Planners (A Priori)
Path planners typically use environmental information, and initial and goal conditions. Through algorithmic approaches, the path planners suggest a number of intermediate steps required to move from
the initial to the goal state. The path may be described by discrete points, splines, path segments, etc. Each of the path segments describes a location (or configuration) and rotation of the manipulator and payload. These can be furnished in a number of ways: as joint angles, cartesian locations of joints, the location of the payload, or as a series of relative motions.
36.5.1.2 - Hybrid Path Planners (A Priori)
A newer development is the possibility of hybrid path planning. In this mode a combination of path planning methods would be used to first find a general path (or set of paths) and then a second
method to optimize the path. This method is more complex, but has the potential for higher speed, and better results than a single step method.
This strategy may use methods based on alternate representations (like those in figure 3.4). Some common methods in use are Separation Planes, Bounding Boxes, Bounding Polyhedra, 2D views of 3D
workspaces, tight corner heuristics, backup heuristics, etc. These are some of the techniques that may be used to refine and produce better paths.
Figure 36.14 Figure 4.1 Basic Operation of a Hybrid Path Planner
36.5.1.3 - Trajectory Path Planning (A Posteriori)
The amount of knowledge which a path planner has may be very limited. If the robot has no previous knowledge of the environment, then information must be gathered while the robot is in motion.
Trajectory planners rely on feedback for finding new trajectories and detecting poor results. Contact or distance sensors are used to detect an obstacle and the manipulator trajectory is altered to
avoid collision. This method will typically guarantee a solution (if one exists and a blind alley is not encountered), but at a much higher time cost, and with a longer path. The collection of current data becomes critical when dealing with moving obstacles that do not have a periodic cycle. This method may also be tested by simulation, as suggested by K.Sun and V.Lumelsky [1987], who developed a simulator for sensor based robots.
For the purpose of clarifying this subject a special distinction will be drawn between a path and a trajectory. When discussing a path, it will refer to the complete route traced from the start to
the goal node. The path is made up of a number of segments and each of these path segments is continuous (no stop points, or sharp corners). Another name for a path segment could be a trajectory.
This distinction is presented by the author as significant when considering a trajectory planner, which basically chooses the locally optimum direction, as opposed to a complete path. Only some path planners use trajectory based planning, which is easier and faster to compute, but generally produces sub-optimal paths.
36.5.1.4 - Hierarchical Planners (A Priori and A Posteriori)
If the best of both controllers are desired in a single system, it is possible to use a high level A Priori planner to produce rough paths, and then use a low level A Posteriori planner when executing the path. This would give the planner the ability to deal with complex situations, and with the unexpected. It also allows rough path planning at the A Priori level, letting the A Posteriori level smooth the corners.
Figure 36.15 Figure 4.2 A Hierarchical Planner (and an Example)
36.5.1.5 - Dynamic Planners (A Priori and A Posteriori)
Dynamic Planners are a special mixture of an A Priori Path Planner and an A Posteriori Motion controller for a manipulator. The A Priori Path Planner could plan a path with limited or inaccurate information. If, during the execution of this path, a sensor detects a collision, the A Priori Path Planner is informed; it updates its World Model, then finds a new path. To be more formal, the Dynamic Planner is characterized by separate path planning and execution modules, in which the execution module may give feedback to the planning module. This is definitely a preferred path planner for truly intelligent robotic systems. Some dynamic planners have been suggested which would allow motion on a path while the path is still being planned, to overcome the path planning bottleneck of computation time.
Figure 36.16 Figure 4.3 Dynamic Path Planning
36.5.1.6 - Off-Line Programming
One strategy that has become popular is the Off-Line Programming approach. When using Off-Line Programming we take an environmental model, which allows human interaction and graphical simulation, to model the robot. Using these tools, the human may produce optimal paths through a combination of human intelligence and algorithmic aids. This is best compared to a CAD package that allows modeling of the robot and the work cell. Once the modeling has been completed, an assortment of tools is available to plan manipulator motions inside the workcell. This allows rearrangement of obstacles in the workcell, and optimization of robot motions and layout. This sort of software package may be used in a number of modes. The Off-Line Program may interactively calculate, and download, a path which directly drives the robot. The Off-Line Program may also create a path for the manipulator which includes programming-like directions (J.C.Latombe, C.Laugier, J.M.Lefebvre, E.Mazer [1985]). In the Off-line programming mode the results are usually slower, on the order of minutes for near optimal path generation. This time is acceptable when doing batch work, and setups for large production runs. If the Off-line program cannot find an optimal path before the previous tasks have completed, the workcell will have to halt. The most important aspects of the Off-line programmer are the World modeler and the Graphical Interface.
An Off-line programmer was discussed by A.Liegeois, P.Borrel, E.Dombre [1985]. The authors' approach was to use a CAD based system with graphical representation and collision detection, then convert
the results to actual cartesian or joint coordinates.
At present Off-Line programmers are possible, and there are many good graphical and path planning methods available in construction of these packages.
36.5.1.7 - On-Line Programming
The previous Off-line Programming method allowed a mix of human interaction and A Priori path planning in a modeled environment. The same concept is possible with human interaction and A Posteriori path planning. This is On-Line programming: there are no graphical simulations in this strategy, as the actual robot is used. On-line programming allows the user to directly enter robot motions, directly verify their safety, and use tools to optimize the path. This is not desirable for path planning because it is time consuming and inflexible; it is similar to the original method of robot programming with set via points.
36.5.2 Path Planning Methods
This section will attempt to provide a simple point of view, to classify the path planning techniques available. It should be noted that all of the methods are trying to optimize some feature of the
path, but they do not all use classical Optimization Techniques.
The most complete approach to path planning is Optimization. Optimization techniques may be done on a local and global basis. Through the calculation of costs, constraints, and application of
optimization techniques, an optimal solution may be determined. Some of the mathematical methods used for the optimization technique are Calculus of Variations, Trajectory optimization (hit-or-miss shooting for the goal state), and dynamic programming. The most noticeable difference between Local and Global optimization is that Local optimization is concerned with avoiding collisions (it will come very close), while global optimization will avoid objects (making a wide path around, when possible).
The most intuitive approach is the Spatial Relationships between Robot and Obstacles. This approach uses direct examination of the actual orientations to find distances and free paths which may be
traversed. It was suggested by Lozano-Perez [1983] that spatial planning "involves placing an object among other objects or moving it without colliding with nearby objects". There are quite a few
methods already in existence.
The Transformed Space solutions are often based on Spatial Planning problems; they are actually attempts to reduce the complexity of the spatial planning problems by fixing the orientations of objects. These problems diverge quickly from spatial planning when they use transformed maps of space. When space is transformed, it is usually mapped so as to negate the geometry of a manipulator. The best known approach is the Cartesian Configuration Space Approach, as discussed by Lozano-Perez [1983]. These techniques have different approaches to representing the environment, but in effect are only interested in avoiding objects, by generating a mapped representation of 'free space' and then determining free paths with a FindPath algorithm.
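For a circular mobile robot, the configuration space idea can be sketched on a grid: grow every obstacle cell by the robot's radius so the robot becomes a point, then run a FindPath search over the free cells (the grid representation and breadth-first search here are assumptions made for illustration, not the specific methods of the cited work):

```python
from collections import deque

def grow_obstacles(grid, radius):
    """C-space sketch: mark every cell within `radius` cells
    (Chebyshev distance) of an obstacle as blocked, so the robot
    can be treated as a point."""
    h, w = len(grid), len(grid[0])
    grown = [row[:] for row in grid]
    for y in range(h):
        for x in range(w):
            if grid[y][x]:
                for dy in range(-radius, radius + 1):
                    for dx in range(-radius, radius + 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            grown[ny][nx] = 1
    return grown

def find_path(grid, start, goal):
    """Breadth-first FindPath over free (0) cells; returns a list of
    (x, y) cells from start to goal, or None if no path exists."""
    h, w = len(grid), len(grid[0])
    prev = {start: None}
    q = deque([start])
    while q:
        x, y = q.popleft()
        if (x, y) == goal:
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and grid[ny][nx] == 0 \
                    and (nx, ny) not in prev:
                prev[(nx, ny)] = (x, y)
                q.append((nx, ny))
    return None
```

The obstacle growing step is what turns a 'robot among obstacles' problem into a 'point among grown obstacles' problem, which any graph search can then solve.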
An alternative to the previous methods are the Field methods. These methods impose some distributed values over space. The Potential Field method is a technique similar to Spatial Planning
representations. This involves representing the environment as potential sources and sinks and then following the potential gradient. This technique is slow and tends to get caught in Cul-de-sacs.
The Gradient method is very similar to the Potential field method. It uses distance functions to represent the proximity of the next best point. This method is faster than the Potential field method, but it also gets caught in Cul-de-sacs.
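A single step of the Potential Field method can be sketched as below; the gains and influence radius are illustrative, and repeated steps can stall in exactly the cul-de-sacs (local minima) described above:

```python
import math

def potential_step(pos, goal, obstacles, step=0.1,
                   k_att=1.0, k_rep=1.0, influence=2.0):
    """One gradient-descent step of a potential-field planner (sketch).
    The attractive force pulls toward the goal; each obstacle inside
    its influence radius pushes away.  Gains are illustrative."""
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        d = math.hypot(pos[0] - ox, pos[1] - oy)
        if 0 < d < influence:
            # Repulsion grows sharply as the obstacle gets closer.
            push = k_rep * (1.0 / d - 1.0 / influence) / d ** 2
            fx += push * (pos[0] - ox)
            fy += push * (pos[1] - oy)
    norm = math.hypot(fx, fy) or 1.0
    return (pos[0] + step * fx / norm, pos[1] + step * fy / norm)
```

Repeatedly applying `potential_step` moves the robot toward the goal, but wherever the attractive and repulsive forces cancel the robot stalls, which is the slowness and cul-de-sac behavior the text describes.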
An approach which is beginning to gain popularity is the use of Controls Theory for path planning. This was done, very successfully, by E.Freund and H.Hoyer [1988]. This approach allows the
non-linear control of a robot, which includes the collision avoidance for many moving objects, mobile robots, manipulator arms, and objects that may have a variable size. This approach seems to have
great potential for use in low level control, in the A Posteriori sense.
To be complete, there are some techniques that are uncommon (like path planning with simulated annealing, S.Kirkpatrick, C.D.Gelatt, M.P.Vecchi [1983]), or are still in their infancy (like the controls approaches). To cover these there will have to be a New and Advanced Topics classification. This does not indicate a shortcoming in the representations, but a lack of definition in the areas themselves.
36.5.3 Optimization Techniques
36.5.4 Internal Representations
Most problem solving techniques need some internal method of dealing with the problem. These methods all have their various advantages. Most of these internal representations depend upon the method
of world modeling used in the setup. These may be converted inside the program. There are factors to consider when choosing between various internal representations. For example, if arrays are used they are very large and the updating procedure is very slow, but the lookup is very swift. This technique is of best use when the obstacles do not move, and the arrays are set up off-line to be used on-line. With current advances in computer hardware (i.e. Vector Processing and Memory), special hardware may become available that will make array based methods very fast, thus they should not be discarded, but shelved.
The next technique of interest is lists. These are mainly an efficient way to store linked data. This may vary from an oct-tree representation of some object to a simple list of boxes in space. These
are efficient with space, but there is storage overhead, and lookups are slower than the array.
Functions are very space efficient, but they tend to be inflexible. Functions may vary in execution speed from quite fast for a simple problem to very slow for a complex problem. It should be noted that functions may grow exponentially with increased environment complexity. This representation should be used with discretion.
36.5.5 Minimization of Path Costs
Each path has some features which affect its cost. As was described earlier, the costs are time, velocity, dynamics, energy, and proximity to obstacles. A basic statement of the goal is given below, and this should be used when formulating a complete Measure of Performance function or algorithm.
“The manipulator should traverse the path in the least amount of time, with reasonable forces on the manipulator, using a reasonable amount of energy, and not approaching obstacles too closely.”
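The verbal measure of performance above can be written as a weighted sum; the weights here are purely illustrative and would be tuned for a particular workcell:

```python
def path_cost(time_s, peak_force, energy, min_clearance,
              w_time=1.0, w_force=0.5, w_energy=0.2, w_prox=2.0):
    """Weighted measure-of-performance sketch for comparing candidate
    paths.  Lower is better; the proximity term penalizes paths that
    pass close to obstacles.  All weights are illustrative."""
    proximity_penalty = 1.0 / max(min_clearance, 1e-6)
    return (w_time * time_s + w_force * peak_force
            + w_energy * energy + w_prox * proximity_penalty)
```

A path planner can then rank candidate paths by this single number instead of juggling the four criteria separately.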
36.5.6 Limitations in Path Planning
The constraints in Path Planning were also described earlier. These also evoke a single statement in description,
"The Kinematic, Dynamic and Actuator constraints of the manipulator should not be exceeded, and obstacles should not be touched."
This is worded arbitrarily, because of the variations in different manipulators. If this statement is followed in essence, then the limits will be observed. Consideration of this statement will aid
when creating a Constraint function or algorithm for a robot path planner.
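The constraint statement amounts to a feasibility check applied at every point of a candidate path; a sketch, with limits that would come from the manipulator's specifications:

```python
def within_limits(joint_angles, joint_velocities,
                  angle_limits, velocity_limits):
    """Kinematic/actuator constraint check (sketch): every joint must
    be inside its angle range and below its speed limit.  The limit
    values would come from the manipulator's specifications."""
    for q, qd, (lo, hi), vmax in zip(joint_angles, joint_velocities,
                                     angle_limits, velocity_limits):
        if not (lo <= q <= hi) or abs(qd) > vmax:
            return False
    return True
```

Obstacle contact would be checked separately, by whichever collision detection scheme the planner uses.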
36.5.7 Results From Path Planners
The results from path planners tend to come in one of a number of simple forms. The actual numbers may represent joint or cartesian locations. These may be used in various ways to describe a path;
Splines, Functions, Locations, Trajectories, and Forces. This area is not of great importance because the problems have been researched, and it is easy to convert between the path planner results.
The only relevance of this area is that results are usually expressed in a form preferred by the path planning method.
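Conversion between result forms is straightforward; for example, a sparse list of via points becomes a dense trajectory by interpolation (linear here for brevity; splines would give smoother motion):

```python
def interpolate_via_points(via_points, steps_per_segment=4):
    """Convert a sparse via-point path into a dense trajectory by
    linear interpolation (sketch).  Points are (x, y) tuples."""
    path = []
    for (x1, y1), (x2, y2) in zip(via_points, via_points[1:]):
        for k in range(steps_per_segment):
            t = k / steps_per_segment
            path.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    path.append(via_points[-1])
    return path
```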
36.6 Implementation Evaluation Criteria
This section is the most difficult to handle. In some cases the methods are excellent, but the implementations are so poor that they fail to display the merits of a path planning method. Up to now researchers have only occasionally included run-times and machine details in their discussions, and some have not tested their methods at all. This variation is noticeable, thus some standardization is needed. In this section some simple means for comparison are suggested. A quick list of general implementation evaluation criteria is also in order,
minimum time for solution search
2D mobile robots or 3D linked manipulators
36.6.1 Computational Time
One thing that needs definition is the Typical Execution Time. The problem with this criterion is that computers, languages, and programs run at different speeds, and various problems will vary between implementations. Thus the author proposes a set of two scenarios in 2D and 3D workspaces, and suggests that Typical Execution Time be expressed in machine cycles (a good High Level Robotics Programming Language could save a lot of trouble at this point).
Figure 36.17 Figure 5.1 Path Planning Test Scenarios
These look like nicely packaged problems, but they call for a few exceptions. The classic "Piano Movers Problem" is a good example. The piano movers problem is perfectly suited to a mobile robot. If
the routine is used for mobile robots, then both the scenarios above should be used on the object to be moved (without the arm). In both of these problems the consideration of manipulator links is
important. In a 3D problem the path planner must consider all links of the robot as linked objects which are to be moved, without collision. The ideal minimum, to properly evaluate the methods on a
comparative basis, would need planning time, setup time, travel time, and a performance index (covering stress and strain in the robotic manipulator).
These tests have one bias, both are oriented towards representation with polygons. This was considered acceptable because most objects in the real world are constructed of flat surfaces, or can be
approximated with them (like Cylinders, Spheres, Toroids).
The ultimate path planning test could be a peg-through-hole problem. In this scenario, a box with a hole in it could be held at an odd angle in space. The peg could be picked up off the ground, and inserted into the hole. The manipulator could then force the peg through the hole as far as possible, then remove the peg from the other side and return it to its original position. This could also be approached as a single path, or as many as twelve separate paths.
4. Insert peg in hole and release.
6. Move arm to near other side of peg.
9. move peg to near original position.
10. place peg on ground and release.
12. Move arm to start position.
This evokes a number of different approaches. The most obvious is the use of both gross and fine motions. The second most obvious is a single path in which all of the tasks are located at via points
on the path. Another approach is to combine steps into more efficient tasks. This problem allows flexibility for all path planners.
Figure 36.18: Peg Through Hole Problem
36.6.2 Testing a Path Planner
Testing a new method is a very subjective problem. The points to consider are the type of method used, and its implementation. A few points by which to describe these methods are given below, but these facts are not always given in the path planning papers.
- Ability for the method to get trapped.
- Ability to find alternate paths.
- Degree of automation in the planning process.
36.7 Other Areas of Interest
Some areas do not fit into the standard criteria for evaluation. These are just as important, but have not been needed, or heavily researched, yet. This section is intended as a catch-all for the intangible, or so far informal, factors in Path Planning.
36.7.1 Errors
Errors are inherent in every sort of system. Robotics is no exception, but in robotics errors can be very costly and dangerous. Thus there is a definite need to deal with error trapping, error
reduction, error recovery, error simulation, and error prediction. All of these errors can arise in any part of the system, and in the domain of path planning they can have a drastic effect. An error
in world modeling can result in a faulty path that may be less than optimal, or at worst cause a collision. In a feedback system, the errors could indicate fictitious collisions, or empty space when
it is actually occupied. Thus all aspects of robotic path planning should eventually incorporate the ability to deal with unexpected events, and eliminate errors.
36.7.2 Resolution of Environment Representation
For some planning methods this is not applicable, but for other methods this becomes a critical factor. The real resolution of the environment is governed by accuracy and repeatability. The accuracy
is the variance between the commanded and actual manipulator position. Repeatability is the variation in position that occurs when a position in space is re-found by the manipulator. Both of these
factors are subject to possible errors which arise from sensor data. These errors can add a compounded error into the robot system, and thus they should be introduced into the path planning routines.
Neither of these factors is considered by many of the authors of the papers surveyed.
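The two quantities can be computed directly from logged positions. The sketch below is illustrative only: it takes accuracy as the mean Euclidean error between commanded and achieved positions, and repeatability as the RMS spread of repeated visits to a single target.

```python
import math

def accuracy(commanded, actual):
    """Mean Euclidean error between commanded and achieved positions."""
    errors = [math.dist(c, a) for c, a in zip(commanded, actual)]
    return sum(errors) / len(errors)

def repeatability(positions):
    """RMS distance from the mean achieved position over repeated visits."""
    dims = len(positions[0])
    n = len(positions)
    mean = tuple(sum(p[i] for p in positions) / n for i in range(dims))
    return math.sqrt(sum(math.dist(p, mean) ** 2 for p in positions) / n)
```

A manipulator can be repeatable without being accurate (a tight cluster far from the commanded point), which is exactly the distinction drawn above.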
The resolution will have the most profound impact when the environment is represented in an array, oct-tree, Quad-tree, integers, or via sensors. Thus good research into resolution for a robotic
system will allow auto scaling of Path Planning methods to be used, to match the environment, and provide realistic accuracies. One benefit of this approach would be running methods quickly at a poor
resolution, for fast solutions, and at a high resolution for accurate solutions, in tight corners.
It was suggested by J.H.Graham [1984] that probabilistic methods for path planning help by allowing looser tolerances of parts involved, and allowing motions not normally considered.
To help overcome the resolution problem there are a couple of tricks. When modeling in three dimensions the surfaces may be multifaceted, and hard to calculate quickly. This aspect of calculation may
be sped up by using simple approximating surfaces not near the start or stop point, where low resolution is acceptable. It is also possible to use gross motion approximations when traveling through
this space. Finally, if the path planner uses collision avoidance, errors in accuracy will be insignificant, because the clearance maintained around objects may be greater than the resolution.
36.8 Comparisons
Now that the various methods and techniques have been introduced it is possible to discuss them in a general sense. A chart format has been chosen to present some of the known information. The
information may be spotty in some cases where researches have not provided complete data. Also, because a chart is a rigid structure to present information of this sort, a small section is included
with comments. This table should only be used as an overview, it is not complete, and it is very general.
Figure 36.19: A Comparative Chart of Path Planning Methods (a)
Figure 36.20: A Comparative Chart of Path Planning Methods (b)
36.9 Conclusions
This is the most interesting section, but unfortunately, very vague. In the next few years powerful computer hardware advances will allow low cost, but extremely powerful, operations. This will make many memory intensive path planning methods feasible, and very complex operations will become reasonable to calculate. If computers become powerful enough, then an expert path planning system would be a definite asset, making robots flexible over a number of different frontiers, without any programming. The advance of customized computer hardware, which now does graphics, indicates that it is very feasible to convert complex robotic path planning from a software to a hardware domain. The potential of parallel processing also opens doors to solving very complex and involved problems.
A good direction for robotics research is a path planning system which will combine the various problems and solutions. This will allow the robot to switch modes with an expert system, and thus choose the solution to fit the problem, rather than making the problem fit the solution, as is commonly done now. This would involve the use of rules and heuristics (based on the elements discussed in this paper) to build up a very complex and complete path planner from very simple methods (which are currently being researched). The multi-level path planning strategies are also seen to hold promise for robotic development (the author prefers the Dynamics Path Planning approach). These approaches allow the best of all planning methods.
36.10 Appendix A - Optimization Techniques
Optimization involves modeling a system as a function of some variables, then deriving some cost, or measurement of performance from it. The variables are then adjusted to get the best measure of
performance out. These methods give the best results, but they can be tricky to set up, and they can be very slow.
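As a minimal sketch of the general idea, the code below scores a 2D path by its length plus a penalty for waypoints that intrude on a disc obstacle, and improves it by crude coordinate descent. The cost function, step size, and disc obstacle are all illustrative assumptions; note the penalty is evaluated only at waypoints, so a real planner would also have to check the segments between them.

```python
import math

def path_cost(path, obstacle, radius, penalty=10.0):
    """Path length plus a penalty for waypoints intruding on a disc obstacle."""
    length = sum(math.dist(path[i], path[i + 1]) for i in range(len(path) - 1))
    intrusion = sum(max(0.0, radius - math.dist(p, obstacle)) for p in path)
    return length + penalty * intrusion

def improve_path(path, obstacle, radius, step=0.05, iters=100):
    """Crude coordinate descent: nudge interior waypoints while the cost drops."""
    path = [list(p) for p in path]
    best = path_cost(path, obstacle, radius)
    for _ in range(iters):
        improved = False
        for i in range(1, len(path) - 1):          # endpoints stay fixed
            for axis in (0, 1):
                for delta in (step, -step):
                    old = path[i][axis]
                    path[i][axis] = old + delta
                    cost = path_cost(path, obstacle, radius)
                    if cost < best - 1e-12:
                        best, improved = cost, True
                    else:
                        path[i][axis] = old    # move did not help; undo it
        if not improved:
            break
    return [tuple(p) for p in path], best
```

The heavy penalty weight reflects the trade-off noted above: a good measure of performance must punish collisions far more than extra path length.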
36.10.1 Optimization - Velocity
By optimizing the manipulator velocity the path time can be reduced, and the robot used to its full ability. One approach was made by S.Dubowsky, M.A. Norris and Z.Shiller [1986]. The thrust of their
research was production of a Path Planning CAD System called OPTARM II. Their program will produce a path which is made up of smooth splines. The optimization is done to keep either the maximum or
minimum torque on the robot actuators. Their success was in producing paths within 5% of the optimal value, based on collision avoidance and motion limits on the joints. For a six degree of freedom
manipulator the program produces smooth motion on the microVAX in a few minutes of CPU time, which will give optimal paths up to twice as fast as the constant velocity method for path motion.
A different approach to the velocity optimization problem was taken by B.Faverjon and P.Tournassoud [1987]. A fast method was found for finding distances of separation between objects in space. The
world was modeled as geometrical primitives, and the primitives could be used to represent the world to various depths of complexity.
Figure 36.21: Various Levels of Geometrical Representation
Using the distance of separation in a function for a velocity damper, the collisions were incorporated as a constraint. When the cost function was optimized for velocity the tendency was to avoid
obstacles where velocity was damped. This method was also made to work with moving objects.
This method was intended for use with a 10 link nuclear manipulator, and it has been implemented on a SUN 3 computer. The actual run time was not given, but the routine ran at a tenth of the speed
possible with the manipulator. The manipulator was said to have approached objects to within 1 cm.
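The damping idea can be sketched as a speed limit that falls off as the measured separation distance shrinks. The linear profile and parameter names below are illustrative assumptions, not the authors' exact formulation:

```python
def damped_speed(v_max, d, d_influence, d_security, gain=1.0):
    """Limit approach speed as the separation distance d shrinks.

    Above d_influence the robot moves at full speed; between the
    influence and security distances the allowed speed falls linearly
    to zero, so the manipulator slows smoothly near obstacles.
    """
    if d >= d_influence:
        return v_max
    if d <= d_security:
        return 0.0
    return min(v_max, gain * v_max * (d - d_security) / (d_influence - d_security))
```

Because the constraint only activates near obstacles, an optimizer maximizing velocity under it naturally steers toward wider clearances, which is the behaviour described above.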
36.10.2 Optimization: Geometrical
A different approach based on the calculus of variations has been developed by R.O.Buchal and D.B.Cherchas [1989]. The techniques use convex polygons to represent objects, and an iterative technique
to find the best path which avoids these polygons. The path is subdivided into a number of smaller intervals, linking a set of pre specified via points. Convex hulls are used to represent the volume
swept out by the motion of the moving object. The penetration depth, and penetration vector, are used to correct the path for each of the individual path segments.
Figure 36.22: Path Segment Interference
The penalty function is formulated so as to consider the limits on the joints and actuators. The optimization routine is used to correct the path segments, and this procedure takes about 1 minute on a
VAX 11/750 for a 2D model, with manipulator considered, without manipulator collisions. The main objective of this routine is to minimize time. This method needs a set of path points specified for
good solutions, and this is a potential area of research.
36.10.3 Optimization - Path Refinement
Some methods provide very rough paths containing jerky motions. To smooth the jerks in paths (as a post processor), a method was suggested by S.Singh and M.C.Leu [1987]. Using the technique of
Dynamic Programming, time and energy for a given path are minimized. This method does not account for collisions, but is intended to just be run Off-line. A control strategy is suggested for the
manipulator to use on the resulting path. Even though the performance of this method is not given, it looks like a good addition to any a priori programming strategy.
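The dynamic programming technique referred to here can be sketched in a few lines: given candidate states per stage and a transition cost (standing in for time, energy, or jerk), the cheapest sequence is built stage by stage. The quadratic cost in the test is only an illustrative stand-in for a real time/energy measure:

```python
def dp_min_cost(stages, cost):
    """Stage-wise dynamic programming.

    stages is a list of candidate states per step; cost(a, b) scores a
    transition. Returns (total_cost, sequence) picking one state per
    stage so that the summed transition cost is minimal."""
    best = {p: (0.0, [p]) for p in stages[0]}
    for layer in stages[1:]:
        nxt = {}
        for q in layer:
            # cheapest way to reach q from any state in the previous layer
            c, path = min(((best[p][0] + cost(p, q), best[p][1]) for p in best),
                          key=lambda t: t[0])
            nxt[q] = (c, path + [q])
        best = nxt
    return min(best.values(), key=lambda t: t[0])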
36.10.4 Optimization - Moving Obstacles
Every environment has something moving. To consider this problem B.H.Lee and Y.P.Chien [1987] suggest a method for off-line optimization of this problem. Unfortunately this method was not
implemented, thus the success of the technique is unknown. The first constraint is a maximum time for motion, this is to force movement before iminent collision and also to force speed in the path. A
constraint is assigned to the priority of the obstacle, which usually has first priority on the path. The constraints of start and stop points are also used. Collision constraints are also used, with
every object represented as a sphere. The Cost function considers smoothness of the path, and the torque of the actuators, to ensure that the robot is not overstretched.
36.10.5 Optimization - Sensor Based
An interesting approach to path planning was suggested by B.Espiau and R.Boulic [1986]. In the attempt to find a path planning method for a 10 degree of freedom manipulator, they came up with an
unusual approach to optimization. With proximity or contact sensors mounted on every link of the manipulator, the path was navigated. To do this the sensor data would be used alter the penalty
functions to represent detected obstacles encountered by the manipulator. The cost function was based on velocity and dynamic functions. This gave a dynamic approach to finding paths in which the
path was found by the local optimization of trajectories. For this particular method a large number of sensors are required, and unfortunately the authors did not provide statistics about the
performance of the method.
36.10.6 Optimization - Energy
In a method suggested by M.Vukobratovic and M.Kircanski [1982] the energy of the system was proposed as an excellent method for path planning. Using the techniques of optimization for energy the best
path is found. This is done by calculating the dynamics at certain points in space, and then using the dynamic programming technique (of operations research) to find the best path. This technique
runs a few minutes on a PDP11/70. The end effector is ignored in this method as negligible. This method is only intended to smooth paths, so that the stresses on the load and manipulator are
decreased. The information provided on this method was not very complete.
36.11 Appendix B - Spatial Planning
Spatial Planning is best described as making maps of space, then using the direct relationships between those objects in space to find paths. These methods cover a variety of techniques, but
essentially their primary function is to determine the spatial relations between the object and the obstacle and avoid collisions. These techniques in general have not produced the best paths, but
they produce paths quickly. These methods are also best used with 2D problems.
36.11.1 Spatial Planning - Swept Volume
Lozano-Perez and Wesley (1979) discuss a generate and test strategy used for path planning. The technique will begin with a straight line from start to goal, regardless of collisions. Then a
repetitive cycle of analyzing a path (by detecting collisions on the path with Swept Volumes) and then using the information to generate an improved path.
This method may be more formally described with a set of steps. The first step is to check the validity of the proposed path. The path validity is found by considering volume swept out as the object
moves along the path. If a collision is not detected the method will be halted. In the case of collision, the information about the collision will be used for correction of path. This information
used may include details about the collision, like shape of intersections of volumes, the object causing collision, depth of penetration, and the nearest free point.
The difficulties of this solution become obvious when some of the intricacies of the problem are considered. Models of complex surfaces can contain a very large number of simple surfaces. Calculating
the intersections of these numerous simple surfaces can be a very difficult task. A second problem is how we may determine a global optimum when only local information about obstacles at collisions
is made available. With the local information about collisions being used in path correction, radical different options are ignored. These two problems could result in an expensive search of the
space of possible paths with a very large upper bound on the worst case path length.
Figure 36.23 Figure B.1 Swept Volume Path Planning
36.11.2 Spatial Planning - Optimization
Lozano-Perez and Wesley [1979] describe their work (see Cartesian Configuration Space) as being an improvement over the Optimization Technique of Spatial Planning. The basic concept they describe, is
to explicitly compute the collision constraints on the position of a moving object relative to other obstacles. The trajectory picked is the shortest path which will satisfy all of the path
With objects modeled as convex polyhedra, vertices of the moving object may move between the obstacle surface planes and collide. This condition is easy to detect, because if a vertices is outside
any plane of an obstacle, there is no collision. One possible simplification is to use a circle to represent the objects geometry, and just maintain a radial clearance from all objects. It should
also be noted that the circular geometry is not sensitive to rotation. This was the path planning technique used in a mobile vehicular robot called SHAKEY by N.J.Nilsson [1969].
Figure 36.24 Figure B.2 Spatial Planning : Optimization
36.11.3 Spatial Planning - Generalized Cones
Generalized cones [R.A.Brooks, 1983] are a faster approach than the Cartesian Configuration Space method . These cones are achieved through a special representation of the environment. The surfaces
of convex polygons are used to determine conically bounded pathways, for the path of the object being moved. The method of determining free pathways (or Freeways) is based on the use of cone shaped
spaces. The cones fit snugly between objects and have spines that run along the center of the cones. These spines are the paths that the object may travel along. This makes the method inherently 2D
and thus has not been implemented in 3D as of yet, but it has spawned a method which is successful in 3D by Brooks[1983]. To determine which spines to follow, the author uses the A* search technique
to explore the various paths along the spines. This leads to problems in cluttered spaces where certain possible paths may be overlooked by the generalized cones. This method chooses a path with
considerable clearance of objects.
Figure 36.25 Figure B.3 Problem Represented with Generalized Cones
Figure 36.26 Figure B.4 Problem Represented with Generalized Cones
As can be seen the rotation with this technique is very restricted, and the object is typically oriented with the spine.
36.11.4 Spatial Planning - Freeways
A follow up to R.A.Brooks [1983] research into the use of Generalized cones for the representation of Free Space, R.A. Brooks [1983] developed a method of path planning for manipulators with 5 or 6
d.o.f. motion. This method is able to solve the pick and place problem in under a minute on an MIT Lisp machine, by approximating the robot as a 4 d.o.f. manipulator. His method is based on the
assumption that the world is represented as supported, and suspended, prisms. This method is suggested as a possible precursor for the use of video information about the workcell, from a high level
vision system. This has two points of interest; the objects which are suspended from the side will be grossly misrepresented, but this form of encription suits video cameras well. The assumption that
the manipulator may be treated as a set of 4 d.o.f. is based on the limitation of the problem to only pick and place operations and insertion (or fitting) operations. This unfortunately means that
the objects may only be rotated about the vertical axis when in motion.
The use of Freeways between obstacles allows a choice between alternate paths.
Figure 36.27 Figure B.5 Freeway Between Blocks
The freeways are basis for maps of joint configurations which are acceptable for motion through these freeways. The methods then find the path using link constraints. This method is described in
algorithm form, and the algorithms are quite substantial.
36.11.5 Spatial Planning - Oct-Tree
In a paper by T.Soetadji [1986] a method for 3D path planning by a mobile robot is suggested. The method is based upon use of an Oct-tree representation of 3D space. In the paper, the development of
the Oct-Tree routines is discussed, and a robotic system for implementation is suggested. Once the Oct-tree is set up, an A* or a Breadth First search is used to find the best path for the mobile
robot. This method finds the minimum distance (with collision avoidance) on a VAX 750. The search time for the path is on the order of 1 second. The tree structure also proves very efficient, because
it had only occupied 1.7 MBytes of memory for a very complicated environment.
36.11.6 Spatial Planning - Voronoi Diagrams
A newer approach to representing space has been done with Voronoi Diagrams. O.Takahashi and R.J.Schilling [1989] have suggested such a method. Using an environment of polygons, the pathways may be
represented with Voronoi diagrams, then represented in a graph.
Figure 36.28 Figure B.6 Voronoi Diagram of Simple Work Space
After the Voronoi diagram has been set up in graph form, a path may be found. This is simplified by the use of some heuristic rules for wide paths, tight bends, narrow gaps, and reversing, which
identify a number of orientations. This procedure produces short smooth paths (which avoid obstacles) for 2D objects on an IBM compatible computer with no co-processor in 10 seconds to 1 minute. This
method has potential for use with vision systems. Algorithms suggested by A.C.Meng [1988] allow for fast udate of Voronoi diagrams, in a changing environment. This makes the operation much faster, by
avoiding the complete reconstruction of the diagram, and makes real time trajectory correction feasible.
36.11.7 Spatial Planning - General Interest
E.G.Gilbert and D.W.Johnson [1985] created an optimization approach to path planning (with the piano movers problem), with rotations and collision avoidance. This method runs 10 to 20 minutes on a
Harris-800 computer. This method was based on work which was later published by E.G.Gilbert, D.W.Johnson, and S.S.Keerthi [1987] which provides algorithms for calculating distances between convex
(and concave) hulls in a very short time (order of milliseconds).
36.11.8 Spatial Planning - Vgraphs
A method proposed by K.Kant and S.Zucker [1988] involves collision avoidance of rectangles based upon the search of a VGRAPH. This method also suggests the use of a Low Level controller Collision
Avoidance Module. This controller would use low level information about the environment to behave as if experiencing forces from workspace obstacles. The manipulator would be experiencing a pull to
the goal state. This method produces a Kinematic and Dynamic, Time Optimal path. None of the implementation details were given in the paper.
Figure 36.29 Figure B.7 : Visibility of Corners of Rectangles (VGRAPH)
36.12 Appendix C - Transformed Space
Some methods find it beneficial to transform the representation of space, so that it simplifies the search for the path. These methods may often be based on Spatial Planning, or any other technique,
but they all remap space. These methods are usually very inflexible after mapping, and all environmental changes requires a remapping.
36.12.1 Transformed Space - Cartesian Configuration Space
The Configuration Space method was popularized by Lozano-Perez and Wesley (1979), and was clarified later by Lozano-Perez (1983). The technique of Lozano-Perez and Wesley has become a very popular
topic in path planning and is worth an exhaustive discussion. The method provides a simplified technique which will allow movement of one object through a scattered set of obstacles. This is done by
simplifying the problem to moving a point through a set of obstacles. The major assumption of this technique is that the objects do not rotate and obstacles are all fixed convex polygons. The
obstacles must be fixed to limit the complexity of the solution.
Figure 36.30 Figure C.1 Configuration Space (Find Space & Find Path)
The basic concept involves shrinking the moving object to a single reference point. This is done by growing the obstacles so that their new area covers all of the space in which the object would
collide with the obstacles. After the determination of configuration space the problem is then reduced to moving a point through an array of obstacles (i.e.. through the free space).
Unfortunately, a set of obstacles which have been grown are good only for an object which has no rotation throughout its path. This problem is not insurmountable, and may be over come by creating a
special representation of the moving object which identifies free space for the object to rotate in. This object will be a convex hull shaped to cover the area swept out when the block rotates about
the reference vertex. The convex hull is used to grow the obstacles in configuration space, to find a possible rotation point. Then the path is planned by, finding a path to the rotation point in the
first orientation, and a path from the rotation to the goal in the second orientation. The path may also be broken up into more than one rotation, as need demands. Each of these configuration space
maps at different orientations are called slices. As seen in the Figure A.4, this has the potential of eliminating some potential paths, or complicating the problem.
Figure 36.31 Figure C.2 Object Rotation in Config Space
The algorithmic technique which Lozano-Perez suggests is a two step solution to the ’Spatial Planning Problem’. His solution covers two main algorithms, Findspace and Findpath. He compares the
Findspace algorithm to finding a space in a car trunk for a suitcase. This is the part in which the obstacles are grown to two or three dimensions, rotations are accounted for, and degrees of freedom
are considered. At this point is should be considered that the tendency is to work in cartesian coordinates, but use of other representations could simplifiy the object expansion, and the conversion
to a VGRAPH. The Findpath problem is comparable to determining how to get the suitcase into that empty space. The Findpath algorithm determines the best path by finding a set of connecting free
points in the Configuration Space. To do this a simple three step process has been used. The objects are first grown, and then a set of vertices for each is determined. Next the vertices are check
for visibility (ie. can they be seen from the current point?) and a VGRAPH is constructed, and finally a search of the VGRAPH yields a solution. This is a good example of the problem solving
techniques of artificial intelligence.
It is very obvious that the technique is not very advanced by itself. There has been some expansion on advanced topics, and these will be discussed below. Another consideration is the addition of a
three dimensional setting. This becomes a much more complex problem because of the need to deal with deriving the hulls, locating intersections, finding enveloped points in space. When we expand to
three dimensional space the algorithm is still attempting to navigate by corner vertices. In three dimensional trajectories the best paths are usually around edges of objects. A trick must be used to
make the search graphs recognize the edges. The trick used is each edge is subdivided from a line with two vertices into a number of smaller lines (with a maximum length limit) and a greater number
of vertices. This obviously does not ease the calculation time problem in three dimensions. Another trick is to use care points, located either inside or outside the objects, and then use these as
points which require precise calculation nearby, and allow crude calculation when distant.
We must also consider the complexity introduced when a multi link manipulator is to be moved. Our object now becomes a non-convex hull, or a set of convex hulls, possibly represented as the previous
slices represented rotations.
An alternate development of the configuration space method was done by R.H.Davis and M.Camacho [1984], which implements the Lozano-Perez methods in Prolog under UNIX. They basically have formulated
the problem using the A* search, with a deeper development of the Path Finding Heuristics. Another variation on the Configuration Space Method was that of R.A.Brooks and Lozano-Perez [1985].
Configuration space was sliced and then broken into cells which could be full, half full or empty. From this representation a connectivity graph would be produced, and the A* search would be used to
find a path, This technique required alterations to both the FindPath and FindSpace routines. This technique does allow rotations ( but the run time is in the order of tens of minutes on an MIT LISP
An approach for using Cartesian Configuration Space on a Stanford Manipulator (1 Prismatic Joint) was proposed by J.Y.S.Luh and C.E.Campbell [1984]. This method had to consider both the front and the
rear travelway for the sliding prismatic link. To do this the manipulator was shrunk to a point, and the rear obstacles translated to the ’front workspace’. The method also makes use of an algorithm
for converting non-polygonal shapes to good approximation convex polygons. There were no statistics given with this method.
Configuration Space was used by J.Laumond [1986] to do planning for a two wheeled mobile robot. This method was successful in using configuration space to find paths from some very difficult
problems. The best example given of the success of this technique is the Parking Spot problem. In this case the wheeled robot is very similar to an automobile which is stuck between two other parked
cars. To get out the robot had to move in both forward and reverse directions to maneuver out of the ’tight spot’.
As for evaluating the overall success of this technique, the computational limits are the major concern. Lozano-Perez, claims success in a laboratory setting with his configuration space path
planning routines on a seven degree of freedom manipulator. But he does not note his execution speed, nor does he describe his results quantitatively. This technique was implemented previously by
Udupa (1977) for computer control of manipulators. His experiment used a robot with three approximated degree of freedom, in three dimensions. Because of the three degree of freedom limit, the robot
was limited in cluttered environments, and had to depend upon some heuristics. His technique also made use of a variable description of space, using recursive addition of straight paths. The
recursive determination of paths has some of the same drawbacks that the swept volume method has, and this is what Lozano-Perez and Wesley overcame with their graph search method. This technique is
very useful for planning 2 dimensional paths, based on the easy and speed of calculation. There is also no doubt that this technique could be made to work with a more complex robot path planning
problem, but the computational speed would be a major factor. Lozano-Perez and Wesley (1979) implemented their algorithms on an IBM370/168 in PL/1. This technique does not handle rotations well, and
the result may be a non-optimal solution, or possibly even no solution. The algorithm will provide the shortest distance path, with collision avoidance, but it may not produce continuous paths.
Lozano-Perez (1983) also discusses his algorithms in depth which means that his methodology is easy to incorporate in other projects.
36.12.1.1 - Transformed Space - A Cartesian Configuration Space Application
A use of the Configuration Space method was made by C.Tseng, C.Crane and J.Duffy [1988] for a good solution to the pick and place problem. The objects as 2.5D in the environment by represention as
upward extruded polygons. First the objects are grown to compensate for the cross-section of the manipulator. The manipulator arm is then ’lifted up’ to ensure clearance of all the objects in its
path. This method may fail in the presence of very tall objects. The method was implemented on a VAX 750 and could find paths in cluttered workcells in under 2 seconds.
36.12.2 Transformed Space - Joint Configuration Space
The Cartesian Configuration Space Method uses a check for which particular points in space are free, and then chooses a free path through. This is not very useful when expanding to a multilink
manipulator. Thus an approach has been formulated to determine which points in joint space are collision free. This method was formulated by W.E. Red and Hung-Viet Truong-Cao [1985]. This method was
applied to manipulators with two revolute joints, and to a robot with one revolute and one prismatic joint. This method works best with two joints, and expansion to three joints requires more
computation time. Thus this solution is ideal when the robot is operating in a 2D planar configuration. The effect is a setup time of 2 to 5 minutes, and then 15 seconds for any solution after the
initial setup.
Figure 36.32 Figure C.3 Joint Configuration Space for two Revolute Joints
This technique has some definite advantages in the speed of solution. Resolution errors occur due to the resolution of the configuration space map. This table can only be used for the motion of two
joints; adding a third joint increases the complexity exponentially, but it is still ideal for some batch processing applications.
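The core of this method — marking which joint-space cells are collision-free — can be sketched for a hypothetical planar two-revolute-joint arm and a single circular obstacle. The link lengths, sampling density, and obstacle geometry below are illustrative assumptions, not Red and Truong-Cao's implementation:

```python
import math

def link_points(theta1, theta2, l1=1.0, l2=1.0, n=10):
    """Sample points along both links of a planar two-revolute-joint arm."""
    pts = []
    for i in range(n + 1):
        f = i / n
        pts.append((f * l1 * math.cos(theta1), f * l1 * math.sin(theta1)))
    ex, ey = l1 * math.cos(theta1), l1 * math.sin(theta1)  # elbow position
    for i in range(n + 1):
        f = i / n
        pts.append((ex + f * l2 * math.cos(theta1 + theta2),
                    ey + f * l2 * math.sin(theta1 + theta2)))
    return pts

def joint_config_map(obstacle, res=36):
    """Boolean map over (theta1, theta2): True where the arm hits the
    circular obstacle (cx, cy, r). Each cell is one joint configuration."""
    cx, cy, r = obstacle
    cmap = [[False] * res for _ in range(res)]
    for i in range(res):
        for j in range(res):
            t1 = 2 * math.pi * i / res
            t2 = 2 * math.pi * j / res
            for (x, y) in link_points(t1, t2):
                if (x - cx) ** 2 + (y - cy) ** 2 <= r * r:
                    cmap[i][j] = True
                    break
    return cmap
```

Once the map is built, any graph search over the free cells yields a collision-free joint trajectory; the expensive part, as the text notes, is the setup of the map itself.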
36.12.3 Transformed Space - Oct-Trees
An interesting method created by B.Faverjon [1986] is to constructively model solid objects (via a custom CAD program) and then generate an Oct-tree representation of joint space from these. The A*
search is used to find trajectories in the Oct-tree. This method works in cluttered environments for pick and place operations. The method was solved on a Perkin-Elmer mini-computer in under a
36.12.4 Transformed Space - Constraint Space
K.L.Muck [1988] tried a different sort of mapping technique. Space is represented in an Oct-tree, with the Oct-tree representing robotic motion constraints in the environment. A connectivity graph is
then generated from the Oct-tree and the A* search is used. The main thrust of this routine is to reduce the problem to solving the specific motion constraints which apply to the current condition.
This technique was implemented for a single link manipulator in a convex hull environment.
36.12.5 Transformed Space - Vision Based
Some of the potential of Spatial Planning is exposed in some of the current research. To allow the development of vision for use in the field of path planning, E.K. Wong and K.S. Fu [1986] have done
some research. This research allows a path planning method to be run with three views of a work cell, and from these three views deduce the maximum filled volume. Once the information from the vision
system has been interpreted to provide the basic world model, then the objects may be grown into configuration space for an arbitrary moving object. Each of the three views of the object is then
examined to determine the free path. This premise is based on the idea that if a clear path is visible in one view, then it is a clear path in three-dimensional space. This
technique uses oct-trees for the representation of space, thus the technique may be very efficient (depending upon the resolution of the oct-tree). This method was implemented on a VAX 11/780 to find
a path for an obstacle in three space in 1 to 20 seconds (depending upon the oct-tree search depth). This had not been mated to a vision system in the cited paper.
36.12.6 Transformed Space - General Interest
E.Palma-Villalon and P.Dauchez [1988] came up with a method to do fast path planning for a mobile robot. Rectangles are used to represent obstacles, and the moving robot is represented with a circle.
The obstacles are grown by the radius of the circle (into configuration space). A map with a coarse resolution is created to indicate which objects are present in each grid box.
Figure 36.33 Figure C.4 Grid Representation of Space
This map only indicates an object’s presence; the remainder of the information is kept in a concurrent list. By finding and using a series of holes and walls within the grid, an A* search is applied to
find the best path. The cost function of the search is based on the path length and the number of turns made. The performance of this method is not stated, and thus no basis for comparison is
available. This method provides straight line path segments. The mapping to configuration space could be convenient for the first pass of a path planner for general path finding.
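A minimal sketch of this style of planner — growing obstacles by the robot's radius and running A* over the resulting grid — might look as follows. The grid representation, 4-connectivity, and pure path-length cost are simplifying assumptions; Palma-Villalon and Dauchez's cost function also penalises turns:

```python
import heapq

def grow_rectangle(rect, radius):
    """Grow a rectangle (xmin, ymin, xmax, ymax) by the robot's radius
    (a conservative square approximation of the Minkowski sum)."""
    xmin, ymin, xmax, ymax = rect
    return (xmin - radius, ymin - radius, xmax + radius, ymax + radius)

def a_star(grid, start, goal):
    """A* over a 4-connected occupancy grid; True cells are blocked.
    Cost = path length; heuristic = Manhattan distance (admissible here)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, None)]
    came, best = {}, {start: 0}
    while open_set:
        _, g, node, parent = heapq.heappop(open_set)
        if node in came:
            continue                      # already expanded via a cheaper route
        came[node] = parent
        if node == goal:
            path = []
            while node is not None:       # walk parent links back to the start
                path.append(node)
                node = came[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                ng = g + 1
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), node))
    return None                           # goal unreachable
```

With unit step costs and an admissible heuristic, the returned path is shortest in the grid metric, matching the straight-line-segment behaviour the text describes.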
36.13 Appendix D - Field Methods
In an attempt to model ’natural flow’, there have been attempts to use gradients and potentials to choose natural path choices. These methods use the equivalent of field representations to provide a
direction of travel that is ’down hill’. These methods have long setup times, and they can get caught in local minima. These techniques will benefit the most as more powerful computer hardware becomes available.
36.13.1 Spatial Planning - Steepest Descent
In an attempt to find a fast path planning method, the steepest descent method was proposed by P.W.Verbeek, L.Dorst, B.J.H. Verwer, and F.C.A. Groen [1987]. Their method is an array-based method
which takes the workspace represented as polygons. This array is a two-dimensional representation of space in which the goal position is represented with a zero value, and the objects are represented
with a value of infinity. Each element of the array is considered in a series of iterations over the array. The result of these iterations is a map of a ’height’ with respect to the goal state. The
method then just follows the steepest gradient, down to the goal state.
Figure 36.34 Figure D.1 Steepest Descent Representation and Path
This method has been implemented for a 2D multilink manipulator, but it uses 4 Mbytes of memory for a workspace resolution of 32 by 32 by 32. On a 12 MHz, 68000 VME computer system the whole process
takes about 12 seconds. This is broken up into 6 seconds for detection and labeling of forbidden states, 6 seconds for generation of the distance landscape, and less than a second for finding the shortest
path by steepest descent.
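The two stages just described — generating the distance landscape and following the steepest gradient — can be sketched with a breadth-first wavefront on a 2D grid. This is a simplification of Verbeek et al.'s constrained distance transform: it assumes unit cell costs and a start cell connected to the goal:

```python
from collections import deque

def distance_landscape(grid, goal):
    """Breadth-first 'wavefront' distance-to-goal map; blocked cells
    (True in grid) keep a value of infinity."""
    rows, cols = len(grid), len(grid[0])
    INF = float("inf")
    dist = [[INF] * cols for _ in range(rows)]
    dist[goal[0]][goal[1]] = 0
    q = deque([goal])
    while q:
        r, c = q.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc] \
                    and dist[nr][nc] == INF:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist

def steepest_descent(dist, start):
    """Follow the steepest downhill gradient of the distance map to the
    goal. Assumes the start is connected to the goal (finite distance)."""
    path = [start]
    r, c = start
    while dist[r][c] > 0:
        nbrs = [(nr, nc) for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1))
                if 0 <= nr < len(dist) and 0 <= nc < len(dist[0])]
        r, c = min(nbrs, key=lambda p: dist[p[0]][p[1]])  # strictly downhill
        path.append((r, c))
    return path
```

Because the landscape strictly decreases toward the goal along free cells, the descent step cannot get trapped, which is the chief attraction of this family of methods.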
36.13.2 Spatial Planning - Potential Field Method
When considering that the basic thrust of the path planning methods is to avoid obstacles, it is easy to compare this avoidance to repulsion. The basic concept of repulsion is based on potential
fields, and thus the potential field method of W.T.Park [1984]. In particular, the method was directed towards a workcell with multiple manipulators. In this setting there is a problem with potential
manipulator collisions, which must be considered when path planning. To do this a planar view of the work cell is created, and the arms are given a potential. The potential repulsion of the
manipulators is mapped for a number of various joint and slider positions. In the work of Park, a manipulator with two revolute joints, and a Stanford manipulator is used. Both of the manipulators
have two degrees of freedom, thus a number of two dimensional arrays are necessary to fully describe the work space. This memory requirement is very large, and thus is one of the drawbacks of this
technique. Even with the use of compression techniques, the arrays still consume over 100 KBytes each. In the experimental implementation, the computer used was a VAX 750. The problem was
constrained to 4 d.o.f., which used 2 MBytes of memory, and took 8 hours of computation time, to calculate about a million combinations. This method may also get caught in cul-de-sacs. The bottom
line on this method is that it is highly oriented to batch and workcell operations, but its computational cost is too staggering to consider for use in a practical system.
In a more complex vein, Y.K.Hwang and N.Ahuja [1988] have developed a method using polygons and polyhedra to represent objects, from which a potential field is generated. First general topological
paths are found from valleys of potential minimums. The best of these paths is selected, and three FindPath algorithms are used to refine it, until it is acceptable. The first algorithm moves along
the path and reduces potential numerically. Second, a tight fit algorithm is used for small pathways. Lastly, an algorithm which will move off the given path if necessary is used, as a last resort.
This method has been implemented to run on a SUN 260 for a ’Piano Movers Problem’. The total runtime is in the tens of minutes, and it does fail in certain cases.
To avoid becoming trapped when using this method, another approach to mapping the potential field was developed by P.Khosla and R.Volpe [1988]. Their alternate approach avoids
the local minima found in traditional potential field methods. To do this they have used superquadratic approximations of the potential fields, which drop off swiftly away from obstacles. The superquadratic is
used to have a gradual slope to the goal, thus to make sure that its effect is more wide spread than the obstacles. The results which they have obtained are not described, but they have written a
program which will work with a two link manipulator, ignoring link collisions.
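For illustration, the generic attractive/repulsive formulation underlying these methods can be sketched as follows. This is the classic artificial-potential form, not Park's tabulated fields or Khosla and Volpe's superquadratics; the gains, influence distance, and step size are arbitrary, and the local-minimum problem discussed above still applies:

```python
import math

def potential(p, goal, obstacles, k_att=1.0, k_rep=1.0, d0=1.0):
    """Attractive quadratic well at the goal plus repulsive terms for
    circular obstacles (ox, oy, r) within influence distance d0."""
    u = 0.5 * k_att * ((p[0] - goal[0]) ** 2 + (p[1] - goal[1]) ** 2)
    for (ox, oy, r) in obstacles:
        d = max(math.hypot(p[0] - ox, p[1] - oy) - r, 1e-6)
        if d < d0:
            u += 0.5 * k_rep * (1.0 / d - 1.0 / d0) ** 2
    return u

def descend(start, goal, obstacles, step=0.05, iters=2000):
    """Numerical gradient descent on the potential. Returns the final
    point and whether the goal was reached; may stall in local minima."""
    p = list(start)
    eps = 1e-4
    for _ in range(iters):
        # Central-difference estimate of the gradient.
        gx = (potential((p[0] + eps, p[1]), goal, obstacles) -
              potential((p[0] - eps, p[1]), goal, obstacles)) / (2 * eps)
        gy = (potential((p[0], p[1] + eps), goal, obstacles) -
              potential((p[0], p[1] - eps), goal, obstacles)) / (2 * eps)
        n = math.hypot(gx, gy)
        if n < 1e-9:
            break                         # flat spot: goal or local minimum
        p[0] -= step * gx / n
        p[1] -= step * gy / n
        if math.hypot(p[0] - goal[0], p[1] - goal[1]) < step:
            return p, True
    return p, False
```

The failure mode is visible directly in the code: when repulsive and attractive gradients cancel, n goes to zero away from the goal and the descent stalls — exactly the cul-de-sac behaviour the surveyed papers try to engineer away.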
36.14 Appendix E - New and Advanced Topics
There are some methods of path planning which cannot be easily classified into the four preceding divisions. These are listed here, and they are divided into advanced topics, which are ahead of their
time, and new topics which are still awaiting implementation.
36.14.1 Advanced Topics - Dual Manipulator Cooperation
Dual manipulator cooperation is an interesting concept that allows maximum use of the manipulators. When a payload is encountered that is too heavy, or bulky, for a single manipulator, a second
manipulator may be used in unison with the first. I.H.Suh and K.G.Shin [1989] have developed some of the theoretical requirements for two manipulators to carry a single large object. This is done
through assigning one manipulator as the leader, and the second as the follower. To overcome the problem caused by the limited degrees of freedom, their method allows one manipulator to ’slide’ with
respect to the other during the carrying process. If two 6-degree-of-freedom manipulators were to try to carry a piece together rigidly, they would encounter kinematic limits. By allowing one manipulator to slide, it is
equivalent to adding 1 degree of freedom to the system.
A paper on optimal arm configurations is given by S.Lee [1989]. A method is discussed for identifying optimal dual arm control based on manipulability ellipsoids. This is a lengthy paper outlining
all of the details of the method.
36.14.2 Advanced Topics - A Posteriori Path Planner
A sensor-based path planning strategy was devised by E.Freund and H.Hoyer [1988] which allows real time trajectory planning and collision avoidance. This strategy is achieved in a hierarchical
system, which is also presented in this paper. This strategy is based on non-linear control theory, and it considers four cases,
1. Moving Arms, Stationary Robots, Stationary Obstacles, and Constant Obstacle Size.
2. Stationary Arms, Mobile Robots, Moving Obstacles, and Constant Obstacle Size.
3. Moving Arms, Mobile Robots, Moving Obstacles, and Variable Obstacle Size.
4. Moving Arms, Mobile Robots, Moving Obstacles, and Constant Obstacle Size.
The calculations of the trajectories in this technique are done on the order of milliseconds, and this shows great potential as the low end of a Dynamic Planner, which is completely autonomous. This
method considers the Dynamics of manipulators as well, and attempts to generate a minimum time trajectory, which is collision free.
36.14.3 New Topics - Slack Variables
The use of slack variables has been suggested vaguely in a paper by S.C.Zaharakis and A.Guez [1988]. This uses a 2D environment filled with boxes, in which an A* algorithm is used to find paths. This
method finds paths considering bang-bang (full on or full off) control theory, and the manipulator dynamics, to find minimum time paths. The implementation was done on a MAC II, and found results in
under a minute.
36.15 References
36.1 A.P.Ambler, "Robotics and Solid modeling: A Discussion of the Requirements Robotic Applications Put on Solid modeling Systems", 2nd International Symposium Robotics Research 1985, pp 361-367.
36.2 R.A.Brooks, T.Lozano-Perez, "A Subdivision Algorithm in Configuration Space for Findpath with Rotation" IEEE Transactions on Systems, Man, and Cybernetics, Vol.SMC-15, No.2, Mar/Apr 1985, pp
36.3 R.A.Brooks, "Planning Collision-free Motions for Pick-and-Place Operations", The International Journal of Robotics Research, Vol. 2, No. 4, Winter 1983, pp 19-44.
36.4 R.A.Brooks, "Solving the Find-Path Problem by Good Representation of Free Space" IEEE Transactions on Systems, Man, and Cybernetics (Mar/Apr 1983) Vol.smc-13, No.3, pp 190-197.
36.5 R.O.Buchal, D.B.Cherchas, "An Iterative Method for Generating Kinematically Feasible Interference-free Robot Trajectories", Future Publication in Robotica.
36.6 R.H.Davis, M.Camacho, "The Application of Logic Programming to the Generation of Paths for Robots" Robotica (1984) vol.2, pp 93-103.
36.7 S.Dubowsky, M.A. Norris, Z. Shiller, "Time Optimal Trajectory Planning for Robotic Manipulators with Obstacle Avoidance: A CAD Approach", Proceedings 1986 IEEE International Conference on
Robotics and Automation, pp.1906-1912, San Francisco, California, April 1986.
36.8 B.Espiau, R.Boulic, "Collision Avoidance for Redundant Robots with Proximity Sensors", The Third International Symposium of Robotics Research, 1986, pp243-251.
36.9 E.Freund, H.Hoyer, "Real-Time Pathfinding in Multirobot Systems Including Obstacle Avoidance", The International Journal of Robotics Research, Vol. 7, No. 1, February 1988, pp 42-70.
36.10 B. Faverjon, "Object Level Programming of Industrial Robots", Proceedings 1986 IEEE International Conference on Robotics and Automation, San Francisco, April 1986, pp 1406-1412.
36.11 B.Faverjon, P.Tournassoud, "A Local Based Approach for Path Planning of Manipulators With a High Number of Degrees of Freedom", 1987 IEEE International Conference on Robotics and Automation,
Raleigh, North Carolina, March-April 1987, pp 1152-1159.
36.12 K.S.Fu, R.C.Gonzalez, C.S.G.Lee, "Robotics: Control, Sensing, Vision, and Intelligence", New York: McGraw Hill, 1987.
36.13 E.G.Gilbert, D.W. Johnson, "Distance Functions and Their Application to Robot Path Planning in the Presence of Obstacles", IEEE Journal of Robotics and Automation, Vol. RA-1, No. 1, March 1985.
36.14 E.G.Gilbert, D.W.Johnson, S.S.Keerthi, "A Fast Procedure for Computing the Distance Between Complex Objects in Three Space", 1987 IEEE International Conference on Robotics and Automation", pp.
1883-1889, Raleigh, North Carolina, March-April 1987.
36.15 J.H.Graham, "Comment on "Automatic Planning of Manipulator Transfer Movements"", IEEE Transactions on Systems, Man, and Cybernetics, Vol. SMC-14, No.3, May/June 1984, pp 499-500.
36.16 Y.K.Hwang, N.Ahuja, "Path Planning Using a Potential Field Representation", Proceedings 1988 IEEE International Conference on Robotics and Automation, Philadelphia, April 1988, pp 648-649.
36.17 Y.Kanayama, "Least Cost Paths with Algebraic Cost Functions", 1988 IEEE International Conference on Robotics and Automation, Philadelphia, Pennsylvania, April 1988, pp 75-80.
36.18 K.Kant, S.Zucker, "Planning Collision-Free Trajectories in Time-Varying Environments: A Two-Level Hierarchy", Proceedings 1988 IEEE International Conference on Robotics and Automation,
Philadelphia, April 1988, pp 1644-1649.
36.19 P.Khosla, R.Volpe, "Superquadratic Artificial Potentials for Obstacle Avoidance and Approach", Proceedings 1988 IEEE International Conference on Robotics and Automation, Philadelphia, April
1988, pp 1778-1784.
36.20 S.Kirkpatrick, C.D.Gelatt, M.P.Vecchi, "Optimization by Simulated Annealing", Science, 13 May 1983, Vol.220, No.4598, pp 671-680.
36.21 J.C.Latombe, C.Laugier, J.M.Lefebvre, E.Mazer, "The LM Robot Programming System", 2nd International Symposium of Robotics Research, 1985, pp377-391.
36.22 J. Laumond, "Feasible Trajectories for Mobile Robots with Kinematic and Environment Constraints", Intelligent Autonomous Systems, An International Conference held in Amsterdam, The Netherlands,
December 1986, pp 346-354.
36.23 B.H.Lee, Y.P.Chien, "Time Varying Obstacle Avoidance for Robotic Manipulators: Approaches and Difficulties", 1987 IEEE International Conference on Robotics and Automation, Raleigh, North
Carolina, March-April 1987, pp 1610-1615.
36.24 S.Lee, "Dual Redundant Arm Configuration Optimization with Task-Oriented Dual Arm Manipulability", Vol. 5, No. 1, February 1989, pp.78-97.
36.25 A.Liegeois, P.Borrel, E.Dombre, "Programming, Simulating and Evaluating Robot Actions", The Second International Symposium of Robotics Research, 1985, pp 411-418.
36.26 T.Lozano-Perez, M.A.Wesley, "An Algorithm for Planning Collision Free Paths Among Polyhedral Obstacles" Communications of the ACM (October 1979) vol.22, no.10, pp 560-570.
36.27 T. Lozano-Perez, "Spatial Planning: A Configuration Space Approach" IEEE Transactions on Computers, Vol.c-32, No.2, February 1983.
36.28 J.Y.S.Luh, C.E.Campbell, "Minimum Distance Collision-Free Path Planning for Industrial Robots with a Prismatic Joint", IEEE Transactions on Automatic Control, Vol. AC-29, No. 8, August 1984.
36.29 A.C.Meng, "Dynamic Motion Replanning for Unexpected Obstacles", Proceedings 1988 IEEE International Conference on Robotics and Automation, Philadelphia, April 1988, pp 1848-1849
36.30 K.L.Muck, "Motion Planning in Constraint Space", Proceedings 1988 IEEE International Conference on Robotics and Automation, Philadelphia, April 1988, pp 633-635.
36.31 N.J. Nilsson, "A Mobile Automaton: An Application of Artificial Intelligence Techniques" Proceedings International Joint Conference Artificial Intelligence, 1969, pp 509-520.
36.32 E.Palma-Villalon, P.Dauchez, "World Representation and Path Planning for a Mobile Robot", Robotica (1988), Volume 6, pp 35-40.
36.33 W.T.Park, "State-Space Representation for Coordination of Multiple Manipulators", 14th ISIR, Gothenburg, Sweden, 1984, pp 397-405.
36.34 K.Rao, G.Medioni, H.Liu, G.A.Bekey, "Shape Description and Grasping for Robot Hand-Eye Coordination", IEEE Control Systems Magazine, Vol.9, No.2, February 1989, pp.22-29.
36.35 W.E.Red, H.V.Truong-Cao, "Configuration Maps for Robot Path Planning in Two Dimensions" Journal of Dynamic Systems, Measurement, and Control (December 1985) Vol.107, pp 292-298.
36.36 S.Singh, M.C.Leu, "Optimal Trajectory Generation for Robotic Manipulators Using Dynamic Programming", Journal of Dynamic Systems, Measurement, and Control, June 1987, Vol. 109, pp 88-96.
36.37 T.Soetadji, "Cube Based Representation of Free Space", Intelligent Autonomous Systems, An International Conference held in Amsterdam, The Netherlands, December 1986, pp 546-560.
36.38 I.H.Suh, K.G. Shin, "Coordination of Dual Robot Arms Using Kinematic Redundancy", IEEE Transactions on Robotics and Automation, Vol.5, No.2, April 1989, pp. 236-242.
36.39 K.Sun, V.Lumelsky, "Computer Simulation of Sensor-based Robot Collision Avoidance in an unknown Environment", Robotica (1987), volume 5, pp 291-302.
36.40 O.Takahashi, R.J.Schilling, "Motion Planning in a Plane Using Generalized Voronoi Diagrams", IEEE Transactions on Robotics and Automation, Vol.5, No. 2, April 1989, pp 143-150.
36.41 C.Tseng, C.Crane, J.Duffy, "Generating Collision-Free Paths for Robot Manipulators", Computers in Mechanical Engineering, September/October 1988, pp 58-64.
36.42 S. Udupa, "Collision Detection and Avoidance in Computer Controlled Manipulators" Ph.D. Thesis, California Institute of Technology, Pasadena, California, 1977.
36.43 P.W.Verbeek, L.Dorst, B.J.H. Verwer, F.C.A.Groen, "Collision Avoidance and Path Finding Through Constrained Distance Transformation in Robot State Space", Intelligent Autonomous Systems, An
International Conference held in Amsterdam, The Netherlands, December 1986, pp 627-634.
36.44 M.Vukobratovic, M.Kircanski, "A Method for Optimal Synthesis of Manipulation Robot Trajectories", Journal of Dynamic Systems, Measurement and Control, June 1982, Vol. 104, pp 188-193
36.45 E.K. Wong, K.S.Fu, "A Hierarchical Orthogonal Space Approach to Three-Dimensional Path Planning", IEEE Journal of Robotics and Automation, Vol. RA-2, No. 1, March 1986, pp 42-53.
36.46 S.C.Zaharakis, A.Guez, "Time Optimal Navigation Via Slack Time Sets", Proceedings 1988 IEEE International Conference on Robotics and Automation, Philadelphia, April 1988, pp 650-651.
Spatiotemporal geostatistical modelling of groundwater level variations at basin scale: a case study at Crete's Mires Basin
Spatiotemporal geostatistical analysis of groundwater levels is a significant tool for groundwater resources management. This work presents a valid spatiotemporal geostatistical model for the
groundwater level variations of an aquifer in Crete, Greece. The goal of this approach is to accurately map the aquifer level at variable time-steps using joint space–time information. The proposed
model applies the space–time ordinary kriging (STOK) methodology using joint space–time covariance functions. A space–time experimental variogram is determined from the monthly groundwater level
time-series between the hydrological years 2009 and 2014 at 11 sampling stations. The experimental spatiotemporal variogram is successfully fitted by the product–sum model using a Matérn spatial and
temporal function. STOK was used to predict the monthly groundwater level at each sampling station from January to May 2015. Validation results show low prediction errors that range from 0.95 to 1.45
m, while the kriging variance accurately determines the variability of predictions. Maps of groundwater level predictions and uncertainty are developed for significant months of the validation period
to assess the aquifer spatiotemporal variability. This work demonstrates that space–time geostatistics can successfully model the spatial dynamic behaviour of an aquifer when the space–time
dependencies are appropriately modelled, even for a sparse dataset.
Geostatistical analyses usually deal with the spatial distribution and variability of measured data. In areas with spatial and temporal data availability, the application of space–time models can
improve the reliability of predictions by incorporating space–time correlations instead of purely spatial ones (Lee et al. 2010). Spatiotemporal continuity provides a more stable base for exploring
dynamic processes that evolve in time and space, since temporal neighbours can be as informative and useful as spatial neighbours. A spatial-only analysis would be less advantageous than a space–time
analysis of such processes because it would ignore the spatial and temporal nature of the involved dependencies. A major advantage of space–time distributed data, especially when they are sparse, is
that a larger number of data points are applied to support model parameter estimation and prediction.
In a statistical context, these data points can be considered random fields spread out in space and evolving in time (space–time random fields or S/TRF). Space–time geostatistical analysis is based
on the joint spatial and temporal dependence between observations. There are two ways to represent space–time random variables (Christakos 1991): (1) full space–time models using separable or
non-separable space–time covariance functions or variograms and (2) vectors of temporally correlated spatial random fields (SRFs) Z_t(s), t = 1, …, T, where T is the number of temporally correlated SRFs, or vectors of spatially correlated time series Z(s_i, t), i = 1, …, n, where n is the number of locations. The representation depends on the domain density (space or time).
Usually, spatiotemporal interpolation is performed by applying the standard kriging algorithms extended in a space–time frame. The space–time kriging method employs the first type of model previously
described. The two main tasks of space–time analysis are interpolation and extrapolation. The first refers to the estimation of variable values at unmeasured locations inside the spatial extent of
the study area, while the latter extends the estimations ahead of the boundaries of the observations in space or time. The main assumption used in interpolation and extrapolation is that the specific
patterns extracted from the available data analysis deliver sufficient information to capture the spatiotemporal dynamics of the observed data (Lee et al. 2010).
Space–time geostatistical approaches have been successfully applied to model hydrological data variability. Rouhani & Hall (1989) applied space-time kriging in geohydrology, using intrinsic random
functions (polynomial spatiotemporal covariance) for the space–time geostatistical analyses of piezometric data. Rouhani & Myers (1990) discussed potential drawbacks of the space–time geostatistical
analysis of geohydrological data (piezometric data). More recently, space-time kriging was applied to estimate the water level of the Queretaro-Obrajuelo aquifer (Mexico) using a product–sum model
with spherical components on a large space–time dataset (Júnez-Ferreira & Herrera 2013) and to map the seasonal fluctuation of water-table depths in Dutch nature conservation areas using a metric
space–time exponential variogram model (Hoogland et al. 2010). Furthermore, the space–time ordinary kriging (STOK) method was used to design rainfall networks and analyse precipitation variations in
space and time (Rodriguez-Iturbe & Mejia 1974; Biondi 2013; Raja et al. 2016), and it was tested in a comparison study for estimating runoff time series in ungauged locations (Skøien & Blöschl 2007).
Space-time kriging has also been used in a wide range of scientific fields and areas, such as agriculture (Stein et al. 1998; Heuvelink & Egmond 2010), atmospheric data (De Iaco et al. 2002; Nunes &
Soares 2005), soil science/water content (Snepvangers et al. 2003; Jost et al. 2005), surface temperature data (Hengl et al. 2011), wind data (Gneiting 2002), gamma radiation data (Heuvelink &
Griffith 2010), epidemiology (Gething et al. 2007) and forecasting municipal water demand (Lee et al. 2010).
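Several of the studies above fit the product–sum space–time variogram model of De Iaco et al. (2002). A minimal sketch of that model, using exponential components (the Matérn model with smoothness 0.5) and illustrative sills, ranges, and interaction parameter k, is:

```python
import math

def gamma_exp(h, sill, rng):
    """Exponential variogram component (Matern with smoothness 0.5)."""
    return sill * (1.0 - math.exp(-h / rng))

def product_sum(hs, ht, s_sill=1.0, s_rng=2.0, t_sill=1.0, t_rng=3.0, k=0.5):
    """Product-sum space-time variogram (De Iaco et al. 2002):
    gamma_st(hs, ht) = gamma_s(hs) + gamma_t(ht) - k * gamma_s * gamma_t.
    Validity requires 0 < k <= 1 / max(s_sill, t_sill); the defaults
    here are arbitrary illustrative values, not the fitted parameters."""
    gs = gamma_exp(hs, s_sill, s_rng)
    gt = gamma_exp(ht, t_sill, t_rng)
    return gs + gt - k * gs * gt
```

The cross term −k·γ_s·γ_t is what distinguishes this model from a purely separable (sum or product) structure, allowing space–time interaction to be fitted from the data.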
Sparsely monitored watersheds lack regular observations through space and time, and therefore data availability is a factor limiting purely spatial or temporal analysis. This problem, together with the
associated challenges around uncertainty of boundary conditions, makes it difficult to set up a dynamic numerical model. However, the combination of measured data can create a very useful
dataset for spatiotemporal modelling and analysis by incorporating joint spatiotemporal correlations.
Space–time geostatistical analysis could therefore be used as a surrogate to model the aquifer behaviour in space and time and to identify basin locations of significant interest to aid the water
resources management of the area. In this work, space–time geostatistical approaches in terms of STOK for the spatiotemporal monitoring and prediction of the groundwater level in a sparsely gauged
basin were applied. Space–time dependencies were determined by calculating a space–time experimental variogram from a monthly groundwater level time series between the years 2009 and 2014 at 11
sampling stations that was fitted to an appropriate theoretical spatiotemporal variogram model. The purpose of this work was therefore to model the dynamic behaviour of the aquifer and to predict the
water-table spatial variation at selected time steps (January–May 2015) by exploiting the available space–time information. The efficiency of the approach is evaluated in a real case study.
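As an illustration of the STOK predictor applied in this work, a minimal ordinary-kriging solver built from a space–time variogram might look as follows. This is a generic textbook sketch, not the authors' implementation; the variogram function passed in is arbitrary, and plain Gaussian elimination suffices for the small systems involved:

```python
import math

def stok_predict(obs, target, gamma):
    """Space-time ordinary kriging: solve the OK system built from a
    space-time variogram gamma(hs, ht). obs = [(x, y, t, z), ...],
    target = (x, y, t). Returns the predicted value at the target."""
    n = len(obs)
    # (n+1) x (n+1) system: variogram block plus the unbiasedness
    # (Lagrange multiplier) row and column of ones.
    A = [[0.0] * (n + 1) for _ in range(n + 1)]
    b = [0.0] * (n + 1)
    for i in range(n):
        xi, yi, ti, _ = obs[i]
        for j in range(n):
            xj, yj, tj, _ = obs[j]
            A[i][j] = gamma(math.hypot(xi - xj, yi - yj), abs(ti - tj))
        A[i][n] = A[n][i] = 1.0
        b[i] = gamma(math.hypot(xi - target[0], yi - target[1]),
                     abs(ti - target[2]))
    b[n] = 1.0
    # Gaussian elimination with partial pivoting (diagonal starts at 0).
    for col in range(n + 1):
        piv = max(range(col, n + 1), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n + 1):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    w = [0.0] * (n + 1)
    for r in range(n, -1, -1):
        w[r] = (b[r] - sum(A[r][c] * w[c] for c in range(r + 1, n + 1))) / A[r][r]
    return sum(w[i] * obs[i][3] for i in range(n))  # weights sum to 1
```

Because the weights are constrained to sum to one, the predictor interpolates the data exactly at observed space–time points, and the same linear system yields the kriging variance used for the uncertainty maps described in the abstract.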
The Mires Basin of the Messara Valley is located on the island of Crete in Greece. The basin has an area of almost 50 km^2, and is crossed by a river of intermediate flow. The aquifer is unconfined,
consisting primarily of homogeneously distributed alluvial deposits. Although it is the largest on the island, it is sparsely monitored. Since 1981, the groundwater resources have been significantly
reduced due to over-pumping (Varouchakis 2016a). Between 2003 and 2009 the basin was rarely monitored, but from 2009 to 2015 (until May), the local authorities and owners performed monthly monitoring
at 11 wells (Figure 1). However, gaps were present in the time series, and these were substituted by applying an autoregressive exogenous variable (ARX) model to each well that used the precipitation
surplus as an exogenous variable (Varouchakis 2016b). The accuracy of the ARX results was evaluated in comparison to reported basin averages to validate the proposed spatiotemporal model.
Several different techniques have previously been applied at the case study site. Groundwater flow modelling and aquifer level estimation were performed using the MODFLOW code focussing on the early
2000s (Kritsotakis & Tsanis 2009). At that time, extensive fieldwork was performed in the area due to the preparation of the island's first water resources management plan. Another study in the area
estimated the annual groundwater withdrawal based on the surface and groundwater hydrological balance, providing the groundwater level temporal variation in the basin (Tsanis & Apostolaki 2009).
Artificial neural networks have also been applied for the temporal modelling of the basin's groundwater level (Tsanis et al. 2008), while for the same purpose but using more recent data, an ARX model
using a Kalman filter has been used as well (Varouchakis 2016b). In addition, projections of surface water resources availability have been studied based on climate scenarios predicting a decreasing
trend during the next decades, connecting these results to lower recharge rates and, therefore, reduced aquifer levels (Koutroulis et al. 2013). Spatial analysis of aquifer levels using
geostatistical techniques and auxiliary information providing locations of significant importance for water resources management purposes has also been applied in the area for different hydrological
years (Varouchakis & Hristopulos 2013; Varouchakis et al. 2016). All the aforementioned studies have provided the aquifer status temporally or spatially. Only the numerical approach that used the
MODFLOW model has studied the aquifer in a space–time context but based on hydrogeological data.
This work employs a data-driven approach that combines space–time groundwater level data, estimating their interdependence to predict the aquifer level at unvisited locations and different time
steps. This study is important, as by developing an appropriate geostatistical space–time model, future aquifer level spatiotemporal variations can be estimated and used in groundwater resources
management plans. In addition, the model can be used to continuously process new space–time data to improve the spatial and temporal predictions.
Using spatiotemporal geostatistics, the groundwater level dataset can be usefully exploited in order to identify the spatiotemporal behaviour of the aquifer and to obtain useful information regarding
the space–time data correlations for more accurate space–time predictions. Space–time geostatistical analysis involved the following steps: (1) space–time variogram calculation, (2) application of
STOK for prediction, and (3) estimation of prediction accuracy.
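Step (1) — the space–time experimental variogram — can be sketched as a binned method-of-moments estimator. The bin edges and the toy records in the test are illustrative; real use would take the monthly series at the 11 sampling stations:

```python
import math

def st_variogram(data, s_bins, t_bins):
    """Empirical space-time semivariogram from records (x, y, t, z).
    Returns gamma[i][j]: the mean of 0.5 * (z1 - z2)**2 over all pairs
    whose spatial lag falls in [s_bins[i], s_bins[i+1]) and temporal
    lag in [t_bins[j], t_bins[j+1]); None where a bin has no pairs."""
    nb_s, nb_t = len(s_bins) - 1, len(t_bins) - 1
    sums = [[0.0] * nb_t for _ in range(nb_s)]
    counts = [[0] * nb_t for _ in range(nb_s)]
    for a in range(len(data)):
        for b in range(a + 1, len(data)):
            x1, y1, t1, z1 = data[a]
            x2, y2, t2, z2 = data[b]
            hs = math.hypot(x1 - x2, y1 - y2)   # spatial lag
            ht = abs(t1 - t2)                   # temporal lag
            for i in range(nb_s):
                if s_bins[i] <= hs < s_bins[i + 1]:
                    for j in range(nb_t):
                        if t_bins[j] <= ht < t_bins[j + 1]:
                            sums[i][j] += 0.5 * (z1 - z2) ** 2
                            counts[i][j] += 1
    return [[sums[i][j] / counts[i][j] if counts[i][j] else None
             for j in range(nb_t)] for i in range(nb_s)]
```

The resulting surface of semivariances over spatial and temporal lags is what a theoretical model such as the product–sum is then fitted to.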
The main goal of space–time analysis is to model multiple time series of data at spatial locations, each of which is allocated a distinct time series. The time variable is considered an additional dimension
in geostatistical prediction. A spatiotemporal stochastic process can be represented by Z(s, t), where the variable of interest of random field Z is observed at N space–time coordinates (s_i, t_i), i = 1, …, N, while the optimal
prediction of the variable in space and time is based on these observations (Christakos 1991; Cressie & Huang 1999).
Spatiotemporal two-point function
Let Z(s, t) be a second-order stationary space–time random field, where s ∈ D ⊆ R^d is the spatial location (d is the space dimension) and t ∈ T ⊆ R is the time instant, with expected value E[Z(s, t)] = m (Myers et al. 2002) and covariance function C(h_s, h_t) = Cov[Z(s, t), Z(s + h_s, t + h_t)], where h_s and h_t are the spatial and temporal lags. The covariance function depends only on the lag vector and not on location or time, while it must satisfy the positive-definiteness condition in order to be a valid covariance function. Hence, for any space–time points (s_i, t_i), any real coefficients λ_i, and any positive integer n, C must satisfy the following inequality:

Σ_{i=1}^{n} Σ_{j=1}^{n} λ_i λ_j C(s_i − s_j, t_i − t_j) ≥ 0.
The positive-definiteness condition is often presented as the non-negative definiteness condition (i.e. the double sum above is required to be ≥ 0 rather than > 0). If the mean m is constant and the covariance depends only on the lag vector (h_s, h_t), the S/TRF is characterised as second-order stationary. Spatial and spatiotemporal geostatistical prediction methodologies generally rely on stationarity (stationary mean and covariance or variogram). In addition, the field is isotropic if C(h_s, h_t) = C(‖h_s‖, |h_t|), meaning that the covariance function depends only on the length of the lag.
Under the weaker intrinsic stationarity assumption, the increment Z(s + h_s, t + h_t) − Z(s, t) is second-order stationary for every lag vector, instead of the random field itself. Then, Z is called an intrinsic random function and is characterised by E[Z(s + h_s, t + h_t) − Z(s, t)] = 0 and var[Z(s + h_s, t + h_t) − Z(s, t)] = 2γ(h_s, h_t), where the term var denotes the variance. The function γ depends only on the lag vector. The quantity γ(h_s, h_t) is called the semi-variance at lag (h_s, h_t).
The random field has an intrinsically stationary variogram if it is intrinsically stationary with respect to both the space and time dimensions. The S/TRF has a spatially intrinsically stationary variogram if the variogram depends only on the spatial separation vector h_s for every pair of time instants, and it has a temporally intrinsically stationary variogram if it depends only on the temporal lag h_t. Under the stronger assumption of second-order stationarity, the semi-variance is defined as:

γ(h_s, h_t) = C(0, 0) − C(h_s, h_t).
The primary concern when modelling space–time structures is to ensure that the chosen model is valid and that the model is suitable for the data. The space–time kriging estimator can be applied if the space–time covariance function satisfies the positive-definiteness condition explained above (Cressie & Huang 1999). The model's suitability is ensured by testing a series of available structures on the data. The variogram function must be conditionally negative definite to ensure that the space–time kriging equations have valid unique solutions (Myers et al. 2002; De Iaco 2010).
The STOK method is a well-established method for space–time interpolation, and the significant part in the space–time process is the choice of the variogram or covariance model and the estimation of
its parameters (Christakos et al. 2001; De Cesare et al. 2001). Two categories of models are used for variogram or covariance modelling. The first includes separable models whose covariance function
is a combination of a spatial and a temporal covariance function; the second includes non-separable models, usually physically based, in which a single function models the spatiotemporal data
dependence. Separable models, however, may suffer from unrealistic assumptions and properties (Snepvangers et al. 2003; Hengl et al. 2011). Both space–time covariance models are valid (Cressie &
Huang 1999; De Iaco et al. 2001, 2002). In this work, pure non-separable covariance functions are not examined, as those developed so far for space–time use do not concern the topic of groundwater (
Cressie & Huang 1999; De Iaco et al. 2001; Gneiting 2002; Kolovos et al. 2004; Porcu et al. 2008).
Spatiotemporal covariance or variogram models
This work examines the efficiency of two well-known space–time variogram functions in groundwater level data, and a comprehensive description of them follows.
The product model (Rodriguez-Iturbe & Mejia 1974) belongs to the separable space–time model category and is one of the simplest ways to model a covariance or variogram in space–time. The product of a space variogram and a time variogram is generally not a valid variogram; on the other hand, the product of a space covariance and a time covariance leads to a valid model. A variogram structure can then be determined from the product covariance model. Valid spatial and temporal covariance models C_s and C_t can be used in the product form below to create spatiotemporal models:

C(h_s, h_t) = C_s(h_s) C_t(h_t).
The product–sum space–time model (De Cesare et al. 2001) is a generalisation of the product and the sum models, while it constitutes the starting point for its integrated product–sum versions. It is defined as:

C(h_s, h_t) = k_1 C_s(h_s) C_t(h_t) + k_2 C_s(h_s) + k_3 C_t(h_t),

where C_s and C_t are purely spatial and temporal covariance models, with k_1 > 0 and k_2, k_3 ≥ 0. In terms of the variogram, the above equation is expressed as:

γ(h_s, h_t) = [k_2 + k_1 C_t(0)] γ_s(h_s) + [k_3 + k_1 C_s(0)] γ_t(h_t) − k_1 γ_s(h_s) γ_t(h_t).

Each space–time model (sum, product) separately has limitations that their combination does not have. The variogram structure can be expressed alternatively as follows (De Iaco et al. 2001):

γ(h_s, h_t) = γ(h_s, 0) + γ(0, h_t) − k γ(h_s, 0) γ(0, h_t),

where k is determined by the global sill and the sills of the marginal variograms.
Summary of spatiotemporal models' characteristics
The product model and the product–sum model are produced from separate space and time functions. The main advantage of such models is their ease of use in modelling and estimation. The product model is separable and integrable, while the product–sum model is non-separable and non-integrable with respect to the spatial and temporal lags. The space–time variogram models described above are typically used to model space–time experimental variograms, because an arbitrary space–time metric is not required and the fitting process is similar to that for spatial variograms (Gething et al. 2007; De Iaco 2010).
Spatiotemporal geostatistical analysis and prediction
Under the second-order stationarity hypothesis, the variogram and the covariance function are equivalent. For reasons of convenience, the variogram structure is preferred. The appropriate variogram structure is fitted to the experimental spatiotemporal variogram given by:

γ̂(r_s, r_t) = [1 / 2N(r_s, r_t)] Σ_{(i,j)} [Z(s_i, t_i) − Z(s_j, t_j)]²,

where r_s = ‖s_i − s_j‖, r_t = |t_i − t_j|, the sum runs over all data pairs in the lag class, and N(r_s, r_t) is the number of such pairs. The space–time experimental variogram is thus estimated as half the mean squared difference between data separated by a given spatial and temporal lag (r_s, r_t).
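This estimator can be sketched directly; the brute-force pair loop below is illustrative (real implementations bin lags and vectorise), and all names are hypothetical:

```python
import numpy as np

def st_empirical_variogram(coords, times, values, hs, ht, tol_s, tol_t):
    """Experimental space-time semivariance at lag class (hs, ht):
    half the mean squared difference over all data pairs whose spatial
    and temporal separations fall within the given tolerances."""
    coords = np.asarray(coords, float)   # (n, 2) well locations
    times = np.asarray(times, float)     # (n,) observation times
    values = np.asarray(values, float)   # (n,) groundwater levels
    sq, count = 0.0, 0
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            rs = np.linalg.norm(coords[i] - coords[j])
            rt = abs(times[i] - times[j])
            if abs(rs - hs) <= tol_s and abs(rt - ht) <= tol_t:
                sq += (values[i] - values[j]) ** 2
                count += 1
    return 0.5 * sq / count if count else float("nan")
```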
Geostatistical prediction is then achieved using STOK (Christakos 1991; Goovaerts 1997). The STOK estimator is given below:

Ẑ(s_0, t_0) = Σ_{i=1}^{n} λ_i Z(s_i, t_i),

where (s_0, t_0) is the unsampled location/time, (s_i, t_i) are the n sampled location/time neighbours in the search neighbourhood of the prediction point, and λ_i are the corresponding space–time kriging weights, subject to the unbiasedness constraint Σ λ_i = 1. The weights are obtained from the kriging system

Σ_{j=1}^{n} λ_j γ(s_i − s_j, t_i − t_j) + μ = γ(s_i − s_0, t_i − t_0),   i = 1, …, n,

where γ(s_i − s_j, t_i − t_j) is the variogram between two sampled points at times t_i and t_j, γ(s_i − s_0, t_i − t_0) is the variogram between a sampled point and the estimation point, and μ is the Lagrange multiplier enforcing the zero-bias constraint.
The STOK estimation variance is given by the following equation, with the Lagrange coefficient μ compensating for the uncertainty of the mean value:

σ²_STOK(s_0, t_0) = Σ_{i=1}^{n} λ_i γ(s_i − s_0, t_i − t_0) + μ.

The prediction is also described in matrix notation, where the system Γλ = c is solved to estimate the spatiotemporal weights λ: Γ is the matrix of the spatiotemporal variogram between the observed space–time data locations, λ are the spatiotemporal weights and c is the matrix of the spatiotemporal variogram between the observed space–time data locations and the space–time estimation location.
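The kriging system can be solved with a few lines of linear algebra; the following sketch (hypothetical names, assuming NumPy) augments the variogram matrix with the unbiasedness row and column and returns the prediction, the kriging variance and the weights:

```python
import numpy as np

def stok_solve(gamma_mat, gamma_vec, obs):
    """Solve the ordinary space-time kriging system in variogram form,
    with a Lagrange multiplier enforcing sum(weights) = 1."""
    n = len(obs)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = np.asarray(gamma_mat, float)  # variograms between data
    A[n, n] = 0.0
    b = np.append(np.asarray(gamma_vec, float), 1.0)  # data-to-target
    sol = np.linalg.solve(A, b)
    lam, mu = sol[:n], sol[n]
    pred = float(lam @ np.asarray(obs, float))
    var = float(lam @ np.asarray(gamma_vec, float) + mu)  # kriging variance
    return pred, var, lam
```

For two observations equidistant from the target the weights come out equal, which is a quick sanity check on the implementation.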
Space–time predictions are usually based on a space–time neighbourhood that encloses observations inside a search radius in space and time; the search radii depend on the spatial and temporal correlation lengths ξ_s, ξ_t estimated from the variogram fitting process. For small datasets, the entire dataset is used for predictions.
Spatiotemporal prediction of groundwater level data in Mires Basin
First, the experimental variogram is determined. Then, it is modelled with theoretical spatiotemporal variogram functions. The product–sum and the product models were applied here, as they have been
successfully applied in other space–time studies, providing better results than their counterparts (Gething et al. 2007; De Iaco 2010). Their main characteristic is their flexibility in modelling and
estimation. The Matérn variogram model was chosen to simulate the spatial and temporal continuity of the data within the space–time models. The purely spatial geostatistical analysis of groundwater
level data in previous works at different time periods has shown that the Matérn model describes the spatial correlation of the observed data very well (Varouchakis & Hristopulos 2013).
The separable spatial and temporal Matérn variograms are presented below (Matérn 1960):

γ(h) = σ² [1 − (2^{1−ν} / Γ(ν)) (h/ξ)^ν K_ν(h/ξ)],

where σ² is the variance, ξ is the range parameter, ν is the smoothness parameter, Γ is the gamma function, K_ν is the modified Bessel function of the second kind of order ν, and h denotes the space lag ‖h_s‖ in the spatial component or the time lag h_t in the temporal one. The Matérn variogram is a bounded model that reaches the sill asymptotically. For ν = 0.5, the exponential model is recovered, whereas the Gaussian model is obtained for ν tending to infinity. The case ν = 1 was introduced by Whittle (Pardo-Iguzquiza & Chica-Olmo 2008). The functions above are inserted in the product and product–sum space–time variogram structures.
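For the two smoothness values with elementary closed forms, the Matérn variogram can be sketched without Bessel-function libraries; general ν would need K_ν (e.g. scipy.special.kv). The helper below is illustrative only:

```python
import math

def matern_variogram(h, sill, rng, nu):
    """Bounded Matern variogram gamma(h) = sill * [1 - rho(h/rng)] for
    the two closed-form smoothness values; general nu requires the
    modified Bessel function K_nu."""
    if h <= 0:
        return 0.0
    u = h / rng
    if nu == 0.5:
        rho = math.exp(-u)                 # exponential special case
    elif nu == 1.5:
        rho = (1.0 + u) * math.exp(-u)     # closed form for nu = 3/2
    else:
        raise NotImplementedError("general nu needs scipy.special.kv")
    return sill * (1.0 - rho)
```

A smoother field (larger ν) yields a smaller semivariance at short lags, which is the flexibility the smoothness parameter provides during fitting.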
Next, the STOK methodology is applied, performing cross-validation to examine the efficiency of the proposed model. Cross-validation of the estimates was performed at the 11 observation wells for the
first five months of 2015. The geostatistical analysis was performed using originally developed codes in the Matlab^® programming environment (Matlab v.7.10), while standardised coordinates in the
interval [0,1] were used to avoid numerical instabilities.
Spatiotemporal geostatistical analysis of Mires Basin groundwater level data was applied in order to identify the monthly spatiotemporal behaviour of the aquifer since 2009 and to undertake
predictions based on the space–time data correlations. The space–time experimental variogram was determined from the monthly groundwater level time series at the 11 sampling stations for the period from 2009 onwards.
The theoretical space–time variogram models fitted using the product–sum and the product variogram structures on the experimental space–time variogram obtained from the observed data are presented in Figure 2. The separate spatial and temporal variograms of the spatial and temporal averages of the data series are also presented. It can be observed that the Matérn function fits the experimental variograms very well, proving that it is appropriate for the joint space–time modelling of the data interdependence. According to Figure 2, the product–sum model clearly provides a better fit, as it completely captures the experimental variogram's trend. The respective parameters for the product–sum variogram type are: variance σ² = 44.22 m², spatial range ξ_s = 0.25 in standardised coordinates (≈3 km), spatial smoothness ν_s = 1.51, temporal range ξ_t ≈ 5 months and temporal smoothness ν_t = 0.81.
The prediction involves STOK application to estimate the groundwater level at the specified location and time during the period January–May 2015. The validation results for the spatial monthly
average using both the examined space–time variogram models are presented in Tables 1 and 2 in terms of well-known statistical measures, such as the mean error, mean absolute error, mean absolute
relative error, root mean square error and coefficient of determination.
Table 1
Month | MAE (m) | ME/BIAS (m) | MARE | RMSE (m) | R²
January | 1.10 | 0.18 | 0.12 | 2.75 | 0.90
February | 0.95 | 0.08 | 0.11 | 2.14 | 0.91
March | 1.25 | −0.25 | 0.13 | 3.41 | 0.88
April | 1.45 | −0.32 | 0.15 | 3.80 | 0.88
May | 1.24 | 0.20 | 0.13 | 3.34 | 0.88

ME, mean error; MAE, mean absolute error; MARE, mean absolute relative error; RMSE, root mean square error; R^2, coefficient of determination.
Table 2
Month | MAE (m) | ME/BIAS (m) | MARE | RMSE (m) | R²
January | 1.42 | 0.26 | 0.16 | 3.15 | 0.89
February | 1.37 | 0.19 | 0.16 | 3.34 | 0.90
March | 1.54 | 0.34 | 0.18 | 3.74 | 0.88
April | 1.72 | −0.41 | 0.19 | 4.05 | 0.87
May | 1.62 | −0.31 | 0.18 | 3.91 | 0.88

ME, mean error; MAE, mean absolute error; MARE, mean absolute relative error; RMSE, root mean square error; R^2, coefficient of determination.
As presented in Table 1, STOK using the product–sum variogram structure delivers more accurate results overall compared to the product variogram function. Specifically, for the five-month validation
period in absolute terms, it delivers 22% less mean absolute prediction error, leading to a closer agreement with the reported values. Another metric to assess the prediction efficiency of STOK in
terms of the two variogram models applied is the root mean square standardised error (RMSSE). It is used to assess the adequacy of the kriging variance as an estimate of the prediction uncertainty.
The RMSSE is close to one if the kriging variance accurately estimates the variability of the predictions, but if it is greater or less than one, then the prediction variability is underestimated or
overestimated, respectively, by the kriging variance. Here, the RMSSE for the five-month validation period when the product–sum variogram type is applied is equal to 0.91, while it is equal to 1.35
when the product type is applied. Thus, STOK with the product–sum model better captures the prediction variability, providing results very close to the measurement values. The RMSSE indicator further
supports the prediction efficiency of the product–sum variogram structure compared to the product one.
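The validation scores reported above, together with the RMSSE, can be computed with a short helper; the function below is an illustrative sketch, not the authors' code:

```python
import math

def validation_metrics(obs, pred, krig_var):
    """Cross-validation scores: ME (bias), MAE, MARE, RMSE and RMSSE
    (errors standardised by the kriging standard deviation; RMSSE is
    close to 1 when the kriging variance matches the actual spread)."""
    errs = [p - o for o, p in zip(obs, pred)]
    n = len(errs)
    return {
        "ME": sum(errs) / n,
        "MAE": sum(abs(e) for e in errs) / n,
        "MARE": sum(abs(e / o) for e, o in zip(errs, obs)) / n,
        "RMSE": math.sqrt(sum(e * e for e in errs) / n),
        "RMSSE": math.sqrt(sum(e * e / v for e, v in zip(errs, krig_var)) / n),
    }
```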
The aquifer level map is then derived using STOK with the product–sum spatiotemporal variogram structure for February, March, April and May of 2015, the last month of available data and the most
recent to date. The January map is very similar to that of February, as at those time periods, the aquifer levels in the basin are usually similar due to the gradual recharge of the aquifer. Thus, to
avoid redundancy, only the February map is presented. In March, the highest levels are recorded, because the recharge from the winter period has been completed. In April and May, the pumping period
starts, with significant pumping rates in the area. The contour maps of groundwater level spatial variability and uncertainty in physical space are shown in Figures 3 and 4. The maps are constructed
using estimates only at points inside the convex hull of the monitoring locations. The estimation uncertainty depends on the data density and the variogram function. For all the maps produced here,
the same variogram function and its parameters were used, and the data density was also the same. Therefore, the STOK standard deviation maps (estimation uncertainty) are similar for all the
prediction periods. Hence, the uncertainty of estimations for May expresses the uncertainty for other periods as well.
The cross-validation results can be characterised as very satisfactory, as the examined dataset was very small compared to the extensive datasets used in similar works. The delivered cross-validation
results and the standard deviation error are directly comparable with those of the two works most closely related to this study (Hoogland et al. 2010; Júnez-Ferreira & Herrera 2013). Specifically,
the proposed space–time model delivers lower error values compared to the similar studies. The Matérn function has a third shape parameter that leads to the better modelling of the experimental
variogram. Thus, the estimated parameters are more reliable, and the space–time kriging estimator provides more accurate results. Compared to similar works (Hoogland et al. 2010; Júnez-Ferreira &
Herrera 2013), this study extends the applicability of space–time kriging to relatively small space–time datasets without involving auxiliary information. In addition, it addresses a known drawback regarding the functions that can be used as the spatial and temporal components of a space–time variogram structure to model both space and time variability efficiently (Rouhani & Myers 1990). It shows that if a flexible component such as the Matérn function is chosen, space–time kriging can be reliably applied to sparse space–time datasets. The Matérn model controls the degree of smoothness of the examined dataset, introducing fitting flexibility (Pardo-Iguzquiza & Chica-Olmo 2008).
Figures 3 and 4 present the spatial variability of the groundwater level for February–May, estimated from the space–time correlations of the data, which capture the dynamic aquifer behaviour. They show that the groundwater level changes from the eastern part of the basin towards the western part, following the ground surface elevation. Figure 4 also presents the uncertainty (STOK standard deviation) of the estimations and indicates the basin locations where further sampling is needed, mainly at the central eastern part of the aquifer, to obtain more accurate estimations. Estimation
variances take the highest values when the density of data is poor and the lowest values when the data distribution is good (Hohn 1999). In addition, the variance range depends on the optimal fitting
of the theoretical variogram model to the experimental one (Chiles & Delfiner 1999), as is performed here (Figure 2). Therefore, the spatiotemporal properties of the data are optimally determined,
leading to accurate estimates with low error variances.
The scope of this work was to model spatiotemporally the Mires Basin aquifer response since 2009. Reliable modelling provides the grounds for spatiotemporal predictions with the highest possible
accuracy. A novel goal of this study was to assess the application of the product–sum variogram model to sparse groundwater level data. The model delivered an excellent variogram fit and very
accurate estimates. After the variogram fitting, the spatial correlation length was determined to be almost 3 km, and the temporal length was determined to be almost five months. This explains the
accurate estimates, as the spatiotemporal correlations were reliably represented inside the five-month estimation window.
Finally, the aquifer level maps (Figures 3 and 4) show that around a quarter of the examined basin area (western part), which corresponds to a significant part of the productive agricultural land,
has a decreased aquifer level compared to the basin's groundwater level presented in previous works during previous hydrological years (Varouchakis & Hristopulos 2013; Varouchakis 2016a; Varouchakis
et al. 2016).
Reliable space–time estimates are important for groundwater resources management. This work examined the spatiotemporal modelling of groundwater levels in a hydrological basin where the groundwater
resources had presented a significant reduction over the past 30 years. The spatiotemporal approach involves the application of the product–sum variogram function, which has been applied successfully in other disciplines, incorporating the flexible Matérn function. This non-separable spatiotemporal structure fits the experimental space–time variogram of the groundwater level very well, capturing the space–time correlations of the available data. The STOK estimates accurately present the groundwater level variability for the examined validation period and provide the spatial distribution of the aquifer level at ungauged locations. The approach is shown to provide a reliable alternative for the spatiotemporal modelling of aquifer levels, requiring less data than a numerical model to represent the head field and less computational time.
The author would like to thank the two anonymous reviewers and the editor for their valuable comments and effort to improve the manuscript.
A Size-Free CLT for Poisson Multinomials and its Applications
Jun 16, 2016
An $(n,k)$-Poisson Multinomial Distribution (PMD) is the distribution of the sum of $n$ independent random vectors supported on the set ${\cal B}_k=\{e_1,\ldots,e_k\}$ of standard basis vectors in $\mathbb{R}^k$. We show that any $(n,k)$-PMD is ${\rm poly}\left({k\over \sigma}\right)$-close in total variation distance to the (appropriately discretized) multi-dimensional Gaussian with the same first two moments, removing the dependence on $n$ from the Central Limit Theorem of Valiant and Valiant. Interestingly, our CLT is obtained by bootstrapping the Valiant-Valiant CLT itself through the structural characterization of PMDs shown in recent work by Daskalakis, Kamath, and Tzamos. In turn, our stronger CLT can be leveraged to obtain an efficient PTAS for approximate Nash equilibria in anonymous games, significantly improving the state of the art, and matching qualitatively the running time dependence on $n$ and $1/\varepsilon$ of the best known algorithm for two-strategy anonymous games. Our new CLT also enables the construction of covers for the set of $(n,k)$-PMDs, which are proper and whose size is shown to be essentially optimal. Our cover construction combines our CLT with the Shapley-Folkman theorem and recent sparsification results for Laplacian matrices by Batson, Spielman, and Srivastava. Our cover size lower bound is based on an algebraic geometric construction. Finally, leveraging the structural properties of the Fourier spectrum of PMDs we show that these distributions can be learned from $O_k(1/\varepsilon^2)$ samples in ${\rm poly}_k(1/\varepsilon)$-time, removing the quasi-polynomial dependence of the running time on $1/\varepsilon$ from the algorithm of Daskalakis, Kamath, and Tzamos.
* To appear in STOC 2016
31 Adjectives To Describe Math
Mathematics is a language of precision and abstraction, but it’s also a subject that can evoke a wide range of emotions and perspectives. Describing math using adjectives can help convey its various
characteristics, intricacies, and implications. Adjectives can capture the complexity, beauty, and practicality of math, making it more relatable and engaging. In this article, we will explore the
use of adjectives to describe math, providing a comprehensive guide to assist in conveying the diverse aspects of this discipline.
Key Takeaways
• Adjectives can be utilized to paint a vivid and detailed picture of mathematical concepts, making them more accessible and engaging.
• Describing math using adjectives can help emphasize its practical applications as well as its abstract and theoretical nature.
• Different types of adjectives can be employed to convey the precision, complexity, beauty, and utility of math.
Adjectives To Describe Math
1. Precise
Mathematics is known for its precision. It provides a framework that ensures every statement and calculation is clear and unequivocal.
2. Logical
Math is built upon logic. It follows a set of rules and principles that create a coherent structure, allowing us to derive accurate and meaningful conclusions.
3. Systematic
Mathematics is a highly organized discipline. It structures concepts and ideas in a systematic way, facilitating our understanding and enabling us to build upon previous knowledge.
4. Versatile
Mathematics can be applied to almost every aspect of life. It has applications in science, engineering, finance, medicine, art, and countless other fields, showcasing its versatility.
5. Universal
The power of mathematics transcends cultural and linguistic barriers. It is a universal language that connects people across the globe, allowing for collaboration and shared understanding.
6. Abstract
Mathematics often deals with abstract concepts that go beyond the constraints of physical reality. It enables us to explore ideas that may not have a tangible representation in the world around us.
7. Creative
While mathematics is often seen as analytical, it is also highly creative. It encourages the generation of new ideas, the discovery of patterns and connections, and the exploration of innovative
approaches to problem-solving.
8. Challenging
Mathematics has a reputation for being challenging, but this is precisely what makes it so rewarding. The process of grappling with complex problems and overcoming obstacles helps build resilience
and critical thinking skills.
9. Empowering
Mathematics empowers individuals to make informed decisions and solve real-world problems. It equips us with tools to analyze data, make predictions, and make rational choices.
10. Infinite
Mathematics is boundless. It encompasses an infinite range of concepts, theories, and possibilities, leaving room for continuous exploration and discovery.
11. Structured
Mathematics follows a well-defined structure. It builds upon fundamental principles and axioms, allowing for the development of more complex ideas.
12. Tangible
Although mathematics can delve into the abstract, it often has tangible applications. It can help us calculate distances, areas, probabilities, and many other real-world quantities.
13. Efficient
Mathematics provides efficient problem-solving techniques that can simplify complex situations. It helps streamline processes, optimize resources, and improve decision-making.
14. Fundamental
Mathematics serves as the foundation for many scientific disciplines. It offers a fundamental framework upon which theories, models, and applications can be built.
15. Harmonious
Mathematical relationships are often harmonious and aesthetically pleasing. Equations, patterns, and symmetries can evoke a sense of elegance and beauty.
16. Mental agility
Mathematics enhances critical thinking skills and mental agility. It requires logical reasoning, the ability to identify patterns, and adaptability in problem-solving.
17. Intuitive
With practice, mathematics becomes intuitive. Complex calculations can be performed almost effortlessly, as the mind grasps the underlying principles and shortcuts.
18. Fascinating
Mathematics has an inherent fascination. It allows us to explore the mysteries of the universe, from the behavior of subatomic particles to the grandeur of celestial bodies.
19. Progressive
Mathematics is a progressive discipline. It continuously evolves, allowing for advancements in theory, applications, and techniques.
20. Practical
Mathematics provides practical tools for everyday life. From budgeting to cooking measurements, it helps us make accurate calculations and informed decisions.
21. Satisfying
Solving mathematical problems can be incredibly satisfying. The sense of accomplishment and clarity that comes from finding the solution is deeply rewarding.
22. Unifying
Mathematics unifies seemingly disparate concepts and fields. It unveils the interconnectedness of the world, revealing how seemingly unrelated phenomena are governed by similar principles.
23. Fundamental to technology
Mathematics is the backbone of modern technology. From programming to cryptography, it underpins innovations that have transformed our lives.
24. Constant
Mathematics relies on constants, such as π and e, that have an unchanging value. They provide stability and a common reference point in a world of variables.
25. Timeless
Mathematical truths remain unchanged across time. The Pythagorean theorem, for example, has held true for thousands of years and will continue to do so.
26. Transformative
Mathematics has the power to transform the way we perceive the world and solve problems. It enables breakthroughs that revolutionize entire fields and industries.
27. Exact
Mathematics aims for exactness. It seeks to eliminate ambiguity and approximations, ensuring that results are precise.
28. Collaborative
Mathematics encourages collaboration and cooperation. Mathematicians often work together to solve complex problems, share knowledge, and push the boundaries of the discipline.
29. Illuminating
Mathematics illuminates the depths of understanding. It allows us to shed light on complex phenomena, distilling them into concepts that can be comprehended and communicated.
30. Time-efficient
Mathematical techniques often provide shortcuts and algorithms that help us save time. These efficient methods enable us to solve problems faster and with greater accuracy.
31. Foundational
Mathematics serves as a foundational discipline in education. It equips students with critical thinking skills and analytical reasoning abilities that are applicable across various subjects.
Why Use Adjectives To Describe Math
Mathematics encompasses a broad spectrum of concepts and applications, ranging from the theoretical constructs of pure mathematics to the practical utility of applied mathematics. Describing math
using adjectives can help individuals develop a more nuanced and comprehensive understanding of its multifaceted nature. Rather than being confined to dry and technical jargon, adjectives enable us
to convey the emotive, aesthetic, and contextual aspects of math. By employing adjectives, we can evoke curiosity, awe, and appreciation for the elegance and power of mathematical ideas.
Mathematics is often perceived as intimidating or inaccessible due to its reliance on abstract symbols, rigorous logic, and complex formulas. Describing math with adjectives can demystify its
enigmatic nature and humanize its concepts, making them more approachable and relatable. Whether it’s the elegance of a geometric proof, the creativity of problem-solving, or the conceptual clarity
of a mathematical model, adjectives offer a way to capture the essence of mathematical ideas in a more evocative and expressive manner.
How To Choose The Right Adjective To Describe Math
Selecting the appropriate adjectives to describe math involves considering the specific attributes, qualities, and aspects that one intends to accentuate. When choosing adjectives to characterize
mathematical concepts, it’s essential to reflect on the context, audience, and purpose of the description. Whether aiming to evoke a sense of wonder, communicate precision, or underscore practical
applications, the choice of adjectives should align with the intended message and the targeted reception.
Adjectives can be chosen based on various dimensions of mathematical concepts, such as their elegance, complexity, utility, abstraction, or relevance to real-world phenomena. Understanding the
nuanced connotations and implications of different adjectives is crucial in capturing the essence of mathematical ideas accurately. Additionally, the selection of adjectives should be guided by a
desire to convey the richness, depth, and diversity of math, showcasing its role as a fundamental language for understanding the world.
Types Of Adjectives For Describing Math
1. Precision And Accuracy
• Rigorous: Describing a mathematical proof or argument as rigorous emphasizes its thoroughness, precision, and lack of logical flaws.
• Exact: Referring to mathematical calculations as exact underscores their correctness and precision, highlighting the absence of approximation or ambiguity.
2. Complexity And Intricacy
• Intricate: Describing a mathematical pattern as intricate conveys its detailed and elaborate nature, showcasing its complexity and interconnectedness.
• Sophisticated: Characterizing a mathematical algorithm or method as sophisticated emphasizes its advanced, intricate, and nuanced design, reflecting its depth and ingenuity.
3. Elegance And Beauty
• Elegant: Describing a mathematical proof or solution as elegant evokes its simplicity, efficiency, and aesthetic appeal, highlighting its beauty and ingenuity.
• Symmetrical: Referring to a geometric figure or pattern as symmetrical accentuates its balance, harmony, and aesthetically pleasing properties, conveying its visual appeal.
4. Practical Utility
• Applicable: Describing a mathematical concept as applicable underscores its practical relevance and usefulness in solving real-world problems or phenomena.
• Utilitarian: Characterizing a mathematical approach as utilitarian emphasizes its practicality, efficiency, and relevance to practical applications, emphasizing its tangible benefits.
5. Abstract And Theoretical
• Abstract: Referring to a mathematical concept as abstract underscores its conceptual nature, disconnect from concrete phenomena, and emphasis on general principles and structures.
• Theoretical: Describing a mathematical framework or model as theoretical highlights its basis in abstract principles and its role in advancing theoretical understanding, emphasizing its
intellectual depth.
6. Clarity And Intuitiveness
• Intuitive: Describing a mathematical concept or method as intuitive underscores its ease of understanding, simplicity, and accessibility, conveying its natural and straightforward nature.
• Transparent: Characterizing a mathematical explanation or presentation as transparent emphasizes its clarity, coherence, and lack of ambiguity, showcasing its comprehensibility.
7. Innovation And Creativity
• Innovative: Referring to a mathematical approach as innovative underscores its originality, creativity, and departure from conventional methods, showcasing its pioneering nature.
• Creative: Describing a mathematical solution or approach as creative highlights its inventiveness, imagination, and unconventional thinking, demonstrating its artistic and inventive aspects.
Describing math using adjectives is a powerful tool for elucidating the diverse dimensions and attributes of mathematical concepts. By selecting adjectives that capture the precision, complexity,
elegance, utility, abstraction, clarity, and creativity of math, we can convey its richness and relevance to a broad audience. Adjectives enable us to humanize mathematical ideas, making them
more relatable and engaging while also highlighting their profound impact on our understanding of the natural and abstract world. Ultimately, the use of adjectives to describe math enriches our
perception of mathematics, emphasizing its beauty, practicality, and intellectual significance in a compelling and evocative manner.
Examples Of Adjectives For Different Types Of Math
Mathematics is a vast and complex subject that encompasses a wide range of concepts and branches. It deals with numbers, symbols, equations, and patterns to solve problems and make sense of the world
around us. While math is often considered abstract and objective, there are certain adjectives that can be used to describe different aspects of it. These adjectives help us communicate the nature,
properties, and qualities of mathematical concepts and processes.
1. Arithmetic
Arithmetic is the most basic form of mathematics that deals with basic operations like addition, subtraction, multiplication, and division. Some adjectives to describe arithmetic include:
• Simple: Arithmetic involves straightforward and elementary calculations.
• Foundational: Arithmetic lays the groundwork for more advanced mathematical concepts.
• Essential: Arithmetic is essential for everyday calculations and problem-solving.
• Numerical: Arithmetic primarily deals with numbers and their operations.
• Basic: Arithmetic encompasses fundamental mathematical concepts and operations.
2. Algebra
Algebra focuses on the manipulation and study of symbols and the relationships between variables. Here are some adjectives to describe algebra:
• Symbolic: Algebra employs symbols to represent numbers and quantities.
• Abstract: Algebra deals with concepts that are not directly tied to physical objects.
• Generalizable: Algebraic concepts can be applied to a wide range of situations.
• Equation-based: Algebra uses equations to express relationships between variables.
• Transformative: Algebra allows for the transformation and simplification of expressions and equations.
3. Geometry
Geometry explores the properties, relationships, and measurements of shapes, spaces, and figures. Here are some adjectives commonly used to describe geometry:
• Visual: Geometry is often associated with visual representations of shapes and figures.
• Spatial: Geometry deals with the spatial relationships between objects in the physical world.
• Geometric: Geometry focuses on the properties and measurements of geometric figures.
• Symmetric: Geometry explores symmetry and patterns in shapes and figures.
• Euclidean: Euclidean geometry follows the principles and axioms established by Euclid.
4. Calculus
Calculus, divided into differential and integral calculus, deals with rates of change, accumulation, and the study of continuous functions. Here are some adjectives to describe calculus:
• Calculative: Calculus involves intricate calculations and computations.
• Continuous: Calculus deals with functions that have no abrupt changes or breaks.
• Differentiable: Calculus focuses on functions that can be differentiated.
• Integrative: Calculus involves the accumulation or integration of quantities.
• Advanced: Calculus goes beyond basic arithmetic and algebra to study complex mathematical concepts.
5. Statistics
Statistics involves the collection, analysis, interpretation, and presentation of data. Here are some adjectives to describe statistics:
• Quantitative: Statistics deals with numerical data and measurements.
• Probabilistic: Statistics involves the study of probabilities and randomness.
• Inferential: Statistics allows for drawing conclusions and making predictions based on data.
• Descriptive: Statistics describes and summarizes data using various measures.
• Analytical: Statistics employs analytical methods to make sense of data.
6. Probability
Probability focuses on the study of uncertainty and the likelihood of events occurring. Here are some adjectives to describe probability:
• Random: Probability deals with unpredictable outcomes and events.
• Uncertain: Probability involves quantifying and analyzing degrees of uncertainty.
• Probabilistic: Probability relies on mathematical probabilities and distributions.
• Predictive: Probability allows for predicting the likelihood of future events.
• Experimental: Probability can be explored through experiments and simulations.
These examples demonstrate how different types of math can be described using adjectives that convey their unique characteristics and qualities.
Common Mistakes In Using Adjectives To Describe Math
While using adjectives to describe math can aid in expressing concepts more precisely, it is important to be mindful of potential mistakes. Here are some common mistakes to avoid:
1. Overgeneralizing
Using broad and vague adjectives that could apply to any type of math can lead to a lack of clarity. For example, describing all mathematical concepts as "difficult" or "complex" does not provide
sufficient information about the specific nature of the concept.
2. Oversimplifying
Conversely, oversimplifying mathematical concepts by using adjectives like "easy" or "simple" can undermine the complexity and depth of the subject. While certain concepts may be more accessible, it
is essential to acknowledge the intricacies and nuances of math.
3. Ignoring Context
Adjectives must be used in the appropriate context to accurately describe math. Different branches of mathematics may require specific adjectives that capture their unique nature. Neglecting this
context can result in miscommunication and confusion.
4. Failing To Differentiate
Mathematics encompasses a broad range of concepts and branches, each with its own distinct characteristics. Failing to differentiate between these branches when using adjectives can lead to the
misrepresentation of certain concepts.
5. Using Inconsistent Language
Consistency in language is crucial for effective communication. Using inconsistent adjectives to describe math can create confusion and ambiguity. It is essential to establish a clear and consistent
vocabulary when describing mathematical concepts.
By being aware of these mistakes, we can enhance our use of adjectives in describing math and promote clearer communication.
Using Adjectives Effectively
To use adjectives effectively when describing math, keep the following tips in mind:
1. Be Specific
Choose adjectives that precisely convey the particular qualities or properties of mathematical concepts. For example, instead of using the generic term "difficult," specify whether the concept is
challenging due to its complexity, abstract nature, or intricate calculations.
2. Consider Multiple Perspectives
Mathematics can be described from multiple perspectives, such as its logical, visual, or practical aspects. Select adjectives that encompass these different dimensions to provide a comprehensive
description of the concept.
3. Use Comparisons
Comparisons can be helpful in conveying the relative difficulty or complexity of different mathematical concepts. For example, describing a concept as "more abstract than" or "less intuitive than"
another concept can provide a meaningful frame of reference.
4. Incorporate Contextual Information
Consider the specific branch of mathematics and its fundamental properties when selecting adjectives. Tailoring your language to the context ensures accurate and precise descriptions of mathematical concepts.
5. Use Adjectives To Enhance Understanding
Adjectives can play a vital role in elucidating the essence of a mathematical concept. Use adjectives to highlight the fundamental properties, applications, or implications of a concept, aiding
comprehension and promoting further exploration.
Exercises And Practice
To reinforce your understanding of using adjectives to describe math, here are some exercises and practice activities:
1. Choose a mathematical concept or branch (e.g., algebra, geometry) and brainstorm a list of adjectives that best describe its unique characteristics and properties.
2. Write a short description of a mathematical concept using a combination of adjectives to convey its essential qualities.
3. Select a list of mathematical concepts and classify them according to their level of difficulty, using appropriate adjectives to differentiate between them.
4. Take a math textbook or a research paper and analyze the adjectives used to describe various mathematical concepts. Identify any inconsistencies or areas where more precise language could be used.
5. Engage in discussions or debates with fellow mathematicians or peers about the adjectives used to describe math. Explore the reasons behind their adjective choices and find common ground or
differences in interpretation.
By actively engaging in exercises and practice, you can refine your skills in using adjectives effectively to describe math.
Adjectives play a crucial role in describing and communicating the nature, properties, and qualities of mathematical concepts and processes. By selecting appropriate adjectives, we can enhance
clarity, precision, and understanding in the realm of math. This article has explored various adjectives that can be used to describe different branches of math, highlighted common mistakes to avoid,
and provided tips for using adjectives effectively. By incorporating these techniques and practicing their application, we can become more adept at capturing the essence and intricacies of
mathematical concepts through the use of adjectives.
FAQS On Adjectives To Describe Math
What Are Some Adjectives That Can Be Used To Describe Math?
Precise, logical, analytical, abstract, and systematic.
How Do These Adjectives Accurately Describe Math?
These adjectives all highlight different aspects of math that make it unique as a subject. Precise refers to the fact that math deals with exact quantities and calculations. Logical emphasizes the
step-by-step reasoning that is utilized in mathematical problem-solving. Analytical speaks to the ability to break down complex problems into smaller, more manageable parts. Abstract reflects the
fact that math deals with concepts and ideas rather than physical objects. And finally, systematic describes the methodical and organized approach that is essential to mastering math.
Are There Any Other Adjectives That Can Be Used To Describe Math?
Yes, there are many other adjectives that can be used to describe math, such as challenging, practical, creative, and universal. These adjectives may not always be the first to come to mind when
thinking about math, but they can be equally accurate in describing the subject.
Why Is It Important To Use Multiple Adjectives To Describe Math?
Using multiple adjectives provides a well-rounded understanding of math and its various components. By using different adjectives, we can highlight different aspects of math and showcase its
complexity, versatility, and depth.
Can These Adjectives Only Be Used To Describe Pure Mathematics Or Can They Also Apply To Applied Mathematics And Other Fields That Use Math?
These adjectives can be used to describe all forms of mathematics, including pure mathematics, applied mathematics, and various other fields that utilize math. While the application of math may
differ, these adjectives still accurately capture the nature and essence of the subject.
Decoding the Mathematical Secrets of Ancient Egypt: Exponent Math Revealed
Ancient Egypt is renowned for its impressive architectural structures, such as the pyramids and temples. However, what many people may not realize is that behind these magnificent monuments lies a
deep understanding and utilization of mathematics. The Egyptians were highly skilled mathematicians who developed various mathematical concepts and techniques, including the use of exponents.
Understanding Exponent Math
Exponents, also known as powers or indices, are mathematical expressions used to represent repeated multiplication of a number by itself. It is a shorthand way of expressing large numbers or repeated
multiplications. The concept of exponents allows for more efficient calculations and representation of numbers.
In ancient Egypt, the use of exponents was prevalent in various aspects of their daily life, from measuring land to calculating volumes and areas of structures. By understanding the mathematical
secrets of ancient Egypt, we can gain insights into their advanced knowledge and unravel the mysteries of their engineering marvels.
Exponent Math in Ancient Egyptian Architecture
One of the most notable examples of exponent math in ancient Egyptian architecture is the construction of the pyramids. The pyramids were built with precise measurements and angles, which required
advanced mathematical calculations. By employing exponent math, the Egyptians were able to determine the dimensions and proportions of the pyramid’s base, height, and angles.
The use of exponents can be observed in the relationship between the pyramid’s height and its base. The height of the pyramid is calculated using the formula: height = base * √2 / 3. This formula
demonstrates the utilization of exponents to determine proportions and measurements accurately.
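As a numerical illustration, the proportion can be computed directly. This is a minimal sketch that takes the formula exactly as stated in this article (height = base × √2 / 3); the 230-unit base is a made-up example value, not a historical measurement.

```python
import math

def pyramid_height(base):
    """Pyramid height from base length, using the formula as stated above."""
    return base * math.sqrt(2) / 3

# Hypothetical base length of 230 units (an example value, not a measurement)
print(round(pyramid_height(230), 2))  # 108.42
```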
Land Measurement and Exponent Math
Ancient Egyptians were also skilled at measuring land, which was essential for agricultural purposes and taxation. The Nile River flooding provided fertile soil, and it was crucial to calculate the
land area accurately for irrigation and distribution.
Exponent math played a significant role in land measurement. The Egyptians used a measuring system called “cubits,” which was based on the length of the forearm from the elbow to the tip of the
middle finger. By using a cubit stick, they measured the length and width of the land and calculated the area using exponents. For example, if a land plot measured 10 cubits long and 5 cubits wide,
the area would be calculated as 10 * 5 = 50 square cubits.
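The area calculation described above is plain repeated multiplication, which can be sketched in a couple of lines:

```python
def area_square_cubits(length_cubits, width_cubits):
    """Area of a rectangular plot measured in cubits."""
    return length_cubits * width_cubits

print(area_square_cubits(10, 5))  # 50 square cubits, as in the example above
```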
Frequently Asked Questions (FAQs)
Q: Why did the ancient Egyptians use exponents in their calculations?
A: The ancient Egyptians used exponents to simplify calculations and represent large numbers more efficiently. It allowed them to perform complex mathematical operations with ease.
Q: How did the Egyptians apply exponent math in their architectural designs?
A: The Egyptians used exponent math to determine the proportions and measurements of their architectural structures, such as the pyramids. The relationship between the height and base of the pyramids
is an example of exponent math in action.
Q: Were exponents used in other aspects of ancient Egyptian life?
A: Yes, exponents were used in various areas of ancient Egyptian life. Land measurement, calculations of volumes and areas, and even accounting systems utilized exponent math.
Q: How did the ancient Egyptians calculate the volume of structures using exponents?
A: The Egyptians used exponent math to calculate the volume of structures by multiplying the length, width, and height of an object. For example, to find the volume of a rectangular prism, they would
multiply the length, width, and height together.
Q: Did the Egyptians have a written system for exponents?
A: No, the Egyptians did not have a specific written system for exponents like we do today. However, their mathematical knowledge and understanding of exponents are evident in their calculations and
architectural designs.
The mathematical secrets of ancient Egypt continue to fascinate and amaze us. The utilization of exponent math in their architectural designs, land measurement, and other aspects of their daily life
demonstrates their advanced knowledge and ingenuity. By decoding these mathematical secrets, we gain a deeper understanding of ancient Egyptian civilization and the remarkable accomplishments they
achieved through their mastery of mathematics.
3.52 Mach to Knots
3.52 mach to kn conversion result above is displayed in three different forms: as a decimal (which could be rounded), in scientific notation (scientific form, standard index form or standard form in
the United Kingdom) and as a fraction (exact result). Every display form has its own advantages and in different situations particular form is more convenient than another. For example usage of
scientific notation when working with big numbers is recommended due to easier reading and comprehension. Usage of fractions is recommended when more precision is needed.
If we want to calculate how many Knots are 3.52 Mach we have to multiply 3.52 by 298170 and divide the product by 463. So for 3.52 we have: (3.52 × 298170) ÷ 463 = 1049558.4 ÷ 463 = 2266.8647948164
So finally 3.52 mach = 2266.8647948164 kn
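The arithmetic above can be reproduced programmatically; this sketch simply reuses the exact fraction 298170/463 from the calculation:

```python
def mach_to_knots(mach):
    """Convert Mach to knots using the exact factor 298170/463 from the text."""
    return mach * 298170 / 463

print(round(mach_to_knots(3.52), 4))  # 2266.8648
```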
Introduction and folding instructions online and in print
Robert J. Lang’s Makalu (Six Intersecting Pentagons) is an interesting wireframe model and part of his Himalayan Peaks series. Original instructions for the units and how to connect them are
available in Origami USA Annual collection 2002.
Assembly tips
3D anaglyph (use red-cyan glasses for viewing)
I think the original instructions for joining the units are adequate, but not perfect. I came up with a number of tips not mentioned there which I think can help with assembling the complete model
(which is the hard part of making it) once you’ve folded the units. If you already managed to fold the model once, the actual folded Makalu is the best help you can get and much better than any flat
pictures or diagrams. If you don’t have a finished model yet, you can use the picture of the finished model found to the right as reference. The second picture is a three-dimensional anaglyph image
which can be viewed using red-cyan glasses.
Spoiler alert: much of the fun is in coming up with how to connect the units by yourself, so while these tips should reduce the time needed to complete the model, they may also take away much of the
fun as well.
• As with Five Intersecting Tetrahedra, understanding the symmetries of the model is key to proper assembly. Each of the pentagonal rings intersects every other ring exactly once. Each vertex of
the dodecahedron consists of three units weaved together. Each face consists of five units weaved together. Note the relative location of vertices to edges in the finished model. This model is
chiral: there are two versions with different twisting directions, which are each other’s mirror-images.
• Start with three rings and assemble them so that each one intersects each of the other two exactly once. At this stage this is a sufficient condition for proper assembly but after three rings, it
is required but not sufficient any more. The three-ring structure is completely loose and the rings will all just lie on the table, unable to hold together any 3D shape.
• Given the three interconnected rings, try to form a single vertex of the dodecahedron: a weave of three units, roughly at half of the units’ lengths. Each of the three units should pass above
one of the other two and below the other. The rings will now enclose a roughly spherical space. They are not stable in their positions, meaning you have to hold them all the time or fix them in
position with rubber bands. You may need to move some rings around the model from one side to the other in order to get the proper weave order of the rings.
• Now, try to form another such vertex, symmetrically at the other “pole” of the sphere. It does not have to be very precise: just ensure the units overlap each other in the right order.
• Ring 4 is probably the hardest to attach as the three-ring structure you start with is very unstable. I usually attach rings 4-6 by folding the 15 units into 6 two-unit assemblies and leaving 3
units alone. This ring should form another vertex right next to the one you created initially, and another one, symmetrically at the back. Try to first insert the two-unit elements into proper
places, then connect them to each other, then attach the last, individual unit of the ring. The 4-ring assembly is tight enough for it to be able to stand on its own, but it is still fragile and
can easily succumb to the table with the rings still connected but not placed properly in 3D any more.
• Another important symmetry which helps in getting the right arrangement of the rings at this stage is the following. Look at one pentagonal face of the finished model. Each of the edges that make
up the pentagon lies below one of the other edges, above another one, and it also passes above one more unit which does not belong to this pentagon.
• And yet another one which helps in reordering the rings if you notice that “something is wrong” with your assembly: when you take any two non-neighboring edges of a single pentagonal face, there
will be a ring in the model such that they both lie above that ring.
• Add the first two-unit segment of the fifth ring (at this stage you should see where it should go based on the 4 rings already there and knowing the symmetries of the model). Once you do this,
the rings will be locked and should not fall apart any more which makes further work much easier.
• Add the rest of the fifth ring.
• Add the sixth ring whose position will now be obvious, and you’re done.
Existence and non-existence results to a mixed Schrodinger system in a plane
Accepted Paper
Inserted: 3 feb 2024
Last Updated: 12 jun 2024
Journal: Asymptotic Analysis
Year: 2024
This article focuses on the existence and non-existence of solutions for the following system of local and nonlocal type $ -\partial_{xx}u + (-\Delta)_{y}^{s_{1}} u + u - u^{2_{s_{1}}^{*}-1} = \kappa \alpha h(x,y) u^{\alpha-1}v^{\beta} \quad \mbox{in} ~ \mathbb{R}^{2},$
$ -\partial_{xx}v + (-\Delta)_{y}^{s_{2}} v + v - v^{2_{s_{2}}^{*}-1} = \kappa \beta h(x,y) u^{\alpha}v^{\beta-1} \quad \mbox{in} ~ \mathbb{R}^{2},$
$ u,v \geq 0 \quad \mbox{in} ~ \mathbb{R}^{2}, $
where $s_{1},s_{2} \in (0,1)$, $\alpha,\beta>1$, $\alpha+\beta \leq \min \{ 2_{s_{1}}^{*},2_{s_{2}}^{*}\}$, and $2_{s_i}^{*} = \frac{2(1+s_i)}{1-s_i}, i=1,2$. The existence of a ground state solution entirely depends on the behaviour of the parameter $\kappa>0$ and on the function $h$. In this article, we prove that a ground state solution exists in the subcritical case if $\kappa$ is large enough and $h$ satisfies (1.3). Further, if $\kappa$ becomes very small in this case then there does not exist any solution to our system. The study in the critical case, i.e. $s_1=s_2=s$, $\alpha+\beta=2_{s}^{*}$, is more complex and the solution exists only for large $\kappa$ and radial $h$ satisfying (H1). Finally, we establish a Pohozaev identity which enables us to prove the non-existence results under some smooth assumptions on $h$.
I'm seeking to understand the math behind our current regulation [General Statistics]
❝ Hi Victor, I tried to reconstruct your original post as good as I could. Since it was broken before the first “\(\mathcal{A}\)”, I guess you used an UTF-16 character whereas the forum is coded in UTF-8.
Hi Helmut, Thanks for helping me out :)
Edit: after a quick experiment (click here to see screenshot), it seems that the “\(\mathcal{A}\)” I used was a UTF-8 character after all? ⊙.☉
❝ Please don’t link to large images breaking the layout of the posting area and forcing us to scroll our viewport. THX.
Noted, and thanks for downscaling my original image :)
❝ I think that your approach has some flaws.
• You shouldn’t transform the profiles but the PK metrics AUC and C[max].
I see; I thought it would make sense for \(T_\text{max}\) to also be transformed because of googling stuff like this, coupled with the fact that the population distribution that is being analyzed looks a lot like a log-normal distribution; so I thought normalizing \(T_\text{max}\) just made sense, since almost all distributions studied in undergraduate (e.g. those used by ANOVA) are ultimately transformations of one or more standard normals. With that said, is the above stuff that I googled, wrong?
• The Null hypothesis is bioinequivalence, i.e.,
❝ $$H_0:\mu_T/\mu_R\not\in \left [ \theta_1,\theta_2 \right ]\:vs\:H_1:\theta_1<\mu_T/\mu_R<\theta_2$$ where \([\theta_1,\theta_2]\) are the limits of the acceptance range. Testing for a
statistically significant difference is futile (i.e., asking whether treatments are equal). We are interested in a clinically relevant difference \(\Delta\). With the common 20% we get
back-transformed \(\theta_1=1-\Delta,\:\theta_2=1/(1-\Delta)\) or 80–125%.
Thanks for enlightening me that I can now restate the current standard's hypothesis in a "more familiar (undergraduate-level)" form:
$$H_0: ln(\mu_T) - ln(\mu_R) \notin \left [ ln(\theta_1), ln(\theta_2) \right ]\:vs\:H_1: ln(\theta_1) < ln(\mu_T) - ln(\mu_R) < ln(\theta_2)$$
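As a quick numerical check of the limits in the quoted reply — a sketch assuming the common \(\Delta\) = 20 %:

```python
import math

delta = 0.20                 # clinically relevant difference of 20 %
theta1 = 1 - delta           # 0.80
theta2 = 1 / (1 - delta)     # 1.25

# On the log scale the acceptance limits are symmetric around zero
print(round(math.log(theta1), 4), round(math.log(theta2), 4))  # -0.2231 0.2231
```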
I now realize that I was actually using the old standard's hypothesis (whose null tested for bioequivalence, instead of the current standard's null for bioinequivalence), which had problems with their \(\beta\) (highlighted in red below, cropped from this paper), thus rendering my initial question pointless, because I was analyzing an old problem; i.e. before Hauck and Anderson's 1984 paper.
• Nominal \(\alpha\) is fixed by the regulatory agency (generally at 0.05). With low sample sizes and/or high variability the actual \(\alpha\) can be substantially lower.
• Since you have to pass both AUC and C[max] (each tested at \(\alpha\) 0.05) the intersection-union tests keep the familywise error rate at ≤0.05.
With that said, regarding the old standard's hypothesis (whose null tested for bioequivalence), I was originally curious (although it may be a meaningless problem now, but I'm still curious) on how they bounded the family-wise error rate (FWER) for each hypothesis test, since the probability of committing one or more type I errors when performing three hypothesis tests = 1 - (1-\(\alpha\))^3 = 1 - (1-0.05)^3 = 14.26% (if those three hypothesis tests were actually independent).
The same question more importantly applied to \(\beta\), since in the old standard's hypothesis (whose null tested for bioequivalence), "the consumer’s risk is defined as the probability (\(\beta\)) of accepting a formulation which is bioinequivalent, i.e. accepting \(H_0\) when \(H_0\) is false (Type II error)." (as quoted from page 212 of the same paper)
Do you know how FDA bounded the "global" \(\beta\) before 1984? Because I am curious on "what kind of secret math technique" was happening behind-the-scenes that allowed 12 random samples to be considered "good enough by the FDA"; i.e.
• How to calculate the probability of committing one or more type I errors when performing three hypothesis tests, when the null was tested for bioequivalence (before 1984)?
• How to calculate the probability of committing one or more type II errors when performing three hypothesis tests, when the null was tested for bioequivalence (before 1984)?
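The 14.26 % figure above follows directly from the independence assumption and can be checked in a couple of lines (a sketch; whether the three tests really were independent is, as noted, the open question):

```python
alpha = 0.05                  # per-test type I error, as above
fwer = 1 - (1 - alpha) ** 3   # three tests, assumed independent
print(round(100 * fwer, 2))   # 14.26
```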
Thanks in advance :)
Unemployment, Choice and Inequality
Michael Sattinger
Unemployment, Choice and Inequality With 7 Figures and 49 Tables
Springer-Verlag Berlin Heidelberg New York Tokyo
Professor Dr. Michael Sattinger Department of Economics State University of New York at Albany Business Administration 111 1400 Washington Avenue Albany, NY 12222, USA
ISBN-13: 978-3-642-70549-6    e-ISBN-13: 978-3-642-70547-2    DOI: 10.1007/978-3-642-70547-2
Library of Congress Cataloging in Publication Data. Sattinger, Michael. Unemployment, choice and inequality. Bibliography: p. Includes indexes. 1. Unemployment. 2. Wages. I. Title. HD5701.5.S28 1985 331.13'7 85-9756 ISBN-13: 978-3-642-70549-6 (U.S.)
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically those of translation, reprinting, re-use of illustrations, broadcasting, reproduction by photocopying machine or similar means, and storage in data banks. Under § 54 of the German Copyright Law where copies are made for other than private use, a fee is payable to "Verwertungsgesellschaft Wort", Munich.
© by Springer-Verlag Berlin Heidelberg 1985. Softcover reprint of the hardcover 1st edition 1985. The use of registered names, trademarks, etc. in this publication does not imply, even in the absence
of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
To my Wife, Ulla
Preface

This monograph began as a study of the consequences of labor force effects, including unemployment, for the distribution of earnings. I began by developing a model of job search. But following my
previous work on the distribution of earnings, the search theory took a different form from the standard literature. Workers and firms were engaged in mutual search which effectively assigned workers
to jobs. A number of open questions immediately became apparent, including the relation between unemployment and inequality, the nature and costs of unemployment, and the role of choice. These
quickly provided sufficient material for the monograph. I began work on the project in 1980 at Miami University of Ohio. I wish to thank my chairman there, William McKinstry, for the support I
received during my last year there. My colleagues Donald Cymrot and James Moser provided some early comments on the project and I am indebted to Joseph Simpson for extensive computer assistance. At
the State University of New York at Albany, my chairmen, Melvin Bers and later Pong Lee, expressed considerable faith in this project and gave me complete support. My colleagues Thad Mirer and
Terrence Kinal assisted me in the empirical and computer work and Hiroshi Yoshikawa provided comments on some of the macroeconomic parts of the manuscript. Peter Wiles read through a working paper
covering the material on unemployment valuations and wrote out a number of reservations, comments and stimulating questions that influenced my later work and thinking on the subject. My students in
the graduate labor course also tolerated lectures heavily laced with the results from this monograph and gave me good comments and revealing questions. I also received comments from Solomon Polachek
and participants of seminars at Miami University, Michigan State University, the State University of New York at Albany, Colgate University, the European Econometric Society Meetings, the University
of Amsterdam and the London School of Economics. In Denmark, I benefited greatly from seminars and workshops given by Kenneth Burdett, Nicholas Kiefer, Dale Mortensen and George Neumann at the
University of Aarhus and at the Workshop on Labor Market Dynamics at Sandbjerg, Denmark. I greatly appreciate discussions with Burdett, Mortensen and Lars Muus on the contents of the monograph and
comments from my colleagues in Aarhus at workshops where I presented some of the results. John Warner, Carl Poindexter, Jr. and Robert M. Fearn greatly assisted me in the early stages of this project
by making available to me a rare tape with household data from the U.S. Census Employment Survey. Their willingness to share their data is in the best academic tradition. I am indebted to Eileen
Tervay, Teri Lupi and Kirsten Stentoft for typing many of the drafts.
The version appearing here was typeset using the computer typesetting facilities of the State University of New York at Albany. The author is indebted to Stephen Rogowski for assistance in
typesetting and typography and to William Schwarz and Stephen Shapiro for help doing the make-up.
Contents

List of Figures
List of Tables

1. Introduction
   1. Subject Matter
   2. Previous Work
   3. Choice in Labor Markets
   4. Outline of Remaining Chapters

2. Search in Labor Markets
   1. Introduction
   2. Worker Search Behavior
   3. Impacts of Labor Market Conditions
   4. Extensions
   5. Distributions of Workers and Jobs
   6. Behavior of the Firm
   7. Summary

3. The Valuation of Unemployment
   1. Introduction
   2. Alternative Valuations
   3. Previous Estimates
   4. Direct Estimates
   5. Aggregate Approach
   6. Labor Force Participation Approach
   7. Cross-Section Estimates
   8. General Conclusions
   Appendix: Data Sources

4. The Distribution of Employment
   1. Introduction
   2. Previous Work
   3. Sources of Inequality
   4. Distribution of the Time Spent Unemployed
   5. Employment Inequality
   6. Heterogeneous Transition Rates
   7. Conclusions

5. The Distribution of Wage Rates
   1. Introduction
   2. Truncated Wage Offer Distributions
   3. Accepted Wage Distribution
   4. The Contribution of Choice
   5. The Source of Wage Rate Dispersion
   6. Summary

6. Inequality
   1. Introduction
   2. Statistical Relations
   3. Observed Earnings Inequality
   4. The Role of Choice and Uncertain Outcomes
   5. Unemployment-Compensated Wage Rates
   6. Inequality by Source
   7. Summary

7. The Operation of Labor Markets
   1. Introduction
   2. Dual Labor Markets
   3. The Business Cycle
   4. Wage Rigidity, Wage Resistance and the Aggregate Supply Curve

8. Chronic Underemployment and Regression Towards the Mean
   1. Introduction
   2. Advantageous Trades and the Economic Role of Unemployment
   3. Definition of Underemployment
   4. Tests for Underemployment
   5. Regression Towards the Mean
   6. Chronic Underemployment
   7. Comparison with Previous Theories

9. Summary
   1. Ten Theoretical Conclusions
   2. Ten Empirical Results
   3. Six Remaining Tasks
   4. Three New Directions

Author Index
Subject Index
List of Figures

2.1 Unemployed Worker Cumulative Distribution Function
2.2 Job Vacancy Cumulative Distribution Function
3.1 Consumer Choice
3.2 Worker Choice
4.1 Representation of Markov Process
4.2 Cumulative Distribution for Markov Process
8.1 Regression Towards the Mean
List of Tables

3.1 Effect of Reservation Wage on Employment; Household Data
3.2 Effect of Reservation Wage on Hourly Wage; Household Data
3.3 Coefficients for Weeks Worked and Wage Rate; Household Data
3.4 Determinants of Reservation Wages; Household Data
3.5 Unemployment Valuations, White Males
3.6 Unemployment Valuations, White Females
3.7 Unemployment Valuations, Black Males
3.8 Unemployment Valuations, Black Females
3.9 Correction Factors for Earnings
3.10 Unemployment Valuations Implied By Labor Force Participation Regressions, Males, 1960
3.11 Unemployment Valuations Implied By Labor Force Participation Regressions, Females, 1960
3.12 Unemployment Valuations Implied By Labor Force Participation Regressions, Males, 1970
3.13 Unemployment Valuations Implied By Labor Force Participation Regressions, Females, 1970
3.14 Cross-Section Estimates of Unemployment Valuations, 54 Cities
3.15 Unemployment Costs and Trade-Offs from Cross-Section Estimates
4.1 Distribution of Unemployed by Transition Rates, by Sex and Age
4.2 Distribution of Unemployed by Transition Rate, 1970 to 1979
4.3 Distribution of Employment to Unemployment Transition Rates, Males
4.4 Distribution of Employment to Unemployment Transition Rates, Females
4.5 Density Functions for Time Spent Unemployed, U = 5 Per Cent
4.6 Density Functions for Time Spent Unemployed, U = 10 Per Cent
4.7 Density Functions for Time Spent Unemployed, U = 20 Per Cent
4.8 Proportions Unemployed More Than One Month in Year
4.9 Proportions Unemployed More Than Three Months in Year
4.10 Proportions Unemployed More Than Six Months in Year
4.11 Gini Coefficients for Distribution of Employment
4.12 Coefficients of Variation for Distribution of Employment
4.13 Actual Versus Calculated Employment Inequality, White Males, 1970
4.14 Ratios of Actual to Calculated Proportion of Unemployed
4.15 Employment Inequality Among Major Groups of Workers, 1970
5.1 Distribution of Wage Offers Generated by Bivariate Normal v(w,g)
5.2 Truncated Wage Distribution, Normal with Coefficient of Variation 0.471
5.3 Truncated Wage Distribution, Lognormal with Parameters μ = 0.0, σ² = 0.2
5.4 Truncated Wage Distribution, Pareto with Parameter α = 2.5
5.5 Truncated Wage Distribution, Exponential with Parameter λ = 0.4
5.6 Ratios of Wage Offer to Accepted Wage Inequalities
6.1 Joint Distribution of Workers By Employment and Yearly Earnings, 1970
6.2 Employment and Earnings Inequality, Males, 1970
6.3 Employment and Earnings Inequality, Females, 1970
6.4 Sources of Inequality, White Males
6.5 Sources of Inequality, White Females
6.6 Sources of Inequality, Black Males
6.7 Sources of Inequality, Black Females
6.8 Sources of Inequality, Major Groups
6.9 Choice and Inequality, White Males
6.10 Choice and Inequality, White Females
6.11 Choice and Inequality, Black Males
6.12 Choice and Inequality, Black Females
6.13 Ratios of Reservation Wages and of Weekly Earnings
Chapter 1

Introduction

1. Subject Matter

This monograph analyzes how unemployment and job search affect the distribution of labor earnings and contribute to inequality. Unemployment and job search generate
inequality both through unequal time spent employed and through the unequal wage offers facing unemployed workers. This monograph studies the choices that workers make between earnings and
unemployment, and calculates how much inequality this choice generates. It further studies the contribution of random outcomes to inequality and relates this inequality to the labor market assignment
problem solved by search. The major empirical conclusion of the monograph is that job search and unemployment account for up to half of earnings inequality among groups of workers. Tentative results
show that worker choices of which jobs to accept do not account for large amounts of inequality for females, blacks and younger white males. For older white males, choices account for up to half of
the inequality in their earnings. Unemployment primarily fattens the lower tail of the distribution of earnings, while the upper tail is determined by the aggregation of upper tails of wage offer
curves. Data on reservation wages applied to the theoretical job search model indicate that the cost of unemployment exceeds workers' foregone earnings. Most theories of the distribution of earnings
are theories of wage dispersion: wage rates are determined in markets with heterogeneous workers and perhaps heterogeneous jobs. Workers are then employed full-time and earnings are proportional to
wage rates. Yet empirical work shows that inequality in weeks worked is one of the clearest and strongest determinants of inequality. The best theoretical accommodation to this empirical fact has
been to tack on weeks worked as a determinant of earnings. This monograph replaces such an ad hoc approach with a theory of the joint distribution of employment and earnings, based on individual
behavior. Empirical work also demonstrates that substantial earnings differences are unrelated to any observable worker characteristics. That is, a group of identical workers will have unequal
earnings, even after adjusting for weeks worked. The residual differences are usually attributed to luck and are generally unexplained in deterministic theories. Unanswered is the question of how
luck could play such a large role in labor markets: through what mechanism does luck enter? An answer is provided by the theory of job search. An unemployed job seeker faces dispersion in wage offers.
Instead of accepting the first job offer that comes along, he or she formulates a strategy in which offers with wage rates below a certain level (called the reservation wage) are turned down while
those above that level are accepted. The consequence of this search procedure is that identical workers could end up with unequal wage rates. Luck, the source of so much inequality, operates through
the wage offer distribution. To explain this source of inequality, we must therefore explain why wage offers vary. The unequal accepted wage rates must then be combined with the unequal times taken
to find an acceptable job in assessing the contribution of
unemployment and consequent job search to inequality. An essential feature of job search is the presence of choice. An unemployed worker, by lowering the reservation wage, can reduce the expected
time spent looking for a job, but in doing so lowers the expected wage. Heterogeneous choices of reservation wages among a group of otherwise identical workers would influence the distributions of
unemployment and wage rates and have a net effect on earnings inequality. This raises the possibility that one source of inequality is unequal choices of reservation wages. An important question
examined in this monograph is the amount of choice and its contribution to inequality. Choice also alters the shape of the distribution of wage rates. Since lower wage offers are more likely to be
rejected, the distribution of accepted wage rates will differ from the distribution of wage offers. The foregoing questions are primarily descriptive. Additionally, the study of unemployment and
inequality raises questions that concern the distribution of economic welfare and that involve value judgments. Three distinct sources of inequality can be identified. These are unequal worker
characteristics, such as ability and education; the random outcomes of unemployment and job search; and choices of reservation wages by workers. These sources carry very different implications for
the nature of inequality. Random outcomes are a source of inequitable differences as well as uncertainty for workers as individuals; this inequality should be a greater cause for concern than
inequality from abilities or education. The inequality from different choices of reservation wages is certainly a less serious cause for concern. An additional issue is the valuation of unemployment
in terms of earnings. The trade-off between expected earnings and expected unemployment, described by search theory, provides a means of finding the costs of unemployment. If unemployment costs more
than the foregone earnings, the earnings of workers with some unemployment will overstate their economic welfare. Then economic welfare will be more unequally distributed than earnings alone. A major
point of this monograph is that inequality arising from random outcomes is large. While such earnings differences appear as luck from the point of view of the individual worker, they play a
substantially different role in the labor market as a whole. Previously, this author and others have emphasized the allocative role of earnings differences (Jan Tinbergen, 1951, 1956, 1975; Sherwin
Rosen, 1978; Michael Sattinger, 1975, 1977, 1978, 1979, 1980). Earnings are not simply factor rewards for the possession of innate ability or for investments; instead they are equilibrium prices that
assign heterogeneous workers to heterogeneous jobs. To explain the earnings distribution, one must then describe the assignment problem present in the labor market and find the equilibrium wages that
solve that problem. The models developed by Tinbergen, this author and others have been deterministic. That is, the k-th worker (in the order of some decreasing worker characteristic) gets assigned
exactly to the k-th job, with no informational difficulties or search. This monograph extends the assignment point of view to labor markets in which search is needed to place workers in jobs.
Superficially, earnings differences would not appear to play an allocative role, since identical workers can receive unequal earnings. However, this paradox can be resolved by looking at the labor
market as a whole. The exact assignment, with the kth worker going to the k-th job, cannot be achieved in a labor market with incomplete information since the search costs would be prohibitive.
Instead, the k-th worker must be prepared to accept a job at a range of firms, and the firm with a vacancy must be willing to accept a range of workers. The result is a diffuse assignment of workers
to jobs that differs in important respects from the deterministic assignment. In this labor
market, wage offer dispersion is necessary to guide workers to the appropriate jobs. Earnings differences therefore continue to play an allocative role. Also, unequal earnings to identical workers
occur in lieu of even greater search costs that would arise with a more accurate assignment. The monograph develops these ideas further in relating inequality to assignment. In summary, the three
major goals of this monograph are to:
a. Describe how unemployment, wage offers and reservation wages combine to determine the distribution of earnings;
b. Explain the relation between earnings and economic welfare in labor markets with unemployment;
c. Relate earnings inequality to the assignment problem solved by search procedures in the labor market.
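The search mechanism sketched in this section -- offers below the reservation wage rejected, luck entering through the wage offer distribution -- can be illustrated with a small Monte Carlo simulation. The lognormal offer distribution, the arrival of one offer per week, and all parameter values below are illustrative assumptions, not taken from the monograph:

```python
import random
import statistics

def search_until_accept(reservation_wage, mu=1.0, sigma=0.3, rng=random):
    """One worker draws one wage offer per week from a lognormal offer
    distribution and accepts the first offer at or above the
    reservation wage.  Returns (weeks spent unemployed, accepted wage)."""
    weeks = 0
    while True:
        offer = rng.lognormvariate(mu, sigma)
        if offer >= reservation_wage:
            return weeks, offer
        weeks += 1

def coefficient_of_variation(xs):
    """Dispersion relative to the mean, a simple inequality measure."""
    return statistics.pstdev(xs) / statistics.fmean(xs)

rng = random.Random(0)
outcomes = [search_until_accept(3.0, rng=rng) for _ in range(10_000)]
weeks, wages = zip(*outcomes)

# Identical workers following the identical strategy nevertheless end
# up with dispersed accepted wages and dispersed spells of unemployment.
print(f"CV of accepted wages:   {coefficient_of_variation(wages):.3f}")
print(f"CV of weeks unemployed: {coefficient_of_variation(weeks):.3f}")
print(f"mean weeks until a job: {statistics.fmean(weeks):.2f}")
```

Even with identical workers and a common reservation wage, both the accepted wage and the time spent unemployed are dispersed purely by chance; this is the sense in which luck operates through the wage offer distribution.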
2. Previous Work

Employment or unemployment has certainly entered into previous work on the distribution of earnings. Its inclusion is unavoidable. Weeks worked is a stronger determinant of earnings than schooling or experience. The human capital literature includes a term for weeks worked in most estimates of the earnings function of inequality. Jacob Mincer (1974) adds the logarithm of weeks
worked to the standard human capital terms, schooling and experience, with the result that the R2 statistic rises from about 0.31 to at least 0.525, depending on the form of the earnings function.
Mincer concludes that employment variation contributes about a fourth of total inequality (1974, p.119). He presents the argument that employment is an effect of human capital investment and
therefore the inequality attributable to employment variation should be credited to the distribution of human capital. Barry Chiswick also includes weeks worked in his estimates of the variance of
logarithms of earnings or incomes (1974). He also argues that weeks worked and human capital are correlated and finds in his Tables 8.1 to 8.5 that the variance of logarithms of weeks worked
significantly influences earnings inequality. The Mincer and Chiswick time-series study of inequality (1972) finds that the variance of logarithms of weeks worked alone contributes 0.216/0.6483 =
0.333 of observed income inequality in 1959, substantially more than the variance of logarithms of schooling alone, 0.0637/0.6483. Irwin Garfinkel and Robert Haveman (1977b) and Thad Mirer (1979)
study the relation between unemployment and inequality from another point of view. They divide the determination of income or earnings into two parts, earning capacity and the utilization of that
capacity. Earning capacity is essentially the earnings a worker would get if he or she worked full time. Utilization is the ratio of actual earnings to earning capacity. Garfinkel and Haveman find
that inequality in capacity accounts for 80 per cent of pre-transfer income inequality, using Gini coefficients. They conclude that at most one fifth of inequality is attributable to differences in
tastes for income versus leisure, effects of transfer programs on labor supply, and impediments to labor market activity. Garfinkel and Haveman regard the latter source as responsible for two-thirds
of all variation in capacity utilization. They seem to treat all differences in capacity utilization as arising from workers' decisions, although perhaps in response to unavoidable circumstances.
(The earning capacity and utilization approach will be further discussed in Chapter 4.) Additionally, there is a substantial literature on the relation between aggregate
unemployment and inequality (Horst Menderhausen, 1946; Melvin Reder, 1955, 1964; C.E. Metcalf, 1969; Thad Mirer, 1973a, 1973b; Edward M. Gramlich, 1974; Charles M. Beach, 1977; and Edward C. Budd and
T.C. Whiteman, 1978, among others). The literature on screening and job market signaling (A.M. Spence, 1974; Joop Hartog, 1981) relates imperfect information, present in job search, to wage
differences. The main shortcoming of this literature with respect to the goals of this monograph is that it does not distinguish labor supply decisions from the random incidence of unemployment. Few
meaningful statements about the relation between the distributions of employment and earnings may be made without knowledge of the nature of employment variations. Unlike the previous literature,
this monograph regards the distinction between choice and random outcomes as central to the study of inequality. Another important distinction (and one that is not always made in this monograph) is
between labor force participation and unemployment as determinants of weeks worked. The inequality arising from unemployment cannot be credited to human capital investments. The correlation of human
capital variables with the mean or variance of unemployment is irrelevant. This may be seen simply by observing that even if all workers had the same level of education or human capital, the unequal
distribution of employment would still generate an unequal distribution of income. The inequality generated in this way could not be attributed to human capital investments, since all human capital
investments were the same. Finally, the inequality attributable to variations in weeks worked is only part of the inequality generated by job search. Wage dispersion also arises from job search. The
figures of one fifth to one third of inequality attributable to the distribution of employment therefore underestimate the contribution of job search to inequality. A number of other labor market
theories carry implications for income distribution but are not developed here. Many theories assume some form of semi-permanent attachment of workers to firms. In Walter Oi's theory (1962), labor is
a quasi-fixed factor. The presence of fixed costs of employment reduces the likelihood that firms lay off workers. Similarly, the presence of specific training in Gary Becker's theory (1962) raises
the worker's marginal product above his or her wage, reducing the incidence of layoffs. Everything else the same, reducing layoffs will reduce both unemployment and inequality in the distribution of
unemployment. However, a decline in the transition rate into employment may accompany the decline in layoffs, leaving the unemployment rate unaffected or higher. The net effects on the distributions
of employment and earnings are unclear. Semi-permanent attachment also characterizes the theories of Costas Azariadis (1975), M.N. Baily (1974) and Martin Feldstein (1976). In the Azariadis and Baily
theories, the firm finds combinations of wage levels and employment for alternative states that take account of workers' risk aversion. The firm sets a constant wage rate for all alternative states
and may avoid employment variations among the workers choosing employment at the firm in question. These workers will experience much less wage rate and employment variation than in an auction labor
market. Again, however, the implications of such firm behavior for the labor market as a whole are unclear. Wage rate and employment can vary greatly between firms or for workers without an
attachment to a firm. In Feldstein's theory of temporary layoffs, subsidy of unemployment compensation leads firms to expand the layoffs imposed on workers. Layoffs are then part of the benefits or
conditions of the job. If unemployment were caused mainly by temporary layoffs, it would be more equally distributed than if randomly determined.
Robert Hall's theory of the natural rate of unemployment (1979a) assumes that workers know the likelihood of getting a job at a firm at which they are seeking employment. Firms then choose a
combination of wage rate and job-finding rate that corresponds to workers' trade-offs between the two. Since the firm incorporates worker preferences for earnings versus unemployment, the result is
an efficient job-finding rate and an efficient level of unemployment. In all of these theories, the partial attachment of workers to firms allows firms to alter the wage rate in response to a change
in some other unemployment condition, such as the expected time spent unemployed. The firm then achieves a higher economic efficiency by offering labor market conditions that conform to workers'
trade-offs. This may be contrasted with the search model of worker behavior, in which the worker adjusts his or her reservation wage until the worker's trade-off corresponds to the opportunities
available in the labor market. That is, the worker does the adjusting instead of the firm.
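In the standard search model referred to here (the McCall-type model that Chapter 2 extends), the worker's adjustment has a precise form: the reservation wage w* is the fixed point of w* = (1 − β)b + β E[max(w, w*)], where β is the per-period discount factor, b is income while unemployed, and w a random wage offer. A minimal sketch, assuming a discrete offer distribution with illustrative numbers (this is the textbook model, not the exact specification of Chapter 2):

```python
def reservation_wage(offers, probs, b=0.0, beta=0.95, tol=1e-10):
    """Reservation wage in a standard (McCall-type) search model,
    computed by iterating the contraction mapping
        w* = (1 - beta) * b + beta * E[max(w_offer, w*)]
    over a discrete distribution of wage offers."""
    w_star = b
    while True:
        expected = sum(p * max(w, w_star) for w, p in zip(offers, probs))
        w_next = (1.0 - beta) * b + beta * expected
        if abs(w_next - w_star) < tol:
            return w_next
        w_star = w_next

offers = [10.0, 20.0, 30.0, 40.0, 50.0]  # equally likely wage offers
probs = [0.2] * 5

w_low = reservation_wage(offers, probs, b=0.0)
w_high = reservation_wage(offers, probs, b=10.0)

# A higher valuation of time spent unemployed (b) raises the
# reservation wage: longer expected search, higher expected wage.
print(f"reservation wage, b = 0:  {w_low:.2f}")
print(f"reservation wage, b = 10: {w_high:.2f}")
```

The worker's choice of reservation wage thus traces out exactly the trade-off between expected unemployment and expected earnings that the next section, and Chapter 3's valuation of unemployment, build on.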
3. Choice in Labor Markets

Why isn't the work of this monograph a simple application of arithmetic and statistics? It would seem simple enough to plug weeks worked into an earnings function and use
the coefficient to calculate the contribution of employment or unemployment to earnings. But we would be unable to interpret the inequality caused by the distribution of weeks worked. Is it imposed
by the accidental distribution of unemployment or is it the result of labor supply decisions, i.e., choice? It is the possible presence of choice which gives the subject its economic complexity. The
amount of choice individuals have in their economic circumstances is one of the most important considerations in describing the extent of economic inequality. Differences arising from choices of
education investments, riskiness of occupation or level of job satisfaction are less a source of social concern than differences arising from circumstances that are beyond the control of the
individual, such as inherited ability, social or racial classification, inherited wealth or luck. In his discussion of the personal distribution of earnings (1973), Harry Johnson's major point is
that observed inequality in earnings is consistent with substantial equality of economic conditions. The source of the observed inequality is the alternative choices of workers. In former times, the
distinction relevant to ideology was between labor and property (capital) incomes; currently the relevant distinction appears to be between incomes determined by choice and by circumstances outside
the control of the individual. Choice plays a prominent role in previous theories of income distribution. The earliest is Adam Smith's theory of compensating wage differentials. Less desirable jobs
pay higher wages. Differences in wages therefore result from choice of job satisfaction. Unequal wages do not indicate inequality; instead they are necessary for equality of economic well-being among
a population of identical workers. In a paper on choice and chance, Milton Friedman (1953) demonstrates that choice of lotteries with unequal outcomes is a possible source of inequality. Individuals
have cardinal von Neumann-Morgenstern utility functions which do not exhibit risk aversion over the entire range of possible incomes. Individuals therefore purchase lottery tickets which would move
them from their expected income to either a lower or higher income, determined by where risk aversion stops and starts up again on the utility function. The resulting distribution of income is a
combination of distributions (depending on whether individuals won or lost their lotteries) around the two incomes. This paper has been boiled down to the observation that inequality is caused by
choice of
risky occupations, a point far removed from the original paper. The popularity of Friedman's paper cannot be attributed to the artificial and unlikely model presented; instead it is due to the
explicit treatment of choice, a sign of its ideological importance. A third model is Jacob Mincer's human capital theory (1974). Workers choose levels of education on the basis of the present
discounted value. The consequence is that observed incomes are unequal but present discounted values are forced into equality. Again, observed inequality is consistent with actual equality because of
the presence of choice. Choice is clearly present in labor markets, where a major source of unequal earnings is variation in the amount of time employed. In a perfectly competitive labor market
without search or constraints, the quantity of labor supplied would be the result of choice on the part of the worker. While some workers may seek to work only part of the year or part-time, in many
labor markets workers seek full-time employment for the whole year. For such workers, variation in time employed arises from the unequal distribution of unemployment among workers in labor markets
that are not described by the simple perfectly competitive model. Even when workers seek to work all year and avoid unemployment, choice is present in the amount of unemployment and earnings they
experience. The theory of job search suggests that a worker can affect the expected wages he or she will receive and the expected amount of unemployment by raising or lowering the reservation wage,
the wage at which a worker is just willing to accept a job. The particular type of choice studied in this monograph is the choice of reservation wage and its consequences. Despite the importance of
choice, there is no standard framework for analyzing the contribution of choice to inequality. Suppose we are interested in the distribution of food consumption among a number of people in a market
system. Each individual chooses how much food to consume; is the inequality in the distribution of food therefore a result of choice? If people can choose how much food to consume, is there no cause
for concern about maldistribution of food? Superficially, one might suppose that if unemployment is a matter of choice, earnings differences which arise from unemployment are not an important matter
for social concern. Going further, some economists hesitate to accept job search theory because it seems to legitimize the existence of unemployment and to trivialize its costs. This is a
misconception. The moment of choice for an unemployed worker occurs when he or she is faced with a job offer with a low wage, a wage that inevitably is below the expected wage for the worker in the
labor market. The worker can accept the job and move from unemployment to employment, but only by accepting a sacrifice in earnings. This sacrifice constitutes a cost to the worker of reducing the
unemployment faced. In the context of the standard job search model, the worker, by reducing the reservation wage, obtains a reduction in the expected unemployment at the cost of a reduction in the
expected earnings. By exploiting the results of job search theory, it is therefore possible to estimate a valuation of unemployment in terms of earnings. The above point (unemployment is costly to
workers even when influenced by choice) may be made in another way. Depending on the current prices for locks and burglar alarms, and the value of my household possessions, I choose an optimal amount
of burglary protection. But if a burglar strikes, it is costly to me. One measure of that cost may be found by estimating the marginal reduction in the likelihood of burglary that I can achieve with an
extra dollar spent on house security. This example suggests another point that must be made. A group of workers may
start out with equal expected wages and incidence of unemployment. But in the actual outcomes, a few workers may experience substantial unemployment while the others experience none. A worker may
choose the expected unemployment level, but any actual unemployment is not chosen: it is undesirable and a loss. I may choose to bet two dollars on a horse, but I don't choose for the horse to come
in last. The lack of a standard framework for dealing with differences caused by choice is reflected in economists' ambivalence towards unemployment: is it voluntary or involuntary? The natural
unemployment rate hypothesis and the job search literature treat unemployment as by and large voluntary, whereas the Keynesian literature regards unemployment as mostly involuntary if certain
conditions hold. These labels affect our perception of the desirability of relieving unemployment through public policy. The problem, however, is not to label unemployment one way or another but to
determine the extent of the voluntary nature of unemployment. That is, we need a quantitative way of describing the magnitude of choice in unemployment. The proper way to analyze choice in labor
markets is to separate the inequality in choice sets from the actual choices and outcomes. The choice set is the set of combinations of expected wage rates and expected levels of unemployment that a
worker can achieve. Choice then shows up as a selection of a particular reservation wage, which determines the combination of expected levels of wage rate and unemployment. The contribution of choice
to inequality may be analyzed by finding how much less the inequality would be if everyone made the same choice. In much of the statistical work, workers in a narrow group are assumed to have the
same transition rates between employment and unemployment when their choices of reservation wages are the same.
4. Outline of Remaining Chapters

The major conclusions of this monograph are based on the behavior of unemployed workers in seeking jobs. Chapter 2 develops a model of this behavior. The model is an
extension of the standard search model in which movements between employment and unemployment are described by a Markov process. The description of the earnings distribution (goal a. of section 1) is
broken into three parts. Chapter 4 describes the distribution of employment, Chapter 5 the wage rates and Chapter 6 the earnings, which are the product of employment and wage rates. The various
questions relating to the distribution of economic welfare (goal b.) are treated in several chapters. Chapter 3 shows how the valuation of unemployment in terms of earnings may be found using the
search theory and data on reservation wages. Chapters 5, 6 and 7 analyze the contribution of choice and random outcomes to inequality. The third goal, relating inequality to the search procedures
assigning workers to jobs, is treated mainly in Chapter 8. That chapter also describes the assignment brought about by search and shows that a particular form of underemployment arises from
regression towards the mean. The source of inequality is also discussed in Chapter 5, in the section on the source of wage offer dispersion. Chapter 7 presents a number of conclusions about labor
markets that arise from the monograph. These include results that are relevant to the dual labor market hypothesis, the adjustment of wages and unemployment over the business cycle, and whether wage
adjustment would eliminate increases in unemployment in labor markets when demand declines. Chapter 9 provides a summary of the monograph's conclusions.
Chapter 2

Search in Labor Markets

1. Introduction

This chapter describes the job search model that will be used to study the distribution of earnings. The Markov process describing movements between employment and unemployment will be the basis of the distribution of unemployment in Chapter 4. The wage offer distribution will be used to explain the distribution of wage rates in Chapter 5. The impacts of labor market conditions will yield the valuations of unemployment in terms of earnings in Chapter 3. The behavior of workers will explain the joint distribution of unemployment and wage rates in Chapter 6. The chapter accomplishes several specific tasks. Section 2 develops the worker search behavior in a Markov process. This is used to find the worker's trade-off between unemployment and earnings, the basis of the valuations in Chapter 3. The impacts of labor market conditions are developed in section 3. Section 5 presents an apparatus for describing the distributions of workers and jobs in labor markets. This is used to derive the distribution of accepted wage rates in Chapter 5. Section 6 analyzes the firm search problem. The results explain the value of the marginal product
for a worker at a particular firm and the source of the wage offer distribution. In the standard search model, the prospective worker incurs a cost of search and receives offers at a fixed rate (see
S. Lippman and J. McCall, 1976, for the standard model). These offers are distributed according to a density function that is known to the worker and stable over time. The optimal behavior for the
worker is to accept any job offer with a wage equal to or exceeding a certain level, called the reservation wage. This reservation wage has the characteristic that the extra cost of searching one
more time equals the expected benefit from doing so. The reservation wage also equals the expected net gain from searching. While the standard search model could be developed to analyze inequality,
it cannot adequately explain the distribution of employment for workers. An important element of labor markets is the possibility of multiple transitions at uncertain times between employment and
unemployment. This element is best described by the two-state Markov model assumptions used here. Two fundamental questions arise in selecting search theory to analyze unemployment. First, what
assumptions are we implicitly making about the conditions facing individual workers when we describe unemployed workers as engaged in search? Search theory, as a framework to study unemployment, is
not without alternatives. One could instead regard unemployment as a constraint on individual choice, a point of view taken by J. Abowd and O. Ashenfelter (1979) and Shelly Lundberg (1982). With
unemployment, workers are unable to supply as much labor as they wish at the prevailing wage rates. Available jobs are then rationed among applicants. E. Malinvaud (1977) and R.J. Barro and H.
Grossman (1971) in particular have developed macroeconomic models in which rationing occurs in both goods and labor markets. But describing unemployed workers as facing a constraint is consistent
with workers conducting search. The constraint takes the form of a choice set consisting of combinations of expected wage rates and expected unemployment. In the extreme case, workers cannot influence the expected unemployment. Rationing occurs through the random manner in which workers come across
job vacancies. A given job vacancy is rationed among potential applicants on the basis of who happens upon the job, and meets the qualifications, first. Search theory is therefore sufficiently
general to incorporate the alternative views of unemployment as a constraint on labor supply. As will be demonstrated, the assumptions of search theory in no way imply that workers benefit from
unemployment, or even that they have much control over the level of unemployment. The second question concerns why search takes place. Search occurs because of wage dispersion. Turning down a low
wage offer may pay off for a worker because of the possibility of higher wage offers in the future. In G. Stigler's original development of the subject (1962), wage dispersion arises through
incomplete and costly information. In the model developed in this chapter, wage dispersion arises from the heterogeneity of firms. Not only are workers engaged in search for jobs based on wages, but
at the same time firms are engaged in a symmetric search for workers that satisfy firms' minimum grade (quality) requirements. Because firms differ in the extra output they would obtain from a given
worker, they offer different wages: this is the source of the wage dispersion facing an individual worker. Wage dispersion therefore plays an active role in the assignment of workers to jobs, and
inequality arises as a consequence of the allocation problem solved by the economy's search procedures.
2. Worker Search Behavior

Suppose that a worker's movements between employment and unemployment are determined by a finite state continuous time Markov process (D. Isaacson and R.W. Madsen, 1976, p. 229; S. Karlin and H.M. Taylor, 1975, p. 150; L. Takacs, 1962, p. 35; other work on job search, Markov models of unemployment and reservation wages may be found in Kenneth Burdett, 1978; Burdett and
Dale Mortensen, 1978; Christopher J. Flinn and James J. Heckman, 1982a, 1982b; Nicholas Kiefer and George Neumann, 1981a, 1981b; R. S. Toikka, 1976; John T. Warner, Carl Poindexter, Jr., and Robert
Fearn, 1980; and Niels Westergaard-Nielsen, 1980, 1981a, 1981b). In a small time period τ, an unemployed worker's search has a probability λτ of yielding an acceptable job and a probability 1 − λτ of not finding one, neglecting terms of higher order in τ. Similarly, in a small period τ, an employed worker faces a probability μτ of losing his or her job and a probability 1 − μτ of remaining employed. The parameter λ depends upon which jobs the worker accepts. Suppose that in the period τ, the worker receives an offer with probability θτ. Assume the worker sets a reservation wage of w₀ and suppose wage offers have a cumulative distribution function F. Assume F is continuous and has a continuous derivative. Let f be the probability density function, so that f(x) = dF(x)/dx. Then 1 − F(w₀) is the probability that a wage offer exceeds w₀ and θτ(1 − F(w₀)) is the probability the worker gets and accepts a wage offer in the period τ. The transition rate from unemployment to employment is therefore given by:

\[ \lambda = \theta\,(1 - F(w_0)) \tag{2.1} \]
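As a numerical illustration of this relation (the exponential offer distribution and all parameter values below are assumptions made for the example, not part of the model):

```python
import math

def transition_rate(theta, w0, F):
    # lambda = theta * (1 - F(w0)): offers arrive at rate theta and are
    # accepted only when the offered wage is at least the reservation wage w0.
    return theta * (1.0 - F(w0))

# Hypothetical exponential wage-offer distribution with mean 10.
F = lambda w: 1.0 - math.exp(-w / 10.0)

lam_low = transition_rate(2.0, 5.0, F)    # low reservation wage: most offers accepted
lam_high = transition_rate(2.0, 15.0, F)  # high reservation wage: fewer offers accepted
```

Raising the reservation wage lowers the transition rate out of unemployment, which is the trade-off exploited throughout the chapter.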
In the usual development of continuous time Markov processes, the parameters λ and μ, referred to as transition intensities, are derived as the limits of transition probabilities (e.g., the probability of being in state j at time t given that the system was in state i at time t₀). However, it is simpler to begin with the transition intensities themselves. The assumption of these transition intensities implicitly incorporates the Markov property and stationarity. From the Markov property, the probability in a given time period that an unemployed worker becomes employed or that an employed worker becomes unemployed does not depend on the previous work history of the worker. Stationarity is reflected in the fact that the transition intensities or transition probabilities for a given period of time do not change over time. Under the assumptions of the Markov property and stationarity, Karlin and Taylor (1975, p. 154) show that the probability of being employed at time t, given that a worker was employed at time 0, is:

\[ \frac{\lambda}{\lambda+\mu} + \frac{\mu}{\lambda+\mu}\, e^{-(\lambda+\mu)t} \]

Similarly, the probability of being unemployed at time t, given that the worker is unemployed at time 0, is:

\[ \frac{\mu}{\lambda+\mu} + \frac{\lambda}{\lambda+\mu}\, e^{-(\lambda+\mu)t} \]
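These occupancy probabilities are easy to verify numerically; the following sketch (with arbitrary illustrative values of λ and μ) checks that each probability starts at one and decays toward the long-run proportion:

```python
import math

def p_ee(t, lam, mu):
    # P(employed at t | employed at 0) for the two-state Markov process.
    return lam / (lam + mu) + (mu / (lam + mu)) * math.exp(-(lam + mu) * t)

def p_uu(t, lam, mu):
    # P(unemployed at t | unemployed at 0).
    return mu / (lam + mu) + (lam / (lam + mu)) * math.exp(-(lam + mu) * t)

lam, mu = 1.2, 0.3                  # hypothetical transition intensities
start_e = p_ee(0.0, lam, mu)        # certainty at t = 0
long_run_e = p_ee(50.0, lam, mu)    # approaches lam / (lam + mu)
long_run_u = p_uu(50.0, lam, mu)    # approaches mu / (lam + mu)
```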
In the long run, a worker can expect to be unemployed a proportion μ/(λ+μ) of the time and employed a proportion λ/(λ+μ) of the time. An important result of the search literature is that the solution to
the standard search problem possesses the reservation wage property. That is, the optimal strategy for the worker is to select a reservation wage. Thomas Coleman (1983) has demonstrated that the
search problem in a Markov model also possesses the reservation wage property. The problem facing the worker is then to choose the optimal reservation wage. The answer will be obtained by forming
recurrence relations and solving to find the expected value of being in a particular state, as in dynamic programming (see Richard E. Bellman and Stuart E. Dreyfus, 1962, p.302; Ronald Howard, 1960,
Chapters 7,8; Hisashi Mine and Shunji Osaki, 1970, Chapter 2). The expected value of being in a state can then be maximized with respect to the worker's reservation wage. Now consider the worker's
expected values of being in the state of employment. Let L(w) be the expected present discounted value of the worker's future stream of income and unemployment costs and benefits if the worker is currently employed at wage rate w. Let M(w₀) be the present discounted value of future income and unemployment costs and benefits when the worker is unemployed and has a reservation wage of w₀. The values of L(w) and M(w₀) will be obtained as limits of corresponding expressions for finite periods, L(w;τ) and M(w₀;τ). Consider first the employed worker's prospects. In a small period of time τ he or she earns wages wτ and at the end of the period either remains employed or loses the job. If he or she remains employed, with probability 1 − μτ, the present discounted value of continuing employment is L(w;τ)e^{−iτ}, where e^{−iτ} is the discounting factor that arises because the expected present value L(w;τ) occurs at a time τ in the future. If the worker loses the job, with probability μτ, the present discounted value of his labor force participation from that point on is M(w₀;τ)e^{−iτ}. For an arbitrary wage w, the expected present discounted value of a worker's future income and
net benefit stream is:

\[ L(w;\tau) = w\tau + (1-\mu\tau)\,L(w;\tau)\,e^{-i\tau} + \mu\tau\, M(w_0;\tau)\,e^{-i\tau} \tag{2.4} \]

Rearranging (2.4) yields:

\[ L(w;\tau) = \frac{w + \mu\, M(w_0;\tau)\,e^{-i\tau}}{\mu + (1 - e^{-i\tau})/\tau} \]

Now consider the value of L(w;τ) as τ approaches zero. Using the series expansion for e^{−iτ}, the limit of e^{−iτ} as τ approaches zero is one and the limit of (1 − e^{−iτ})/τ as τ approaches zero is i. Therefore the limit of L(w;τ) as τ approaches zero is:

\[ L(w) = \frac{w + \mu\, M(w_0)}{\mu + i} \tag{2.5} \]
Next, suppose a worker is unemployed. During the period of unemployment, the worker incurs search costs of c. Further, the worker receives nonemployment benefits of b per period, which may exceed or
fall short of the search costs c or may be negative. These nonemployment benefits arise from unemployment compensation or other transfer payments arising from unemployment, or they may be the
monetary valuation of nonpecuniary losses arising from unemployment or of nonpecuniary gains arising from nonemployment activities such as leisure, household work or self-improvement. In the time τ, the unemployed worker therefore receives net benefits of (b − c)τ. At the end of this time, the worker may remain unemployed, with probability 1 − λτ and present discounted value M(w₀;τ)e^{−iτ}, or may find employment, with probability λτ and present discounted value E(L(w;τ))e^{−iτ}, where E(L(w;τ)) is the expected value of L(w;τ). The present discounted value for an unemployed worker is therefore:

\[ M(w_0;\tau) = (b-c)\tau + \lambda\tau\, E(L(w;\tau))\,e^{-i\tau} + (1-\lambda\tau)\,M(w_0;\tau)\,e^{-i\tau} \tag{2.6} \]

Rearranging and taking the limit yields:

\[ M(w_0) = \frac{b - c + \lambda\, E(L(w))}{\lambda + i} \]
where E(L(w)) is the limit of E(L(w;τ)) as τ approaches zero. Let wₑ = E(w). The term wₑ is the expected wage given that the wage exceeds the reservation wage w₀. Using the density function for wage offers f(x),

\[ w_e = E(w) = \frac{\int_{w_0}^{\infty} x f(x)\,dx}{1 - F(w_0)} \tag{2.7} \]

Then:

\[ M(w_0) = \frac{b - c + \lambda\, L(w_e)}{\lambda + i} \tag{2.8} \]

Substituting L(wₑ) from (2.5) into (2.8) yields:

\[ M(w_0) = \frac{1}{i}\left( \frac{\lambda}{\lambda+\mu+i}\, w_e + \frac{\mu+i}{\lambda+\mu+i}\,(b-c) \right) \tag{2.9} \]
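For a fixed reservation wage (so that λ and wₑ are fixed), (2.5) and (2.8) form a two-equation linear system in L(wₑ) and M(w₀), and solving it directly should reproduce the closed form (2.9). A quick check with hypothetical parameter values:

```python
lam, mu, i = 1.2, 0.3, 0.05        # hypothetical transition rates and discount rate
we, b, c = 10.0, 2.0, 0.5          # expected accepted wage, benefit, search cost

# Solve  L = (we + mu*M)/(mu + i)  and  M = (b - c + lam*L)/(lam + i)
# by substitution; the denominator uses (lam+i)(mu+i) - lam*mu = i(lam+mu+i).
M = ((b - c) * (mu + i) + lam * we) / ((lam + i) * (mu + i) - lam * mu)
L = (we + mu * M) / (mu + i)

# Closed form (2.9).
M_closed = (lam * we + (mu + i) * (b - c)) / (i * (lam + mu + i))
```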
The expected value of being unemployed, M(w₀), is calculated as the present value of a weighted average of the benefits of being in the two states, wₑ and b − c. The weights in the calculation are roughly the long-run proportions of time spent in the two states but differ because of discounting. The expression (2.9) always holds no matter which reservation wage the worker chooses. Now consider the problem of choosing the reservation wage. Two approaches
may be taken. First, the reservation wage that maximizes M(w₀) can be found; second, the wage at which the worker is indifferent between the two states of employment and unemployment can be determined. The two approaches yield the same level for the reservation wage. Following the first approach, the first order condition for the optimal reservation wage is:

\[ \frac{\partial M}{\partial w_0} = 0 = \frac{1}{i(\lambda+\mu+i)}\left( \lambda\,\frac{\partial w_e}{\partial w_0} + w_e\,\frac{\partial \lambda}{\partial w_0} - iM(w_0)\,\frac{\partial \lambda}{\partial w_0} \right) \tag{2.10} \]

From (2.7), ∂wₑ/∂w₀ = (wₑ − w₀)f(w₀)/(1 − F(w₀)). Furthermore, from (2.1), ∂λ/∂w₀ = −θf(w₀). Therefore:

\[ \lambda\,\frac{\partial w_e}{\partial w_0} + w_e\,\frac{\partial \lambda}{\partial w_0} = w_0\,\frac{\partial \lambda}{\partial w_0} \tag{2.11} \]

Substitution of (2.11) into (2.10) now yields:

\[ \frac{\partial M}{\partial w_0} = 0 = \frac{w_0 - iM(w_0)}{i(\lambda+\mu+i)}\,\frac{\partial \lambda}{\partial w_0} \]

or:

\[ w_0 = iM(w_0) \tag{2.12} \]

The second order condition for a maximum of M(w₀) is that ∂²M/∂w₀² < 0. At the optimal reservation wage it can be shown that:

\[ \frac{\partial^2 M}{\partial w_0^2} = \frac{1}{i(\lambda+\mu+i)}\,\frac{\partial \lambda}{\partial w_0} \]
The second order condition is always satisfied, since in the above λ declines as w₀ increases. The result in (2.12) states that the optimal reservation wage equals the expected flow of benefits from being currently unemployed. The intuition of this result is clear: anytime the worker can do better than the long-run average by taking a job, he or she should do so according to the criterion. Substituting (2.12) into (2.5) yields this further result:

\[ L(w_0) = M(w_0) \]

At a wage offer of w₀, the worker is indifferent between current employment at w₀ and continued search. At a wage greater than w₀, L(w) > M(w₀) and the worker will achieve a higher present value by moving to the state of employment. Maximization of the present value M(w₀) therefore yields a solution where the worker need only compare present values at each moment in time. One can also begin with the second approach to the choice of reservation wage, finding the wage at which L(w) = M(w₀). Then it can be shown that w₀ = iM(w₀) and the first order condition (2.10) is satisfied. These results repeat, in alternative terms, the fundamental conclusions of the standard search theory. The result in (2.11), obtained in deriving (2.13), is central to the analysis of unemployment valuations in the next chapter. By raising the reservation wage, the worker achieves a higher expected wage at the cost of a higher expected level of unemployment. The expression in (2.11) shows the trade-off between the expected wage wₑ and the transition rate λ that the worker is able to achieve in the labor
market. In the next chapter, this trade-off will be shown to equal the trade-off the worker is willing to undertake. The actual operation of labor markets and behavior of individuals depart from the
assumptions used to derive the above results in a number of ways. The Markov property implies that the only information about a worker relevant to future movements, aside from the grade and
reservation wage, is the current state. Past employment or unemployment does not influence current transition rates. There are, however, some conditions which would violate the Markov property. These
conditions are described in a paper on heterogeneity and state dependence by James J. Heckman and George J. Borjas (1980; see also Heckman, 1981). Three forms of non-Markovian state dependence are
distinguished. Occurrence dependence arises when the number of previous unemployment spells affects the likelihood that a worker loses his or her job or stays unemployed. For example, employers may
use past work histories in employment decisions. Another form of state dependence is lagged duration dependence, in which current transition probabilities depend on the time spent unemployed in
previous spells rather than the number of spells. During unemployment spells, workers may lose work experience or on-the-job training, reducing their current attractiveness. Alternatively, and less
likely, workers may develop their job hunting skills from previous unemployment episodes. Both occurrence dependence and lagged duration dependence violate the Markov property but they do not
necessarily violate the stationarity property of constant transition rates. The Markov property holds in the models developed here if we assume that past work history affects current transition rates
through the grade of the worker. Further, this grade changes slowly so that for a period sufficiently short relative to the worker's employment history, the grade may be treated as constant. A third
form of state dependence is duration dependence, which violates the stationarity property. This arises when the transition rates change over time as the time spent unemployed increases. Duration
times no longer satisfy the exponential distribution. Positive duration dependence arises when the transition rate rises as the time spent in the state increases. The most important case where this
arises is when worker assets decline as a result of unemployment. In calculating the future benefits of employment, the worker will use his or her own discount rate. This discount rate will typically
exceed the market rate of interest because of imperfect capital markets and the riskiness of lending to an unemployed worker. Wealth or debt undoubtedly affect the discount rate also. As unemployment
continues, workers are forced to dissave, reducing their wealth, and face progressively greater difficulty in borrowing. The discount rate used by the worker will rise, reducing the present
discounted value of future benefits of employment, and the reservation wage will decline. The worker may also run out of alternative activities, leading the worker to lower further the reservation
wage as time spent unemployed increases. The imminent loss of unemployment benefits will also induce workers to lower their reservation wages. The consequence of the decline in the reservation wage
is that the transition rate from unemployment to employment increases over time. A force tending to lower transition rates is that employers may also be dissuaded from offering jobs to workers
unemployed for longer periods. Negative duration dependence may arise in time spent employed. If workers acquire specific on-the-job training, their employers may be less likely to lay them off. Transition
rates out of employment would then decline with time. The more complicated job matching argument (Boyan Jovanovic, 1979) is that firms take some time to learn about the value of a particular match.
The transition rate first rises and then
Search in Labor Markets
declines after firms learn about the value of a match. Duration dependence cannot be observed directly from aggregate duration data. When workers in a group have heterogeneous transition rates out of
unemployment, the transition rate will seem to decline with duration even though each worker has a constant transition rate. Heckman and Borjas develop models of labor market dynamics which allow
for duration dependence and provide statistical methods for testing for various kinds of state dependence. Undoubtedly the labor market process is characterized by duration dependence. But this
phenomenon does not affect the results derived here, which do not concern the work histories of particular workers or the estimation of structural parameters but the trade-offs at one point in time
and the influence of the unemployment rate on the reservation wage.
3. Impacts of Labor Market Conditions

Combining (2.9) and (2.12), one obtains an expression showing the value of the reservation wage:

\[ w_0 = \frac{\lambda}{\lambda+\mu+i}\, w_e + \frac{\mu+i}{\lambda+\mu+i}\,(b-c) \tag{2.15} \]
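Because λ and wₑ themselves depend on w₀, (2.15) is a fixed-point condition. A minimal sketch of solving it by iteration, assuming an exponential wage-offer distribution with mean m (memoryless, so the conditional expected accepted wage is simply w₀ + m) and hypothetical parameter values:

```python
import math

theta, mu, i = 2.0, 0.3, 0.05      # offer rate, job-loss rate, discount rate
b, c, m = 2.0, 0.5, 10.0           # benefit, search cost, mean wage offer

def rhs(w0):
    lam = theta * math.exp(-w0 / m)          # lam = theta * (1 - F(w0))
    we = w0 + m                              # E[w | w >= w0] for the exponential
    return (lam * we + (mu + i) * (b - c)) / (lam + mu + i)

w0 = b - c                                   # start from the net nonemployment benefit
for _ in range(200):
    w0 = rhs(w0)                             # iterate (2.15) to a fixed point
```

At the solution, w₀ satisfies (2.15) and lies above b − c, so turning down low offers is worthwhile even though unemployment itself is costly.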
This expression for w₀ is not entirely explicit; the transition rate λ on the right-hand side also depends on w₀. Nevertheless the expression may be used to find the impacts of labor market conditions on the reservation wage, w₀, and the welfare of an unemployed worker, M(w₀). The envelope theorem greatly simplifies the calculation of these impacts. Suppose K is some parameter affecting the welfare of the worker (e.g., μ, λ, i, b or c). Then ∂w₀/∂K must satisfy the following condition, obtained by implicit differentiation of (2.15):

\[ \frac{\partial w_0}{\partial K} = \frac{\partial}{\partial K}\left( \frac{\lambda}{\lambda+\mu+i}\, w_e + \frac{\mu+i}{\lambda+\mu+i}\,(b-c) \right) + \frac{\partial}{\partial w_0}\left( \frac{\lambda}{\lambda+\mu+i}\, w_e + \frac{\mu+i}{\lambda+\mu+i}\,(b-c) \right) \frac{\partial w_0}{\partial K} \]
However, the second term is zero: the magnitude in parentheses is iM(w₀), and ∂M/∂w₀ = 0 from the first order condition. Therefore the impact of a change in labor market conditions on w₀ can be calculated from the right-hand side of (2.15) as if the reservation wage remained constant. Once ∂w₀/∂K is obtained, the change in the welfare of the worker can be found using the result that w₀ = iM(w₀). Then ∂M/∂K = (∂w₀/∂K)/i, unless the parameter is the discount rate. Using this procedure, the following results may be derived.

a. Transition Rates.

\[ \frac{\partial w_0}{\partial \mu} = i\,\frac{\partial M}{\partial \mu} = \frac{b - c - w_0}{\lambda+\mu+i} \]

Since w₀ > b − c, this derivative is negative. In evaluating the effects of λ on w₀, it must be remembered that λ is a function of w₀ and not a simple parameter. However, it is possible to consider shifts in λ at constant values of w₀. From (2.1), a shift in λ can be represented as a change in θ, the offer rate. Then:
\[ \theta\,\frac{\partial w_0}{\partial \theta} = \frac{(w_e - w_0)\,\lambda}{\lambda+\mu+i} \]

Rearranging yields the following:

\[ \frac{\partial w_0}{\partial \theta} = \frac{\lambda\,(w_e - w_0)}{\theta\,(\lambda+\mu+i)} \]
b. The Unemployment Rate. The long-run unemployment rate for a worker is u = μ/(λ+μ). It increases with μ and declines with λ. The difference between the weight of b − c in (2.15) and the unemployment rate is:

\[ \frac{\mu+i}{\lambda+\mu+i} - \frac{\mu}{\lambda+\mu} = \frac{\lambda}{\lambda+\mu}\left( \frac{i}{\lambda+\mu+i} \right) \]

Let e = i/(λ+μ+i). Then:

\[ w_0 = (1-u)(1-e)\,w_e + \bigl(u + (1-u)e\bigr)(b-c) \]

For a constant value of e:

\[ \frac{\partial w_0}{\partial u} = (b - c - w_e) - e\,(b - c - w_e) \]
The reservation wage falls by slightly less than the difference between the benefits of being employed, wₑ, and the net benefits of unemployment, b − c, when the unemployment rate u rises.

c. The Discount Rate.

\[ \frac{\partial w_0}{\partial i} = \frac{b - c - w_0}{\lambda+\mu+i} \tag{2.20} \]
Inspection of (2.20) reveals that an increase in the discount rate has the same effect in reducing the reservation wage as an increase in the transition rate μ. This suggests an important potential connection between financial and labor markets. If discount rates follow monetary interest rates, an increase in the interest rate will reduce wage rates via a shift in supply, as reflected in lower reservation wages, rather than through demand for goods.

d. The Nonemployment Benefit.

\[ \frac{\partial w_0}{\partial b} = i\,\frac{\partial M}{\partial b} = \frac{\mu+i}{\lambda+\mu+i} \tag{2.21} \]
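This derivative can be checked by finite differences: re-solving the model at slightly different benefit levels and comparing the numerical slope with (μ+i)/(λ+μ+i). The solver and parameters below repeat the hypothetical exponential-offer setup used earlier purely for illustration:

```python
import math

theta, mu, i, c, m = 2.0, 0.3, 0.05, 0.5, 10.0   # hypothetical parameters

def solve_w0(b):
    # Fixed-point iteration on (2.15) with exponential wage offers of mean m.
    w0 = b - c
    for _ in range(500):
        lam = theta * math.exp(-w0 / m)
        w0 = (lam * (w0 + m) + (mu + i) * (b - c)) / (lam + mu + i)
    return w0

b = 2.0
w0 = solve_w0(b)
lam = theta * math.exp(-w0 / m)
predicted = (mu + i) / (lam + mu + i)             # envelope-theorem derivative (2.21)
numerical = (solve_w0(b + 1e-4) - solve_w0(b - 1e-4)) / 2e-4
u = mu / (lam + mu)                               # long-run unemployment rate
```

As the text notes, `predicted` slightly exceeds the unemployment rate `u`.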
If the nonemployment benefit b rises by one dollar, the reservation wage rises by a little more than one dollar times the unemployment rate. For example, if the unemployment rate is 10 per cent, the reservation wage may rise 15 cents in response to an increase in unemployment benefits of one dollar. Considering the attention given to the disincentive effects of unemployment benefits, this increase is rather moderate. The expression (2.21) confirms the reasonableness of estimates by T. Lancaster and A. Chesher (1983) of the effects of b on w₀. Using British data, they find that for all workers the elasticity of the reservation wage with respect to the unemployment benefit is 0.135.

e. The Expected Wage. The expected wage is somewhat more difficult to deal with than λ, since a shift in the distribution of wage offers alters both wₑ and λ. For simplicity, let wₑ = w̄w(w₀);
this is a parameter w̄ times a function w(w₀) of the reservation wage. Then we may consider the effects on w₀ of a change in w̄:

\[ \frac{\partial w_0}{\partial \bar{w}} = \frac{\lambda}{\lambda+\mu+i}\,\frac{\partial w_e}{\partial \bar{w}} + \frac{w_e - w_0}{\lambda+\mu+i}\,\frac{\partial \lambda}{\partial \bar{w}} \tag{2.22} \]
The first term on the right-hand side in (2.22) is slightly less than the long-run proportion of time spent employed; this will typically be close to one for most groups. The second term arises
because the likelihood of getting an acceptable job offer increases when the wage offer distribution shifts upwards. Depending upon the wage offer distribution, the reservation wage can increase more
or less than the shift in the expected wage. An immediate implication of this result is that any upward shift in wage offers will be mostly absorbed in increased reservation wages, so that there
would only be a negligible supply response or change in the unemployment rate. This conclusion is discussed further in Chapter 7 and is roughly consistent with findings on the constancy of the
natural rate of unemployment (R. Hall, 1979a). Changes in the wage offer distribution can be considerably more complex than the change in the shift parameter w̄ described here. Kenneth Burdett (1981)
demonstrates that an increase in the mean of the wage offer distribution can produce a greater or smaller change in the expected wage when the reservation wage is held fixed, depending on the
log-concavity of the wage offer distribution. Burdett's paper deals in more detail with worker responses to changes in wage offer distributions.
4. Extensions

Several extensions of the foregoing model are possible. Workers may leave the labor market if by doing so they can avoid the search costs. Then the unemployed workers will quit whenever
iM falls below b, assuming b is the same in and out of the labor force. Labor force participation regressions can then be used to find the trade-off between earnings and unemployment that leaves labor force participation, and hence M(w₀), unaffected. This will be done in Chapter 3. Search intensity can be incorporated by assuming that the cost of search c increases with search intensity while search yields decreasing returns to intensity. The introduction of search intensity does not modify any of the results obtained in the preceding section. The reason for this is that the mixed partial derivative ∂²iM/∂w₀∂s is zero, where s is search intensity. In working out the effects of a parameter change on w₀, any adjustments in search intensity may be disregarded, since they will have no effect on w₀ or iM(w₀). Similarly, in working out the effects of any parameter changes on search intensity, the adjustments in w₀ may be disregarded. The major result is that search intensity depends on the difference between the expected wage wₑ and the nonemployment benefit b. Search intensity can be shown to increase with wₑ and decrease with b and the unemployment rate. A marginal participant, with b = iM(w₀), will engage in no search; only when iM(w₀) exceeds b will search arise. Risk aversion may be incorporated by assuming that the worker possesses a concave utility function of outcomes, U(w). Uncertainty of one type is already included in the analysis of the previous sections. Uncertainty regarding times of transition between states and occupancy of states is already part of the calculations of L(w) and M(w₀). But these calculations value an extra dollar as the same no matter what the current income. With the introduction of a utility function U(w), the reservation wage equals a weighted average of the expected utilities of being in the two states. As in the standard model, the reservation wage is reduced in the presence of risk aversion.
5. Distribution of Workers and Jobs

The previous sections described the behavior of a single worker. This section describes the distributions of all workers and all firms. One characteristic of a
worker is his or her reservation wage, determined by the worker on the basis of wage offers and nonemployment costs and benefits. Additionally, the worker is here characterized by a one-dimensional
attribute, g, for the grade of labor. At any point in time, the grade of labor may be taken as given, although clearly in the long run education and training will modify a worker's grade. This grade
is observable to prospective employers, who can evaluate the contribution of the worker's labor to production. There is therefore no uncertainty or imperfect information in the observation or
evaluation of abilities. The distribution of unemployed and searching workers is described by H(w,g), the proportion of unemployed workers with reservation wage less than or equal to w and grade greater than or equal to g. The function H is a cumulative distribution function, although it is obtained over a peculiar domain, as illustrated in Figure 2.1. Let h(w,g) be the joint density function of reservation wages and grades. Intuitively, it may be thought of as the proportion of unemployed workers with reservation wage equal to w and grade equal to g. The cumulative distribution function is then simply the double integral of the joint density function over the appropriate domain:

\[ H(w,g) = \int_0^{w}\!\int_g^{\infty} h(x,y)\,dy\,dx \]

We shall also have occasion to use the marginal density functions of H:

\[ H_1(w,g) = \partial H(w,g)/\partial w = \int_g^{\infty} h(w,y)\,dy > 0 \]

and:

\[ H_2(w,g) = \partial H(w,g)/\partial g = -\int_0^{w} h(x,g)\,dx < 0 \]
The sources of these marginal distributions are indicated on Figure 2.1. Let the total number of unemployed be H̄. Corresponding to the job search problem facing the worker, there is a worker search
problem facing the firm. Workers arrive randomly at the firm seeking jobs. Assume that the firm is constrained to pay all entering workers the same wage without regard to grade. Charles Wilson (1980)
proves in another model that a firm will offer a constant wage to all workers whose grade exceeds a minimum standard. The proof demonstrates that such a policy dominates (i.e., yields a higher profit than) a policy where the wage offer varies with grade. However, the assumptions for this result differ significantly from the labor market assumptions used here. Wilson's model uses job matching, in
which the value of a worker to a firm is a random variable. Neither the firm nor the worker has prior information about the value of the match. Workers of different grades will then be paid the same
upon entry, with differences only arising among firms or after some period of time at the firm. Another justification for this constraint is that paying different wages to workers on entry would
generate dissatisfaction and feelings of inequity. Whether this assumption is valid is an empirical question, but if it is true then attempts to observe wage differentials, estimate the contribution
of worker characteristics or test for the presence of
screening using within-firm data are doomed to failure. Given an invariant wage offer on the part of the individual firm, the search problem facing the firm is to specify a minimum grade requirement.
If the worker's grade exceeds the grade requirement, an offer is extended; otherwise the worker is shown the door. The details of the firm's problem are developed in section 6. For the moment, let us
describe the distribution of job vacancies. Let V(w,g) be the proportion of job vacancies with wage offer greater than or equal to w and grade requirement less than or equal to g. Again, V is a cumulative
distribution function, the domain of which is illustrated in Figure 2.2. Let v(w,g) be the joint density function, so that:

V(w,g) = \int_w^\infty \int_0^g v(x,y) \, dy \, dx
The marginal density functions of V are then given by:

V_1(w,g) = \partial V(w,g)/\partial w = -\int_0^g v(w,y) \, dy < 0

and:

V_2(w,g) = \partial V(w,g)/\partial g = \int_w^\infty v(x,g) \, dx > 0
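As a small numerical sketch of these objects (the grids and the densities h and v below are hypothetical, chosen purely for illustration; the text leaves them general), one can tabulate H and V, confirm the signs of the marginals, and form the wage-offer distribution f(w) = -V_1(w,g)/V(0,g) used later in this section:

```python
import numpy as np

# Hypothetical grids and joint densities, for illustration only; the text
# leaves h(w,g) and v(w,g) general.
wages = np.linspace(1.0, 10.0, 40)    # wage / reservation-wage grid
grades = np.linspace(0.0, 5.0, 30)    # grade grid

h = np.exp(-0.3 * wages[:, None] - (grades[None, :] - 2.0) ** 2)
h /= h.sum()                          # unemployed: (reservation wage, grade)
v = np.exp(-0.2 * wages[:, None] - (grades[None, :] - 2.5) ** 2 / 2)
v /= v.sum()                          # vacancies: (wage offer, grade requirement)

def H(iw, ig):
    # Proportion of unemployed with reservation wage <= wages[iw], grade >= grades[ig].
    return h[: iw + 1, ig:].sum()

def V(iw, ig):
    # Proportion of vacancies with wage offer >= wages[iw], requirement <= grades[ig].
    return v[iw:, : ig + 1].sum()

# Signs of the marginals: H rises with the wage cutoff and falls with the
# grade cutoff; V does the opposite.
assert H(25, 10) >= H(20, 10) >= H(20, 15)
assert V(25, 10) <= V(20, 10) <= V(20, 15)

# Wage-offer distribution facing a worker of grade grades[ig]: proportional to
# the vacancy density with requirement <= grade, normalized by V(0, g).
ig = 12
f_w = v[:, : ig + 1].sum(axis=1) / V(0, ig)
assert abs(f_w.sum() - 1.0) < 1e-9
```

The assertions mirror the inequalities above: H grows as the reservation-wage cutoff rises and shrinks as the grade cutoff rises, and symmetrically for V.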
The ways in which these marginal distributions are generated are illustrated in Figure 2.2. Let the total number of vacancies be V̄. The two cumulative distribution functions, H(w,g) and V(w,g),
summarize the prospects facing firms and unemployed workers, respectively. They are of course derived from underlying behavior of firms and workers, but at this level a number of conclusions can be
drawn. A visit by an unemployed worker to a firm results in a job
[Figure 2.1: Unemployed Worker Cumulative Distribution Function. Horizontal axis: reservation wage; vertical axis: grade. The figure shows the domain of H(w,g) together with the sources of H_1(w,g) and H_2(w,g).]
offer being extended and accepted only when the worker's grade equals or exceeds the grade requirement of the firm and the wage offer of the firm equals or exceeds the reservation wage of the worker.
The probability that a worker with reservation wage w and grade g gets a job from an interview is therefore V(w,g), the proportion of job vacancies with wage offer greater than or equal to w and grade
requirement less than or equal to g. Similarly, the probability that a firm acquires a new worker is H(w,g). The desire to express the probabilities of employment now explains the peculiar choice of
domains for H(w,g) and V(w,g). The expressions f(w), λ and w_e used in section 2 can now be reexpressed in terms of V(w,g). Let g be the worker's grade. Then the distribution of wage offers facing this worker is f(w) = -V_1(w,g)/V(0,g), where V(0,g) is the proportion of vacancies with grade requirement less than or equal to g. The expected wage given that it exceeds the reservation wage w_0 is:

w_e = \frac{1}{V(w_0,g)} \int_{w_0}^\infty \int_0^g x \, v(x,y) \, dy \, dx

The transition rate λ is simply γV(w_0,g), where γ is the rate at which job interviews take place. The offer rate θ is γV(0,g). Using H and V, it is possible to construct some mixed
distributions. First, the joint density function of new hires by reservation wage and grade of worker is proportional to V(w,g)h(w,g). This distribution is obtained by multiplying the joint density
function of workers by reservation wage and grade, h(w,g), by the likelihood that a worker is offered and accepts a job at an interview, V(w,g). Similarly, the joint density function of new hires by
wage and grade requirement is proportional to H(w,g)v(w,g), obtained by multiplying the joint density function for vacancies times
[Figure 2.2: Job Vacancy Cumulative Distribution Function. Horizontal axis: wage offer; vertical axis: grade requirement. The figure shows the domain of V(w,g) together with the sources of V_1(w,g) and V_2(w,g).]
the likelihood that the vacancy is filled. The joint density function of new hires by wages and grades is somewhat more complicated. Suppose we want to find the number of new hires per period for which the wage paid by the firm is w and the grade of the worker is g. First, consider the number of eligible vacancies for which workers interview per period. These are vacancies for which the wage paid is exactly w and the grade requirement is less than or equal to g. Let q be the number of workers interviewed by firms for each vacancy in a period. Then the number of eligible vacancies is qV̄ ∫_0^g v(w,y) dy, where V̄ is the total number of vacancies available at one point in time. Next, consider the proportion of interviews for these vacancies that result in a job taken and for which the worker's grade is exactly g. The reservation wage must be less than the wage offered by the firm, w, so that this proportion is ∫_0^w h(x,g) dx. The product of the number of eligible vacancies times the proportion accepted yields the desired joint density function by wages and grades:
qV̄ \int_0^g v(w,y) \, dy \int_0^w h(x,g) \, dx = qV̄ V_1(w,g) H_2(w,g)
One could similarly have begun with the eligible workers and found the proportion that would have been offered jobs. Let γ again be the number of interviews per time period for a worker, the same for all workers. Then the joint density function is γH̄V_1(w,g)H_2(w,g), where qV̄ = γH̄ from the requirement that the total number of interviews be the same per period from the point of view of either
firms or workers. From the above derivation, the joint density function of wages and grades is simply the product of two marginal density functions. It would be interesting if one could reconstruct
the cumulative distribution functions V and H or the joint density functions v and h from the observed joint distribution of wages and grades, but this does not appear possible.
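The combinatorial argument above can be checked by simulation on small discrete grids (all densities below are hypothetical). A random vacancy is paired with a random worker, a hire is recorded when the wage covers the reservation wage and the grade meets the requirement, and the empirical density of hires over (wage, worker grade) is compared with the product of the two partial sums derived above:

```python
import numpy as np

rng = np.random.default_rng(1)
nW, nG = 5, 4                             # small wage and grade grids
h = rng.random((nW, nG)); h /= h.sum()    # workers: (reservation wage, grade)
v = rng.random((nW, nG)); v /= v.sum()    # vacancies: (wage offer, grade requirement)

# Analytic hire density at (wage iw, worker grade ig): vacancies paying iw with
# requirement <= ig, times workers of grade ig with reservation wage <= iw.
analytic = np.array([[v[iw, : ig + 1].sum() * h[: iw + 1, ig].sum()
                      for ig in range(nG)] for iw in range(nW)])

# Monte Carlo: pair a random vacancy with a random worker; a hire occurs when
# the wage covers the reservation wage and the grade meets the requirement.
n = 200_000
vac = rng.choice(nW * nG, size=n, p=v.ravel())
wrk = rng.choice(nW * nG, size=n, p=h.ravel())
vw, vg = vac // nG, vac % nG              # vacancy: wage, grade requirement
wr, wg = wrk // nG, wrk % nG              # worker: reservation wage, grade
hire = (vw >= wr) & (wg >= vg)

empirical = np.zeros((nW, nG))
np.add.at(empirical, (vw[hire], wg[hire]), 1.0 / n)
assert np.max(np.abs(empirical - analytic)) < 0.01
```

Because the worker and the vacancy are drawn independently, the probability of a hire at a given (wage, grade) cell is exactly the product of the two partial sums, which is the factorization claimed in the text.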
6. Behavior of the Firm
The previous sections have proceeded from individual worker behavior to aggregate distributions without first describing the other set of agents in the labor market, firms.
This section now turns to the behavior of firms in choosing which workers to offer jobs and which wages to pay. The results will be useful for two purposes. First, to understand the source of unequal
wage offers facing a worker, we need to know the determinants of the wage offer distribution, summarized by V(w,g). Second, in investigating the efficiency of search as an assignment mechanism, we
need to know the alternative values of a worker's labor. This will be used to study search distortion in Chapter 8. Consider a firm operating in perfectly competitive product markets. Suppose the
firm does not face a perfectly competitive labor market, however, in the sense that it cannot hire immediately all the labor it wants at the prevailing market wage. Instead, the firm faces a
limited hiring rate, which it can influence through its wage and grade policies. Nevertheless, the firm will be shown to satisfy the neoclassical marginal productivity conditions, although with some
modification. The hiring policies of the firm are described by its wage offer w, its minimum grade requirement g_0 and its interview rate q. Acceptable applicants are offered the same wage rate w,
regardless of their grade. Essentially, the search behavior of firms is symmetric to the search behavior of individuals. While workers offer a grade and search for an acceptable wage, firms offer
wages and search for acceptable grades.
Search in Labor Markets
This mutual search generates an assignment of workers to jobs that will be discussed in Chapter 8. The model of firm behavior developed here abstracts from several features of firm hiring behavior
that are important in other problems. The model does not take account of internal promotion of workers, hiring of different grades of labor in separated markets, the important dynamic problem of
adjusting to new employment levels or the influence of the capital stock on the optimal hiring policies of the firm. Suppose the firm under consideration faces a short-run production function Q(n, g_e) which depends on the number of workers n and the average grade of labor g_e. This kind of production function, in which the average grade of labor enters as an argument, has been analyzed extensively
in Sattinger (1980), Chapter 4. An important feature of this production function is that the marginal product of a particular worker is a linear function of the worker's grade. The firm sells the
output at a market price p over which it has no influence and incurs wage costs of wn, with the wage rate w determined by the firm itself. The firm also incurs interview or search costs of c_f q, where c_f is the firm's cost per interview and q is the number of interviews chosen by the firm. The objective of the firm is then to maximize its short-run profits, pQ(n, g_e) - wn - c_f q. However, the firm
faces a constraint in its hiring policies. Workers leave the firm at the rate m per worker per period, so that the number of workers leaving in a period is mn. The number of workers coming into the firm during a period is qz, where q is the number of interviews chosen by the firm and z is the likelihood that an interview results in the worker taking the job. This likelihood z is given by the proportion of interviewed workers with reservation wage less than or equal to the firm's wage offer w and grade greater than or equal to the firm's grade requirement g_0. It therefore equals H(w,g_0). If the firm is neither growing nor contracting, the new hires qz must equal the separations mn. Adding the Lagrangian multiplier φ for this constraint, one obtains as the objective function K of the firm:

(2.25)  K = pQ(n, g_e) - wn - c_f q + φ(mn - qz)
Differentiation of this objective function with respect to the variables n, w, g_0, q and φ yields:

(2.26)  ∂K/∂n = pQ_1 - w + φm = 0

(2.27)  ∂K/∂w = pQ_2 (∂g_e/∂w) - n - φq (∂z/∂w) = 0

(2.28)  ∂K/∂g_0 = pQ_2 (∂g_e/∂g_0) - φq (∂z/∂g_0) = 0

(2.29)  ∂K/∂q = -c_f - φz = 0

(2.30)  ∂K/∂φ = mn - qz = 0

In these expressions, Q_1 = ∂Q/∂n and Q_2 = ∂Q/∂g_e. The expected grade of hired workers is given by:

g_e = \frac{-\int_{g_0}^\infty x H_2(w,x) \, dx}{H(w,g_0)}
In this expression, -H_2(w,x) is the density of interviewed workers with reservation wage less than or equal to w and grade exactly equal to x. The ratio
-H_2(w,x)/H(w,g_0) is then the density of hired workers with grade exactly equal to x. The expected grade is the integral of each grade from g_0 up times the proportion of hired workers with that
grade. This calculation presumes that workers hired previously faced the same labor market conditions, in particular the wage offer and minimum grade requirement. Differentiation of g_e with respect to the minimum grade requirement g_0 yields a result analogous to (2.11):

(2.32)  ∂g_e/∂g_0 = -\frac{g_e - g_0}{z} \frac{∂z}{∂g_0}
The pay behavior of the firm is not immediately obvious from the first order conditions (2.25) through (2.30). From the first order condition for the optimal wage, the firm receives two benefits from
raising the wage offer. The average grade of labor rises for a given minimum grade requirement because more of the higher grade workers are likely to accept; this increase in g_e yields higher levels
of output. The second benefit of a higher wage is that the likelihood of acceptance z goes up, reducing the search or interview costs of the firm. From these considerations, and the constraint that
the wage paid to the individual workers is unrelated to their grades, it seems unlikely that traditional neoclassical marginal productivity conditions would hold. To investigate the neoclassical
conditions, consider the marginal product of a worker with grade g. This marginal product is obtained by taking the total differential of output with respect to n and g_e: dQ = Q_1 dn + Q_2 dg_e. In this expression, for one worker, dn is simply 1, while:

dg_e = \frac{\sum g_i + g}{n+1} - \frac{\sum g_i}{n} = \frac{g - g_e}{n+1} \approx \frac{g - g_e}{n}
In the above, Σg_i is the sum of the grades for other workers at the firm. The marginal product of a worker of grade g is therefore a linear function of the worker's grade:

(2.33)  MP(g) = Q_1 + Q_2 (g - g_e)/n

From this expression, the marginal product, MP, of a worker with the average grade g_e is Q_1, so that the value of the marginal product, VMP, is pQ_1. Substituting φ from (2.29) into (2.26), one obtains:
(2.34)  pQ_1 = w + c_f m/z

The term c_f m/z is the firm's search costs per employed worker per period. The result in (2.34) therefore states that the value of the marginal product for a worker of the average grade will equal the wage rate plus the search costs per worker per period. This result modifies the standard neoclassical condition, that the value of the marginal product equal the wage, by adding in the
search costs. Next, consider the value of the marginal product for a worker with the minimum grade requirement g_0. From (2.33), this equals pQ_1 + pQ_2(g_0 - g_e)/n. However, substituting (2.32) into (2.28) yields pQ_2(g_0 - g_e)/n = -c_f m/z. Therefore:

(2.35)  pQ_1 + pQ_2 (g_0 - g_e)/n = w

That is, the value of the marginal product for a worker with the minimum grade requirement is w, the wage offer of the firm.
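The linearity claim behind these conditions can be checked numerically with an assumed Cobb-Douglas-style technology (purely illustrative; the argument only requires that the average grade enter Q): hire one extra worker of grade g, recompute output exactly, and compare with the linear form Q_1 + Q_2(g - g_e)/n.

```python
def Q(n, ge):
    # Hypothetical short-run production function Q(n, g_e).
    return 5.0 * n ** 0.7 * ge ** 0.5

n, ge = 400, 2.0                              # current employment, average grade
Q1 = 0.7 * Q(n, ge) / n                       # dQ/dn for this Cobb-Douglas form
Q2 = 0.5 * Q(n, ge) / ge                      # dQ/dg_e for this Cobb-Douglas form

for g in (1.0, 2.0, 3.5):
    ge_new = (n * ge + g) / (n + 1)           # average grade after the hire
    mp_exact = Q(n + 1, ge_new) - Q(n, ge)    # exact marginal product
    mp_linear = Q1 + Q2 * (g - ge) / n        # linear approximation, as in (2.33)
    assert abs(mp_exact - mp_linear) < 0.01 * mp_exact
```

For large n the two agree to well under one percent, which is the sense in which the marginal product of a particular worker is linear in that worker's grade.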
The hiring practices of the firm can now be reexpressed in terms of the value of the marginal product for workers. Once the firm has interviewed a prospective employee, the interview costs are sunk.
The decision to hire a particular worker is therefore based on a comparison of the worker's value of the marginal product with the firm's wage offer. The grade requirement g_0 is the grade at which
the value of the marginal product just equals the wage. On such workers, the firm receives no extra return in terms of an excess of the value of the marginal product over the wage to cover the sunk
costs of the interview. On the average, however, the value of the marginal product exceeds the wage by an amount sufficient to cover the costs of search. In place of the single marginal product
condition in the standard neoclassical labor market, there are now two conditions that arise when the labor market is characterized by search. The first condition is relevant to the marginal decision
to offer a job to a particular worker, and neglects the fixed search costs; this condition is expressed in (2.35). The second is relevant to the average outcome of the hiring policies and therefore
incorporates the average costs of search; this condition is given by (2.34). Now consider firm behavior in circumstances that are not steady state. At all times during a business cycle, the firm can
choose a grade requirement such that the condition (2.35), which concerns marginal workers, is always satisfied. If the firm maintains a fixed wage rate over the business cycle because of implicit
labor contracts or because of risk shifting, the firm can continue to satisfy this condition. A recession then shows up in the labor market as an upward revision in the grade requirements. Unemployed
workers find their job prospects and expected wage rates worsened substantially, although no firm changes its wage offer and all firms may continue to hire. The marginal product condition for the
average worker, (2.34), will not be satisfied at all times over the business cycle. During a recession, firms will tend to find that past hiring practices leave them with an average grade of labor
below what is necessary to satisfy (2.34) currently. The value of the marginal product for a worker with the average grade of labor will then fall short of the wage offer plus the average search
costs. Similarly, if the economy is in a peak, the average grade of labor will exceed the average consistent with the current wage offer and grade requirement because of past hiring decisions. Then
the average grade of labor will have a value of the marginal product which exceeds the wage offer and search costs. Only on average, over the whole business cycle, will the marginal product condition
(2.34) be satisfied. Other implications of fixed search and training costs for labor markets have been developed by G. Becker (1962) and W. Oi (1962). These authors have been mainly concerned with
quit and layoff behavior and have attempted to show that marginal productivity conditions are not satisfied. Although the firm pays a single wage, there is a grade differential associated with the
policies of the firm. This grade differential measures the value to the firm of an increase in a worker's grade. It equals the output price times the slope of the linear relation between marginal
product and grade and is given by pQ_2/n. From previous results, this grade differential is also given by c_f m/[z(g_e - g_0)], the search costs per worker divided by the difference between the average
grade and the minimum grade requirement. A few comparative statics results may be obtained from the first and second order conditions for profit maximization, at the expense of some rather tedious
and straightforward calculations which are not included here. The calculations rely on the signs of some mixed partial derivatives, for which reasonable arguments can be
presented. Given the signs of these partial derivatives, it can be shown that ∂n/∂c_f < 0, ∂w/∂c_f > 0 and ∂g_0/∂c_f < 0. In response to increased search costs, the firm responds by reducing
the size of the work force; in addition, it attempts to compensate for the higher search costs by increasing the hiring rate z. It does this by raising the wage rate and reducing the grade
requirement. The results also show that an increased supply, arising from an increase in the hiring rate z, has effects on the firm's behavior which are equivalent to a reduction in search costs. In
particular, ∂n/∂z > 0, ∂w/∂z < 0 and ∂g_0/∂z > 0. Again the firm responds as we would expect, increasing the work force and absorbing the greater hiring rate by reducing the wage and raising the
grade requirement.
7. Summary
This chapter establishes several results that are essential to the study of the relation between unemployment and inequality. The model of worker search behavior provides a means of
describing worker choices in labor markets. The analysis of these choices will be used to explain the relation between the distributions of unemployment and earnings, the valuation of unemployment in
terms of earnings, and the contribution of dispersion in reservation wages to inequality. The expression for the present value of being unemployed, iM(w_0), yields a measure of an unemployed worker's
welfare that can be compared with expected wages and used to find the impacts of labor market conditions. The aggregate distributions V(w,g) and H(w,g) are suitable for describing the distribution of
accepted wage rates in Chapter 5. The theory of the firm provides a foundation for analyzing the source of wage offer dispersions. Additionally, the chapter develops several results that are
interesting in themselves. Worker behavior in a Markov process is an extension of the standard model and leads to a neat form for the reservation wage. Trade-offs between unemployment and earnings
may be inferred from search behavior. Section 3 shows how the envelope theorem may be applied to the expressions for the reservation wage to find worker reactions to changes in labor market
conditions. The model of firm behavior shows that firms satisfy two marginal productivity conditions. The first applies to marginal applicants, with grade equal to the grade requirement of the firm.
The firm varies the wage offer and grade requirement to satisfy this condition at all times. The second condition is relevant to the average worker and only holds on the average over the business cycle.

Chapter 3
The Valuation of Unemployment

1. Introduction
This chapter develops methods of measuring the value workers set upon a period of unemployment. In existing studies of the distribution of earnings,
there is already an implicit valuation of unemployment. If a given worker is employed one week less, his or her earnings are reduced by the weekly wage rate. If only earnings are examined in studying
inequality, then a week's unemployment is valued at a week's less earnings. There are a number of reasons why this valuation will be incorrect. First, workers incur search costs when unemployed and
also receive nonemployment benefits. These benefits may be positive, in the case of unemployment compensation, increased transfers or the ability to undertake alternative activities; or they may be
negative, as in the case of a loss of self-esteem, social embarrassment, stigma attached to being out of work, or loss of nonpecuniary benefits associated with work. Second, the wage rate can differ
systematically among workers according to the amount of expected unemployment they face. Those workers with higher nonemployment benefits choose higher reservation wages and consequently have lower
valuations of unemployment. Before obtaining various estimates of the valuation of unemployment, it is useful to discuss the meaning and significance of such a valuation. Let us first set the
discussion in the context of standard microeconomic consumer behavior. Suppose we are interested in the consumption pattern of two goods. Since unemployment and earnings are bread and butter issues,
suppose the two goods are bread and butter. Consumers receive incomes which they spend entirely on the two goods. A consumer's tastes and preferences are represented by indifference curves, each one
of which represents all combinations of bread and butter which yield a given level of well-being or satisfaction for the consumer. In maximizing their satisfaction, consumers choose combinations of
bread and butter such that their indifference curves are tangent to their budget lines, as shown in Figure 3.1. The condition of tangency between the indifference curve and the budget line for a consumer is equivalent to the statement that the consumer's marginal rate of substitution of bread for butter equals the ratio of the price of bread to the price of butter. The marginal rate of
substitution of bread for butter is the absolute value of the slope of the indifference curve at any point and equals the rate at which the consumer is willing to trade off butter for bread. The
ratio of the price of bread to the price of butter is the absolute value of the slope of the budget line. It represents the trade-off the consumer can achieve in the marketplace, i.e., the amount of
extra butter the consumer can get with the money saved by buying one less unit of bread. An important feature of the market system is that the prices faced by various consumers are the same. Each
consumer therefore faces a budget line with the absolute value of the slope given by the ratio of the price of bread to the price of butter. The consumer then adjusts his or her consumption of bread
and butter until the
marginal rate of substitution of bread for butter equals the common ratio of prices. In equilibrium, all consumers have the same marginal rate of substitution and therefore the same trade-offs
between bread and butter, which is the source of the efficiency in exchange of the market system. From the above discussion, knowledge of the slope of the budget lines facing consumers gives us a
substantial amount of information. First, we will know the common marginal rate of substitution for consumers, even though they differ in their incomes and tastes and preferences. Second, we know what
tradeoffs the consumers are able to achieve on the marketplace; along with the consumers' incomes, this essentially describes the choices that are available to the consumers. Finally, we can infer
how much better off (measured in terms of butter) any consumer will be if he or she is given an extra unit of bread. With some qualifications, the labor market choices suggested by job search theory
may be viewed in the same way. However, instead of fixed and certain quantities of bread and butter, the individual faces choices between a distribution of wage outcomes and a distribution of
possible unemployment outcomes. Corresponding to the budget line is the distribution of jobs by wage offer F(w), described in Chapter 2, section 2. Using the expression for w_e in (2.7), one obtains:

\frac{∂w_e}{∂w_0} = \frac{-w_0 f(w_0)}{1 - F(w_0)} + \frac{\int_{w_0}^\infty x f(x) \, dx}{(1 - F(w_0))^2} f(w_0) = \frac{f(w_0)}{1 - F(w_0)} (w_e - w_0)

or:

(3.1)  \frac{∂w_e/∂w_0}{∂λ/∂w_0} = -\frac{w_e - w_0}{λ}

In this expression, ∂λ/∂w_0 = -λ f(w_0)/(1 - F(w_0)). Arthur Goldberger (1980) has also
[Figure 3.1: Consumer Choice. A consumer indifference curve tangent to the budget line; the slope at the tangency is the marginal rate of substitution of bread for butter.]
presented this result, derived from the expression for the mean of a truncated distribution. Goldberger studies the consequences for selection bias adjustment procedures from the assumption that
random terms are normally distributed. The expression in (3.1) describes the trade-offs workers are able to achieve in the labor market by varying the reservation wage. Note that it is a trade-off
between an expected wage and the transition rate, and not between the actual wage and actual unemployment. Unlike the price ratio in the consumer choice problem, this labor market trade-off is not
constant but varies with the reservation wage chosen. Corresponding to the consumer's utility function is the state value function M(w_0). As in the definition of an indifference curve, there exist combinations of expected wage and transition rate that yield the same value of M(w_0). Treating w_e and λ as variables, one obtains through total differentiation:

(3.2)  d(iM) = \frac{λ \, dw_e + (w_e - iM) \, dλ}{i + μ + λ}

Setting d(iM) = 0 yields:

(3.3)  \frac{dw_e}{dλ} = -\frac{w_e - iM}{λ}
This is the trade-off between the expected wage and the transition rate such that M(w_0) stays the same. As in Chapter 2, this trade-off may also be derived from the first order condition for the maximization of M(w_0). It corresponds to the marginal rate of substitution in consumer choice theory. As in consumer theory, the worker equilibrium occurs when the trade-offs the worker is able and willing to achieve are equal. Comparing (3.1) and (3.3), the worker optimum occurs when w_0 = iM. This value of w_0 also maximizes the state value M(w_0). In both the consumer and worker problems, the optimum occurs at a point of tangency between a line describing choices the worker or consumer can achieve in the market (the budget line) and a line describing combinations among which the worker or consumer is indifferent (the indifference curve). For small values of w_0, the flow of benefits iM exceeds the reservation wage. Also, the ratio (w_e - iM)/λ, the rate at which workers are willing to lower their transition rate into employment for a higher wage rate, exceeds the trade-off they are able to achieve in the labor market, given by (w_e - w_0)/λ. As the reservation wage rises, it approaches iM, which continues to rise. When w_0 = iM, the trade-offs are equal and iM is maximized. After that, iM declines as w_0 continues going up. The trade-off attainable in the labor market, (w_e - w_0)/λ, exceeds the rate at which workers are willing to trade transition rate for wage rate, (w_e - iM)/λ.
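A numerical sketch of this tangency result, under assumed functional forms: exponential wage offers F(w) = 1 - e^{-aw}, so that λ(w_0) = θe^{-aw_0} and w_e(w_0) = w_0 + 1/a, together with a flow-value formula iM = ((b - c)(i + μ) + λw_e)/(i + μ + λ). All parameter values are hypothetical. Maximizing iM over a grid of reservation wages recovers w_0 = iM at the optimum:

```python
import numpy as np

a, theta, b, c, i, mu = 0.25, 2.0, 1.0, 0.5, 0.05, 0.2   # hypothetical parameters

def iM(w0):
    # Flow value of being unemployed at reservation wage w0.
    lam = theta * np.exp(-a * w0)     # transition rate: rate of offers above w0
    we = w0 + 1.0 / a                 # expected accepted wage (exponential offers)
    return ((b - c) * (i + mu) + lam * we) / (i + mu + lam)

w0_grid = np.linspace(0.0, 20.0, 200001)
w0_star = w0_grid[np.argmax(iM(w0_grid))]

# At the maximizing reservation wage, w0 equals the flow value iM(w0).
assert abs(w0_star - iM(w0_star)) < 1e-3
```

The optimum is interior here, and the fixed-point property w_0 = iM holds at the maximizer, which is exactly the tangency between the attainable and the indifferent trade-offs described above.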
2. Alternative Valuations
Three types of valuations of unemployment may now be considered. The first is the simple trade-off between the expected wage rate and the expected level of either unemployment or the transition rate λ. The second valuation is the implied trade-off between expected earnings and expected unemployment. This second valuation, called the unemployment premium, takes
account of both the change in wage rate and time spent unemployed on the earnings of an individual. It measures the amount of extra earnings a worker must receive in order to be willing to accept an
increase in the time spent unemployed. The third valuation of unemployment is the cost of unemployment, the total loss to a worker from a week's unemployment. This loss includes the
foregone wages as well as the unemployment premium that the worker would have had to receive in order to be willing to accept the week of unemployment. The first valuation is the trade-off that
workers are willing and able to make between the transition rate and the expected wage rate. It is expressed in (2.11), (3.1) or (3.3). This valuation may be reexpressed in terms of the unemployment
rate, u = μ/(λ+μ). Then λ = μ(1-u)/u and ∂u/∂w_0 = -(u^2/μ) ∂λ/∂w_0. Substituting into (3.1) yields:

(3.4)  \frac{∂w_e/∂w_0}{∂u/∂w_0} = \frac{w_e - w_0}{u(1-u)}
The expressions in (3.1) and (3.4) reflect the trade-offs workers are able and willing to achieve in the market place, but they are not very revealing in describing the economic impacts of those
trade-offs. Changes in unemployment and wage rates must be combined with current levels of expected unemployment and wages in order to infer the gain or loss from a given change. In describing
workers' valuations of unemployment, it is more revealing to examine the trade-off between expected earnings and expected unemployment, where earnings are the product of the proportion of the time
employed times the wage rate. This is given by:

(3.5)  \frac{∂[(1-u)w_e]/∂w_0}{∂u/∂w_0} = \frac{(1-u)w_e - w_0}{u}
This valuation may be called the unemployment premium, since it measures the extra earnings a worker must receive to induce him or her to accept greater unemployment. In (3.5), the term (1-u)w_e is
the expected proportion of the time employed times the expected wage rate. Ordinarily, this would not equal the expected labor market earnings per period, since in general the expected value of a
product is not equal to the product of the expected values. However, because the proportion of the time employed and the wage rate are independently distributed random variables for an individual,
the term (1-u)w_e will be the expected labor market earnings per period for a worker with constant transition rates. Using (3.5), it is possible to find the cost to the worker of a week's
unemployment, the third valuation of unemployment. From the expression (3.5), the worker is willing to accept an increase in unemployment of Δu if his or her earnings go up by Δu((1-u)w_e - w_0)/u. If there is no such adjustment in earnings, the loss to the worker from the week's unemployment is the lost earnings plus the missing adjustment, or (if wages are given per week):

(3.6)  w_e + \frac{(1-u)w_e - w_0}{u} = \frac{w_e - w_0}{u}
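The three valuations can be computed side by side for hypothetical numbers (an unemployment rate and weekly wages chosen only for illustration):

```python
u, we, w0 = 0.10, 300.0, 250.0      # hypothetical: unemployment rate, expected
                                    # and reservation weekly wage

wage_tradeoff = (we - w0) / (u * (1 - u))   # (3.4): wage-unemployment trade-off
premium = ((1 - u) * we - w0) / u           # (3.5): unemployment premium
cost = we + premium                         # (3.6): cost of a week's unemployment

# (3.6) collapses to (we - w0)/u: here the premium is $200 and the full cost
# of a week's unemployment is $500, well above the expected wage itself.
assert abs(cost - (we - w0) / u) < 1e-9
```

The example illustrates the point made just below: the cost of a week's unemployment can exceed the expected wage whenever (1-u)w_e exceeds w_0.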
This cost of a week's unemployment may be greater than or less than the expected wage w_e, depending on whether (1-u)w_e is greater or less than w_0, i.e., whether the adjustment in earnings to
compensate a worker exactly for a change in unemployment is positive or negative. The valuations of unemployment in (3.5) and (3.6) arise when the worker makes the optimal choice of reservation wage.
As in the consumer choice problem, the valuations therefore reflect simultaneously the trade-offs attainable in labor markets and the subjective trade-offs of the worker. However, as observed before,
a major difference with the consumer choice problem is that the curve corresponding to the budget line will not be straight. To see this, consider how the unemployment premium in (3.5) changes as b changes:

(3.7)  \frac{∂}{∂b}\left(\frac{(1-u)w_e - w_0}{u}\right) = \left(\frac{1-u}{u}\frac{∂w_e}{∂w_0} - \frac{w_e - w_0}{u^2}\frac{∂u}{∂w_0} - \frac{1}{u}\right)\frac{∂w_0}{∂b} = -\frac{1}{u}\frac{∂w_0}{∂b}
The magnitude of the derivative may be seen more intuitively by considering an alternative expression for the valuation:

(3.8)  \frac{(1-u)w_e - w_0}{u} = c - b + \frac{1-u}{u} \cdot \frac{i}{λ + μ + i} \cdot (w_e - b + c)
The right-most term in (3.8) is a correction for the discount rate, which arises because any benefits of the expected wage are deferred until employment starts. The size of this correction may be
gauged using a numerical example. Suppose that /t = 0.2, A = 1.2, i = 0.10 and u = 0.10. Then «(1- U)/U)(i/(A+/t + i» = 0.64. An increase in b of one dollar lowers the unemployment premium by about
$0.36 = $1 - $0.64. If there were no discounting, the unemployment premium would be given simply by c - b, and this valuation would decline by one dollar for every dollar increase in b. Also, this
valuation would be invariant to changes in labor market conditions; the distribution of job vacancies by wage offers and grade requirements would adjust to this exogenously determined distribution of
unemployment premiums. With discounting, however, the unemployment premium can be altered by labor market conditions. An increase in the unemployment rate will tend to reduce the unemployment
premium, apparently by reducing the present discounted value of deferred employment. The fact that individuals with the same grade will have, after equilibrium is reached, different valuations of
unemployment is the source of the important difference from the consumer choice problem that has been discussed in the previous section. Whereas different consumers face budget lines with identical
slopes no matter which combinations of goods they choose, workers will face a curved "budget line" or frontier of their choice set between the good of earnings and the bad of unemployment. This
choice set frontier will be an envelope of the individual worker indifference curves between earnings and unemployment. At each point of tangency, the slope of the choice set frontier will be the
same as the slope of the individual worker indifference curve. Therefore, in equilibrium, the choice set frontier will have a declining slope; those workers with higher values of b will choose higher
values of expected unemployment and have lower valuations of unemployment. Unlike the consumer choice problem, valuations will vary among workers. This situation is illustrated in Figure 3.2. The
upper curve in that figure is the worker indifference curve. It is slightly concave, reflecting the result that if the expected unemployment is higher, the valuation of unemployment, given by the
slope of the indifference curve, declines. The worker faces a fixed concave choice set frontier and chooses the point on that frontier which yields the highest indifference curve. This occurs when
the worker's trade-off between unemployment and earnings, given in (3.5), equals the trade-off he or she can achieve in the labor market. In Figure 3.2, this is at point A. For workers with the same
grade but higher values of b, the indifference curve will be rotated to the right, and the point of tangency will occur at a higher level of unemployment. A worker whose unemployment premium is zero
would choose point B, where the slope of the choice set frontier is zero. In the absence
of discounting, such a choice would occur when the nonemployment benefit b just equals the search costs c. The point B is also the point where the expected earnings are maximized. A worker with a
negative unemployment premium (that is, a worker who is willing to suffer some decline in earnings in order to work less or have some more time unemployed) will choose a point beyond B. If all the
individual worker indifference curves were drawn in, the choice set frontier would appear as an envelope of those indifference curves. A change in the shape of the frontier would induce workers to
change the expected levels of unemployment they choose. For example, if the frontier were flattened out slightly to the left of B, workers with previous choices to the left of B would choose points
of tangency still further to the left, thereby accepting lower levels of expected unemployment. A change in the shape of the choice set frontier therefore alters the availability of workers to firms,
in general terms. Now let us consider the meaning of a particular valuation of unemployment in terms of earnings. First, suppose the worker's unemployment premium is zero, so that he or she chooses
point B on Figure 3.2. This does not mean that unemployment is costless to the worker. Rather, it means that the worker is indifferent as to how many weeks are worked, as long as total earnings are
unaffected. If unemployment goes up by Δu, the worker would need an increase in the wage rate of wΔu/(1 - u) to keep earnings the same. In the absence of this increase in the wage rate, the cost to the worker of greater unemployment is (1 - u) · wΔu/(1 - u) = wΔu. That is, the cost of a week's unemployment to such an individual is simply the earnings foregone. This is the
implicit valuation of unemployment if only earnings are considered in measuring inequality in the presence of unemployment. Consider next a worker who chooses a point on the choice set frontier to
the left of B in Figure 3.2. Such an individual will have a positive unemployment premium.
[Figure 3.2: Worker Choice. Expected unemployment u is on the horizontal axis and expected earnings (1 - u)w_e on the vertical axis; the worker indifference curve is tangent to the concave choice set frontier at the worker's chosen point, and the slope at the tangency gives the worker's valuation of unemployment.]
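The concave shape of the choice set frontier can be illustrated with a small simulation. The parameter values and the uniform wage-offer distribution below are assumptions for illustration only; the sketch sweeps the reservation wage and traces out the resulting expected unemployment and expected earnings.

```python
# A small simulation of the choice set frontier in Figure 3.2 under
# hypothetical parameters: job offers arrive at rate lam per period, jobs
# end at rate mu, and wage offers are uniform on [50, 150]. None of these
# numbers come from the text; they only illustrate the frontier's shape.
lam, mu = 1.2, 0.2
lo, hi = 50.0, 150.0

def outcome(w0):
    """Steady-state (expected unemployment, expected earnings) at reservation wage w0."""
    p = (hi - w0) / (hi - lo)      # probability an offer is acceptable
    w_e = (w0 + hi) / 2            # mean accepted wage for a uniform offer density
    u = mu / (mu + lam * p)        # steady-state unemployment rate
    return u, (1 - u) * w_e

earnings = [outcome(lo + 0.5 * k)[1] for k in range(199)]   # w0 from 50 to 149

# Raising the reservation wage first raises and then lowers expected
# earnings, so the frontier has an interior peak (the point B of Figure 3.2).
assert earnings[0] < max(earnings) > earnings[-1]
```

To the left of the peak a higher reservation wage buys higher earnings at the cost of more unemployment, which is the downward-sloping region where workers with positive unemployment premiums locate.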
To be compensated for a higher level of unemployment, the worker must receive an increase in total earnings. The wage rate must then increase by a greater proportion than the decrease in employment.
In the absence of this compensation, the cost to the worker is greater than the wages foregone. Finally, suppose a worker chooses a point of tangency to the right of B. Such a worker will still
suffer a loss from any resulting unemployment, but he or she is willing to suffer a reduction in total earnings in order to reduce employment. The cost of a week's unemployment to such an individual
is still positive but less than the foregone weekly earnings. This situation arises when the nonemployment benefits are positive and exceed the search costs by a sufficient amount to cover the
present discounted value of deferred employment. The practical consequence of differing valuations for workers of the same grade is that a single slope is insufficient to characterize the cost of
unemployment for a particular group. However, we know roughly the relation between the nonemployment benefit b and the valuation; therefore if we can identify the valuation for a well-defined
nonemployment benefit, we can find the valuation for other values of b. For example, we may determine the valuation of unemployment for the marginal labor market participant, or the median labor
market participant; the valuations for workers with different values of b could then be inferred. A significant departure from this chapter's description of worker choice occurs if a minimum wage is
binding. Then wage offers will be concentrated at the minimum wage and the wage offer density will not be continuous. A worker would not set the reservation wage at less than the minimum wage.
Raising the reservation wage above the minimum wage causes a discontinuous drop in the likelihood of finding an acceptable job offer. Many workers with low wage expectations will therefore set their
reservation wages at the minimum wage. Such workers will have only a small difference between the expected and reservation wages, w_e - w_0, and will appear to have low valuations of unemployment.
These low valuations will underrepresent the costs of unemployment to the workers. Essentially, their reservation wage is a boundary solution to the worker choice problem. The minimum wage forces
them to accept a trade-off between earnings and unemployment which is substantially below the rate at which they are willing to exchange the two. This qualification should be kept in mind when
examining the empirical results that follow.
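The boundary problem can be made concrete with the premium formula from (3.5). The wage and unemployment values below are invented, and u and w_e are held fixed for simplicity.

```python
# A numerical illustration of the boundary problem just described, using
# the unemployment premium ((1 - u)w_e - w_0)/u with hypothetical values.
u, w_e = 0.15, 2.10          # unemployment rate and expected hourly wage
w0_free = 1.40               # reservation wage the worker would choose freely
w_min = 2.00                 # binding minimum wage

def premium(w0):
    """Unemployment premium ((1 - u)w_e - w_0)/u at reservation wage w0."""
    return ((1 - u) * w_e - w0) / u

# Forcing the reported reservation wage up to the minimum wage shrinks the
# measured premium (here it even turns negative), understating the
# worker's true valuation of unemployment.
print(round(premium(w0_free), 2), round(premium(w_min), 2))
```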
3. Previous Estimates

Before considering calculations of the costs of unemployment using procedures developed here, let us first look at previous work on the costs of unemployment by Robert J. Gordon
(1973), Edward Gramlich (1974) and John Abowd and Orley Ashenfelter (1979, 1981). (See also M. Hurd, 1980.) Gordon's estimate of the average cost of unemployment for a worker is based on the
expression Wu = (1 - h)y - b, where Wu is the price of unemployed time, h is the marginal rate of taxation and costs of employment (for example, transportation and uniforms), y is the lowest
acceptable wage (corresponding to Wo in the notation used here), and b is the value of unemployment compensation benefits (the term b in Chapter 2 includes subjective nonemployment costs and
benefits). Using estimates by others of the various parameters, Gordon calculates the price of unemployed time, Wu , to be 34.2 per cent of the previous after tax wage, or $20.93 in 1971. The lowest
acceptable pay, y, is an average obtained from previous estimates of the ratio of the acceptance wage to the mean wage for workers (Gordon, 1973, Table 1, p.148). The
ratio of unemployment compensation benefits to earnings is calculated to be 0.383 for 1971. Gordon's result may be interpreted as saying that the average worker should continue looking for a job
whenever the present discounted value of the expected gain from searching an extra week exceeds $20.93. For several reasons, Gordon believes this figure overstates the value. In deciding whether to
continue searching, the worker would consider the direct costs of search, corresponding to c in Chapter 2. Gordon cites work by Stanley Stephenson, Jr., (1973, p.181,187) which indicates that the
direct search costs were $85.14 per week in 1971. Adding this estimate to the price of unemployed time, one obtains an estimate of the cost of unemployed time of 156 per cent of the wage. In terms of
Figure 3.1, the average worker is located to the left of point E, on the upward sloping part of the choice set frontier. Gordon's estimates therefore indicate that the costs of unemployment exceed
the foregone wages, even without consideration of the subjective costs of unemployment. Gramlich's paper examines the effects of a given percentage change in the aggregate unemployment rate on the
unemployment rate and family income of various demographic groups. He considers mainly the direct loss of earned income but also mentions the reduction in the quality of jobs of those who regain
work. Gramlich uses longitudinal data from the Michigan Panel Study of Income Dynamics to avoid problems in measuring economic status through transitory income. The responses of a family head's
unemployment to changes in the aggregate unemployment rate are presented in his Table 3 (1974, p.312). Black unemployment rates are generally more responsive to the aggregate unemployment rate than
whites, and male rates more than female rates. Responsiveness declines with economic well-being, so unemployment rates for higher income families vary less. Other effects of the aggregate unemployment
rate on family income include hours worked and employment of other family members. Gramlich finds that for families with male heads, secondary wage earners recover less than three per cent of the
drop in the head's earned income. The corresponding figure for families with female heads is eleven per cent. The total effects of aggregate unemployment on family personal income (excluding
transfers) are shown in his Figure 2 (1974, p.320). The percentage losses for families headed by black males exceed those for white males and decline with the ratio of family income to needs
calculated over a six year period. For families with female heads, the losses are less than one per cent. These losses rise and then fall gradually with the ratio of income to needs. Because most
income losses from higher aggregate unemployment take the form of reduced hours and wage rates, unemployment insurance only covers a small proportion of losses. Gramlich finds that for males,
unemployment insurance recovers six to eight per cent of losses, while for females the range is fourteen to eighteen per cent. For families at the poverty line, the recovery from all transfer
programs amounts to 31 per cent and 55.5 per cent for male and female family heads, respectively. The overall conclusions, summarized in Gramlich's Figure 3, show that the incidence of unemployment
is regressive. Considering the effects of a one per cent increase in aggregate unemployment on families headed by males, the average loss in income is three per cent with income at the poverty line
and only one per cent with income at five times the poverty line. Gramlich's work shows how the job opportunities or choice sets of various demographic groups shift during changes in the aggregate
unemployment rate. His estimates also provide a good idea of the direct losses in income from unemployment for the demographic groups. In contrast, the procedures in this chapter use the labor
supply behavior of workers to observe indirectly what their subjective as well as direct losses from unemployment are. Abowd and Ashenfelter take a very different approach (1979; see also a later
version in S. Rosen, 1981). They estimate the compensating wage differentials that arise when workers face constraints on the amount of labor they can provide. With these compensating wage
differentials, workers are presumably indifferent as to whether they take a job with quantity of labor constrained or unconstrained. These results can therefore be used to obtain a measure of the
valuation of unemployment. In the multimarket equilibrium theory of compensating wage differentials (Sattinger, 1977), workers with different productive capacities are assigned to jobs with different
levels of satisfaction. The distributions of workers and jobs then determine the compensating wage differentials that arise. A similar mechanism may operate when workers differ according to their
nonmarket activities or nonemployment benefits. Those workers with higher nonemployment benefits will tend to have higher reservation wages. A higher proportion of these workers will end up in the
jobs with higher wage offers, which are presumably the jobs with greater likelihood of unemployment. Similarly, those workers with lower nonemployment benefits will have lower reservation wages; a
higher proportion of these workers will end up in the unconstrained sector with lower wages and unemployment. In this way, workers will tend to be assigned to the unconstrained or constrained sectors
depending on their nonemployment benefits. It can be shown that, taking the demand for workers as given, a more spread-out distribution of nonemployment benefits reduces the compensating wage
differentials that arise. (See Solomon Polachek, 1981, for a related analysis of the assignment of workers to jobs based on atrophy of job skills.) However, Abowd and Ashenfelter reduce substantially
the complexity of the estimation problem by assuming that all workers are identical and may take jobs in either the constrained or the unconstrained sectors. In the unconstrained sector, individuals
choose the combinations of work, leisure and commodities which maximize their welfare. They are therefore in what Hicks describes as the zone of indifference, so that small departures from the
optimum have negligible effects on utility. The loss from unemployment, and the compensating wage differential, therefore increases with the square of the departure from the optimum. The authors
argue that their view of unemployment as a constraint yields different normative implications than the job search approach, in which unemployment is to some extent the result of the worker's search
behavior. While there are certainly differences in the approaches, it is not true that unemployment is costless to the worker in the job search model. The costs of unemployment implied by job search
behavior are certainly positive, as indicated by the theoretical and empirical results of this chapter. However, the job search approach yields a cost of unemployment which is more or less the same
for the tenth week as it is for the first, rather than a cost which begins at zero and increases with the square of the unemployment as in the Abowd and Ashenfelter approach. Using household data
from the Survey Research Center's Panel Study of Income Dynamics, the authors obtain an estimate of the total compensating differential of 13 per cent between the constrained and the unconstrained
sector, with a 90 per cent confidence interval of 5 to 21 per cent. That is, wage rates are estimated to be 13 per cent higher in jobs where hours and weeks worked were constrained. This includes a
compensating wage differential for the expected amount of unemployment as well as a risk premium. The authors also report that the expected underemployment or overemployment as a percentage of
effective hours over the data period 1970 to 1975
averaged 13.5 per cent (with no unemployment insurance). Putting these two results together, a 13 per cent increase in the wage rate compensates for a 13.5 per cent reduction in employment below the
desired level, so that expected earnings are substantially the same in both the constrained and unconstrained job sectors. In the context of the job search model represented in Figure 3.2, the point
estimate of compensating wage differentials is such that workers' preferences put them at point B on the choice set frontier, where the frontier is horizontal. Of course, compensating wage
differentials within the 90 per cent confidence interval would lead to points on either side of B. With unemployment compensation equal to 91 per cent of the wage rate, the expected underemployment
or overemployment falls to nine per cent. Using the point estimate of the compensating wage differential of 13 per cent, this would imply that expected earnings are about four per cent greater in the
constrained sector. This is consistent with workers being located on the upward-sloping part of the choice set frontier in Figure 3.2, to the left of B. Needless to say, there are several details
which interfere with the placement of the Abowd and Ashenfelter results in the context of the job search model that has been developed. The authors consider the employment constraint as applying to
both hours and weeks worked. No search costs arise in the constrained sector from layoffs. Further, workers are identical, so that there is only one worker indifference curve and it is identical to
the choice set frontier. Finally, the interpretation of their variable CERTEQ as equivalent to an unemployment rate is an oversimplification (in particular, it ignores overemployment). Nevertheless,
it is apparent that their approach can be used to obtain a valuation of unemployment.
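As a rough check on the comparisons above, note that to first order the percentage change in expected earnings in the constrained sector is the compensating wage differential minus the shortfall in employment below the desired level.

```python
# First-order check of the earnings comparisons above.
wage_differential = 13.0            # per cent, the point estimate reported above
for shortfall in (13.5, 9.0):       # per cent, without and with unemployment insurance
    print(round(wage_differential - shortfall, 1))   # prints -0.5, then 4.0
```

This reproduces the two results quoted above: expected earnings substantially the same with no unemployment insurance, and about four per cent greater in the constrained sector once unemployment compensation reduces the shortfall to nine per cent.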
4. Direct Estimates

Now let us turn to various ways of estimating the valuation of unemployment using the results of job search theory that have been developed. Because of deficiencies in the data,
all of these approaches are to some extent experimental, but they do yield meaningful and comparable results. The first result is direct. Workers achieve a trade-off between wages and employment
through changes in the reservation wage. An increase in the reservation wage raises the expected wage rate and raises the expected unemployment. If we could estimate these effects of the reservation wage, their ratio would indicate the trade-off between the wage rate and the unemployment rate. The data that are used for these estimates are taken from Employment Profiles of Selected Low-Income
Areas (1972), also referred to as the Census Employment Survey, conducted by the U.S. Bureau of the Census in 1970. The data arise from interviews of households in 51 cities and include information
on work history, demographic characteristics and reservation wages. The reservation wage, obtained only from part-year workers who looked for work in the past year, is the response to a question
asking for the lowest acceptable pay the last time the individual looked for work. Originally, the Bureau of the Census made tapes with the household data available to researchers. When I inquired in
1980, the tapes were no longer available (rumor had it the sprinkler system had gone off). However, John T. Warner, Robert M. Fearn and Carl Poindexter, Jr., who had used some of the data in a study
of reservation wages (1980), kindly made available to me the data in their possession. This consisted of the household data for six city areas, about 36,000 observations. After extensive screening
for consistency and presence of data, 1700 observations remained.
These are the basis for the direct estimates of this section. In addition, the Census published about 60 volumes of tables of data for each area, some of which are the basis for further work in this
chapter. These data suffer from a major shortcoming in that the reservation wage is not determined before search begins but after. The reservation wage reported by the worker may then be influenced by the
outcome of the search. The resulting wage rate would then appear to depend more strongly on the reservation wage, and the amount of unemployment would appear to depend negatively on the reservation
wage instead of positively, as predicted by the theory. This difficulty may be overcome by eliminating that part of the reservation wage which is correlated with how much better or worse the worker
did in the labor market compared with what one would expect for his or her demographic characteristics. This is accomplished as follows. First, the yearly earnings for the worker are regressed
against the demographic variables, which include age, education and squares of these terms, a dummy variable for training, and dummy variables for the cities from which the data are taken. These
regressions are run separately for four sex and race (white and nonwhite) groups. These regressions divide the earnings into two parts, predicted earnings and residual earnings. Second, for each of
the four sex and race groups, the reservation wage is regressed against the residual earnings from the first step. This divides the reservation wage into two parts, the predicted reservation wage
(that part of the reservation wage which is correlated with residual earnings) and the residual reservation wage. It is this residual reservation wage, uncorrelated with how much better or worse the
worker did than expected, which is used in the following estimates, presented in Tables 3.1 to 3.4.

Table 3.1 Effect of Reservation Wage on Employment; Household Data

                                          Male                                Female
                                 White            Nonwhite           White            Nonwhite
w_0, Adjusted reservation wage  -2.37            -5.46* (1.54)      -1.36 (1.74)        n.a.
G, Years of school completed    -1.04 (0.898)    -2.81 (1.52)        n.a.               n.a.
G²                               0.130* (0.038)   0.094* (0.044)     0.193 (0.067)      0.243* (0.097)
A, Age                           0.309 (0.279)    1.02* (0.261)     -0.645 (0.427)      0.588 (0.429)
A²                              -0.0041 (0.0037) -0.0120* (0.0034)   0.0055 (0.0054)   -0.0053 (0.0060)
T, Training dummy variable       2.24 (1.58)      1.76 (1.48)        2.73 (2.32)        0.857 (1.98)
R² statistic                     n.a.             n.a.               n.a.               n.a.
Number of observations           n.a.             n.a.               n.a.               n.a.

Dependent variable: weeks worked in previous year. Standard errors are given in parentheses under estimated coefficients. Asterisk denotes significance at 0.05 level. Dummy variables for cities and intercept were included in the regression but are not reported here. Entries marked n.a. could not be recovered; the white male reservation-wage coefficient is the value cited in the text, and the column placement of the other entries in the first two rows is uncertain. Data sources: see Appendix.
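The two-step adjustment described above can be sketched as follows. The data here are synthetic and the coefficient values are invented for illustration; only the procedure matches the text.

```python
import numpy as np

# A sketch of the two-step adjustment of the reservation wage, on synthetic data.
rng = np.random.default_rng(0)
n = 500
age = rng.uniform(18, 65, n)
educ = rng.uniform(6, 16, n)
X = np.column_stack([np.ones(n), age, age**2, educ, educ**2])  # demographics

yearly_earnings = X @ np.array([500.0, 80.0, -0.5, 120.0, 2.0]) + rng.normal(0, 800, n)
reservation_wage = 0.5 + 0.0002 * yearly_earnings + rng.normal(0, 0.4, n)

# Step 1: regress earnings on the demographic variables; keep the residual.
beta = np.linalg.lstsq(X, yearly_earnings, rcond=None)[0]
residual_earnings = yearly_earnings - X @ beta

# Step 2: regress the reservation wage on residual earnings; the residual of
# this regression is the adjusted reservation wage used in Tables 3.1 to 3.4.
Z = np.column_stack([np.ones(n), residual_earnings])
gamma = np.linalg.lstsq(Z, reservation_wage, rcond=None)[0]
adjusted_w0 = reservation_wage - Z @ gamma

# By construction, the adjusted reservation wage is uncorrelated with how
# much better or worse the worker did than expected.
assert abs(np.corrcoef(adjusted_w0, residual_earnings)[0, 1]) < 1e-6
```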
In Table 3.1, the dependent variable is number of weeks worked in the past year. The actual response of workers falls into seven brackets: no weeks worked, 1 to 13, 14 to 26, 27 to 39, 40 to 47, 48
to 49 or 50 to 52 weeks worked. For each bracket, the midpoint is taken as the number of weeks worked for a worker. This of course introduces some error in the measurement of the dependent variable.
It should also be emphasized that what we are looking for is the relation between the level of the reservation wage and the expected or average time employed, but the dependent variable that is used
is the actual number of weeks worked. This means that the dependent variable contains a large component of random variation already. In Table 3.1, this shows up as fairly low R2 statistics.
Nevertheless, the coefficient for the adjusted reservation wage (the residual reservation wage obtained from the preliminary estimation procedures) is negative in all cases and significant for three
of the groups. For example, the results for white males indicate that an increase in the reservation wage of one dollar per hour reduces the number of weeks worked in the past year by 2.37 on
average. Age, years of school completed and training also influence the number of weeks worked, as indicated. Additionally, dummy variables for cities were included but are not reported. Table 3.2
indicates the effects of the reservation wage on the hourly wage. This hourly wage, the dependent variable, is obtained by dividing weekly earnings by hours worked in the week. The results indicate
that the reservation wage positively and significantly affects the expected wage rate in all cases. For example, for white males, an increase in the reservation wage of one dollar raises the expected
wage by about 83 cents. Table 3.3 collects the coefficients obtained from regressing weeks worked and wage rate on the reservation wage and other variables when the data are sorted by age and
education (but not race). In all cases, the coefficients have the predicted signs. These results provide rather substantial evidence that the reservation wage operates as predicted by the theory. By raising the reservation wage, the worker brings about a trade-off between expected weeks worked and the expected wage.

Table 3.2 Effect of Reservation Wage on Hourly Wage; Household Data

                                          Male                                Female
                                 White            Nonwhite           White            Nonwhite
w_0, Adjusted reservation wage   0.831* (0.073)   0.630* (0.079)     0.638* (0.151)    0.798* (0.133)
G, Years of school completed    -0.102 (0.129)    0.0063 (0.112)    -0.197 (0.139)     0.138 (0.144)
G²                               0.0046 (0.0054) -0.0002 (0.0051)    0.010 (0.0063)   -0.0047 (0.0062)
A, Age                           0.091* (0.042)  -0.014 (0.032)      0.011 (0.045)    -0.037 (0.034)
A²                              -0.0014* (0.0006) 0.0002 (0.0004)   -0.0002 (0.0006)   0.0005 (0.0005)
T, Training dummy variable      -0.265 (0.204)    0.210 (0.158)      0.028 (0.201)     0.135 (0.142)
R² statistic                     n.a.             n.a.               n.a.              0.411
Number of observations           n.a.             n.a.               n.a.              284

Dependent variable: hourly wage calculated as the ratio of weekly earnings to hours worked in the past week. Standard errors are given in parentheses under estimated coefficients. Asterisk denotes significance at 0.05 level. Dummy variables for cities and intercept were included in regression but are not reported here. Entries marked n.a. could not be recovered; the column placement of the surviving R² statistic and sample size is uncertain. Data sources: see Appendix.

Ideally, it should be possible to use
these coefficients to obtain a direct estimate of the trade-off between earnings and unemployment. For example, the result for white males indicates that a week's less unemployment is worth a
reduction in the wage rate of $0.83/2.37 = $0.35. This may not sound like much, but if the average amount of employment is 35 hours per week, 45 weeks a year, the cost of reducing the expected amount
of unemployment by a week a year amounts to $551, an amount much greater than the weekly wage rate. This result is consistent with the workers being located at point A in Figure 3.2, with
nonemployment benefits net of search costs substantially negative. Unfortunately, there are several difficulties which prevent the use of the results from Tables 3.1 to 3.3 in the calculation of
unemployment valuations. We only observe the effect of the reservation wage on the work experience in the past year; the effect on the expected number of weeks worked per year in the future will be
greater. That is, the estimates of reductions in weeks worked in Tables 3.1 and 3.3 are biased downwards, so that any implied valuations will be biased upwards. Another potential problem is
self-selection bias, analyzed by James Heckman (1979). The observations that are included in the data are subject to a number of selection criteria, the most important of which is that workers had
been unemployed sometime in the past year (so that their reservation wages are reported). The fact that unemployment occurred in the previous year may then be correlated with the error term,
producing a bias in the estimated coefficients. While there are procedures to correct for this bias, they have not been applied here, because the remaining difficulties would have made the
unemployment valuations unreliable anyway. The remaining results consist of estimates of the reservation wages themselves, presented in Table 3.4. The marital status dummy in the regressions is one
if the individual is married with the spouse present and is zero otherwise. A positive value indicates that being married with spouse present raises the nonemployment benefits.

Table 3.3 Coefficients for Weeks Worked and Wage Rate; Household Data

                                          Male                                Female
                              Weeks Worked     Wage Rate          Weeks Worked     Wage Rate
Age less than 26             -2.31* (0.766)    0.661* (0.0856)   -2.13 (2.45)      1.05* (0.164)
Age 26 to 45                 -2.22* (0.588)    0.809* (0.0786)   -2.42 (1.75)      0.634* (0.144)
Age 46 to 65                 -1.93 (1.02)      0.865* (0.138)    -3.97 (2.12)      0.534* (0.238)
No high school degree        -2.82* (0.702)    0.589* (0.095)    -3.74 (2.11)      0.822* (0.195)
High school degree or more   -1.98* (0.525)    0.813* (0.065)    -3.01* (1.50)     0.674* (0.110)

Entries are the coefficients estimated from regressions of weeks worked in the previous year and hourly wage rate on the adjusted reservation wage and other variables for the groups described on the left. Standard errors are given in parentheses under the estimated coefficients. Asterisk denotes significance at 0.05 level. Data sources: see Appendix.
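The ratio calculation the text carries out for white males can be repeated for the other groups using the Table 3.3 coefficients; the 35-hours, 45-weeks figures are those used in the text's example.

```python
# Implied trade-offs from the direct estimates: dividing a wage-rate
# coefficient by the absolute weeks-worked coefficient gives the hourly
# wage cut worth one week less unemployment per year.
coefficients = {
    "white males (Tables 3.1-3.2)": (2.37, 0.831),
    "males under 26 (Table 3.3)":   (2.31, 0.661),
    "males 26 to 45 (Table 3.3)":   (2.22, 0.809),
    "males 46 to 65 (Table 3.3)":   (1.93, 0.865),
}
for group, (weeks_drop, wage_gain) in coefficients.items():
    per_hour = wage_gain / weeks_drop
    # at 35 hours per week and 45 weeks per year; the text's $551 figure
    # for white males uses the rounded $0.35 per hour
    annual = per_hour * 35 * 45
    print(group, round(per_hour, 2), round(annual))
```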
The negative coefficient for white females, although not significant, suggests that husbands reduce the value of not having a job. From the theory, the presence of welfare payments and other
transfers ought to raise the reservation wage, since they presumably increase the nonemployment benefits. However, in two cases the coefficient on welfare payments is negative. These results
for transfer payments may suffer from a simultaneous equations bias, since higher unemployment may raise the worker's transfer payments. Perhaps the most important conclusion arising from these
results is that the reservation wages reported by individuals are not well-explained by the variables we might have expected to be important determinants. The R2 statistic is low for males and
exceeds 0.5 only for white females. Apparently, there is a wide variation in the reservation wages of individuals, generated by unequal nonemployment benefits. In turn, these variations are generated
by heterogeneous opportunities or tastes and preferences for nonemployment activities. A potentially major source of variation in labor market outcomes is therefore inequality in nonemployment benefits.

5. Aggregate Approach

The next method of obtaining measures of the valuation of unemployment is to use the expressions given in (3.4) and (3.5). These are obtained from the first-order conditions for
the maximization of a worker's well-being.

Table 3.4 Determinants of Reservation Wages; Household Data

                                          Male                                Female
                                 White            Nonwhite           White            Nonwhite
G, Years of school completed    -0.115 (0.0773)   n.a.              -0.284* (0.0678)  -0.334* (0.0634)
G²                               0.0087* (0.0025) n.a.               0.0179* (0.0027)  n.a.
A, Age                           0.104* (0.024)   n.a.               0.0581* (0.0203)  0.0637* (0.0125)
A²                              -0.0009* (0.0002) n.a.               n.a.              n.a.
T, Training dummy variable       0.277* (0.085)   n.a.              -0.0303 (0.110)    0.0893 (0.586)
M, Marital status dummy          0.132 (0.104)    n.a.              -0.177 (0.095)     0.0611 (0.0544)
Welfare payments                 1.03 (1.61)      n.a.              -1.37 (1.80)       0.001 (0.038)
Other transfers                  1.10 (0.860)     n.a.               0.0593 (0.044)    n.a.
R² statistic                     n.a.             n.a.               n.a.              n.a.
Number of observations           n.a.             n.a.               n.a.              n.a.

Dependent variable: reservation wage (lowest acceptable pay). Standard errors are given in parentheses under estimated coefficients. Asterisk denotes significance at 0.05 level. Dummy variables for cities and intercept were included in regression but are not reported here. Entries marked n.a. could not be recovered; the row labels for the schooling and age terms and the column placement of partially legible rows follow the pattern of Tables 3.1 and 3.2 and are uncertain. Data sources: see Appendix.

With information on the
The Valuation of Unemployment
reservation wage, wage rate and unemployment, we can use the expressions to infer the unemployment valuations. Using similar theoretical relations, Lancaster and Chesher (1983) have previously used
data on expected and reservation wages to deduce instead of estimate the structural parameters of worker search behavior. Their data consist of British worker responses to questions about the amount
the worker would expect to earn in a new job and the lowest amount the worker would be prepared to accept. The data are used to infer the elasticity of the reservation wage with respect to the
unemployment benefit. The data used here are taken from the published volumes of Employment Profiles of Selected Low-Income Areas (U.S. Bureau of the Census, 1972). They provide summary data for the
urban areas on the median weekly earnings, median reservation wage and unemployment rate for various age, education, sex and race groups. Tables 3.5 to 3.8 present the data and implied unemployment
valuations using (3.4) and (3.5). The results are intended to describe the valuations for the median workers in the groups. Column 4 presents the simple trade-off between the wage rate and
unemployment, given by (we - wo)/(u(1 - u)), that workers face in the market. For example, white males aged 16 to 21 can bring about a
reduction in the unemployment rate from 17.3 per cent to 16.3 per cent) at the cost of a decline in the expected wage of 0.01($99 - $77)/(0.173 x 0.827) = $1.54. The valuations given in column 4
generally rise with age and then decline, except that the valuation continues to rise for white males until the age of 65. They also increase with educational level and are sharply higher for workers
with four years of high school or one year of college or more. The valuations are not simply proportional to median earnings or reservation wages since they also depend on the unemployment rate faced
by individuals. For example, the median weekly earnings for white males aged 22 to 34 are $136 and for white males aged 55 to 64 are $135. But the valuations are $4.90 and $7.68, respectively. Column
5 presents the exchange between expected earnings and expected unemployment, or the unemployment premium. The figures show the sacrifice in expected earnings (wage rate times proportion of the time
employed) that a worker is willing and able to make to achieve a reduction in unemployment. For example, white males aged 22 to 34 are willing to face a one per cent increase in unemployment if their
yearly earnings go up by $165, according to the results. This is calculated as follows:
[(1 - 0.075) x $136 - $102]/0.075 x 0.01 x (52 weeks/year) = $165 per year per 1% unemployment
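The premium calculation above can be sketched in code. The figures ($136 weekly wage, $102 reservation wage, 7.5 per cent unemployment) are the text's values for white males aged 22 to 34; the function name is ours, not the book's.

```python
# A sketch of the unemployment-premium calculation for white males aged
# 22 to 34 (w_e = $136/week, w_o = $102/week, u = 0.075). The premium is
# the rise in yearly earnings needed to leave the worker as well off
# after a one per cent increase in the expected unemployment rate.

def unemployment_premium(w_e, w_o, u, weeks_per_year=52):
    """Dollars per year required per one per cent more unemployment."""
    return ((1 - u) * w_e - w_o) / u * 0.01 * weeks_per_year

print(round(unemployment_premium(136, 102, 0.075)))  # 165
```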
In most cases, the change in earnings required for a worker to accept a one per cent increase in unemployment is positive. This is consistent with point A on the upward sloping part of the choiCe set
frontier in Figure 3.2. For some of the groups, the required change in earnings is negative, indicating that for the median worker in these groups, nonemployment benefits net of search costs are
positive. Negative values are consistent with a point on the choice set frontier to the right of B, on the downward sloping portion. Alternatively, they may arise from an artificially low
unemployment trade-off caused by the minimum wage. The valuations in column 5 pretty much follow the pattern of the wageunemployment trade-offs in column 4. They rise with age and then decline,
except for white males; they are substantially higher for family heads than for other categories of family status for males but are not much higher for female family heads; and there are sharp
increases for four years high school and for one year or more of college. The
Table 3.5 Unemployment Valuations, White Males

Age                        u (%)   Trade-Off   Premium
16 to 21                   17.3    1.54        14.65
16 to 21, not in school    19.0    1.17        -3.78
22 to 34                   7.5     4.90        165
25 to 34                   6.9     5.29        183
35 to 44                   5.8     6.04        223
45 to 54                   5.1     6.20        233
55 to 64                   4.5     7.68        311
Over 65                    4.4     4.04        143
40 to 64                   5.1     6.41        244

Family status              u (%)   Trade-Off
Family head                4.6     7.29
Family member              15.3    2.08
Unrelated individual       8.4     4.29

Education                  we ($)  u (%)   Trade-Off
7 years or less            112     7.0     3.07
8 years                    124     7.1     4.09
1 to 3 years high school   133     8.9     4.32
4 years high school        145     5.9     7.38
1 year or more college     167     5.8     9.70

Unemployment rate in column (3) is expressed in per cent. Units of measurement in columns (4) and (5) are dollars per year per one per cent unemployment. Units of measurement in column (6) are dollars per week. Data sources: see Appendix.
Table 3.6 Unemployment Valuations, White Females

Age                        u (%)   Trade-Off
16 to 21                   14.1    1.65
16 to 21, not in school    15.2    1.47
22 to 34                   8.4     3.51
25 to 34                   8.3     3.55
35 to 44                   7.3     2.96
45 to 54                   5.5     4.04
55 to 64                   4.0     4.69
Over 65                    2.8     7.35
40 to 64                   5.4     3.92

Family status              u (%)   Trade-Off
Family head                8.5     2.83
Family member              8.1     2.55
Other member               10.1    2.42
Unrelated individual       5.4     4.89

Education                  we ($)  u (%)   Trade-Off   Premium
7 years or less            68      10.7    0.73        -5
8 years                    71      9.1     1.57        31
1 to 3 years high school   69      10.9    1.85        41
4 years high school        76      6.0     4.43        164
1 year or more college     101     5.4     5.68        212

Unemployment rate in column (3) is expressed in per cent. Units of measurement in columns (4) and (5) are dollars per year per one per cent unemployment. Units of measurement in column (6) are dollars per week. Data sources: see Appendix.
Table 3.7 Unemployment Valuations, Black Males

Age                        u (%)   Trade-Off   Premium
16 to 21                   28.1    0.99        -11.9
16 to 21, not in school    30.6    0.71        -25.5
22 to 34                   10.2    1.27        70
25 to 34                   8.5     3.47        96
35 to 44                   5.7     5.02        176
45 to 54                   4.8     5.03        181
55 to 64                   4.3     3.40        107
Over 65                    4.3     2.67        79
40 to 64                   4.7     4.47        154

Family status              u (%)   Trade-Off   Premium
Family head                5.1     5.17        186
Family member              22.9    1.25        -7.8
Unrelated individual       8.6     3.05        80

Education                  we ($)  u (%)   Trade-Off
7 years or less            115     6.1     2.27
8 years                    122     7.9     2.75
1 to 3 years high school   123     12.1    2.54
4 years high school        135     8.3     4.34
1 year or more college     158     6.0     8.33

Unemployment rate in column (3) is expressed in per cent. Units of measurement in columns (4) and (5) are dollars per year per one per cent unemployment. Units of measurement in column (6) are dollars per week. Data sources: see Appendix.
Table 3.8 Unemployment Valuations, Black Females

Age                        u (%)   Trade-Off   Premium
16 to 21                   29.7    0.91        -11.90
16 to 21, not in school    31.4    0.84        -15.96
22 to 34                   13.7    1.86        33
25 to 34                   12.4    2.12        45
35 to 44                   7.4     3.21        106
45 to 54                   5.8     2.5         88
55 to 64                   3.0     1.72        46
65 and over                3.5     *           *
40 to 64                   5.3     *           *

Family status              u (%)   Trade-Off
Family head                11.1    2.23
Family member              9.5     2.09
Other member               20.1    1.25
Unrelated individual       6.8     2.68

Education                  we ($)  u (%)   Trade-Off   Premium
7 years or less            71      8.1     0.94        8
8 years                    77      9.7     1.03        8
1 to 3 years high school   82      15.4    0.92        -2.08
4 years high school        98      11.1    2.23        52
1 year or more college     128     6.9     6.54        250

Asterisk indicates data is unavailable. Unemployment rate in column (3) is expressed in per cent. Units of measurement in columns (4) and (5) are dollars per week per one per cent unemployment. Units of measurement in column (6) are dollars per week. Data sources: see Appendix.
figures in column 5 may be regarded as the premiums necessary to induce the median workers to accept an increase in expected unemployment. For the worker to be just as well off as before, he or she
must receive the old earnings level plus the premium in column 5 for being out of work an extra one per cent of the time. Column 6 presents the estimates of the cost of a week's expected unemployment
for the various groups. If a worker suffers an increase in expected unemployment of one week per year, and if the premium estimated in column 5 is not received, the loss to the worker consists of the
unreceived premium plus the foregone earnings. This amount is expressed on a per week basis in column 6 to facilitate comparison with the weekly earnings. If the premium in column 5 is positive, then
the cost of a week's expected unemployment exceeds the weekly earnings; if the premium is negative, the cost will fall below the weekly earnings. For white males aged 22 to 34, the cost of a week's
unemployment is calculated as follows:

we + [d((1 - u)we)/dwo]/[du/dwo] = $136/week + ($165 per 1% unemployment) x (1% unemployment/0.52 weeks) ≈ $453 per week
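This conversion can be sketched in code; the $165 premium and $136 wage are the text's figures for white males aged 22 to 34, and the function name is ours.

```python
# A sketch of the cost of a week's unemployment for white males aged 22
# to 34. One per cent of a year's unemployment is 0.52 weeks, so the
# yearly premium of $165 per 1% unemployment is $165/0.52 per week out of
# work; the cost adds the foregone weekly wage.

def weekly_unemployment_cost(w_e, premium_per_pct, weeks_per_pct=0.52):
    """Dollars per week of unemployment: foregone wage plus premium."""
    return w_e + premium_per_pct / weeks_per_pct

print(round(weekly_unemployment_cost(136, 165)))  # 453
```

The result exceeds the $136 weekly wage by a wide margin, which is the point the surrounding paragraph makes.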
Setting aside specific values, the most striking point about these results is their large size. Except for some young people and other scattered categories, the cost of a week's unemployment per year
far exceeds the wage rate for the median worker. This means that the valuation of a week's unemployment as the foregone earnings, which is the valuation implicit in using the distribution of earnings
to calculate economic inequality, is an underestimate of the true cost to most workers. From a macroeconomic perspective, the current cost of unemployment borne by workers substantially exceeds the
current foregone production. One limitation to the interpretation of the unemployment valuations in Tables 3.5 to 3.8 is that one week's unemployment per year is not simply one week per year spent out
of work but the whole potential distribution of unemployment associated with an increase in the expected amount of unemployment of one week. That is, when a worker is willing to sacrifice $165 per
year in earnings to reduce the expected amount of unemployment by one per cent, the worker receives in addition a reduction in the probability that he will be unemployed 50 per cent of the year, or
25 per cent, and receives an increase in the probability that he will avoid unemployment entirely. Nevertheless, the figures allow us to infer the benefits to workers associated with an overall
decrease in the unemployment rate, with its consequent changes in the distribution of unemployment prospects facing individual workers. The large size of the valuations, and their variance with some
current beliefs regarding the costs of unemployment, make it worthwhile to check for possible biases in their calculation. Two biases associated with sorting phenomena are clear. At one point in
time, the workers in a group that are unemployed will not be a random sample from the entire group. The unemployed will have a higher proportion of those workers with larger reservation wages for a
given grade or lower job skills for a given reservation wage. The workers unemployed at any one point in time will therefore face a higher expected unemployment rate than the aggregate unemployment
rate for the group as a whole. To deal with this bias, it would be necessary to find the expected unemployment facing those currently unemployed. In turn, this requires that the group be regarded as
heterogeneous. The problem is then to find the distribution of members of the
group according to transition rates. While there are procedures that estimate the heterogeneity of a group (for example, the mover-stayer model that will be discussed in the next chapter), the
gross-flow data needed for these procedures are unavailable. The estimates are therefore uncorrected for the first type of bias. Turning to the second source of bias, the median weekly earnings are
reported for all workers in a particular group, whereas the median reservation wage is reported only for those workers with some unemployment in the previous year. Since these latter workers may have
systematically different expected weekly or median weekly earnings, it is important to investigate the possible differences. The relevant information is unavailable in the published data. However,
using the household data from section 4, it is possible to compare the median weekly earnings of those who were unemployed sometime in the past year and are currently employed with the median weekly
earnings of all workers in the sample. The results from this comparison are reported for several different groups in Table 3.9 in the form of ratios of part-year workers' to all workers' median
weekly earnings.
Table 3.9 Correction Factors for Earnings

Group
Age: 16 to 21  22 to 34  35 to 44  45 to 54  55 to 64
Education: 7 years or less  8 years  1 to 3 years high school  4 years high school  1 year or more college
All workers

0.9804 0.9321 0.8935 0.8929 0.8931
0.9167 1.00 0.7755 0.8696 0.8412
1.00 0.7920 0.8462 0.8647 0.8800 0.9259
0.8537 0.8511 0.9333 0.9263

Asterisk indicates insufficient data. Table entries are the ratios of part-year workers' to all workers' median weekly earnings. For data sources and method of calculation, see text and Appendix.
Unemployment valuations using a correction factor of 0.926 have been calculated but are not reported here. Generally, the pattern of values is the same but the values themselves are lower by about a
third. Negative unemployment premiums appear for workers aged 16 to 21 (except white females), black other members and some groups with lesser amounts of education (except for white males). Again,
these could be caused by the minimum wage. Additional corrections for the unemployment bias would lower the valuations further.
6. Labor Force Participation Approach

From Chapter 2, section 4, the estimated coefficients from labor force participation regressions should yield an estimate of the unemployment valuation. For
example, W. Bowen and T. Finegan (1969, p.789) regress the labor force participation rate for married women, husband present, aged 14 to 54, against the civilian unemployment rate, the earnings of
females who worked 50 to 52 weeks and a number of other
variables, using 1960 cross-section data from 100 cities. Holding the other variables constant, one infers from their results that the change in labor force participation equals - 0.94 times the
change in unemployment, plus 0.47 times the change in earnings. Then the trade-off between earnings and unemployment such that labor force participation stays the same, expressed in dollars per week
per one per cent unemployment, is:
Δ earnings / Δ unemployment = [0.94/(1% unemployment)] / [0.47/($100 per year)] x (1 year/52 weeks) = $3.85 per week per one per cent unemployment
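The same arithmetic can be sketched in code; the coefficients are Bowen and Finegan's as quoted in the text, while the function name is ours.

```python
# A sketch of the trade-off implied by Bowen and Finegan's coefficients
# for married women aged 14 to 54: participation falls 0.94 points per
# point of unemployment and rises 0.47 points per $100/year of earnings.

def lfp_tradeoff(unemp_coef, earn_coef_per_100, weeks_per_year=52):
    """Dollars per week per one per cent unemployment, holding LFP fixed."""
    return unemp_coef / earn_coef_per_100 * 100 / weeks_per_year

print(round(lfp_tradeoff(0.94, 0.47), 2))  # 3.85
```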
The labor force participation rate stays the same when the expected flow of benefits from being unemployed, which equals the reservation wage, remains unchanged. The expected flow of benefits is
given by i M. If this expected flow of benefits remains unchanged, then $3.85/week equals the trade-off between the wage rate and the unemployment rate that marginal participants are willing to
undertake, i.e., it describes a movement along their expected wage rate versus expected unemployment rate indifference curve. Initially, this procedure appeared to offer great promise. It seemed that
the extensive econometric work on labor force participation rates would yield many usable results. However, the approach suffers from a few severe shortcomings. The wage and unemployment rates used
in the labor force participation regressions are generally not those for the group in question. The inclusion of the group's specific unemployment rate would yield a potential simultaneous equations
bias (a high value for the random error term would produce a higher labor force participation rate, which would in turn produce a higher unemployment rate; the error term and an independent variable would then be correlated).

Table 3.10 Unemployment Valuations Implied By Labor Force Participation Regressions, Males, 1960

Group, Age                            Unemployment   Trade-Off         Trade-Off
                                      Rate           (All Variables)   (Significant Only)
Males, 25 to 54                       4.0            12.31             9.94
Males, 35 to 44                       3.6            7.69              6.73
Males, 45 to 54                       4.2            7.01              6.37
Males, 55 to 64                       4.9            10.58             8.73
Married men, wife present, 55 to 64   *              *                 *
Single males, 55 to 64                *              *                 *
Weighted regression, 25 to 54         *              4.81              *
1950 data, 25 to 64                   *              2.10              *

Asterisk indicates data are unavailable. The wage rate versus unemployment trade-off is expressed in dollars per week per one per cent unemployment and the cost of a week's unemployment is expressed in dollars per week. Data source for regression coefficients used in columns (2) through (5) is W. Bowen and T. Finegan (1969). Unemployment rates are from the 1960 U.S. Census.

The consequence is that the ratio of the estimated coefficients differs from the unemployment
trade-off by the factors which relate the group's unemployment and wage rates to the independent variables that were used in the regression. A second difficulty is that the procedure presumably
identifies the tradeoff only for the marginal participants. With higher nonemployment benefits, these workers have unemployment trade-offs which are below those for other workers in the group.
Finally, the degree of aggregation of groups undoubtedly introduces some noise into the identification of trade-offs. Some illustrative results using labor force participation regressions are
presented in Tables 3.10 to 3.13. These results are generally consistent with the results using the aggregate approach of the previous section.

Table 3.11 Unemployment Valuations Implied By Labor Force Participation Regressions, Females, 1960

Group, Age                                                Unemployment   Trade-Off         Trade-Off
                                                          Rate           (All Variables)   (Significant Only)
Married, husband present, 14 to 54                        5.2            3.85              *
Married, husband present, 25 to 29                        5.9            3.33              3.33
Married, husband present, 30 to 34                        5.6            2.56              2.47
Married, husband present, 35 to 39                        5.0            2.90              2.77
Married, husband present, 40 to 44                        4.6            3.37              3.37
Married, husband present, 45 to 54                        4.2            3.54              3.12
Married, husband present, 55 to 64                        4.0            3.61              3.65
Married, husband present, children under 6, 14+           *              *                 *
Married, husband present, no children under 6, 14 to 54   *              *                 *
Never married, 25 to 64                                   *              *                 *
Married, husband present, weighted regression, 14+        *              *                 *
Married, husband present, 1950 data, 14+                  *              *                 *
Married, husband present, 1940 data, 14+                  *              *                 *

Asterisk indicates data are unavailable. The wage rate versus unemployment trade-off is expressed in dollars per week per one per cent unemployment and the cost of a week's unemployment is expressed in dollars per week. Data source for regression coefficients used in columns (2) through (5) is W. Bowen and T. Finegan (1969). Unemployment rates are from the 1960 U.S. Census.

Tables 3.10 and 3.11 give the unemployment valuations using Bowen and Finegan's estimated coefficients (1969, appendix B). Column 2 gives the unemployment
trade-off derived by taking the ratio of the estimated coefficient of unemployment to the estimated coefficient of the wage rate, adjusted to give the units of measurement in terms of dollars per
week per one per cent unemployment. Column 3 shows the results when Bowen and Finegan use only the significant variables in their labor force participation regressions. In presenting the unemployment valuations in Tables 3.10 and 3.11, only those results have been used where the estimated coefficients of both unemployment and earnings are significant. Those results with lower and insignificant coefficients of unemployment are therefore automatically excluded. The cost of a week's unemployment, presented in columns 4 and 5, is calculated from the unemployment trade-offs in columns 2 and 3 and the unemployment rate, taken from the 1960 U.S. Census and presented in column 1. Tables 3.12 and 3.13 present the corresponding results for 1970, using data and regression equations similar to those of Bowen and Finegan.

Table 3.12 Unemployment Valuations Implied By Labor Force Participation Regressions, Males, 1970

Age        Unemployment Coefficient   Earnings Coefficient
40 to 44   -0.222 (0.0872)            0.517 (0.119)
45 to 49   -0.275 (0.0934)            0.827 (0.127)
50 to 54   -0.419 (0.120)             1.30 (0.163)
55 to 59   -0.886 (0.171)             2.25 (0.233)
60 to 64   -2.08 (0.363)              2.34 (0.494)

Standard errors for estimated coefficients in columns (2) and (3) appear under them in parentheses. Data sources: see text and Appendix.

Table 3.13 Unemployment Valuations Implied By Labor Force Participation Regressions, Females, 1970

Age        Unemployment Coefficient   Earnings Coefficient
16 to 24   -1.36 (0.398)              4.41 (1.12)
45 to 49   -1.38 (0.285)              1.40 (0.800)
50 to 54   -1.70 (0.290)              2.04 (0.815)
55 to 59   -1.78 (0.296)              2.52 (0.831)
60 to 64   -1.84 (0.302)              2.15 (0.850)

Standard errors for estimated coefficients in columns (2) and (3) appear in parentheses under them. Data sources: see text and Appendix.

Again, results are only included when the estimated coefficients of both unemployment and earnings are significant; this tends to exclude results
where the valuations are low. The results from the 1960 and 1970 regressions generally indicate very high unemployment trade-offs and costs relative to the figures obtained in the previous section.
They are therefore consistent with the observation that unemployment imposes substantial costs on most groups.
7. Cross-Section Estimates

The published data from the U.S. Census's Employment Profiles of Selected Low-Income Areas (1972) include median weekly earnings, median reservation wage and unemployment rates for various demographic groups for each of 54 cities (New York is broken up into Manhattan, Brooklyn, Bronx and Queens). In (2.19), neglecting the discount rate, we therefore have all the information for each city except for b - c. Dropping the discount rate and rearranging, one obtains:

wo - (1 - u)we = (b - c)u
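The estimation this relation suggests can be sketched as a no-intercept regression; the four city observations below are invented for illustration and are not the Census figures.

```python
# A sketch of the cross-section estimator: regress y = w_o - (1 - u) w_e
# on the unemployment rate u across cities with the intercept suppressed;
# the slope estimates b - c. A negative slope means search costs exceed
# nonemployment benefits.

def through_origin_slope(u, y):
    """OLS slope of y on u with no intercept: sum(u*y) / sum(u*u)."""
    return sum(ui * yi for ui, yi in zip(u, y)) / sum(ui * ui for ui in u)

u = [0.05, 0.07, 0.09, 0.11]       # hypothetical city unemployment rates
y = [-10.0, -14.5, -17.5, -22.5]   # hypothetical w_o - (1 - u) w_e, $/week
print(round(through_origin_slope(u, y), 1))  # -201.6
```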
A regression using the city data of the left-hand side in this expression against the unemployment rate (with no intercept) yields an estimate of b - c, which in turn determines the unemployment
valuations. The estimated coefficients from these regressions are presented in Table 3.14, along with the standard errors. Neglecting the discount term, the unemployment premiums are equal to the
negative values of the unemployment coefficients in columns 1 and 3. Except for males and females aged 16 to 21 and females with less than four years of high school, the estimated coefficients are
negative, indicating that search costs exceed nonemployment benefits and workers must receive an increase in total earnings to be willing to expect to work less. The data for all races together are
used in these regressions, rather than whites and blacks separately. A variable for proportions of blacks in the cities, as well as interaction terms with the unemployment level, were used but did
not yield substantially different results. The implied unemployment valuations are presented in Table 3.15. Unlike the results for white males using the aggregate approach, the results here show an
increasing trade-off for males with age followed by a drop for the age group 55 to 64. This is consistent with the pooling of whites and blacks, for whom the trade-off declines for higher age groups.
In the education groups, males start out at substantially higher levels for the unemployment trade-offs and costs and therefore have less steep increases with educational levels. The figures are
generally lower than those in Tables 3.5 to 3.8, obtained using the aggregate approach, but are approximately the same after correction of those figures for the wage bias. The cross-section
estimation procedure assumes that the term b - c is the same from city to city for a given demographic group, so that the resulting variation in wo - (1 - u)we arises from differences in the unemployment rates faced by this group.
If nonemployment benefits and search costs vary among cities for a given group and are related to the unemployment rate, then the estimate of b - c would be biased.
Another difficulty with the procedure is that it neglects discounting. With discounting included, the coefficient of u would tend to be an underestimate of b - c. That is, considering just this
source of bias, one would expect the unbiased cross-section estimates to show lower unemployment valuations.
8. General Conclusions

The various procedures and adjustments in this chapter produce many alternative estimates of the unemployment valuations for each particular group. Because of shortcomings in
the data and biases in the procedures, no single number can be taken as an exact measurement of an unemployment valuation. Nevertheless, several reliable but general conclusions arise from the
analysis. The major points that can be made are as follows.

a. The direct estimates of section 3 show that the search model validly describes the choices and trade-offs faced by workers. By raising
the reservation wage, workers raise the expected wage while increasing the expected amount of unemployment.

b. For most workers, the costs of a week of unemployment exceed the wages foregone. The
search costs exceed whatever nonemployment benefits the worker receives or perceives. To be willing to face an increase in the expected amount of unemployment, most workers must receive an increase in the total amount of earnings.

Table 3.14 Cross-Section Estimates of Unemployment Valuations, 54 Cities

                           Male                                   Female
Group                      Coefficient, b-c (1)   SE (2)          Coefficient, b-c (3)   SE (4)
Age
16 to 21                   7.8                    4.5             22.8                   8.9
22 to 34                   -203                   19              -93                    9.8
35 to 44                   -285                   39              -117                   5.7
45 to 54                   -301                   45              -153                   22
55 to 64                   -255                   51              *                      *
Family status
Head                       -11.4                  6.8             -53                    *
Wife of head               *                      *               -66                    *
Other member               *                      *               -7.5                   *
Unrelated individual       -170                   21              -159                   *
Education
7 years or less            -101                   3.0             36                     9.2
8 years                    -175                   8               10.8                   12.1
1 to 3 years high school   -162                   15              3.4                    5.7
4 years high school        -324                   21              -85                    10
1 year or more college     -610                   34              -365                   34

Dependent variable: wo - (1 - u)we. Data source: Employment Profiles of Selected Low-Income Areas (1972).

Job search theory does not trivialize unemployment by describing it as subject to individual choice. Quite the
contrary, the presence of choice leads to means of estimating the costs of unemployment.

c. For some groups, the nonemployment benefit exceeds the search costs and the cost of a week's unemployment falls below the wages foregone. From the aggregate results using the wage correction factor, these groups include the young (those less than 22), black and white females with eight years education or less, and some who are not heads of the family (other members). If these results are caused by the minimum wage, then the costs of unemployment may be substantial for these groups also.

d. For all groups of workers, even those for whom the nonemployment benefit exceeds the search costs, unemployment is costly and all groups would prefer employment to unemployment.

e. The unemployment valuations are markedly higher for workers with one year of college or more. A less marked increase occurs for workers with four years of high school. The reasons for these higher figures will be discussed shortly.

f. White males have substantially different profiles of unemployment valuations than other groups. For white males, the unemployment valuations tend to increase with age (there are some exceptions). White females show a dip in the unemployment trade-offs and costs in the age bracket 35 to 44, while blacks show a clear decline in valuations in older age groups. Among educational groups, the valuations are much steeper for blacks and white females since white males start their valuations at higher levels for workers with seven years education or less.
Table 3.15 Unemployment Costs and Trade-Offs from Cross-Section Estimates

Group                      Males (1)   Females (3)
Age
16 to 21                   1.18        0.80
22 to 34                   3.69        1.86
35 to 44                   4.50        2.00
45 to 54                   4.59        2.18
55 to 64                   4.02        2.46
Family status
Head                       1.55        1.62
Wife of head               *           1.70
Other member               *           1.16
Unrelated individual       3.23        2.66
Education
7 years or less            2.34        0.39
8 years                    3.25        0.74
1 to 3 years high school   3.26        0.91
4 years high school        4.98        2.00
1 year or more college     8.12        5.20

The trade-off calculated in columns (1) and (3) is expressed in dollars per week per one per cent unemployment. The cost in columns (2) and (4) is expressed in dollars per week of unemployment.
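The link between the cross-section coefficient b - c and these trade-offs can be sketched in code. The b - c value of -285 is the males-aged-35-to-44 estimate from Table 3.14; the weekly earnings of $140 and unemployment rate of 5.8 per cent are taken from the text's white-male figures and are an assumption here, since the cross-section tables pool races.

```python
# A sketch linking Tables 3.14 and 3.15. With the approximation
# w_o = (1 - u) w_e + u (b - c), the trade-off per one per cent
# unemployment is 0.01 (w_e - b + c) / (1 - u). Here b - c = -285
# (males 35 to 44, Table 3.14); w_e = $140 and u = 0.058 are assumed.

def cross_section_tradeoff(w_e, b_minus_c, u):
    """Dollars per week per one per cent unemployment."""
    return 0.01 * (w_e - b_minus_c) / (1 - u)

print(round(cross_section_tradeoff(140, -285, 0.058), 2))  # 4.51
```

The result is close to the $4.50 entry for males aged 35 to 44 in Table 3.15, which suggests the two tables are mutually consistent.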
Now let us consider the reasons why unemployment valuations are so much higher for workers with some college. The difference in valuations is substantially greater than the difference in wages. For
example, white males with one year college or more have median earnings of $167, while those with four years high school have median earnings of $145. The unemployment trade-offs (from Table 3.5) are
$9.70 and $7.38, respectively, while the unemployment premiums are $388 and $286 and the unemployment costs are $914 and $694. The unemployment rates are only slightly different. The sharp difference
in valuations arises from the relatively low reservation wages for white males with one year of college or more, 114, versus 104 for white males with four years of high school. This difference is
much smaller than the difference in median earnings between the two groups. So differences in current labor market prospects do not appear to account for the sharp jump in valuations. Furthermore, it
does not seem reasonable that the two groups differ so greatly in work attitudes or in tastes and preferences for work versus leisure, nor are unemployment benefits that different for the two groups.
Instead, the differences in unemployment valuations likely arise from the effects of current unemployment on future earnings. Workers with college degrees typically face steeper age-earnings profiles
than others, either because of greater on-the-job training or because of the greater value of their job experience. The loss of current job experience or on-the-job training through unemployment
would then reduce future earnings, and the present value of this loss would appear as a negative contribution to the nonemployment benefits. The present value of these future losses could be quite
substantial, accounting for the very high valuations on the part of workers with one year college or more. A further consequence of the high unemployment valuations is that such groups will respond
to higher unemployment with greater reductions in reservation wages than other groups. Labor markets for college-educated groups would then clear more rapidly and exhibit less unemployment. These
observations suggest that the existence of on-the-job training or valuable job experience have important implications for job search behavior and labor market adjustment. (Note also that the Markov
assumptions would no longer strictly hold if future employment prospects depend on current employment status.) A second issue that needs discussion here is the general question of whether the numbers
reported for various demographic groups describe the labor market conditions these groups face or the innate tastes and preferences of the groups involved. The unemployment trade-offs and costs are
substantially influenced by labor market conditions. That is, workers with identical nonemployment benefits and costs can have very different unemployment trade-offs and costs depending on expected
wages and unemployment. Using the approximation wo = (1 - u)we + u(b - c), the unemployment trade-off and cost reduce to (we - b + c)/(1 - u) and we - b + c, respectively. These are clearly affected by
labor market conditions and are larger for better labor market conditions (for the trade-off, the increase in the expected wage will dominate the increase in the denominator from a lower unemployment
rate). For the unemployment premium, the same approximation yields c - b, so the only direct influence of labor market conditions is through the discount term, the second term on the right-hand side
in (3.8). To get some idea of the magnitude of the discount term, the figures for white males aged 35 to 44 yield:

    223 = c - b + [0.058/(1 - 0.058)] · [0.05/(0.05 + 0.207 + 3.36)] · (140 - b + c)

This calculation uses i = 0.05, μ = 0.207 and λ = 3.36 (taken from figures in the next
The Valuation of Unemployment
chapter). Solving, one obtains b - c = - $156, and the discount term, the difference between the unemployment premium and - b + c, is $67. So it appears that even in the case of the unemployment
premium, labor market conditions can substantially influence the size of the valuation. We therefore cannot conclude that the difference in unemployment premiums between two groups is attributable to
different preferences for work versus leisure. Another source of differences in unemployment valuations is in the nonemployment benefits, which incorporate the net benefits to the individual of the
time spent out of work. As mentioned previously, if a group receives on-the-job training or finds that its wage rate goes up with experience, it will have a lower nonemployment benefit than a group
which faces neither. The nonemployment benefit b is not determined purely or solely by an individual's or a group's tastes and preferences for work versus leisure but itself is influenced by future
labor market prospects. It is also conceivable that search costs can differ among groups, accounting for some of the differences in unemployment valuations. For example, a minority's search costs
might be greater, so that for given labor market conditions its unemployment valuations would be higher. From these considerations, the unemployment valuations that have been obtained, while
describing the current preferences for marginal changes in earnings versus unemployment of various groups, cannot be taken as reflecting innate attitudes towards work.
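The approximations and transition-rate figures discussed in this section are easy to check numerically. The sketch below is my own code, not the book's; it uses the standard steady-state property u = μ/(μ + λ) of the two-state Markov model together with the parameter values quoted above for white males aged 35 to 44.

```python
# Sketch of the Chapter 3 approximations (illustrative code, not the book's).

def steady_state_u(mu, lam):
    """Steady-state unemployment rate under constant transition rates:
    mu = rate out of employment, lam = rate out of unemployment."""
    return mu / (mu + lam)

def unemployment_cost(w_e, b_minus_c):
    """Unemployment cost w_e - b + c from the approximation w0 = (1-u)w_e + u(b-c)."""
    return w_e - b_minus_c

def unemployment_tradeoff(w_e, b_minus_c, u):
    """Unemployment trade-off (w_e - b + c)/(1 - u)."""
    return unemployment_cost(w_e, b_minus_c) / (1.0 - u)

# Figures quoted in the text for white males aged 35 to 44:
mu, lam = 0.207, 3.36            # annual transition rates
u = steady_state_u(mu, lam)      # ~0.058, matching the unemployment rate used above
print(round(u, 3))                                        # 0.058
print(unemployment_cost(140.0, -156.0))                   # 296.0, the weekly cost
print(round(unemployment_tradeoff(140.0, -156.0, u), 1))  # 314.2
```

Note how the transition rates alone pin down the steady-state unemployment rate: 0.207/(0.207 + 3.36) reproduces the 5.8 per cent figure used in the calculation.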
Appendix: Data Sources

Tables 3.1 to 3.4. The data are taken from records of household data on Census Employment Survey tapes. The data cover Cleveland, Detroit, Baltimore, Boston and two areas of
Chicago. The documentation is described in "Demographic Surveys Division Special Projects Memorandum No. 61," Bureau of the Census, U.S. Department of Commerce, revised April 11, 1974. The data were
subjected to a number of consistency checks on weeks worked, earnings and reservation wages. Observations for part-time workers and workers out of the labor force for part of the year were
eliminated. Some of the data from the Census Employment Survey are reported in Employment Profiles of Selected Low-Income Areas (1972). Tables 3.5 to 3.8. Median weekly earnings, median reservation
wage and the unemployment rate are reported in Tables 7, 30 and 3 of Employment Profiles of Selected Low-Income Areas (1972), United States Summary, Urban Areas. The unemployment rate is the rate for
seekers of full-time work. Table 3.9. The entries are derived from the same household data used for Tables 3.1 to 3.4. Tables 3.10, 3.11. The unemployment rates reported in column 1 are taken from
Table 94, 1960 U.S. Census of the Population, Volume 1, Part 1, U.S. Bureau of the Census. The entries in columns 2 to 5 are calculated from coefficients published in W.G. Bowen and T.A. Finegan
(1969), Appendix B. Their data are census data for 100 Standard Metropolitan Statistical Areas. Tables 3.12, 3.13. The entries are obtained using census data on 120 Standard Metropolitan Statistical
Areas. Labor force participation rates are regressed against the civilian unemployment rate, earnings for those who worked 50 to 52 weeks, the proportion married, average income from transfers among
those receiving transfers, the proportion with Spanish surname, the proportion black, the average number of years of schooling and the average age. These data are all available by SMSA in the
1970 U.S. Census of Population. The coefficients for the unemployment and earnings variables are reported in columns 2 and 3 along with their standard errors. Tables 3.14, 3.15. The data for the
cross-section estimates consist of median earnings, median reservation wage (lowest acceptable weekly pay), and unemployment rate for each of 54 city areas. These data are in Tables 7, 30 and 3 in
the separate volumes by city in Employment Profiles of Selected Low-Income Areas (1972).
Chapter 4

The Distribution of Employment

1. Introduction

The intention of this chapter is to examine the distribution of employment and unemployment among individual workers. By and large, the literature has
dealt with differences in aggregate unemployment rates by demographic groups and with the distribution of unemployed workers by duration of unemployment. In contrast with these ways of looking at
unemployment, this chapter concentrates on the size distribution of employment. This distribution is described by a cumulative distribution function for the proportion of a particular group that is
unemployed less than or equal to an amount of time x in a period of length t. While the duration of unemployment is simple to model mathematically (at least when transition rates are constant), the
proportion of the time spent employed is complicated by the possibility of multiple transitions between employment and unemployment. The appropriate probability density functions are developed in
section 4. Related to the description of the size distribution of employment is the question of how unequally unemployment is distributed. That is, how unequally is the burden of unemployment
distributed among the population, and how much of this inequality is the result of random outcomes? The next section reviews previous work on the utilization of earnings capacity, by Thad Mirer
(1979) and by Irwin Garfinkel and Robert Haveman (1977), and on the distribution of the unemployment by duration of spell, by Kim Clark and Lawrence Summers (1979). The Clark and Summers paper
addresses several questions that are central to the point of view taken in this monograph. In particular they question whether search models of behavior and Markov models of labor market dynamics can
explain the presence of workers with long spells without employment. Section 3 discusses possible sources of inequality in the distribution of unemployment. Section 4 develops, despite the Clark and
Summers paper, the probability density functions for the time spent in one state of a two state Markov process. Section 5 follows up with some numerical illustrations of the results, section 6
examines the consequences of heterogeneous transition rates, and section 7 summarizes the conclusions.
2. Previous Work

In addition to Gramlich's work (1974) on the distribution of unemployment (discussed in Chapter 3, section 3), Mirer (1973b, 1979) and Garfinkel and Haveman (1977b) have examined the
utilization of earning capacity. Earning capacity is the potential earning level of a worker on the basis of his or her demographic group and employment qualifications. Mirer estimates the capacity
using earnings data on workers who were approximately fully employed. Garfinkel and Haveman estimate the capacity by including weeks worked as a dependent variable and then finding what the earnings
would be at full employment. The utilization rate is then the ratio of actual earnings to earning capacity. In this regard, the Mirer analysis excludes workers who would not
be available for full-time employment and thereby gets much higher utilization rates. In both studies, the capacity utilization rate rises with earning capacity. This produces a different relation
between capacity utilization rates for whites and blacks depending on whether one controls for earning capacity. At the same level of earning capacity, whites and blacks have about the same
utilization rates (Garfinkel and Haveman show generally higher utilization rates for blacks (1977b, p.30)). However, because blacks have lower average earning capacity, their utilization rates are
lower. Garfinkel and Haveman also find that reductions in wives' utilization rates generally compensate for increases in husbands' rates as family earning capacity rises, thereby producing a stable
utilization rate for families over different earning capacity levels. Both the Mirer and the Garfinkel and Haveman studies consider average utilization rates for demographic groups. In contrast, this
chapter is concerned with differences in unemployment among individuals. A further difference is that the concept of an earning capacity utilization rate combines involuntary unemployment of
individuals with individual decisions to limit labor supply. Instead, this chapter is concerned with the contribution to unemployment of choice and of employment conditions beyond the control of the
individual worker. The Clark and Summers paper comes very close to the subject matter of this chapter, and in fact their article suggested some of the material here (see also comments on the Clark
and Summers paper by Charles Holt, Robert Hall and Martin N. Bailey (1979)). Their concern is not directly with inequality in the distribution of employment. Their major interest is whether turnover,
search or implicit contract models of labor markets are consistent with the nature of the unemployment they observe. Their procedure is to examine the distribution of unemployment by duration rather
than the mean spell of unemployment. Their major findings are as follows.

a. The unemployment rate in the economy arises mostly from workers who are out of work for long periods rather than from
workers who are on temporary layoffs or are unemployed short periods. From their Table 1 (1979, p.19), the proportion of unemployment weeks accounted for by spells of four months or more is 48 per
cent for males 20 and over, while for the same group 47 per cent of unemployment spells end within a month and the mean duration is 2.42 months.

b. Unemployment is not evenly distributed among the
labor force but is concentrated among workers who suffer long spells outside employment. For example, for all groups in 1974, Clark and Summers find (pp.36,37) that 41.8 per cent of weeks of
unemployment are concentrated in 2.4 per cent of the labor force. The authors also consider an alternative concept of unemployment which includes time workers spend outside the labor force because of
inability to find work. This measure is called nonemployment. They find that 66.7 per cent of nonemployment is concentrated among the 4.9 per cent of the labor force that is classified as nonemployed
for more than six months. The Clark and Summers results therefore demonstrate that unemployment is not a predictable and calculable expense of getting a job but a major source of inequality in labor
market outcomes.

c. Flows into and out of the labor market dominate flows between employment and unemployment. Many unemployed workers drop out of the labor force for a period and reenter later. In
1974, for all groups, the authors find that 47 per cent of all unemployment spells end in withdrawal from the labor force. This distorts the mean duration of unemployment spell and would lead to an
underestimate, using unemployment statistics, of the proportion of unemployment attributable to long periods without work.
d. The distribution of unemployment does not become more equal as one lengthens the time span under consideration. One may have expected that over a long period of time, spells of unemployment and
employment would balance out. Partly, this expectation is a result of the fallacy that after ten flips of a coin have yielded heads, a tail is more likely. If a worker experiences a long period of
unemployment, this does not make employment more likely. Using National Longitudinal Survey data on males aged 45 to 59, Clark and Summers find that over the four year period 1965 to 1968,
unemployment was completely concentrated among 21.1 per cent of the labor force (1979, p.40). Only about a third of total weeks unemployed for the group is attributable to persons unemployed less
than six months in the four year period. With their conclusion that aggregate unemployment generates considerable inequality in the distribution of unemployment among individuals, this monograph is
in substantial agreement. But in addition to their statistical points, Clark and Summers draw several inferences that are pertinent to the methodology adopted here. First, they question whether
search models are consistent with their results. They argue that search as an explanation for unemployment is only valid if the search pays off, in the sense of producing a job with sufficiently long
duration. But without information on the costs of search, the authors cannot really draw conclusions on the return to search. They also argue that adult men have the largest potential gain from
search because of their long job durations and yet are not responsible for much unemployment. However, the long job durations themselves should reduce unemployment. Also, high costs of unemployment
for such workers would lead them to choose on the job search rather than search during unemployment, and to select relatively low reservation wages, further reducing unemployment. Finally, they argue
that most job search does not require unemployment: most types of search can be undertaken while on the job. But workers may not know they are headed for unemployment until they lose their jobs, so
that search on the job is irrelevant for these workers. Basically, Clark and Summers are rejecting a "straw man" version of search theory, as noted by Robert Hall (1979) in his comments on the paper.
Clark and Summers view search theory as a theory of choice, which is the manner in which it is usually presented. The consequence of this presentation is that the theory seems to minimize the costs
of unemployment by making it the subject of choice. In contrast, this monograph presents search theory as also describing the influence of labor market conditions on workers' choices and outcomes. As
demonstrated in the previous chapter, there is no necessary implication that unemployment is costless. The second inference drawn by Clark and Summers is that Markov models do not explain the
dynamics of labor markets. In particular, Markov models cannot explain the fat tails that appear, the unpredictably large proportions of workers with unemployment spells exceeding six months. In
their Figure 1 (p.24), Clark and Summers show that the probability of exit from unemployment or of finding a job declines with the time spent unemployed, in contrast with the standard Markov model in
which the transition rates are constant. The problem here is that Clark and Summers are not using the right Markov model. The decline in transition rates with duration of time spent unemployed is a
well-known outcome of heterogeneity of the population. Consider a group of unemployed workers with different transition rates from unemployment to employment. After a month, the flow of workers will
be dominated by those with the highest transition rates, who get jobs the fastest. After six months, few workers with high transition rates will be left, and the flow of workers out of unemployment
will be dominated by those with low transition rates who remain unemployed. The data will therefore seem to show that the transition rate declines
with the duration of unemployment (see S. Salant, 1977, for a discussion of sorting of unemployed workers). Heterogeneity of workers with respect to transition rates therefore produces exactly the
results Clark and Summers describe. The dynamic behavior of a heterogeneous population has been extensively discussed in the literature (see the discussion of heterogeneity by George Akerlof and B.
Main (1981)). In applying Markov models to the analysis of industrial mobility of labor, I. Blumen, M. Kogan and P. McCarthy (1955) found the same empirical regularity described by Clark and Summers:
the estimated probability as predicted by the Markov model of remaining in a state after some time fell short of the observed probability. Their explanation for the phenomenon was that heterogeneity
produced a bias in the estimates. In response to this problem, they developed a mover-stayer model, which was later improved by L. Goodman (1961). In this model, the population is assumed to consist
of two types, movers, who account for all transitions, and stayers, who never move. This procedure is found to explain much of the empirical regularity observed by Blumen, Kogan and McCarthy.
However, some bias remains. According to B. Singer and S. Spilerman (1976), this bias arises from the use of a period of arbitrary length (e.g., the time between interviews in the collection of
longitudinal data). The behavior of a Markov process differs according to the length of period after which transitions occur (see also Akerlof and Main for a discussion of this phenomenon). If
transition probabilities are estimated assuming a three month period when a one month period is appropriate, the likelihood of remaining in a state after some time will be misestimated and the
dynamic behavior of the system will be misrepresented. Singer and Spilerman suggest an ingenious method for overcoming this period problem by assuming that the heterogeneity of the population arises
in the times between job offers. Using a gamma distribution of waiting time parameters, the matrix of transition rates reduces to a simple form (1976, p.457). A difficulty with the mover-stayer model
or the Singer and Spilerman approach (as presented in their article) is that they impose unreasonable sets of transition rates. The mover-stayer model supposes that some workers remain unemployed
indefinitely, although such workers would eventually drop out of the labor market. In the Singer and Spilerman model, workers with higher transition rates from employment to unemployment also have
higher transition rates from unemployment to employment. Instead, workers more likely to lose their jobs may be less likely to get new jobs. (The Singer and Spilerman model can apparently be adapted
to this assumption regarding transition rates.) An alternative approach using duration data is as follows. Suppose at times t1, t2, ..., tn we observe the proportions p1, p2, ..., pn of originally
unemployed workers who are still unemployed. Assume the population is divided up among n + 1 known transition rates λ1, λ2, ..., λn+1. Let ri be the proportion of the original unemployed
accounted for by the i-th group. Then by the Markov model:

    r1 + r2 + ... + rn+1 = 1
    p1 = r1 exp(-λ1 t1) + r2 exp(-λ2 t1) + ... + rn+1 exp(-λn+1 t1)
    ...
    pn = r1 exp(-λ1 tn) + r2 exp(-λ2 tn) + ... + rn+1 exp(-λn+1 tn)
The first equality in this system insures that the proportions add to one. Some results of using this approach are presented in Tables 4.1 to 4.4 using
various sources of data. The approach proves to be trickier than one might at first suspect, since there is no guarantee that the proportions ri will all be positive. The transition rates must be
selected carefully and with considerable experimentation to yield positive group proportions. In practice, three groups are the most one can work with and get positive proportions for all
observations from a source. In some cases it may be impossible to find a combination of transition rates which generates the observed proportions unemployed. Thus we would be forced to reject the
hypothesis that the data were generated by heterogeneous but constant transition rates. For each group, the relation between the logarithms of the proportion unemployed and the time will be a
straight line. A group with a low transition rate will have a relatively flatter line with a slope lower in absolute value. The slope of the proportion of all workers unemployed will be a weighted
average of the slopes for the individual groups. As time passes the weights for groups with lower transition rates increase, so that the slope for all workers will decrease in absolute value as time
increases. That is, the time-path of the logarithm of the proportion remaining unemployed will be convex. It follows that if the time-path of the logarithm is not convex, it could not be generated by
a heterogeneous group with constant transition rates. This is one possible test on the aggregate data for non-constant transition rates. Table 4.1 uses unemployment duration data from Employment and
Earnings, U.S. Department of Labor (November 1981). The periods at which proportions unemployed are observed are 5 weeks and 26 weeks (there are also data for those unemployed 5 to 14 weeks, but
these data are ignored in order to work with three groups). Columns 1 to 3 present the proportions of the groups with transition rates 2.0, 5.0 and 10.0 from unemployment to either employment or out
of the labor market. The results are fairly robust in the sense that a change in the transition rates used leads

Table 4.1 Distribution of Unemployed by Transition Rates, by Sex and Age

                          Proportions with Given Transition Rates
Age Groups        λ1 = 2.0    λ2 = 5.0    λ3 = 10.0    Mean Duration (Weeks)
Males
16 to 19           0.011       0.726       0.263         8.4
20 to 24*         (0.126)     (0.771)     (0.103)       13.1
25 to 34           0.537       0.146       0.317        14.8
35 to 44           0.395       0.477       0.128        14.3
45 to 54           0.535       0.045       0.420        16.8
55 to 64           0.655       0.044       0.301        17.3
Females
16 to 19           0.055       0.426       0.518         8.5
20 to 24           0.203       0.339       0.458        14.3
25 to 34           0.236       0.377       0.387        17.4
35 to 44           0.200       0.623       0.177        16.0
45 to 54           0.409       0.446       0.145        17.5
55 to 64           0.429       0.029       0.542        20.2

Proportions for the group marked with an asterisk are calculated using transition rates λ1 = 2, λ2 = 4 and λ3 = 9. Data source: Employment and Earnings, U.S. Bureau of the Census, Nov. 1981.
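Recovering mixture proportions of this kind amounts to solving a small linear system: one adding-up constraint plus one survival equation per observation time. The sketch below is my own pure-Python illustration; the transition rates and the 5-week and 26-week observation points follow the text, but the observed survival proportions 0.5 and 0.1 are hypothetical, not the published data.

```python
import math

def mixture_proportions(lams, times, probs):
    """Solve for group proportions r given transition rates lams (per year),
    observation times (in years) and observed proportions still unemployed probs.
    Equations: sum(r) = 1 and sum_i r_i * exp(-lam_i * t_j) = p_j."""
    n = len(lams)
    A = [[1.0] * n] + [[math.exp(-lam * t) for lam in lams] for t in times]
    b = [1.0] + list(probs)
    # Gaussian elimination with partial pivoting (n is tiny, so this is fine).
    for col in range(n):
        piv = max(range(col, n), key=lambda row: abs(A[row][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for row in range(col + 1, n):
            f = A[row][col] / A[col][col]
            for k in range(col, n):
                A[row][k] -= f * A[col][k]
            b[row] -= f * b[col]
    r = [0.0] * n
    for row in range(n - 1, -1, -1):
        s = sum(A[row][k] * r[k] for k in range(row + 1, n))
        r[row] = (b[row] - s) / A[row][row]
    return r

# Rates from Table 4.1 and the 5-week / 26-week observation points in the text;
# the survival proportions 0.5 and 0.1 are invented for illustration.
r = mixture_proportions([2.0, 5.0, 10.0], [5 / 52, 26 / 52], [0.5, 0.1])
print([round(x, 3) for x in r])
```

As the text warns, nothing in the algebra forces the solved proportions to be positive; with poorly chosen rates the solution can contain negative entries, which is the signal that the chosen rates cannot generate the observed survival pattern.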
to predictable shifts in the proportions rather than radical reversals. Column 4 presents the mean duration of unemployment in weeks, a standard way of describing unemployment. It should be noted
that the original data consist of the proportions of the currently unemployed who have been unemployed for more than 5 weeks, from 5 to 14 weeks, and so on, rather than the proportion of the
unemployed at one point in time who are still unemployed 5 weeks later and so on. However, in the absence of censoring, the data should be equivalent. The possibility that the unemployed drop out of
the labor market means that the estimated transition rates are not the same as the transition rates from unemployment to employment alone. Preferable data would distinguish the two states of
employment and out of the labor force (see Stephen J. Marston, 1976, and Ralph E. Smith in R. Ehrenberg, 1977, for other work on flows between all three states of the labor market). The results
indicate considerable diversity in the distribution of transition rates among groups. Some groups, such as females 35 to 44, have a high proportion in the middle transition rate group, while others,
such as females 55 to 64, have transition rates that are quite spread out. Generally, for both males and females, the proportions with the lowest transition rate of 2.0, who would be unemployed the
longest, decline with age, while the proportions with the other two transition rates vary considerably. Quite different distributions of transition rates yield the same mean duration of unemployment,
for example males 55 to 64 and females 45 to 54. The results demonstrate that the groups can be quite heterogeneous and are inappropriately described by a single transition rate. Table 4.2 presents
the results for all unemployed for the period 1970 to 1979. By inspection, a low proportion unemployed more than six months (column 3) arises from a low proportion with transition rate of 2.0, as one
would expect. Also, the proportion with λ = 15.0 is greater in years when the proportion unemployed five weeks or longer is less (column 2). The 1975 recession shows up as a precipitous drop in the
proportion with a high transition rate out of unemployment and a sharp increase in the proportion moving slowly out of unemployment.

Table 4.2 Distribution of Unemployed by Transition Rates, 1970 to 1979

        Unemployment   Proportion Unemployed   Proportion Unemployed   Proportion with Transition Rate
Year    Rate           Five Weeks or Longer    More Than Six Months    λ1 = 2.0   λ2 = 5.0   λ3 = 15.0
        (1)            (2)                     (3)                     (4)        (5)        (6)
1970    4.9            0.477                   0.057                   0.021      0.598      0.381
1971    5.9            0.553                   0.104                   0.148      0.600      0.251
1972    5.6            0.541                   0.116                   0.209      0.476      0.315
1973    4.9            0.490                   0.078                   0.096      0.515      0.388
1974    5.6            0.494                   0.074                   0.076      0.557      0.367
1975    8.5            0.630                   0.152                   0.279      0.601      0.120
1976    7.7            0.617                   0.183                   0.419      0.351      0.230
1977    7.0            0.583                   0.148                   0.304      0.439      0.257
1978    6.0            0.538                   0.105                   0.166      0.534      0.300
1979    5.8            0.519                   0.087                   0.108      0.573      0.319

Data source: 1980 Handbook of Labor Statistics, U.S. Department of Labor.
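The sorting argument made earlier — that a mixture of constant individual transition rates produces an aggregate exit rate that falls with duration, and hence a convex log-survival curve — can be verified numerically. The sketch below is my own illustration using the 1975 proportions and rates from Table 4.2; treating the rates as annual is an assumption.

```python
import math

lams = [2.0, 5.0, 15.0]       # exit rates from unemployment, per year
r = [0.279, 0.601, 0.120]     # 1975 proportions from Table 4.2

def survival(t):
    """Proportion of the original cohort still unemployed after t years."""
    return sum(ri * math.exp(-lam * t) for ri, lam in zip(r, lams))

def hazard(t, dt=1e-5):
    """Aggregate exit rate -d log S / dt, estimated by a finite difference."""
    return -(math.log(survival(t + dt)) - math.log(survival(t))) / dt

# The aggregate hazard falls with duration even though each group's own rate
# is constant: the fast exiters leave first, and the remaining pool is
# increasingly dominated by the slow exiters.
for months in (1, 3, 6, 12):
    print(months, round(hazard(months / 12), 2))
```

At long durations the aggregate hazard approaches the smallest individual rate (here 2.0) from above, which is exactly the declining-exit-rate pattern that Clark and Summers read as evidence against the Markov model.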
Tables 4.3 and 4.4 use tenure data from Employment Profiles of Selected Low-Income Areas to estimate the distribution of transition rates from employment to out of work. The clearest pattern is among educational groups. Less educated groups have higher proportions of workers with low transition rates out of employment. Family heads of both sexes have low proportions with high transition rates out of

Table 4.3 Distribution of Employment to Unemployment Transition Rates, Males

                               Transition Rates
                           μ1         μ2         μ3
Age
22 to 34*                 (0.021)    (0.943)    (0.036)
35 to 44                   0.413      0.495      0.092
45 to 54                   0.749      0.144      0.107
55 to 64                   0.804      0.184      0.012

Family status
Family head                0.277      0.705      0.017
Family member*            (0.266)    (0.288)    (0.446)
Unrelated individual       0.427      0.135      0.437

Education
7 years or less            0.438      0.389      0.173
8 years                    0.463      0.422      0.115
1 to 3 years high school   0.446      0.257      0.297
4 years high school        0.210      0.616      0.174
1 year or more college     0.095      0.664      0.241

Groups with an asterisk are described using transition rates of μ1 = 0.01, μ2 = 0.4 and μ3 = 1.0. Data source: Table 35, Tenure of Current Job, Employment Profiles of Selected Low-Income Areas, U.S. Bureau of the Census (1972), U.S. Summary.
Table 4.4 Distribution of Employment to Unemployment Transition Rates, Females

                               Transition Rates
                           μ1         μ2         μ3
Family status
Wife of head               0.316      0.672      0.012
Other family member        0.035      0.724      0.241
Unrelated individual       0.379      0.586      0.035

Education
8 years                    0.700      0.121      0.179
1 to 3 years high school   0.418      0.405      0.178
1 year or more college     0.191      0.748      0.061

Data source: Table 35, Tenure of Current Job, Employment Profiles of Selected Low-Income Areas, U.S. Bureau of the Census (1972).
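One way to read these employment-to-unemployment mixtures is through expected job tenure: for a worker with constant separation rate μ, expected completed tenure is 1/μ (a standard property of the exponential waiting time), so even a small group with a very low rate dominates a mixture's mean tenure. The sketch below is my own illustration using the asterisked rates from the Table 4.3 footnote and the family-member proportions; treating the rates as annual is an assumption.

```python
rates = [0.01, 0.4, 1.0]       # mu1, mu2, mu3 from the Table 4.3 footnote (per year)
props = [0.266, 0.288, 0.446]  # family-member row of Table 4.3

# Expected completed tenure for the mixture: sum over groups of r_i / mu_i.
mean_tenure = sum(r / mu for r, mu in zip(props, rates))
low_rate_share = (props[0] / rates[0]) / mean_tenure
print(round(mean_tenure, 1))     # mean tenure in years
print(round(low_rate_share, 3))  # fraction of mean tenure due to the mu = 0.01 group
```

Here roughly a quarter of the group, the μ = 0.01 workers, accounts for well over nine-tenths of the group's expected tenure, which is why tenure data are so informative about the low-rate tail of the mixture.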
employment, while family members have high proportions. These results demonstrate only that the observed aggregate time patterns into and out of unemployment are consistent with constant transition
rates for each worker. The results in no way rule out duration dependence. Considerable work is currently being done on the problem of estimating from work histories the parameters of time-varying
transition or hazard rates (K. Burdett, N. Kiefer, D.T. Mortensen and G. Neumann, 1984; G. Ridder, 1982; P.K. Andersen, 1982; J. Heckman and B. Singer, 1982; Kiefer and Neumann, 1981b; C.J. Flinn and
J. Heckman, 1982a; T. Lancaster and A. Chesher, 1981; Lancaster, 1979). The consequence of declining transition rates over time is that the proportions of workers with long spells will be increased
and inequality in the distribution of employment will be greater. Increasing transition rates over time, presumably because of declining reservation wages, will tend to cut short long spells of
unemployment, thereby reducing inequality. Clearly, Markov models may be elaborated to explain the tails of the duration data, but only by introducing heterogeneity of the population. (The
heterogeneity of the population may have been the point Clark and Summers wished to make rather than the invalidity of the Markov models.) This brings us to the third inference Clark and Summers
draw. The workers who account for most of the unemployment are those workers who experience long spells out of work. They argue that these workers are different from the workers usually described by
search and turnover models, and that substantially different theories and policies are needed for them. In different terms, there is a segmentation of the labor market into workers with at most short
durations of unemployment and workers with repeated spells of extensive unemployment, perhaps broken up by periods outside the labor force. The presence of workers with long spells of unemployment
and intermittent participation in the labor force is explained partly by the results of Chapter 2. Suppose labor market conditions are such that a given worker has a reservation wage only slightly
above the nonemployment benefits (the parameter b in that chapter). Relative to other members of his or her demographic group, the worker will have a lower transition rate from unemployment to
employment because of the higher reservation wage. If search intensity is subject to the control of the worker, it drops to zero as the nonemployment benefit approaches the expected wage, further
reducing the transition rate. Slight variations in labor market conditions can also induce the worker to drop out and reenter later. Finally, if labor market conditions improve, the worker will raise
search intensity and the reservation wage will fall further below the expected wage, increasing the likelihood of getting a job. This suggests that the long-term unemployed would be very sensitive to
labor market conditions, which is consistent with data provided by Clark and Summers (1979, p.19). Between the peak year of 1969 and the recession year of 1975, the proportion of unemployment arising
from periods of six months or more rose from 0.03 to 0.27. The wide variation in the proportion of unemployed with the lowest transition rates out of unemployment in Table 4.2 constitutes additional
evidence. The proportion with a transition rate of 2.0 ranges from 0.021 in 1970 to 0.419 in 1976. This demonstrates that the long-term unemployed are not chronically out of work but are susceptible
to the same labor market influences as everyone else. Contrary to the argument of Clark and Summers, it appears that the same macroeconomic policies that raise the transition rates for the short-term
unemployed would also raise them for the long-term unemployed. The results presented by Clark and Summers are consistent with quantitative differences among workers instead of the qualitative
differences associated with segmentation. For use in analyzing inequality, the Clark and Summers paper suffers from the
familiar shortcoming that it fails to distinguish between labor market conditions and worker choices in the determination of the distribution of unemployment. Ironically, they reject the very
methodologies, involving search theory and Markov processes, that would allow them to decompose the distribution of unemployment.
3. Sources of Inequality

The observed distribution of unemployment or of employment is neither the result of labor supply decisions alone nor of labor market conditions alone. Instead, the two are
mixed and the problem is to get some idea of the contribution of each. The approach taken in this chapter is to investigate the inequality that would arise randomly from fixed transition rates and to
compare this inequality with the level arising from a mix of transition rates. This section discusses the determinants of the distribution in a hierarchical fashion, in order to recognize the
separate contributions of choice and chance. These determinants are the labor market conditions for the particular worker's demographic group and characteristics; the decision to participate in the
labor market; the choice of reservation wage; and the random outcome of the search process. a. From Chapter 3, it is apparent that transition rates differ substantially among demographic and
educational level groups. To some extent, a worker's education and training are the result of previous choices and decisions. In a current period, however, we may take the labor market conditions
facing the individual worker as given and beyond the control of the worker. These labor market conditions are reflected in the choice set of expected wages and expected unemployment for the
individual.

b. The worker decides whether to participate in the labor market. This decision is substantially influenced by the labor market conditions facing the individual worker. From the
discussion in Chapter 2, the worker decides to participate when the value of being in the labor market, reflected in the worker's reservation wage, rises above the nonemployment benefit. Neither the
labor market conditions determining the reservation wage nor the nonemployment benefit for the worker may be subject to the worker's influence, but we regard the decision as the result of choice. The
decision to participate in the labor market would not affect the distribution of employment among those in the labor market if there were no switching between participation and nonparticipation.
However, Clark and Summers point out that a great many people drop out of the labor market in response to prolonged unemployment. This behavior clearly increases the number of individuals in the
labor market with low numbers of weeks worked in a year. c. In response to the labor market conditions and nonemployment benefits, workers unemployed in the labor market set a reservation wage
according to the expressions in Chapter 2. d. For a given distribution of wage offers, the worker's choice of reservation wage determines the transition rates between employment and unemployment. In
turn, these determine the distribution of employment prospects facing the individual worker. The foregoing discussion establishes that the distribution of employment or unemployment is the result of
choice, the labor market conditions facing the worker, and chance. In order to get some idea of the magnitude of these sources of inequality in the distribution of employment, the contribution of
pure chance should be isolated. The next section turns to the problem of finding the distribution of employment for given transition rates. This provides a null model with which to compare actual
distribution statistics.
4. Distribution of the Time Spent Unemployed

This section presents the probability density functions for the amount of time spent in one state of a two state continuous time Markov process. The
details of the derivation are developed in Sattinger (1983). These probability density functions will provide a null model for the study of the distribution of unemployment. By comparing the
distribution generated by a Markov process using constant transition rates with the actual distributions, it is possible to study the amount of inequality in the distribution of unemployment
generated by heterogeneous transition rates. K. Gabriel (1959) derives an exact expression in discrete time for the number of successes in a sequence of dependent trials but does not regard the
result as usable. However, it is possible to take Gabriel's expression and divide it into two cases, depending on the outcomes of the final trials. Then by taking limits as the period of time
approaches zero, one can obtain exact expressions for the density functions in the continuous case. A less awkward procedure is to reexpress the Markov process in terms of two Poisson processes,
which describe events occurring at different points in time. Consider two states, 0 and 1, corresponding to employment and unemployment. Let λ = transition rate from state 1 to state 0; μ = transition rate from state 0 to state 1; α = time spent in state 1; t = total time; β = t − α = time spent in state 0; p_ij(α,t) = probability density function for the time spent in state 1 out of total time t with the system ending in state j, given that it began in state i; and P_i(α,t) = cumulative probability that the time spent in state 1 out of total time t is less than or equal to α, given that the system began in state i. Instead of treating the system as movement back and forth according to a Markov process, the derivation will use two Poisson processes, called the state 0 and state 1 processes. The state 0 Poisson process has parameter μ, so that the probability that exactly n events occur in time β is e^(−μβ)(μβ)^n/n!. Similarly, the state 1 process has parameter λ. Now consider Figure 4.1. Suppose the system begins and ends in state 0. At the first event in the state 0 Poisson process, the system switches to the state 1 process; at the first event in that process, it shifts back, and so on. Now suppose n events occur in the state 0 process before time β is reached. The system will be in state 1 an amount of time exactly equal to α if and only if the time that passes to the n-th event in that process is α. The probability that the waiting time to the n-th event in the state 1 process is exactly α is (D.R. Cox and H.D. Miller, 1965) λ(λα)^(n−1)e^(−λα)/(n − 1)!. Summing over all possible numbers of events in state 0, one obtains as the probability the following:
p_00(α,t) = (z/2α) I_1(z) e^(−λα − μβ)        (4.1)

In this expression, z = 2(λμαβ)^(1/2) = 2(λμα(t − α))^(1/2) and:

I_j(z) = Σ_{r=0}^∞ (z/2)^(j+2r) / [r!(j + r)!]
[Figure 4.1: Representation of the Markov process. The time spent in state 1 and in state 0 is generated by alternating between the state 1 and state 0 Poisson processes.]
is the modified Bessel function of the first kind of order j (E.T. Whittaker and G.N. Watson, 1963, p.372). The relation between Poisson processes, gamma density functions and Bessel coefficients is also discussed by W. Feller (1971, p.58). If the system ends in state 1, one obtains using similar reasoning:

p_01(α,t) = μ I_0(z) e^(−λα − μβ)        (4.2)

By analogy one obtains:

p_11(α,t) = (z/2β) I_1(z) e^(−λα − μβ)        (4.3)

and:

p_10(α,t) = λ I_0(z) e^(−λα − μβ)        (4.4)
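The four densities can be evaluated numerically. The sketch below is illustrative only (the helper names are mine, and the rates λ = 1.9, μ = 0.1 are taken from Table 4.5); it verifies that, for a worker starting the year employed, the continuous densities together with the atom e^(−μt) at α = 0 carry all the probability:

```python
import math

lam, mu, t = 1.9, 0.1, 1.0  # Table 4.5 rates, per year

def iv_(j, z, terms=12):
    # modified Bessel function of the first kind, I_j(z), via its series
    return sum((z / 2.0) ** (j + 2 * r) / (math.factorial(r) * math.factorial(j + r))
               for r in range(terms))

def z_of(a):
    return 2.0 * math.sqrt(lam * mu * a * (t - a))

def p00(a):  # begin and end employed; a = time spent unemployed
    return (z_of(a) / (2.0 * a)) * iv_(1, z_of(a)) * math.exp(-lam * a - mu * (t - a))

def p01(a):  # begin employed, end unemployed
    return mu * iv_(0, z_of(a)) * math.exp(-lam * a - mu * (t - a))

# Midpoint rule: the continuous mass plus the atom exp(-mu*t) at a = 0
# must sum to one for a worker who starts the year employed.
n = 4000
mass = sum(p00((k + 0.5) * t / n) + p01((k + 0.5) * t / n) for k in range(n)) * (t / n)
total = mass + math.exp(-mu * t)
print(round(total, 4))  # → 1.0
```

The same check with the start-unemployed densities p_11, p_10 and the atom e^(−λt) also sums to one.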
There is additionally a probability of e^(−μt) that α = 0 when the system begins in state 0; this is the probability that the system never leaves that state. Similarly, the probability that β = t − α = 0 and the system never leaves state 1 is e^(−λt). The cumulative distribution functions may also be derived using the Poisson representation. Consider Figure 4.2. In total time t, the amount of time spent in state 1 will be less than or equal to α if the time spent in state 0 passes β = t − α before the time spent in state 1 passes α. This occurs if the number of events in the state 1 process in time α equals or exceeds the number of events in the state 0 process in time β. Multiplying the probabilities of these two events and summing over all possible numbers of events in state 0, one obtains:

P_0(α,t) = Σ_{r=0}^∞ I_r(z) (λα/μβ)^(r/2) e^(−λα − μβ)        (4.5)
Using similar reasoning one obtains:

P_1(α,t) = 1 − Σ_{r=0}^∞ I_r(z) (μβ/λα)^(r/2) e^(−λα − μβ)        (4.6)

Since there is a probability of e^(−μt) that α = 0 when the system starts in state 0, P_0(0,t) = e^(−μt). Also, since there is a probability of e^(−λt) that β = 0 when the system begins in state 1, P_1(α,t) approaches 1 − e^(−λt) as α approaches t. Therefore set P_1(t,t) = 1.
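A minimal sketch of the series for P_0 (assuming, as in the reconstruction above, that the sum runs from r = 0; function names are illustrative) reproduces the boundary behavior just described:

```python
import math

lam, mu, t = 1.9, 0.1, 1.0  # Table 4.5 rates, illustrative

def iv_(j, z, terms=25):
    # modified Bessel function of the first kind, I_j(z), via its series
    return sum((z / 2.0) ** (j + 2 * r) / (math.factorial(r) * math.factorial(j + r))
               for r in range(terms))

def P0(a, terms=40):
    # Pr(time unemployed <= a | start employed), series form of (4.5)
    b = t - a
    z = 2.0 * math.sqrt(lam * mu * a * b)
    s = sum(iv_(r, z) * (lam * a / (mu * b)) ** (r / 2.0) for r in range(terms))
    return s * math.exp(-lam * a - mu * b)

print(round(P0(0.0), 3))    # atom at a = 0: exp(-0.1) = 0.905
print(round(P0(0.999), 3))  # approaches 1 as a approaches t
```

At α = 0 only the r = 0 term survives, giving the atom e^(−μt); near α = t the sum collapses to e^(λα)e^(−λα−μβ), which tends to one.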
Separate cumulative distribution functions for ending in state 0 or 1 are not available since the derivation does not distinguish between the two possibilities. Taking the derivative of the cumulative distribution functions yields the probability density functions p_ij(α,t). The probability density functions can also be shown to satisfy the forward equations of the Markov process. The cumulative distribution functions may also be derived from the Bessel coefficient expansion of the bivariate probability generating function for the events in the two Poisson processes. These results are presented in detail in Sattinger (1983). Tables 4.5, 4.6 and 4.7 present numerical values for the probability density functions for three sets of transition rates. In Table 4.5, the transition rates are λ = 1.9
Time In State 1
Time In State 0
"- TlmeO/
\ Event Eventr-
State 1 Poisson Process
State 0 Poisson Procaaa
Figure 4.2: Cumulative Distribution lor Markov Proceaa
Table 4.5: Density Functions for Time Spent Unemployed, u = 5 per cent

α (months)  p00 (1)   p11 (2)   p01 (3)  p10 (4)  p (5)   P0 (6)  P1 (7)  P (8)
     0      (0.905)*   0.000     0.090    1.719   0.172   0.905   0.000   0.860
     1       0.137     0.010     0.079    1.501   0.280   0.925   0.135   0.885
     2       0.108     0.015     0.069    1.307   0.234   0.941   0.253   0.907
     3       0.084     0.016     0.060    1.136   0.194   0.954   0.357   0.924
     4       0.064     0.014     0.052    0.984   0.160   0.965   0.447   0.939
     5       0.048     0.012     0.045    0.850   0.132   0.974   0.526   0.951
     6       0.036     0.009     0.039    0.733   0.108   0.981   0.595   0.961
     7       0.026     0.006     0.033    0.630   0.088   0.986   0.655   0.970
     8       0.018     0.004     0.028    0.540   0.071   0.991   0.706   0.976
     9       0.011     0.002     0.024    0.462   0.057   0.994   0.751   0.982
    10       0.006     0.001     0.021    0.394   0.046   0.997   0.789   0.986
    11       0.003     0.000     0.018    0.335   0.036   0.999   0.822   0.990
    12       0.000    (0.150)*   0.015    0.284   0.028   1.000   1.000   1.000

Asterisk indicates that probabilities are for never leaving the state. State 0 is employment and state 1 is unemployment. The calculations in this table are based on a transition rate from employment to unemployment of 0.1 and from unemployment to employment of 1.9.
Table 4.6: Density Functions for Time Spent Unemployed, u = 10 per cent

α (months)  p00 (1)   p11 (2)   p01 (3)  p10 (4)  p (5)   P0 (6)  P1 (7)  P (8)
     0      (0.819)*   0.000     0.164    1.474   0.295   0.819   0.000   0.737
     1       0.240     0.018     0.147    1.325   0.483   0.854   0.118   0.780
     2       0.193     0.027     0.132    1.186   0.413   0.883   0.225   0.818
     3       0.153     0.029     0.117    1.056   0.352   0.908   0.322   0.850
     4       0.120     0.027     0.104    0.935   0.298   0.929   0.409   0.877
     5       0.092     0.022     0.092    0.824   0.250   0.946   0.488   0.900
     6       0.069     0.017     0.080    0.723   0.209   0.960   0.558   0.919
     7       0.050     0.012     0.070    0.631   0.173   0.971   0.620   0.936
     8       0.035     0.008     0.061    0.549   0.142   0.980   0.675   0.949
     9       0.023     0.004     0.053    0.474   0.116   0.987   0.723   0.961
    10       0.013     0.002     0.045    0.408   0.094   0.993   0.766   0.970
    11       0.006     0.000     0.039    0.349   0.075   0.997   0.803   0.977
    12       0.000    (0.165)*   0.033    0.298   0.060   1.000   1.000   1.000

Asterisk indicates that probabilities are for never leaving the state. State 0 is employment and state 1 is unemployment. The calculations in this table are based on a transition rate from employment to unemployment of 0.2 and from unemployment to employment of 1.8.
Table 4.7: Density Functions for Time Spent Unemployed, u = 20 per cent

α (months)  p00 (1)   p11 (2)   p01 (3)  p10 (4)  p (5)   P0 (6)  P1 (7)  P (8)
     0      (0.819)*   0.000     0.164    0.655   0.262   0.819   0.000   0.655
     1       0.115     0.009     0.158    0.631   0.346   0.842   0.054   0.685
     2       0.100     0.014     0.151    0.606   0.325   0.864   0.107   0.713
     3       0.086     0.016     0.145    0.581   0.304   0.884   0.158   0.739
     4       0.073     0.016     0.139    0.555   0.284   0.903   0.208   0.764
     5       0.061     0.015     0.133    0.530   0.264   0.920   0.257   0.787
     6       0.049     0.012     0.126    0.505   0.244   0.935   0.304   0.809
     7       0.039     0.010     0.120    0.480   0.225   0.949   0.349   0.829
     8       0.030     0.007     0.114    0.455   0.207   0.961   0.393   0.848
     9       0.021     0.004     0.108    0.430   0.190   0.973   0.435   0.865
    10       0.013     0.002     0.102    0.406   0.174   0.983   0.475   0.881
    11       0.006     0.000     0.096    0.383   0.158   0.992   0.514   0.896
    12       0.000    (0.449)*   0.090    0.359   0.144   1.000   1.000   1.000

Asterisk indicates that probabilities are for never leaving the state. State 0 is employment and state 1 is unemployment. The calculations in this table are based on a transition rate from employment to unemployment of 0.2 and from unemployment to employment of 0.8.
and μ = 0.1 for an unemployment rate of five per cent. For Tables 4.6 and 4.7, the transition rates are λ = 1.8 and μ = 0.2 for ten per cent unemployment and λ = 0.8 and μ = 0.2 for 20 per cent unemployment. The first four columns present the values of the density functions p_ij(α,t) during various months of the year. The units of measurement in these columns are probability per year, so that a value of 1.5 over a month yields a probability of 0.125. In addition, there are nonzero probabilities that a worker never leaves the employed state or that, if he or she begins the year unemployed, never leaves the unemployed state. In Table 4.5, these probabilities are 0.905 and 0.150, respectively. Column 5 presents the density function p(α,t) obtained by pooling all four
populations (determined by start and end states) using as weights the probabilities of starting out employed or unemployed. Therefore in Table 4.5:

p(α,t) = u[p_11(α,t) + p_10(α,t)] + (1 − u)[p_01(α,t) + p_00(α,t)]

or, for month 6:

0.108 = 0.05(0.009 + 0.733) + 0.95(0.036 + 0.039)
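The pooling arithmetic can be reproduced directly; a small check using the month 6 values from Table 4.5:

```python
# pooling the four densities at month 6 of Table 4.5 (u = 0.05)
u = 0.05
p11, p10 = 0.009, 0.733   # workers starting the year unemployed
p01, p00 = 0.039, 0.036   # workers starting the year employed
p = u * (p11 + p10) + (1 - u) * (p01 + p00)
print(round(p, 3))  # → 0.108
```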
Columns 6 and 7 present the cumulative distributions P_0(α,t) and P_1(α,t). Column 8 presents the cumulative distribution calculated for the whole population, P(α,t) = uP_1(α,t) + (1 − u)P_0(α,t).
Tables 4.8, 4.9 and 4.10 present the proportions of a group of workers with given transition rates that are unemployed for longer than one month, three months or six months during the year. Alternative values of λ are presented on the left side in the first column and the values of μ are presented at the top in the first row. The entries are essentially one minus the amounts listed in column 8 of Tables 4.5 to 4.7. Tables 4.8 to 4.10 provide a good description of the influence of the transition rates on the proportions of the population unemployed more than a particular amount.
Note that these figures are not equivalent to duration data, which show the proportion of the unemployed that remain unemployed after a certain time. Visual inspection of these tables suggests that
the proportions unemployed more than one month are most sensitive to the transition rates from employment to unemployment, while the proportions unemployed more than six months are most sensitive to
the transition rates from unemployment to employment. This may be seen by comparing the proportions moving across the tables versus moving down. The results of this section provide us with a model of
the distribution of unemployment for given transition rates, using the Markov process. These results are intended to describe the employment prospects facing an individual worker or the distribution
among a group of workers with the same transition rates. The actual distribution of employment for a group will arise from a mix of transition rates, the consequences of which remain to be analyzed.
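The proportions in Tables 4.8 to 4.10 can be approximated by simulating the two-state process directly. A rough Monte Carlo sketch (rates λ = 1.8, μ = 0.2 as in Table 4.6; a stationary initial state is assumed, and the function names are mine):

```python
import random

def simulate_year(lam, mu, rng):
    """Time spent unemployed over one year for a worker following the
    two-state Markov process, starting from the stationary distribution."""
    t, unemp = 0.0, 0.0
    state = 1 if rng.random() < mu / (lam + mu) else 0  # 1 = unemployed
    while t < 1.0:
        rate = lam if state == 1 else mu
        span = min(rng.expovariate(rate), 1.0 - t)  # censor at year end
        if state == 1:
            unemp += span
        t += span
        state = 1 - state
    return unemp

rng = random.Random(42)
lam, mu = 1.8, 0.2  # ten per cent unemployment, as in Table 4.6
draws = [simulate_year(lam, mu, rng) for _ in range(20000)]
share = {m: sum(a > m / 12.0 for a in draws) / len(draws) for m in (1, 3, 6)}
print({m: round(s, 3) for m, s in share.items()})
```

With these rates the simulated shares fall as the unemployment threshold rises, mirroring the movement down each column of Tables 4.8 to 4.10.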
5. Employment Inequality

This section investigates the inequality in the distribution of employment generated by the Markov process with constant transition rates. Even though two workers have
identical transition rates, the random outcome of the search process will generate differences in the amount of time spent employed within a year. The results of this section will therefore indicate
the amount of inequality in the distribution of employment that is generated by the random outcomes of search. This inequality can then be compared with the amount generated with heterogeneous
transition rates, arising from differences in reservation wages, and further with the actual inequality in
Table 4.8: Proportions Unemployed More Than One Month in Year

λ \ μ    (a)    (b)   0.100  0.150  0.200  0.250  0.300   (c)   0.450  0.500
0.5     0.126  0.231  0.318  0.391  0.454  0.508  0.555  0.597  0.633  0.665
1.0     0.083  0.158  0.225  0.285  0.339  0.389  0.473  0.474  0.512  0.546
1.5     0.067  0.129  0.185  0.238  0.286  0.331  0.373  0.411  0.447  0.481
2.0     0.058  0.112  0.163  0.210  0.254  0.296  0.335  0.371  0.406  0.438
2.5     0.052  0.101  0.147  0.191  0.232  0.271  0.307  0.342  0.375  0.406
3.0     0.047  0.092  0.135  0.176  0.214  0.251  0.286  0.319  0.350  0.380
3.5     0.044  0.086  0.126  0.164  0.200  0.235  0.268  0.300  0.330  0.359
4.0     0.041  0.080  0.118  0.154  0.188  0.221  0.253  0.284  0.313  0.341
4.5     0.038  0.075  0.111  0.145  0.178  0.210  0.240  0.269  0.297  0.325
5.0     0.036  0.071  0.105  0.137  0.169  0.199  0.228  0.256  0.284  0.310

Table entries show the proportions of the labor force unemployed more than one month in a year as a result of movement between unemployment and employment described by a Markov process with transition rate μ from employment to unemployment and λ from unemployment to employment. Columns (a), (b) and (c) carry μ values whose labels are not legible in the source.
Table 4.9: Proportions Unemployed More Than Three Months in Year

λ \ μ    (a)    (b)   0.100  0.150  0.200  0.250  0.300   (c)   0.450  0.500
0.5     0.110  0.202  0.279  0.345  0.402  0.452  0.496  0.534  0.569  0.600
1.0     0.065  0.124  0.177  0.226  0.271  0.313  0.351  0.387  0.419  0.450
1.5     0.047  0.091  0.133  0.172  0.209  0.244  0.276  0.307  0.337  0.365
2.0     0.037  0.072  0.186  0.139  0.170  0.199  0.228  0.255  0.281  0.307
2.5     0.030  0.060  0.088  0.115  0.142  0.168  0.193  0.217  0.240  0.263
3.0     0.025  0.050  0.074  0.098  0.121  0.143  0.165  0.187  0.208  0.228
3.5     0.021  0.043  0.063  0.084  0.104  0.124  0.143  0.163  0.182  0.200
4.0     0.018  0.037  0.055  0.073  0.090  0.108  0.125  0.142  0.159  0.176
4.5     0.016  0.032  0.047  0.063  0.079  0.094  0.110  0.125  0.141  0.156
5.0     0.014  0.027  0.041  0.055  0.069  0.083  0.097  0.111  0.124  0.138

Table entries show the proportions of the labor force unemployed more than three months in a year as a result of movement between unemployment and employment described by a Markov process with transition rate μ from employment to unemployment and λ from unemployment to employment. Column labels for μ as in Table 4.8.
Table 4.10: Proportions Unemployed More Than Six Months in Year

λ \ μ    (a)    (b)   0.100  0.150  0.200  0.250  0.300   (c)   0.450  0.500
0.5     0.089  0.163  0.227  0.281  0.329  0.371  0.409  0.442  0.473  0.500
1.0     0.044  0.064  0.121  0.155  0.187  0.217  0.245  0.272  0.297  0.320
1.5     0.027  0.053  0.077  0.101  0.123  0.145  0.166  0.186  0.205  0.224
2.0     0.018  0.036  0.053  0.070  0.087  0.103  0.119  0.134  0.149  0.164
2.5     0.013  0.026  0.038  0.051  0.063  0.075  0.087  0.099  0.111  0.123
3.0     0.009  0.019  0.028  0.037  0.047  0.056  0.066  0.075  0.084  0.094
3.5     0.007  0.014  0.021  0.028  0.035  0.043  0.050  0.057  0.065  0.072
4.0     0.005  0.010  0.016  0.021  0.027  0.033  0.038  0.044  0.050  0.056
4.5     0.004  0.008  0.012  0.016  0.021  0.025  0.030  0.034  0.039  0.044
5.0     0.003  0.006  0.009  0.012  0.016  0.019  0.023  0.027  0.030  0.034

Table entries show the proportions of the labor force unemployed more than six months in a year as a result of movement between unemployment and employment described by a Markov process with transition rate μ from employment to unemployment and λ from unemployment to employment. Column labels for μ as in Table 4.8.
the distribution of employment. One conclusion of this section is that a higher unemployment rate in the economy generates greater inequality in the distribution of employment and hence in the
distribution of earnings. As revealed by column 8 of Tables 4.5 to 4.7, most workers remain employed the entire year. The inequality in the distribution of employment therefore arises from the
minority of workers who are out of work part of the year. The distribution of employment is highly skewed, with no right tail and all dispersion arising from the minority in the left tail. Most
measures of inequality are sensitive to those observations in the lowest brackets, so that the inequality will be primarily determined by the numbers of workers with very low employment. The
proportions unemployed longer than six months, presented in Table 4.10, are undoubtedly the major source of inequality. One method of describing the distribution of employment would be to construct a
Lorenz curve. However, the resulting Lorenz curve is not very interesting. By dividing the year up into twelve months and calculating the number with employment in each bracket, it is possible to
generate twelve points on the Lorenz curve. But these points all lie close to the lower left corner. For example, using the data from Table 4.6, with λ = 1.8 and μ = 0.2, the highest point within
the diagram shows 26 per cent of the population with 18 per cent of the weeks worked. From this point, the rest of the Lorenz curve consists of a straight line to the upper right corner. This
peculiar shape is caused by the fact that most workers in the group are employed all year. Gini coefficients for a range of transition rates are presented in Table 4.11. By inspection, the Gini
coefficients follow the same pattern as the proportions of the population with unemployment greater than a month, three months or six months out of a year. The inequality measures increase for
greater values of μ and smaller values of λ. Because the unemployment rate is given by μ/(λ + μ), greater values of μ or smaller values of λ yield greater unemployment for the group. In general,
then, a higher unemployment rate also results in a more unequal distribution of employment. (This statement requires some qualification since different transition rates can yield the same
unemployment rate but unequal Gini coefficients.) An alternative to the Gini coefficient is the coefficient of variation, the standard deviation of a distribution divided by the mean. Values for the
coefficient of variation are presented in Table 4.12. These are calculated using the distributions developed in the previous section. An alternative to the exact distribution for the continuous case
is the use of the simpler approximate formulas developed by Gabriel (1959) for the two state Markov process in discrete time. These expressions yield almost identical values and are therefore not
reported here. Again, an immediate result appearing in Table 4.12 is that an increase in μ or a decrease in λ, which result in higher levels of unemployment, raise the inequality in the distribution
of employment. By way of comparison, the coefficient of variation of the distribution of income in 1970 was 0.708 (this figure is taken from Chapter 6). The figures in Table 4.12 reflect the amount
of earnings inequality that would be generated if all individuals in a group with common transition rates were paid the same wage rate and the only source of inequality were differences in weeks
worked. The entries therefore provide an estimate of the contribution through time employed of random search to inequality. For the purposes of this monograph, the Gini coefficient is an
unsatisfactory measure of inequality. While the Gini coefficient has an interpretation in terms of mean difference in income or employment among members of the economy, the contribution of subgroups
or factors to the Gini coefficient is unclear. The recent literature on decomposable inequality measures (F. Bourguignon, 1979; Frank
Table 4.11: Gini Coefficients for Distribution of Employment

λ \ μ    (a)    (b)   0.100  0.150  0.200  0.250  0.300   (c)   0.450  0.500
0.5     0.087  0.157  0.214  0.262  0.303  0.338  0.367  0.393  0.416  0.435
1.0     0.048  0.087  0.122  0.154  0.182  0.208  0.231  0.252  0.272  0.289
1.5     0.035  0.062  0.086  0.109  0.130  0.150  0.168  0.184  0.200  0.215
2.0     0.029  0.049  0.068  0.085  0.102  0.117  0.132  0.145  0.158  0.170
2.5     0.026  0.041  0.050  0.071  0.084  0.097  0.109  0.120  0.131  0.141
3.0     0.023  0.037  0.049  0.061  0.072  0.082  0.093  0.102  0.111  0.120
3.5     0.022  0.033  0.044  0.054  0.063  0.072  0.081  0.089  0.097  0.105
4.0     0.021  0.030  0.039  0.048  0.056  0.064  0.072  0.079  0.086  0.093
4.5     0.020  0.028  0.036  0.044  0.051  0.058  0.065  0.072  0.078  0.084
5.0     0.020  0.027  0.034  0.041  0.047  0.053  0.059  0.065  0.071  0.076

Entries are Gini coefficient measures of inequality in the distribution of employment arising from Markov movements between unemployment and employment with transition rates λ and μ. Column labels for μ as in Table 4.8.
Cowell, 1980; A.F. Shorrocks, 1980) indicates that the square of the coefficient of variation, among other measures, may be decomposed into contributions within groups and between groups. While this
feature will not be used here, the square of the coefficient of variation has other decomposable properties which render it suitable for a discussion of the sources of inequality, as will be
demonstrated in Chapter 6, section 2. The next section will relate the inequality in the distribution of employment among a group with heterogeneous transition rates to the inequality for constant
transition rates.
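Inequality measures of the kind reported in Tables 4.11 and 4.12 can also be approximated by simulation. A rough sketch of the coefficient of variation of time employed (rates and seed are illustrative; a stationary initial state is assumed):

```python
import random

def employed_time(lam, mu, rng):
    # fraction of the year spent employed for one worker (stationary start)
    t, emp = 0.0, 0.0
    state = 1 if rng.random() < mu / (lam + mu) else 0  # 1 = unemployed
    while t < 1.0:
        rate = lam if state == 1 else mu
        span = min(rng.expovariate(rate), 1.0 - t)
        if state == 0:
            emp += span
        t += span
        state = 1 - state
    return emp

def cv(xs):
    # coefficient of variation: standard deviation divided by the mean
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5 / m

rng = random.Random(1)
cvs = []
for lam, mu in [(2.0, 0.1), (2.0, 0.2)]:  # same lambda; higher mu, higher u
    xs = [employed_time(lam, mu, rng) for _ in range(20000)]
    cvs.append(cv(xs))
print([round(c, 2) for c in cvs])  # compare with the Table 4.12 entries
```

The simulated coefficient rises with μ at fixed λ, the pattern the text draws from Table 4.12.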
6. Heterogeneous Transition Rates

The foregoing section demonstrates that a group of workers who start out with equal transition rates between employment and unemployment will end up with unequal
amounts of employment. An additional source of employment inequality arises because workers' transition rates are also unequal. The distribution of employment for all workers is a mixture of the
distributions for groups of workers with the same transition rates. This mixture will generally have greater inequality than the individual groups, since the means for the groups are unequal. The
task of this section is to compare the distribution of employment calculated from constant and equal transition rates with the actual distributions observed in labor markets. The difference in
inequality between the two distributions arises from changes in transition rates over time (duration dependence) and from heterogeneous transition rates. The latter can be caused by unobserved
differences in workers or different choices of reservation wages. If workers raise or lower their reservation wages over time, choice can also contribute to duration dependence. The difference in
inequality between the calculated and actual employment distributions constitutes an upper limit to the contribution of choice to inequality. First, let us analyze how differences in transition rates
contribute to inequality. Consider n distributions of employment p_1(x), p_2(x), ..., p_n(x), where x is the amount of employment in a year. Suppose these distributions are generated by the two state
continuous time Markov process with different transition rates from unemployment to employment. For the moment, suppose the transition rate from employment to unemployment is the same for all. These
distribution functions could then arise from workers with the same grade but different reservation wages. Now suppose that within the group the proportion of workers with distribution function Pi(X)
is r_i. The density function for the entire group is:

p(x) = Σ_{i=1}^n r_i p_i(x)
This density function is known as a mixture or as a contagious distribution (Mood, Graybill and Boes, 1974, p.122). An example of such a mixture is the alternative to the mover-stayer model developed
by Singer and Spilerman (1976). Their model with heterogeneous transition rates is a gamma mixture of Poissons. Now consider the coefficient of variation for the density function p(x). Let M_1i be the mean for p_i(x) and let M_2i be the second moment around the origin for the same function. Then the mean, M_1, and second moment around the origin, M_2, for p(x) are:

M_1 = Σ_i r_i M_1i
Table 4.12: Coefficients of Variation for Distribution of Employment

λ \ μ    (a)    (b)   0.100  0.150  0.200  0.250  0.300   (c)   0.450  0.500
0.5     0.290  0.407  0.495  0.567  0.630  0.685  0.734  0.779  0.821  0.859
1.0     0.191  0.268  0.326  0.374  0.415  0.452  0.485  0.515  0.543  0.568
1.5     0.146  0.205  0.249  0.286  0.318  0.346  0.372  0.395  0.416  0.436
2.0     0.119  0.167  0.203  0.233  0.259  0.283  0.304  0.323  0.341  0.357
2.5     0.100  0.141  0.172  0.198  0.220  0.240  0.257  0.274  0.289  0.303
3.0     0.087  0.122  0.149  0.171  0.191  0.208  0.224  0.238  0.251  0.264
3.5     0.077  0.108  0.132  0.152  0.169  0.184  0.198  0.211  0.222  0.233
4.0     0.069  0.097  0.118  0.136  0.151  0.165  0.177  0.189  0.200  0.210
4.5     0.062  0.086  0.107  0.123  0.137  0.149  0.161  0.171  0.181  0.190
5.0     0.057  0.080  0.098  0.112  0.125  0.137  0.147  0.157  0.166  0.174

Entries are the coefficient of variation measures of inequality in the distribution of employment arising from Markov movements between unemployment and employment with transition rates λ and μ. Column labels for μ as in Table 4.8.
M_2 = Σ_i r_i M_2i

The square of the coefficient of variation is then:

cv² = (M_2 − M_1²)/M_1²        (4.7)
The difference in the means among the distributions Pi(X) contributes an unknown amount to the inequality in employment that is described by p(x). In the case where there are only two distributions,
r_1 + r_2 = 1 and:

cv² = [Σ_{i=1}^2 r_i(M_2i − M_1i²) + r_1 r_2 (M_11 − M_12)²]/M_1²
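The two-distribution decomposition can be verified with arbitrary moments; a small check (the subgroup moments below are hypothetical numbers chosen only to exercise the identity):

```python
# check of the two-group decomposition of cv^2
r1, r2 = 0.5, 0.5
M11, M21 = 0.9, 0.83   # mean and second moment about the origin, group 1
M12, M22 = 0.7, 0.53   # mean and second moment about the origin, group 2

M1 = r1 * M11 + r2 * M12
M2 = r1 * M21 + r2 * M22
cv2 = (M2 - M1 ** 2) / M1 ** 2

within = (r1 * (M21 - M11 ** 2) + r2 * (M22 - M12 ** 2)) / M1 ** 2
between = r1 * r2 * (M11 - M12) ** 2 / M1 ** 2
print(round(cv2, 4), round(within + between, 4))  # → 0.0625 0.0625
```

The within-group term corresponds to the bracketed sum and the between-group term to r_1r_2(M_11 − M_12)², so the difference in subgroup means enters only through the second term.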
Thus the increase in the coefficient of variation depends roughly on the difference in means. A numerical example will shed some light on the determination of the coefficient of variation for a group with heterogeneous transition rates. Consider a group with μ = 0.2 composed of four equal subgroups with r_i = 0.25 and transition rates λ_i from unemployment to employment equal to 0.5, 2.0, 3.5 and 5.0. The unemployment levels for these subgroups are 0.286, 0.091, 0.054 and 0.039, and the overall unemployment rate for the group is 0.118. The cv², following (4.7), is then 0.0902. Rather surprisingly, the cv² for this heterogeneous group is only moderately greater than the square of the mixture of the coefficients of variation for the four subgroups, 0.0774, calculated from:

[Σ_{i=1}^4 r_i(M_2i − M_1i²)]/M_1²
The average of the cv² for the four groups is 0.103. (The first group, with the lowest transition rate into employment, dominates the calculation.) The difference in means in this case does not
contribute substantially to the inequality; most of the inequality is within groups rather than between groups. Apparently, the peculiarly skewed distributions of employment generate means which,
relatively speaking, are close together. Whether the actual distribution of transition rates yields inequality levels which are close to the mean inequality level remains to be seen. The implication
of the numerical example is that heterogeneity in transition rates, arising from differences in reservation wages, does not itself generate substantial inequality, i.e., the inequality for the group
is only slightly greater than the average inequality for the subgroups. However, the various demographic groups will have wider differences in means. Differences in educational level, age or
experience may therefore raise the level of inequality above the inequality in prospects facing an individual worker. Tables 4.13 to 4.15 present evidence on the upper limit of the contribution of
choice to employment inequality. This evidence arises from a comparison of calculated and actual distributions. The calculated distribution is based on transition rates that are consistent with
observed aggregate movements between employment states. A difficulty arises in the definition of those states. Empirical work, including that of Clark and Summers, shows that movements between
employment and unemployment are greatly affected by movements into and out of the labor force. However, many workers out of the labor force would not consider employment, and
Table 4.13: Actual Versus Calculated Employment Inequality, White Males, 1970

Columns: (1) unemployment rate; (2) out of work rate; (3) transition rate to out of work; (4) transition rate to employment; (5) calculated inequality; (6) actual inequality; (7) ratio of calculated to actual inequality.

Group                        (1)    (2)    (3)    (4)     (5)     (6)     (7)
Age
  16 to 21                  17.3   31.8   0.633  1.355   0.116   0.200   0.580
  22 to                      7.5   11.9   0.287  2.119   0.0480  0.0640  0.750
  35 to                      5.8    8.2   0.207  2.322   0.0335  0.0357  0.938
  45 to                      5.1    8.2   0.188  2.104   0.0339  0.0376  0.902
  55 to                      4.5   11.0   0.181  1.469   0.0437  0.0458  0.954
Education
  7 years or less            7.0   11.5   0.255  1.966   0.0467  0.0571  0.818
  8 years                    7.1   10.9   0.220  1.803   0.0445  0.0480  0.927
  1 to 3 years high school   8.9   12.4   0.261  1.844   0.0502  0.0586  0.857
  4 years high school        5.9    8.3   0.223  2.464   0.0335  0.0475  0.705
  1 year or more college     5.8    8.2   0.244  2.719   0.0328  0.0493  0.665
Family status
  Head                       4.6    7.1   0.198  2.573   0.0289  0.0342  0.845
  Other member              15.3   19.7   0.321  1.313   0.0751  0.1060  0.708
  Unrelated individual       8.4   12.7   0.325  2.224   0.0511  0.0681  0.750

Unemployment and out of work rates in columns (1) and (2) are measured in per cent. Inequality in columns (5) and (6) is measured by the square of the coefficient of variation. The upper limits of the age brackets after the first were cut off in the source. Data sources and methods of calculation: see text.
the extension of the Markov model to three or four states would greatly raise the level of complexity of the problem. A practical compromise is to extend the state of unemployment to include workers
who are currently out of the labor force but who were employed sometime in the previous year. This extended state can be called out of work. With the states of unemployment and out of the labor force
combined, the transition rates for each group are determined as follows. Column 1 gives the unemployment rates used in the calculation of unemployment valuations in Chapter 3. The out of work rate in
the second column is calculated by including the status out of the labor force. The numerator is the number of workers who are either currently unemployed or employed in the last year but out of the
labor force now. The denominator consists of those who are currently in the labor force plus those who worked in the past year but are out of the labor force now. For example, the out of work rate for white males aged 16 to 21 is (0.173 × 164 + 35)/(164 + 35) = 0.318. The transition rate from employment to out of work is then calculated from the result that e^(−μt) will be the proportion of workers who start out employed that are still employed after time t. The number of workers employed after a year is the sum of those currently in the labor force plus those who worked sometime in the past year but are now out of the labor force, minus all part year workers. For white males aged 16 to 21, the transition rate μ is then calculated from:

e^(−μ·1) = [164 + 35 − (part year workers)] / [(1 − 0.173) × 164]
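The arithmetic of these two steps can be sketched directly. The counts and rates below are the ones quoted in the text for white males aged 16 to 21; the survival fraction used at the end is an illustrative value, not a figure from the tables:

```python
import math

# Figures quoted in the text for white males aged 16 to 21 (in thousands):
# 164 currently in the labor force with a 17.3 per cent unemployment rate,
# plus 35 who worked in the past year but are now out of the labor force.
in_lf, u_rate, out_lf_worked = 164.0, 0.173, 35.0

out_of_work_rate = (u_rate * in_lf + out_lf_worked) / (in_lf + out_lf_worked)
print(round(out_of_work_rate, 3))  # 0.318, as in the text

# If a fraction s of initially employed workers is still employed after a
# year, then e^(-mu) = s gives the transition rate out of employment:
def mu_from_survival(s):
    return -math.log(s)

# Long-run condition: out-of-work rate r = mu / (lam + mu), so
def lam_from_rate(mu, r):
    return mu * (1.0 - r) / r

# Illustrative survival fraction (not a figure from the text):
mu = mu_from_survival(0.82)
lam = lam_from_rate(mu, out_of_work_rate)
```

By construction, the pair (mu, lam) reproduces the out-of-work rate through mu/(lam + mu).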
The transition rate from out of work to employment, λ, is then calculated from the long run condition that the out of work rate equals μ/(λ + μ). The transition rates μ and λ, given in columns 3 and 4, may be compared with those calculated by Stephen Marston (1976). (Other work on labor force flows and transition rates is by Ralph E. Smith in R. Ehrenberg, 1977, pp.259-303; George L. Perry, 1972; Robert E. Hall, 1972; Smith, Jean Vanski and Charles C. Holt, 1974.) Marston's work indicates a combined transition rate out of employment of 12 × (0.0374 + 0.1205) = 1.89, about three times the corresponding figure for μ in Table 4.13. The rate for transition into employment is calculated less directly but is similarly greater than the figure derived here. However, Marston estimates the
transition rates using gross flow data by month. As mentioned in section 2, workers with high transition rates dominate the estimation of transition rates using short term flows. The transition rates
in columns 3 and 4 are used to estimate the predicted number of workers for each bracket of weeks worked, according to the probability distributions derived in section 4. However, the actual
distributions do not include the workers with no employment during the previous year. The measure of calculated inequality in column 5 therefore is based on a distribution in which the workers with
no employment are eliminated in order to make the result comparable with the actual inequality. Table 4.14 shows how the calculated distribution of employment compares with the actual distribution.
For each bracket of weeks worked, the entry is the ratio of the actual number employed to the number calculated on the basis of the constant transition rates. The results indicate systematic
divergences between the two distributions. Compared to the distribution of employment with constant transition rates, the actual distribution has more workers employed 1 to 13 weeks and fewer workers
employed 50 to 52 weeks. These differences arise from the presence of some workers with transition rates above and below the average. Further, the actual numbers with employment
in the second and fourth brackets systematically exceed the calculated numbers, while the actual number in the third bracket falls short of the calculated number. The reasons for these differences
are not clear. Returning to Table 4.13, the results show that calculated inequality is always less than the actual inequality, given by the CV² in column 6. The ratio of calculated to actual
inequality, shown in column 7, ranges from a low of 0.580 to a high of 0.954. The calculated inequality is the inequality that is generated when the workers in the group all have the same transition
rate into and out of employment. Transitions caused by movements out of or into the labor force are substantially at the discretion of the worker, although perhaps motivated by circumstances outside
the worker's control. One possibility would be to regard all movements into or out of the labor force as resulting from choice. This would implicitly treat full labor force participation as the norm
against which individual labor force outcomes could be compared. Then even with constant transition rates into and out of employment, part of the generated inequality in the distribution of
employment would be attributable to choice. Instead, the average transition rates into and out of employment (incorporating movements between unemployment and out of the labor force) are taken as the
norm, and choice appears as a deviation in transition rates from the norm. With this interpretation, the calculated inequality in column 5 of Table 4.13 is the amount of inequality generated by the
random outcomes of the labor market operation. The difference between the calculated and the actual level of inequality is then produced by the heterogeneous transition rates and is therefore an
upper limit to the contribution of choice.

Table 4.14 Ratios of Actual to Calculated Proportions of Unemployed

                                   Weeks Worked During Year
  Group                       1 to 13  14 to 26  27 to 39  40 to 49  50 to 52
  Age
    16 to 21                    2.55     1.17      0.661     0.847     0.788
    22 to –                     1.80     1.17      0.864     1.16      0.950
    35 to –                     1.17     1.09      0.880     1.33      0.965
    45 to –                     1.32     1.10      0.824     1.30      0.972
    55 to –                     1.07     1.10      0.859     1.38      0.969
  Family status
    Head                        1.47     1.17      0.901     1.30      0.960
    Other member                1.79     1.22      0.874     1.16      0.904
    Unrelated individual        1.63     1.41      0.838     1.04      0.959
  Education
    7 years or less             1.30     1.39      0.892     1.21      0.947
    8 years                     1.20     1.07      0.875     1.37      0.959
    1 to 3 years high school    1.39     1.09      0.879     1.29      0.952
    4 years high school         2.10     1.28      0.829     1.16      0.961
    1 year or more college      2.33     1.37      0.876     1.01      0.970

Entries are the ratios of actual numbers of workers with weeks worked in the bracket to the number calculated using the set of transition rates derived in Table 4.13. Figures for actual numbers of workers are from Employment Profiles of Selected Low-Income Areas (1972).

From Table 4.13, choice accounts for at most 0.046 to 0.42 of the total inequality in the distribution of employment among workers, using CV² as the inequality measure. For half of the cases, choice accounts for less than 20 per cent of the inequality. These results are in line with the numerical example worked out in this section, in which
heterogeneity in transition rates into employment did not add appreciably to the inequality. The major difference between the numerical example and the results in Table 4.13 is that the groups in
Table 4.13 differ also by transition rates out of employment, and some inequality is generated by movement into and out of the labor force. Also, the exclusion of those workers with no employment in
the previous year reduces the measured inequality. Results for white females, black males and black females are also available but are not reported here; they follow substantially the same patterns
as for white males. Table 4.15 provides the same information as in Table 4.13 for aggregate groups. Because each group, e.g., white males, combines workers with different age and education brackets,
the transition rates should be even more dispersed than for the more narrowly defined groups in Table 4.13. Therefore the inequality levels should be higher, and the difference between calculated and
actual inequality should be greater. For white males, the calculated and actual inequality appear to exceed the corresponding values for all groups except those aged 16 to 21 and other members of
families, for whom the unemployment levels are much higher than for the group as a whole. However, the difference between calculated and actual inequality is only about 22 per cent of actual
inequality. The difference is about the same for all groups. For all workers, 23.7 per cent is the contribution to inequality in the distribution of employment from two sources. The first source is
choice, working through different reservation wages to produce heterogeneous transition rates. The second is the pooling of different demographic groups facing unequal employment prospects and
therefore unequal average transition rates. Dale Mortensen and George Neumann (1984) have developed an alternative method of measuring the contribution of choice. The transition rate out of
unemployment may be regarded as the product of the offer arrival rate times the probability that an offer will be accepted. The former is the result of chance while the latter is determined by the
worker's choice of reservation rate.
7. Conclusions

This chapter has examined the size distribution of employment among workers. By studying the distribution of employment in isolation from the distribution of wage rates and earnings, we have been able to examine one mechanism by which job search contributes to inequality. The points made in the chapter are as follows.

a. The fat tails observed in unemployment duration data (high proportions of workers unemployed more than six months) are not necessarily evidence of the invalidity of search models and Markov processes in the description of unemployment. Instead, they can arise from heterogeneous transition rates generated by heterogeneous reservation wages. This is demonstrated by the distributions of transition rates in Tables 4.1 to 4.4 that are consistent with the observed duration data. These distributions are found using the estimation procedure developed in section 2.

b. The duration of unemployment follows a simple form for constant transition rates, the exponential distribution. However, the statistic that is relevant to the study of inequality is the time spent in one state of a two state continuous time Markov process, and this distribution is substantially more complicated. The probability density functions and their cumulative forms are developed in section 4 and the results are presented in (4.1) to (4.6).
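A small simulation illustrates point b: even with a single pair of transition rates, chance alone spreads weeks worked across otherwise identical workers. This is only a sketch; the rates μ = 0.25 and λ = 2.0 per year are illustrative values, not figures from the tables:

```python
import random

random.seed(0)

def weeks_employed_in_year(mu, lam):
    """Weeks employed during a 52-week year for one worker in a two-state
    continuous-time Markov process; mu = rate out of employment, lam = rate
    into employment, both per year. The initial state is drawn from the
    steady-state distribution lam/(lam + mu)."""
    employed = random.random() < lam / (lam + mu)
    t, weeks = 0.0, 0.0
    while t < 1.0:
        # Exponential waiting time in the current state.
        stay = random.expovariate(mu if employed else lam)
        span = min(stay, 1.0 - t)
        if employed:
            weeks += span * 52.0
        t += span
        employed = not employed
    return weeks

# Dispersion generated purely by chance, for workers with identical rates:
sample = [weeks_employed_in_year(0.25, 2.0) for _ in range(20000)]
m = sum(sample) / len(sample)
cv2 = sum((x - m) ** 2 for x in sample) / len(sample) / (m * m)
```

Here cv2 is the square of the coefficient of variation of weeks employed, the inequality measure used in Tables 4.13 and 4.15.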
Table 4.15 Employment Inequality Among Major Groups of Workers, 1970

(Columns: unemployment rate; out of work rate; transition rate to out of work; transition rate to employment; calculated inequality; actual inequality; ratio of calculated to actual inequality. Rows: all workers, white males, white females, black males, black females.)

Unemployment and out of work rates in columns (1) and (2) are measured in per cent. Inequality in columns (5) and (6) is measured by the square of the coefficient of variation. Data sources and methods of calculation: see text.
c. The inequality generated by the distribution of employment is substantial. For all workers, the square of the coefficient of variation of employment is 0.0876, a substantial proportion of the total inequality of 0.708. Unemployment is not evenly distributed. It is not a fixed and predictable cost of finding a job.

d. Most of the inequality in the distribution of employment is the result of the random outcomes of job loss and job search. It is a source of uncertainty and variation over which the worker has substantially no control. To some extent, the worker can reduce the inequality in the distribution of employment he or she faces by reducing the reservation wage and thereby increasing the transition rate from unemployment to employment. But the reduction is limited and the cost is a lower expected wage rate.

e. Choice generates some of the inequality in the distribution of employment, but not much. Unequal reservation wages produce unequal transition rates, which in turn yield an inequality in the distribution of employment which exceeds the inequality generated by common and equal transition rates. But the contribution turns out to be moderate, about 25 per cent in most cases. The reason is mathematical. The distributions of employment are skewed to the left, with most of the weight at the upper end of the interval. Merging groups with unequal transition rates does not raise the inequality much with these distributions. The numerical calculation and the empirical results in Tables 4.13 and 4.15 support this conclusion.

f. Higher unemployment rates in the economy produce more unequal distributions of employment. As the transition rate out of employment increases and the rate into employment decreases, inequality rises, as demonstrated in Table 4.11 using the Gini coefficient and in Table 4.12 using the coefficient of variation.

g. The treatment of the labor force participation decision is clearly inadequate. The contribution to the distribution of employment of decisions to participate needs to be examined together with transitions between employment and unemployment. Further, the movement from employment to unemployment needs to be examined with regard to whether the movement is voluntary or involuntary (e.g., quits versus layoffs).
Chapter 5
The Distribution of Wage Rates

1. Introduction

The previous chapter develops one ingredient in the final distribution of earnings: the distribution of employment. This chapter develops the other
ingredient, the distribution of wage offers facing the individual and the resulting distribution of accepted wage rates. Several important questions are introduced into the study of the distribution
of earnings at this point. In very simple theories, the distribution of wage rates is related directly to the distribution of some individual characteristic, such as education, ability or experience.
In more complex models, it arises from the assignment of workers to jobs in a deterministic model with full employment, in which differences in wage rates are related to trade-offs between product
outputs or preferences in equilibrium. With job search, however, there are several intermediate steps. Even for a group facing identical labor market conditions, the distribution of accepted wage
rates will resemble neither the distribution of wage offers nor the distribution of reservation wages for the group but will depend upon an interaction between the two. The first task of this chapter
is to study the behavior of truncated wage distributions, which arise when workers set reservation wages. The mean, coefficient of variation and valuation of unemployment vary systematically as the
reservation wage is raised. The second task is to explain how the distribution of accepted wage offers is generated. In particular, how does dispersion in reservation wages contribute to inequality
in accepted wage rates? The third task is to explain the source of differences in wage offers. If the workers under consideration are identical, why should different wage rates be offered? The point
of view taken in previous work by this and other authors on the distribution of earnings is that differences play an allocative role in the economy, i.e., they assign workers to jobs. But if
different wages face the same worker, how can the role be allocative? Only after we explain the role played by wage differences can we explain why they arise and consider what policies would affect
the differences.
2. Truncated Wage Offer Distributions

A consequence of the job search theory developed in Chapter 2 is that the wage rates a worker could receive are generated by a truncated distribution. Since the
worker will not accept any wage below the reservation wage, the bottom tail or bottom part of the wage offer distribution is cut off. The expected wage and inequality in wage rates for the remaining
distribution differ from the expected wage and inequality for the entire wage offer distribution. By raising the reservation wage, the worker raises the expected wage, alters the dispersion in wage
rates he or she faces and reduces the likelihood of acceptance. The changes in these economic and statistical magnitudes as the reservation wage is raised are the subject of this section. A major
question concerns where the reservation wage occurs in relation to the distribution, i.e., at the bottom, middle or top end.
Table 5.1 Distribution of Wage Offers Generated by Bivariate Normal v(w,g)

                                 Grade of Worker
  Wage Bracket            –         –         –       g = 10
  0 to 1               0.1759    0.0669    0.0339    0.0243
  1 to 2               0.1844    0.0845    0.0436    0.0312
  2 to 3               0.1752    0.1000    0.0530    0.0379
  3 to 4               0.1508    0.1111    0.0615    0.0440
  4 to 5               0.1176    0.1160    0.0687    0.0492
  5 to 6               0.0831    0.1138    0.0744    0.0534
  6 to 7               0.0532    0.1047    0.0783    0.0566
  7 to 8               0.0308    0.0900    0.0805    0.0588
  8 to 9               0.0162    0.0721    0.0808    0.0603
  9 to 10              0.0077    0.0536    0.0788    0.0610
  10 to 11             0.0033    0.0369    0.0743    0.0610
  11 to 12             0.0013    0.0233    0.0672    0.0603
  12 to 13             0.0005    0.0136    0.0579    0.0588
  13 to 14             0.0001    0.0072    0.0471    0.0566
  14 to 15                –      0.0035    0.0360    0.0534
  15 to 16                –      0.0016    0.0256    0.0492
  16 to 17                –      0.0006    0.0169    0.0440
  17 to 18                –      0.0002    0.0103    0.0379
  18 to 19                –      0.0001    0.0058    0.0312
  19 to 20                –         –      0.0030    0.0243
  Likelihood of offer     –         –         –         –
  Mean wage offer         –         –         –         –
  Coefficient of
    variation of offers   –         –         –         –

Entries are proportions of wage offers in each bracket for a given grade.
Flinn and Heckman point out that Kiefer and Neumann impose an unnecessary restriction on the search model by supposing that workers receive one offer per period (1982c). In a separate paper, Flinn and Heckman (1982b) argue further that Kiefer and Neumann do not formulate or estimate the correct likelihood function and fail to include a restriction. When the rate of job offers is an unknown
parameter, alternative distributions of wage offers may sometimes be consistent with the observed data. In such a case, the distribution of wage offers could not be recovered. For example, a low job
offer rate combined with a reservation wage that is close to the bottom of the wage offer distribution may yield the same observed data as a high job offer rate and a reservation wage at the high end
of the wage offer distribution. Flinn and Heckman present an example using Pareto distributions of a situation in which the parameters of the model could not be recovered. However, if the wage offer
distribution's derivatives of every order exist and are continuous, then one can describe unambiguously a distribution over its entire domain, including the part below the reservation wage where
there are no observations. Sufficiently strong assumptions about the behavior of the function therefore allow it to be recovered. The Pareto distribution used in Flinn and Heckman's example violates
this condition since its value drops discontinuously to
zero at some positive wage. But the normal and lognormal distributions, being two tailed, drop continuously to zero as the wage falls and therefore satisfy the recoverability condition. Tables 5.2 to
5.5 present results on the behavior of truncated distributions using numerical integration. Table 5.2 presents the results for a normal distribution with coefficient of variation, CV, equal to 0.471.
The set of parameters for this distribution is chosen so that the variance of logarithms, 0.2, is the same as for the lognormal distribution used in Table 5.3. Flinn and Heckman estimate the
parameters of a normal wage offer distribution; the CV for their estimated distribution is 0.393, slightly less than the value for Table 5.2. Column 1 in Table 5.2 gives the reservation wage, the
wage at which the distribution of wage offers is truncated. Column 2 shows the expected wage given the reservation wage; it rises with the reservation wage. In Table 5.2 as well as for normal
distributions with higher values of the CV, the difference between the reservation and expected wages decreases as the reservation wage goes up. Column 3 shows the ratio of the reservation wage to
the expected wage. As the reservation wage increases, this ratio increases in the normal case. Column 4 is the probability that a wage offer will exceed the reservation wage. A consequence of raising
the reservation wage is that this probability declines. The relative effects of the reservation wage on the expected wage and on the likelihood of acceptance are the basis for the valuation of
unemployment in Chapter 4. Column 5 shows the changes in the trade-off between the expected wage and the transition rate as the reservation wage rises. These calculations use the probability that an
offer will be accepted as the denominator, so that the trade-off corresponds to (2.11) in Chapter 2. However, we do not know the rate at which offers are received. Let θ again be the offer rate. The transition rate from unemployment to employment is the product of this offer rate, θ, and the likelihood of acceptance. Therefore the entries in column 5 are the product of the offer rate and the
trade-off, rather than the trade-off alone. The entries show how the trade-off changes as the reservation wage increases but do not show the absolute level of the trade-offs.

Table 5.2 Truncated Wage Distribution, Normal with Coefficient of Variation 0.471

  Reservation   Expected    Ratio     Probability     Valuation of    Coefficient
  Wage, w0      Wage, we    w0/we     of Acceptance   Unemployment    of Variation
     (1)           (2)        (3)          (4)             (5)             (6)
     0.2          1.153      0.174        0.959           0.993           0.412
     0.4          1.196      0.334        0.912           0.872           0.372
     0.6          1.260      0.476        0.834           0.791           0.325
     0.8          1.347      0.594        0.721           0.759           0.276
     1.0          1.456      0.687        0.580           0.785           0.229
     1.2          1.582      0.758        0.428           0.894           0.188
     1.4          1.724      0.812        0.285           1.136           0.153
     1.6          1.878      0.852        0.171           1.630           0.125
     1.8          2.041      0.882        0.091           2.661           0.103
     2.0          2.212      0.904        0.043           4.970           0.086
     2.2          2.388      0.921        0.018          10.66            0.072

Entries are calculated using numerical integration. Column (5) gives the rate at which offers are made times the trade-off between the wage rate and the transition rate λ.
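The normal entries can be reproduced with standard truncated-normal formulas. The text reports the CV (0.471) but not the mean of the untruncated distribution; taking the mean to be 1.105, matching the lognormal of Table 5.3, is an assumption that turns out to reproduce the table's entries:

```python
import math

def Phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def phi(z):
    """Standard normal probability density function."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

# Assumed mean (matching Table 5.3's lognormal); CV = 0.471 from the text.
m = 1.105
s = 0.471 * m

def accept_prob(w0):
    """Column (4): probability that a wage offer exceeds w0."""
    return 1.0 - Phi((w0 - m) / s)

def expected_wage(w0):
    """Column (2): truncated-normal mean, via the Mills ratio."""
    z = (w0 - m) / s
    return m + s * phi(z) / (1.0 - Phi(z))
```

With these parameters, expected_wage(1.4) and accept_prob(1.4) agree with the 1.724 and 0.285 of Table 5.2 to the tabulated precision.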
Truncated distributions are an important topic in the statistics literature. Moments and behavior of the truncated normal received early attention. J. Aitchison and J.A.C. Brown (1957) study the
behavior of truncated lognormal distributions and consider the problems of estimating the parameters of a truncated lognormal distribution, including the point of truncation. Christopher J. Flinn and
James J. Heckman (1982c) and Nicholas M. Kiefer and George R. Neumann (1979b, 1981a) have considered the problems of estimating the parameters of wage offer distributions; their results will be
considered later in this section. The distribution of wage offers facing a given worker may be derived from the distribution of all wage offers, v(w,g), used in Chapter 2, section 2. This
distribution is a joint density function of number of vacancies by wage offer and grade requirement of the firm. Offers are only extended when the potential worker's grade exceeds the grade
requirement of the firm. The density function of wage offers for a worker of grade g is then obtained by integrating v(w,g) over values of g less than the worker's and normalizing this function by
dividing by the probability of receiving an offer at an interview:
∫₀^g v(w,y) dy / V(∞,g)
Table 5.1 shows how this procedure generates wage offer distributions for different grades. The density function v(w,g) is bivariate normal (truncated for positive values of w and g) with mean and standard deviation for wage offer of 5 and 10, mean and standard deviation for grade requirement of 5 and 10, and correlation coefficient of ρ = 0.95. Workers of different grades face differently
shaped wage offer distributions. Those with higher grades have greater likelihoods of offers and greater means of wage offer distributions and have lower coefficients of variation of offers. The wage
offers facing those workers with grade 7 include the offers facing those with grade 4. We therefore have overlapping labor markets. Also, while each worker may face a two tailed normal distribution
of wage offers, reasonable reservation wages would place workers on the upper tails of the distributions. The truncated part of the wage offer distribution would then be single tailed. The low wage
offers in the distribution are from firms with low grade requirements, and no workers with higher grades will tend to take those offers. Despite the fact that nobody with, say, grade 7 takes those
offers, they are not withdrawn and continue to be part of the wage offer distribution. Therefore workers are likely to choose reservation wages which place them fairly far up in the distribution.
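The construction can be sketched numerically. The parameters below are those reported for Table 5.1; the grid sizes are arbitrary choices, and the truncation of v(w,g) to positive values is handled simply by integrating over positive w and g without renormalizing, which does not affect the comparison across grades:

```python
import math

# Parameters reported for Table 5.1: bivariate normal v(w,g) with wage-offer
# mean 5 and s.d. 10, grade-requirement mean 5 and s.d. 10, correlation 0.95.
MW, SW, MG, SG, RHO = 5.0, 10.0, 5.0, 10.0, 0.95

def v(w, g):
    """Bivariate normal density of vacancies by wage offer w and grade requirement g."""
    zw, zg = (w - MW) / SW, (g - MG) / SG
    q = (zw * zw - 2.0 * RHO * zw * zg + zg * zg) / (1.0 - RHO * RHO)
    return math.exp(-0.5 * q) / (2.0 * math.pi * SW * SG * math.sqrt(1.0 - RHO * RHO))

def offer_density(w, grade, steps=200):
    """Integrate v(w,y) over grade requirements y below the worker's grade
    (midpoint rule), as in the normalization described in the text."""
    h = grade / steps
    return sum(v(w, (i + 0.5) * h) for i in range(steps)) * h

def offer_stats(grade, wmax=20.0, steps=400):
    """Offer likelihood (mass over the wage brackets 0 to 20) and mean offer."""
    h = wmax / steps
    ws = [(i + 0.5) * h for i in range(steps)]
    ds = [offer_density(w, grade) for w in ws]
    mass = sum(ds) * h
    mean = sum(w * d for w, d in zip(ws, ds)) * h / mass
    return mass, mean

mass4, mean4 = offer_stats(4.0)
mass10, mean10 = offer_stats(10.0)
```

Consistent with the text, the higher grade faces both a greater likelihood of an offer (mass10 > mass4) and a higher mean wage offer (mean10 > mean4), and its offer distribution contains the lower grade's offers as a subset.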
Kiefer and Neumann (1979a,1979b,1981a) and Flinn and Heckman (1982c) study the problem of estimating the distribution of wage offers for workers from the distribution of accepted offers and
unemployment spells. Kiefer and Neumann attempt to decompose the variance in wage offers into a variance in offers facing a given worker and a variance in the means of wage offer distributions facing
different workers. They find that the former variance is only about a tenth of the size of the latter. That is, most of the inequality in accepted wage rates arises because workers face such unequal
wage offer distributions. The wage offer distribution facing an individual worker or a group of identical workers has relatively little dispersion. Essentially, Kiefer and Neumann estimate that
reservation wages lie close to the accepted and hence the expected wage rates. An inference from this result is that unemployment valuations are rather low. Workers are willing to search and remain
unemployed for a long time in order to achieve a small increase in the wage rate.
The result in (2.11) may be confirmed using the entries from columns 4 and 5. For example, using the entries for reservation wages 1.2 and 1.4:

θ Δwe/Δλ = (1.724 − 1.582)/(0.428 − 0.285) = 0.993,

a value that lies between the corresponding entries in column 5. Another feature of the unemployment trade-off in column 5 is that it first falls and then rises as the reservation wage goes up. This also occurs with the lognormal distributions. For all distributions, the conditions under which the absolute value of the unemployment trade-off declines may be stated simply. Whenever ∂we/∂w0 is less than one half, the valuation declines, and whenever it exceeds one half, it rises. This may be seen by calculating the derivative of the trade-off:

(2 ∂we/∂w0 − 1)/λ

This is positive when ∂we/∂w0 > 0.5. For example, in Table 5.2, moving from a reservation wage of 0.4 to 0.6:

Δwe/Δw0 = (1.260 − 1.196)/0.2 = 0.32 < 0.5,

so the trade-off declines over this range. This condition could conceivably hold at the upper end of a distribution in which the reservation wage approaches the expected wage. The consequence of a declining CV is that an increase in the reservation wage has
Table 5.3 presents the corresponding calculations for a lognormal distribution of wage offers. This distribution has a variance of logarithms of 0.2 and a parameter J1. of 0.0 (this parameter should
not be confused with the transition rate). The mean is therefore: ef' + 0.50 2 = l.105 and the coefficient of variation is: (eoz-1)Yz = 0.471
(Aitchison and Brown, 1957, p.8). These are the same values for the mean and coefficient of variation as in the normal distribution in Table 5.2. The major differences between the results are that
the valuation of unemployment increases more rapidly in the
The Distribution of Wage Rates
range and the coefficient of variation falls more slowly for the lognormal distribution. The latter result arises from the greater numbers of wage offers high in the upper tail, as reflected in the
slower decline in the probability of acceptance in Table 5.3, column
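The lognormal entries follow from closed-form expressions; a sketch using the parameters μ = 0.0, σ² = 0.2 from Table 5.3:

```python
import math

MU, SIG2 = 0.0, 0.2                  # parameters from Table 5.3
SIG = math.sqrt(SIG2)

def Phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mean = math.exp(MU + 0.5 * SIG2)     # untruncated mean, 1.105
cv = math.sqrt(math.exp(SIG2) - 1.0) # coefficient of variation, 0.471

def accept_prob(w0):
    """Column (4): P(w > w0) for the lognormal."""
    return 1.0 - Phi((math.log(w0) - MU) / SIG)

def expected_wage(w0):
    """Column (2): E[w | w > w0], the standard truncated-lognormal mean."""
    z = (math.log(w0) - MU) / SIG
    return mean * (1.0 - Phi(z - SIG)) / (1.0 - Phi(z))
```

At w0 = 1.0 these give an acceptance probability of exactly 0.5 and an expected wage of about 1.487, matching Table 5.3.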
Table 5.3 Truncated Wage Distribution, Lognormal with Parameters μ = 0.0, σ² = 0.2

  Reservation   Expected    Ratio     Probability     Valuation of    Coefficient
  Wage, w0      Wage, we    w0/we     of Acceptance   Unemployment    of Variation
     (1)           (2)        (3)          (4)             (5)             (6)
     0.2          1.105      0.181        1.000           0.905           0.470
     0.4          1.121      0.357        0.980           0.736           0.458
     0.6          1.195      0.502        0.873           0.681           0.415
     0.8          1.324      0.604        0.691           0.758           0.361
     1.0          1.487      0.673        0.500           0.974           0.315
     1.2          1.668      0.719        0.342           1.369           0.279
     1.4          1.860      0.753        0.226           2.034           0.250
     1.6          2.057      0.778        0.147           3.116           0.230
     1.8          2.260      0.797        0.094           4.870           0.210
     2.0          2.464      0.812        0.061           7.663           0.196
     2.2          2.671      0.824        0.039          12.085           0.184
     2.4          2.878      0.834        0.025          19.030           0.174
     2.6          3.087      0.842        0.016          29.85            0.166
     2.8          3.296      0.849        0.011          46.56            0.158
     3.0          3.506      0.856        0.007          72.15            0.152

Entries are calculated using numerical integration. Column (5) gives the rate at which offers are made times the trade-off between the wage rate and the transition rate λ.

Tables 5.4 and 5.5 present the results for two single tailed distributions, the Pareto and exponential. While these appear to be similar, they are in fact very different in their behaviors. The Pareto distribution, derived from Pareto's Law, has a probability density function αw^(−α−1), with w ≥ 1 and α > 2. (In this and later chapters, α will represent the parameter in the Pareto distribution, rather than the time spent unemployed, as in the previous chapter.) This distribution has an extremely long upper tail. From the series expansions, an exponential distribution or a distribution based on a power of e will eventually decline faster than any power of the variable itself. In high ranges, the Pareto density declines more slowly than the exponential, normal or lognormal densities. If 1 < α < 2, the variance will be infinite, although the mean will be finite and equal to α/(α − 1). Two important features of Pareto distributions show up in Table 5.4. If the Pareto distribution is not truncated from above, the ratio of reservation to expected wage (column 3) and the coefficient of variation (column 6) are constant and unaffected by the reservation wage. The reason columns 3 and 6 show changes is that the Pareto distribution was also truncated from above, at 100, to facilitate numerical integration. The constancy of the ratio w0/we means that the parameter α can be inferred from it, if we assume that wage offers follow the Pareto distribution. This feature of the Pareto distribution has been noted and exploited elsewhere by Lancaster and Chesher (1980). Table 5.5 presents the results for an exponential distribution with probability
density function λe^(−λ(w−1)), where w ≥ 1 (the parameter λ should not be confused with the transition rate in the Markov process). The exponential distribution describes waiting times for transitions in Markov processes or transitions for events in a Poisson process. An immediate characteristic of the exponential distribution is that the difference between the reservation and expected wages is constant. In the general case, this difference is given by 1/λ, where λ is the parameter of the exponential distribution. The value of 0.4 chosen for this parameter in Table 5.5 produces slowly declining probability of acceptance and coefficient of variation. Eventually, at much higher values of the reservation wage, the probability of acceptance will decline more rapidly than for a Pareto distribution. The valuation of unemployment starts out at a high value and increases gradually.

Table 5.4 Truncated Wage Distribution, Pareto with Parameter α = 2.5

  Reservation   Expected    Ratio     Probability     Valuation of    Coefficient
  Wage, w0      Wage, we    w0/we     of Acceptance   Unemployment    of Variation
     (1)           (2)        (3)          (4)             (5)             (6)
     1.0          1.665      0.601        1.000           0.665           0.789
     1.2          1.997      0.601        0.634           1.258           0.779
     1.4          2.330      0.601        0.431           2.156           0.770
     1.6          2.661      0.601        0.309           3.437           0.761
     1.8          2.993      0.601        0.230           5.186           0.752
     2.0          3.324      0.602        0.177           7.491           0.744
     2.2          3.655      0.602        0.139          10.446           0.737
     2.4          3.985      0.602        0.112          14.149           0.730
     2.6          4.316      0.602        0.092          18.703           0.723
     2.8          4.645      0.603        0.076          24.213           0.716
     3.0          4.975      0.603        0.064          30.789           0.710

Entries are calculated using numerical integration. Column (5) gives the rate at which offers are made times the trade-off between the wage rate and the transition rate λ.

Table 5.5 Truncated Wage Distribution, Exponential with Parameter λ = 0.4

  Reservation   Expected    Ratio     Probability     Valuation of    Coefficient
  Wage, w0      Wage, we    w0/we     of Acceptance   Unemployment    of Variation
     (1)           (2)        (3)          (4)             (5)             (6)
     1.0          3.5        0.286        1.00            2.500           0.714
     1.2          3.7        0.324        0.923           2.708           0.676
     1.4          3.9        0.359        0.852           2.934           0.641
     1.6          4.1        0.390        0.787           3.178           0.610
     1.8          4.3        0.419        0.726           3.443           0.581
     2.0          4.5        0.444        0.670           3.730           0.556
     2.2          4.7        0.468        0.619           4.040           0.532
     2.4          4.9        0.490        0.571           4.377           0.510
     2.6          5.1        0.510        0.527           4.741           0.490
     2.8          5.3        0.528        0.487           5.136           0.472
     3.0          5.5        0.545        0.449           5.564           0.455

Entries are calculated using numerical integration. Column (5) gives the rate at which offers are made times the trade-off between the wage rate and the transition rate λ.

A major difference among the distributions is that in some cases the expected wage increases by more than the reservation wage as the latter goes up, i.e., ∂we/∂w0 > 1. This occurs for Pareto distributions and, in Table 5.3, for the lognormal case when the reservation wage exceeds 1.6. According to A. Goldberger (1980), this occurs when the wage offer distribution is not strictly log-concave. The exponential distribution provides the dividing line between the two cases. It is log-linear and the difference between the reservation and expected wages is constant. When the density declines more slowly than the exponential, as in the Pareto and upper tail of the lognormal, then an increase in the truncation point shifts the mean up by an even greater amount. Burdett (1981) demonstrates that whenever ∂we/∂w0 exceeds one, an increase in the mean of the wage offer distribution will reduce the expected wage if the worker holds the reservation wage fixed. If the worker makes an optimal adjustment of the reservation wage in response, however, then the expected wage would only decline for an unreasonably large discount rate. This section has described the behavior of four different possible wage offer distributions. The natural questions at this point are which of these is the correct one, and where is the reservation wage in relation to these distributions.
However, we must first investigate the manner in which the distribution of reservation wages combines with the distribution of wage offers to produce the distribution of observed accepted wages. Then
reasonable considerations may be brought to bear on the appropriate functional forms.
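For the exponential case the corresponding quantities have especially simple closed forms, thanks to memorylessness; the following sketch (mine, not from the text) reproduces the wo = 1.2 row of Table 5.5.

```python
import math

# For an exponential offer density lam*exp(-lam*(x - 1)) on [1, inf),
# truncating below at w0 shifts the mean to w0 + 1/lam while the
# standard deviation stays at 1/lam (memorylessness).
def exponential_truncated_stats(lam, w0):
    prob_accept = math.exp(-lam * (w0 - 1.0))   # P(offer >= w0)
    expected_wage = w0 + 1.0 / lam
    cv = (1.0 / lam) / expected_wage            # declines as w0 rises
    return prob_accept, expected_wage, cv

p, we, cv = exponential_truncated_stats(0.4, 1.2)
print(round(p, 3), round(we, 1), round(cv, 3))   # 0.923 3.7 0.676
```

The constant gap we − wo = 1/λ is the log-linearity of the exponential mentioned above: it is exactly the dividing line between distributions with ∂we/∂wo greater than and less than one.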
3. Accepted Wage Distribution

This section investigates how the distributions of reservation wages and wage offers combine to yield the distribution of accepted wages. As discussed in the introduction, this distribution will differ in shape from the distribution of wage offers. Essentially, the reservation wages add a lower tail if the distribution of wage offers does not have one.

From Chapter 2, section 5, the joint distribution of accepted wage rates and grades is proportional to H(w,g)v(w,g), where H(w,g) is the proportion of workers with reservation wage less than or equal to w and grade greater than or equal to g, and v(w,g) is the density of wage offers by wage rate and minimum grade requirement. This expression may be simplified by assuming that we are limiting consideration to a group of workers with identical grade. Therefore let us now define H(w) as the cumulative distribution function for reservation wages, with h(w) = dH(w)/dw, and let v(w) be the probability density function of wage offers, as derived in the previous section in (5.1). Let V(w) be the cumulative distribution of wage offers, defined in analogy to V(w,g) so that V(w) is the proportion of wage offers greater than or equal to w. In relation to the cumulative distribution F(x) used in the initial sections of Chapter 2, V(w) = 1 − F(w) and v(w) = f(w). Note that because of the way V(w) is constructed, v(w) = −dV(w)/dw. The functions V(w), H(w), v(w) and h(w) are assumed to be continuous. With a single grade, the probability density function of accepted wage rates is now proportional to H(w)v(w). It is the product of wage offers at a given wage times the likelihood that offers at that wage are accepted. Now let us consider how this distribution is related to H(w) and v(w). It is essentially a contagious distribution or a
mixture (Mood, Graybill and Boes, 1974, p.123). The distributions of wage offers facing individual workers have a parameter wo, the point of truncation, and H(w) describes the distribution of those
parameters. The cumulative distribution H(w) will start out at zero at the lower range of the reservation wage and rise to one as w approaches its upper limit. Therefore, assuming v(w) is positive
for values of w where H(w) is positive, the density function H(w) v(w) must have a lower tail, i.e., it must start out at zero and rise gradually as w increases, just like such two tailed
distributions as the normal and lognormal. At the upper range of w, H(w) will approach one and the product H(w)v(w) will behave like the function v(w).
For example, suppose reservation wages are exponentially distributed with cumulative distribution function H(x) = 1 − exp(β(1 − x)), and suppose wage offers take a Pareto distribution with probability density function v(x) = αx^(−α−1), where the interval for x for both functions is from one to infinity. Then the distribution of accepted wages is given by aH(x)v(x), where a is a normalization factor. This is an exponential mixture of Pareto distributions. This density function is zero for values of the wage less than one. At one, the function has a positive slope of aαβ, which can be seen by differentiating the function with respect to x and taking the limit as x approaches one. The distribution therefore hits the horizontal axis at an angle, somewhat like the gamma distribution for some parameter values, instead of approaching the axis asymptotically like the normal or lognormal distributions. If in a given period each worker receives one offer, the proportion of workers accepting offers is 1/a = ∫₁^∞ H(x)v(x)dx and the average accepted wage is a∫₁^∞ xH(x)v(x)dx.

In the study of the size distribution of income, interest has alternated between one tailed and two tailed distributions. The first empirical investigation by Pareto, reflected in Pareto's Law, produced support for a one tailed distribution. However, Pareto only used data on taxed individuals at
upper income levels. When data on lower income levels is included, it is apparent that the distribution of income instead satisfies the Brontosaurus theorem (it starts out small, gets rather large in
the middle, and then gets small again at the other end). The powerful central limit theorem also supports the two tailed view. In two papers that run against the two tailed view, Benoit Mandelbrot
(1960, 1962) presents arguments in favor of the Pareto distribution. An important feature of the normal distribution is that the sum of two independently distributed normal random variables will
again be normally distributed. Similarly, the product of two lognormally distributed random variables will again be lognormally distributed. These distributions are said to be stable, since their
functional forms do not change. Mandelbrot points out (1960) that there exist distributions besides the Gaussian (normal) which are stable. The non-Gaussian distributions fall into a family of
Pareto-Levy distributions which behave in the upper tail like the Pareto Law. Like the Pareto distribution with 1 < α < 2, the variance is not finite for these distributions, and they do not have an
explicit functional form. The theoretical reason for a Pareto-Levy distribution is that the income used for testing Pareto's Law is typically the sum of income from various sources. Mandelbrot
observes that the behavior of the distribution of income does not depend on how the income is measured, i.e., which sources are included. The only distributions for which the functional form will not
depend on the inclusion or exclusion of particular sources are Pareto-Levy distributions. In a second paper, Mandelbrot argues that the wage rates facing workers will have a Pareto distribution
(1962). Mandelbrot's theory is not set in a search context
but the results are still applicable. Individuals have an indissoluble bundle of abilities or characteristics which have different values in different occupations. The worker chooses an occupation or
job on the basis of income maximization. Mandelbrot shows using a form of factor analysis that the wage rates facing different workers will take a Pareto distribution. If workers search randomly for
jobs, then the wage offers they face will also have a Pareto distribution. The significance of the procedure described in this section is that it provides a mechanism by which the Pareto distribution
of wage offers gets converted into a standard two tailed distribution. A Pareto distribution of values for individual characteristics is therefore consistent with the observed distribution of income.
It should be noted that Mandelbrot himself provides an informal explanation of how a Pareto distribution of wage offers produces a two tailed distribution of income (1962, section VIII). Also, the
Pareto-Levy distribution is itself two tailed, although it is highly skewed and has a short left hand tail. Now let us consider what possible functional forms the distributions of reservation wages
and wage offers could take. Flinn and Heckman estimate the parameters of both an exponential and a normal distribution of wage offers under the assumption that all workers have the same reservation
wage. The values of the wage offers are generated by the value of the match between the worker and the employer. Using data from the National Longitudinal Survey of Young Men on white males aged 21
to 24 and not in school, they estimate the parameter of the exponential distribution to be 0.339 (1982c, Tables 1 and 2). Flinn and Heckman assume that the wage offers take all positive values
(instead of from 1.0 up as in Table 5.5). With a reservation wage of $1.50, the likelihood of acceptance of an offer is then 0.6 (it would be about 0.84 if the distribution began at 1.0). The
expected wage is then $4.45. It can also be shown that the trade-off between the wage rate and the transition rate is 4.9 and the coefficient of variation is 0.665. Assuming a normal distribution of
wage offers, Flinn and Heckman estimate a mean and variance of 3.325 and 1.709 for the wage offer distribution. The reservation wage of $1.50 yields an acceptance rate of 0.92. Numerical integration
shows that the expected wage is $3.54, the unemployment trade-off is 2.22, and the coefficient of variation is 0.32. Flinn and Heckman regard the normal model as better fitting the data. The mean and
variance are closer to the mean and variance of the accepted wage distribution. Also, the shape of the normal distribution undoubtedly resembles the distribution of accepted wages more closely.
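The figures just quoted can be reproduced with elementary formulas; the sketch below is my own check, not part of the original analysis, using the standard truncated-normal (inverse Mills ratio) mean formula for the normal case.

```python
import math

# Reproducing the quoted Flinn-Heckman figures.  Exponential case: offer
# density lam*exp(-lam*x) on (0, inf) with lam = 0.339 and reservation
# wage 1.50.  Normal case: mean 3.325 and variance 1.709.
def Phi(x):   # standard normal cumulative distribution function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):   # standard normal density
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

w0 = 1.50

lam = 0.339
accept_exp = math.exp(-lam * w0)               # offers starting at zero
accept_from_one = math.exp(-lam * (w0 - 1.0))  # if offers began at 1.0
expected_exp = w0 + 1.0 / lam                  # memorylessness
print(round(accept_exp, 2), round(accept_from_one, 2), round(expected_exp, 2))

mu, sigma = 3.325, math.sqrt(1.709)
z = (w0 - mu) / sigma
accept_norm = 1.0 - Phi(z)
expected_norm = mu + sigma * phi(z) / (1.0 - Phi(z))   # truncated normal mean
print(round(accept_norm, 2), round(expected_norm, 2))
```

The first line recovers the acceptance rate of 0.6 (about 0.84 if offers began at 1.0) and the expected wage of $4.45; the second recovers 0.92 and $3.54 for the normal specification.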
However, the point of this section is that this similarity would not be a good basis of choice, since the two distributions need not have the same shape. A normal mixture of exponential or Pareto
distributions, for example, could yield a shape similar to the observed distribution of accepted wage offers and allow for substantially different parameter estimates of the single tailed
distribution of wage offers. The Pareto shape of the upper income distributions and the fact that the shape of a mixture in the upper tail will be the same as the shape of wage offers suggests that
the distribution of wage offers behaves at least in the upper tail like a Pareto distribution. This conclusion is also consistent with both Mandelbrot's theory and Harold Lydall's theory of
hierarchies (1959). Another aspect of the Flinn and Heckman results is that the ratio of reservation wage to expected wage is very low, 0.337 in the case of the exponential distribution and 0.424 in
the case of the normal distribution. These ratios are much lower than those reported in the U.S. Summary of the Employment Profiles (See Tables 3.5 to 3.8 in Chapter 3). For white males aged 16 to 21
and not in school, the ratio is 0.824;
for white males aged 22 to 34, the ratio is 0.75. These ratios suggest that workers face a wage offer distribution with a much lower coefficient of variation than estimated by Flinn and Heckman. The
higher dispersion in the distribution of accepted wage offers arises partially from the pooling of different accepted wage offer distributions for groups in the population used for the data. The
dispersion in wage offers facing individual workers would then be much less than the dispersion in all wage offers. The assumption of homogeneity of workers by Flinn and Heckman in order to permit
estimation leads to a level of dispersion in the wage offer distribution which approximately equals the dispersion in the accepted wage distribution. In contrast to the Flinn and Heckman results, the
Kiefer and Neumann results are consistent with very high ratios of reservation to expected wages. The ratios in Employment Profiles and the estimates of wage offer dispersion in this monograph lie
between the Flinn and Heckman results and the Kiefer and Neumann results. If we accept the argument that the distribution of wage offers is Pareto, then the values of α corresponding to the ratios of reservation to expected wages of 0.824 and 0.75 are 5.68 and 4.0, respectively. The coefficients of variation for these Pareto distributions are finite and given by 0.219 and 0.354, respectively. The values of α and the coefficients of variation for the other groups may easily be derived and are shown in the next chapter. Workers with one year of college or family heads, groups which have low ratios of reservation to expected wages, would have lower values of α and consequently greater dispersion in wage offers.

Another consideration may be brought to bear on the shapes of the distributions of reservation wages and wage offers. The distributions at any one point in time are the results of flows that create and remove job vacancies and job seekers. Higher wage offers for a particular group are more likely to be accepted and removed from the stock of current wage offers, while low offers for a group will be frequently turned down and remain available. The current stock
of wage offers will therefore be fatter at the lower end and thinner at the upper end of the wage range than the flow of wage offers into the pool of vacancies. This reasoning further supports the
arguments for a single tailed (or at least highly skewed) distribution of wage offers. Turning to the distribution of reservation wages, the same reasoning argues against a single tailed
distribution. Workers with high reservation wages will remain unemployed longer and will appear disproportionately in the current distribution of reservation wages, while workers with low reservation
wages will be removed from the distribution more rapidly. This suggests that the distribution of reservation wages among a group of the unemployed will differ from the distribution among the
corresponding group in the population. The density at the lower end will be reduced while the density at the upper end will be raised. A single tailed distribution of reservation wages among the
population could conceivably result in a two tailed distribution among the unemployed. Employment Profiles provides some evidence on the distribution of reservation wages among part-year workers. It
indicates that the distribution of reservation wages is two tailed, ruling out an exponential or Pareto distribution for the unemployed. The formal relation between flows into the unemployed and
vacancy states and their stocks can be briefly described. Let y(w) be the rate of flow of workers into the unemployed pool with reservation wage w. This means that the number of additional workers newly unemployed in time dt with reservation wages between w1 and w2 is:

(∫_{w1}^{w2} y(x)dx) dt
Let z(w) be the flow of vacancies into the pool of job offers. As before, let H(w) be the cumulative distribution function of workers by reservation wage and let B be the total number of unemployed in the market. Let V(w) again be the cumulative distribution of vacancies, i.e., V(w) is the proportion of vacancies with wage offer greater than or equal to w. Let V be the total number of vacancies, and let γ be the rate at which workers receive job interviews. Firms then give interviews at the rate of Bγ/V per vacancy. If an equilibrium exists, it must satisfy the following flow equations:

dBh(w)/dt = y(w) − γBh(w)V(w) = 0    (5.2)

dVv(w)/dt = z(w) − γBv(w)H(w) = 0    (5.3)

From these flow equations, one obtains:

d(H(w)V(w))/dw = −H(w)v(w) + h(w)V(w) = (y(w) − z(w))/γB

The product of the cumulative distribution functions H(w)V(w) can then be recovered from y(w) and z(w) through integration. This product will involve a constant of integration and will be two tailed, since H(w) approaches zero as w declines and V(w) approaches zero as w increases. Then:

y(w) = γBh(w)V(w) = γB (dlnH(w)/dw) H(w)V(w)

Therefore:

h(w)/H(w) = dlnH(w)/dw = y(w)/(γBH(w)V(w))

The function H(w) can then be recovered using integration and exponentiation. Next, V(w) can be expressed in terms of derived functions as the ratio (H(w)V(w))/H(w). This procedure does not lead to neat functional forms, even when one begins with simple flows. An exception occurs when y(w) and z(w) take uniform distributions, i.e., they are simply constants over their ranges. Then H(w) and V(w) will be linear functions of w over a range. Numerical integration is required to describe stock distributions in more complicated cases. The above procedure applies only to an economy in which the
job offers are specific to the group in question and cannot be filled by members of other groups. A substantially more complicated model would arise if this assumption is abandoned. Before leaving
the subject of the distribution of accepted wage offers, it is important to discuss the intermediary steps necessary to get from this distribution to the distribution of wage rates among the
population. For a group in which workers face the same distribution of wage offers, the distribution of accepted wages will resemble the distribution of wage rates among the employed only if all
workers have the same transition rates from employment to unemployment. More likely, workers with higher accepted wage offers or with lower reservation wages will have lower quit rates, so that the
two distributions will not be the same. Second, the distributions of wage rates
for groups must be pooled to yield the aggregate distribution of wage rates. This yields a distribution with dispersion greater than for each of its constituent distributions.
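As a numerical illustration of the mixture aH(x)v(x) discussed above, the sketch below (my own, with the parameter values α = 2.5 and β = 2.0 chosen purely for illustration) computes the normalization by quadrature and verifies that the density rises from zero at x = 1 with slope aαβ:

```python
import math

# Accepted-wage density a*H(x)*v(x) for exponentially distributed
# reservation wages H(x) = 1 - exp(-beta*(x - 1)) and Pareto wage offers
# v(x) = alpha*x**(-alpha - 1), both supported on [1, inf).
ALPHA, BETA = 2.5, 2.0        # illustrative values, not from the text

def H(x):
    return 1.0 - math.exp(-BETA * (x - 1.0))

def v(x):
    return ALPHA * x ** (-ALPHA - 1.0)

def trapezoid(f, lo, hi, n=100000):
    h = (hi - lo) / n
    return h * (0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, n)))

mass = trapezoid(lambda x: H(x) * v(x), 1.0, 400.0)   # equals 1/a
a = 1.0 / mass                                        # normalization factor
mean = a * trapezoid(lambda x: x * H(x) * v(x), 1.0, 400.0)

# the density is zero at x = 1 and rises with slope a*alpha*beta there
slope = a * H(1.0 + 1e-6) * v(1.0 + 1e-6) / 1e-6
print(round(mass, 3), round(mean, 3), round(slope / (a * ALPHA * BETA), 4))
```

With one offer per period, `mass` is the proportion of workers accepting and `mean` is the average accepted wage, matching the integrals 1/a = ∫H(x)v(x)dx and a∫xH(x)v(x)dx in the text.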
4. The Contribution of Choice

The major point of the previous section is that the distribution of accepted wage rates will differ in shape from the distribution of wage offers because of its
interaction with the distribution of reservation wages. A natural question, then, is the contribution of unequal reservation wages to inequality in accepted wage rates. In general, accepted wage
rates will be more unequally distributed than wage offers. The basic reason for the increase in inequality is that when two or more distributions are added together (or mixed), the difference in
means of the two distributions adds to the inequality. This was demonstrated for the distribution of employment in the previous chapter. An exception can occur when one of the constituent
distributions in the mixture has a high level of inequality but a low weight in the mixture; then its inequality can be greater than the mixture's. This also occurs in some cases with the
distributions of employment. Table 5.6 provides one means of examining the contribution of reservation wages to inequality. The entries in this table compare two distributions. The first is the
distribution of accepted wages generated by a normal distribution of reservation wages and a Pareto distribution of wage offers, while the second is the distribution that would arise if all workers
had the same reservation wage. The coefficients of variation for the various normal distributions of reservation wages are presented in the first column on the left. All of the normal distributions
have mean 2.0. The values of the parameter α in the Pareto distribution of wage offers appear in the first row, while underneath in parentheses appear the squares of the coefficients of variation of
these distributions truncated at the mean reservation wage, 2.0 (the upper level of truncation is 100). The amounts in parentheses are thus the levels of inequality in accepted wages that would arise
with no choice, i.e., if all workers had the same reservation wage of 2.0. The entries are then the ratio of the inequality that would prevail without choice to the inequality that prevails with
choice (in the form of a normal distribution of reservation wages). For the given parameter values, these ratios range from 0.239 to 0.978. For values of α = 5 and a CV for reservation wages of 0.5,
taken from the next chapter, choice contributes 48 per cent of the inequality in accepted wages. The ratios on which these results are based depend on the particular figures chosen for the
calculations. For example, raising the mean of the distribution of reservation wages will affect the ratios. These results may therefore only be taken as an indication of the approximate magnitude of
the contribution of choice to wage rate inequality.
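The parenthetical CV² entries of Table 5.6 can be recomputed from the raw moments of a Pareto density truncated to the interval [2, 100]; the sketch below is my own check of those values, not part of the original text.

```python
import math

# Squared coefficient of variation of a Pareto offer density
# alpha*x**(-alpha-1) truncated below at the mean reservation wage 2.0
# and above at 100, matching the parenthetical entries of Table 5.6.
def truncated_pareto_cv2(alpha, lo=2.0, hi=100.0):
    def raw_moment_piece(k):      # integral of x**k * alpha*x**(-alpha-1)
        p = k - alpha
        if p == 0:                # logarithmic case (e.g. alpha = 2, k = 2)
            return alpha * math.log(hi / lo)
        return alpha * (hi ** p - lo ** p) / p
    mass = lo ** (-alpha) - hi ** (-alpha)   # retained probability
    m1 = raw_moment_piece(1) / mass
    m2 = raw_moment_piece(2) / mass
    return m2 / m1 ** 2 - 1.0

for alpha in (2.0, 3.0, 5.0):
    print(alpha, round(truncated_pareto_cv2(alpha), 3))
```

This reproduces the table's 1.036, 0.308 and 0.067; the α = 2 case needs the logarithmic branch because the second-moment integrand there is exactly 2/x.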
5. The Source of Wage Rate Dispersion

Consider now why differences in wage rates arise for identical labor. Such differences would appear to play no allocative role in the economy and to be a
completely unjustified source of inequality. To the individual the wage rate obtained is the result of good or bad luck. The differences in wage rates for identical workers arise from the wage
dispersion in the market. In perfectly competitive markets with an auctioneer, such dispersion disappears. But George Stigler (1961, 1962), in creating the subject, points out that
imperfect information in a market, caused by the costly acquisition of information, would allow price dispersion to continue. In Stigler's view, the price dispersion (or the wage dispersion in labor
markets) is caused by ignorance and the costs to individuals of eliminating ignorance. Workers take lower wages because it costs too much to find the employer with the highest wage. As later authors
indicate (see Michael Rothschild, 1973; Bo Axell, 1974, 1976), this view raises the question of when the wage or price dispersion will continue in equilibrium. It is not only necessary for workers in
a labor market context to stop their search before the highest wages are found, but firms must also pay different wages to equivalent labor. The central question concerns the minimum conditions under
which price or wage dispersion in a market will persist. Further, it is necessary to disentangle, as Stigler puts it, the relation between quality differences, information and wage differences. One
approach is to suppose that a single market is too costly or unwieldy to operate, so that the market breaks up into optimally sized submarkets or islands. Both Dale Mortensen (1976) and Lester
Telser (1978) develop models of markets incorporating this assumption. At any given submarket (or labor exchange in Mortensen's development), the employers and job seekers arrive randomly from the
larger population. Because of this random selection from the larger population, the market clearing wage in each submarket may differ from the wage at which quantity demanded equals quantity supplied
in the entire market. Wage dispersion therefore arises and persists. It is caused by diseconomies in the operation of markets and requires that workers face costs in moving from one submarket to
another. The wage dispersion facing workers therefore replaces larger search costs that would be borne if a single market and wage rate prevailed. The wage dispersion is an alternative to higher
search costs, one which in equilibrium is acceptable to all participants. The wage differences are equivalent to an unequal distribution of search costs among the participants. A worker who wanders
into a submarket with a low wage, below the worker's reservation wage, must bear the costs of adjustment which require the worker to move to another submarket. This move brings the market closer to
equilibrium. If the worker chooses not to move, it is because the lower wage is preferable to the moving costs. The randomness associated with the outcome of a particular submarket search is the
source of the unequal incidence in search costs and hence in wage rates. An alternative to the assumption of increasing costs of market organization is the assumption of exogenous heterogeneity of
either workers or firms. For example, Joseph Stiglitz (1974) assumes that firms have different training costs for workers and set different wages in order to influence quit rates. The heterogeneity
in jobs creates potential information that must be obtained by job seekers. Eventually, continued search in a market would reveal the information and misinformation would decline. Stiglitz argues
that the flow of ignorance brought about by new entrants offsets the decline in misinformation from search, leading to equilibrium misinformation and wage dispersion. Further, this wage dispersion
itself generates additional imperfect information regarding jobs. In a labor market context, we may suppose at the minimum that otherwise identical workers have heterogeneous reservation wages,
arising from different values of being unemployed in the labor market. Heterogeneous reservation wages imply that a supply curve of wage versus number of workers willing to take a job at that wage
would be upward sloping. Now consider the industry response to heterogeneous reservation wages. Suppose all firms face the same technology. At each possible wage offer there will be a rate of
acceptance and recruitment costs such that the profits of the
Table 5.6 Ratios of Wage Offer to Accepted Wage Inequalities

CV of                 Values of α in Pareto Distribution of Wage Offers
Reservation      2        3        4        5        6        7        8        9       10
Wages         (1.036)  (0.308)  (0.125)  (0.067)  (0.042)  (0.029)  (0.021)  (0.016)  (0.012)
0.1            0.978    0.955    0.906    0.841    0.765    0.684    0.604    0.527    0.456
0.2            0.905    0.806    0.644    0.492    0.381    0.310    0.268    0.246    0.239
0.3            0.819    0.685    0.531    0.435    0.391    0.378    0.385    0.403    0.427
0.4            0.761    0.637    0.525    0.474    0.466    0.482    0.509    0.540    0.572
0.5            0.725    0.620    0.542    0.521    0.535    0.565    0.600    0.634    0.666

The left-hand column gives the value of CV, the coefficient of variation, for the distribution of reservation wages. The first row above the entries gives the alternative values of Pareto's α for the distribution of wage offers, while the numbers in parentheses underneath are the values of CV² for the wage offer distributions truncated at the mean reservation wage 2.0. The table entries are the ratio of CV² for the wage offer distribution truncated at 2.0 to CV² for the accepted wage distribution.
firm will be the same as some standard firm. That is, there will be an iso-profit tradeoff between wage offer and recruitment costs. The number of firms at each wage offer will adjust so that all
firms lie along the same iso-profit line. The heterogeneous reservation wages generate heterogeneous economic niches for firms. In this case, even though all labor is outwardly identical, the wage
dispersion performs an allocative role in the labor market. Workers with low reservation wages will be more likely to get a job quickly, while workers with high reservation wages are more likely to
get jobs with high wage rates. The industry response in generating heterogeneous wage offers creates an implicit market for likelihood of getting a job. Workers differing by valuation of unemployment
are assigned to jobs differing in job search required. A suppression of wage dispersion would make it impossible for a worker desperate for a job to get one more quickly than anyone else, and would
make it difficult for a worker with high nonemployment benefit to find a job that would compensate him or her for abandoning the alternative activities. This appears to be the partial effect of the
minimum wage in some markets. Again, even without differences in labor quality, wage dispersion plays an allocative role in the operation of the market. It arises spontaneously to generate an
implicit market for a secondary good, in this case the time taken to find a job. Its presence is necessary for the efficient operation of the market. In the job search models that have been developed
in this monograph, workers differ not only by nonemployment benefits but also by grade, taken as a single dimensional representation of heterogeneous worker characteristics. In this model, the most
important reason that firms offer different wages to the same worker is that they differ in the values of the marginal product for their expected or average workers. A given firm offers the same wage
rate to all potential employees who satisfy the minimum grade requirement. The firm's employees will therefore have a range of grades. A given worker can expect job offers at a number of different
firms, at which the average value of the marginal product and wage offer will vary. As will be discussed in Chapter 8, the wage rate dispersion essentially assigns workers to jobs according to
grades. In the absence of search distortion (to be discussed in Chapter 8), the particular wage offer received by a worker conveys the correct information regarding whether the worker should take the
job at hand or continue search. If the worker decides to accept a low wage, the difference between the accepted wage and the worker's expected wage, we, is a loss arising from the random outcome of
search. This loss is less than the costs of continued search. Wage rate dispersion for a worker therefore arises in lieu of higher search costs for workers and may be categorized as part of the
random incidence of search costs.
6. Summary

Because the worker undertakes search by setting a reservation wage, the distribution of possible wages is truncated at the reservation wage. As the reservation wage rises, the expected
wage goes up, the coefficient of variation of wages usually declines and the valuation of unemployment eventually rises. Section 2 investigates the behavior of these magnitudes as the reservation
wage goes up for normal, lognormal, Pareto and exponential wage offer distributions. Section 3 considers the process generating the distribution of accepted wage rates. This distribution is a mixture
which resembles neither the distribution of wage offers nor the distribution of reservation wages. If H(w) is the cumulative distribution of reservation wages and v(w) is the probability density
function for wage offers, then
H(w)v(w) is the probability density of accepted wage rates. This distribution will be
two tailed even if the two constituent distributions are single tailed. From previous work by Mandelbrot and Lydall, a likely form for the distribution of accepted wage rates is a normal or lognormal
distribution of Pareto distributions. Section 4 considers the role of choice under the assumption that the distribution of accepted wage rates is a normal mixture of Pareto distributions. Table 5.6
compares the square of the coefficient of variation when there is no choice (i.e., when all workers have the same reservation wage) with the square of the coefficient of variation when reservation
wages are normally distributed. From this table, choice contributes less than 50 per cent to the inequality in accepted wage rates in most cases. Section 5 examines the source and interpretation of
wage dispersion. Even though the wages offered identical workers vary from firm to firm, this dispersion is not simply a random and arbitrary source of variation in economic outcomes, although it
appears to be so from the individual worker's point of view. Instead, the dispersion plays an economic role in the allocation of labor. The wage rate dispersion creates an implicit market for time
spent searching. It tends to assign workers to jobs on the basis of net costs of search. Workers who are desperate for jobs may achieve a greater likelihood of getting a job because of the presence
of wage dispersion. Workers with high nonemployment benefits are more likely to end up in jobs which compensate them for foregoing their alternative activities. Workers who end up with low wage rates
do so because, to them, these wage rates are preferable to the costs of continued search. Wage dispersion therefore arises as a consequence of the minimization of search costs. When workers differ by
grade of labor, wage dispersion also plays the allocative role of assigning workers to jobs on the basis of grades and grade requirements.
Chapter 6
Inequality 1. Introduction The main theme of this monograph is the way in which unemployment and job search generate inequality and the nature of that inequality. We have discussed all the essential
elements in the determination of inequality and are now in a position to combine those elements. The subject of inequality may be divided into two parts. First, there is the descriptive problem of
explaining the observed distribution of earnings. In this monograph, we seek to explain the contribution of the distributions of employment, wage offers and reservation wages to inequality. The
second problem is to explain the distribution of economic well-being, which may not be the same as the distribution of earnings. The latter problem requires some judgment about what is a legitimate
source of economic differences and what is not, and how unemployment and wage rates should be combined to yield a measure of economic well-being. We will distinguish inequality generated by choice,
by the unequal outcomes of job search and by differences in the employment characteristics of workers (their grades). Sections 2 and 3 deal with the descriptive problem while sections 4, 5 and 6
concern the distribution of economic well-being.
2. Statistical Relations Consider first the dispersion in earnings facing an individual worker. Let η be the proportion of the time period spent employed. Most workers will have one job with one wage rate throughout the period. A minority, because of unemployment or periods out of the labor force, will have multiple spells of employment and hence more than one wage rate. To simplify the analysis, suppose that workers only have one wage rate for the year, so that the earnings they receive for the year are ηw, the product of the time employed and the wage rate. The variables η and w are independently distributed random variables for a worker with a constant reservation wage. Note that this independence does not hold for a group of workers with different reservation wages. In the latter case, among workers with the same grade, those with smaller proportions of time employed will tend to have higher wage rates. Let y = ηw, the earnings in the period. The distributions of η and w combine to determine the distribution of y, in a manner that can now be described. Let μ(x), σ²(x) and cv(x) be the mean, variance and coefficient of variation for a variable x. Also, let p(η) be the probability density function for the proportion of the time employed, η. This function is derived in Chapter 5, section 4. Let q(w) be the probability density function for accepted wage rates. We may suppose that there is some legal minimum wage rate, w_m. The probability density function for an individual worker's earnings is then a combination (similar to a convolution) of the two density functions p(η) and q(w) (Mood, Graybill and Boes, 1974, p.187):

f(y) = ∫ from w_m to ∞ of (1/w) p(y/w) q(w) dw

The properties of this density function are not apparent from its expression. The mean and variance of y are given by (Mood, Graybill and Boes, 1974, p.180):

μ(y) = μ(η)μ(w)

σ²(y) = σ²(ηw) = μ²(η)σ²(w) + μ²(w)σ²(η) + σ²(w)σ²(η)   (6.3)

Rearranging, one obtains the following expression for the coefficient of variation squared:

cv²(y) = cv²(η) + cv²(w) + cv²(η)cv²(w)   (6.4)
The inequality in earnings exceeds the sum of the inequality in employment and the inequality in wage rates. If we accept cv² as the measure of inequality, this expression allows us to make meaningful statements about the contribution of inequality in employment or wage rates to inequality in earnings (see comments by J.B. Davies and A.F. Shorrocks, 1978). If σ²(η) were zero, earnings inequality would equal wage rate inequality, cv²(w). The minimum contribution of dispersion in wage rates to inequality is therefore cv²(w). If instead we consider how much wage rate dispersion adds to employment inequality in generating earnings inequality, the answer would be cv²(w) + cv²(w)cv²(η). The ratio cv²(w)/cv²(y) may therefore be taken as the proportion of inequality arising from wage rate dispersion; this is a minimum figure. If one uses the variance of logarithms as the measure of inequality, the contribution of wage rates and employment to inequality is unambiguous. When η and w are independent:

σ²(log y) = σ²(log η + log w) = σ²(log η) + σ²(log w)
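Both decompositions can be checked by simulation. The Beta and lognormal shapes below are illustrative assumptions; identity (6.4) holds for any independently distributed η and w, and the variance of logarithms is additive under independence.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Independent draws: eta = fraction of the year employed, w = wage rate.
# The Beta and lognormal shapes are illustrative assumptions only.
eta = rng.beta(8.0, 2.0, n)
w = rng.lognormal(mean=2.0, sigma=0.4, size=n)
y = eta * w  # yearly earnings

def cv2(x):
    """Square of the coefficient of variation."""
    return x.var() / x.mean() ** 2

# cv2(y) = cv2(eta) + cv2(w) + cv2(eta)*cv2(w), as in (6.4);
# the two sides agree up to sampling error.
print(cv2(y), cv2(eta) + cv2(w) + cv2(eta) * cv2(w))

# Variance of logarithms: additive under independence.
print(np.log(y).var(), np.log(eta).var() + np.log(w).var())
```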
Using this measure, earnings inequality is the simple sum of wage rate inequality and employment inequality. The major difference between the variance of logarithms and the square of the coefficient
of variation is that the former is much more sensitive to the lower tail, whereas the latter is more sensitive to the upper tail. This difference may be seen by considering the elasticities of the
inequality measures with respect to a change in the population of an earnings bracket (see Sattinger, 1980, Chapter 7). Use of the variance of logarithms would then emphasize the contribution of
variations in employment to inequality, since unemployment is responsible for the very lowest incomes. On the other hand, the square of the coefficient of variation would emphasize the importance of
the upper tail of the wage offer curves, since this would be the source of the very highest incomes. The feature of the variance of logarithms which works against its sole use here is that it does
not allow (in the two parameter form) for individuals with zero earnings: the measure then would blow up. Such cases arise when workers remain unemployed all year. Table 6.2 in the next section will
use both the variance of logarithms and cv² as measures of inequality in order to facilitate comparison. (Other considerations relevant to the choice of an inequality measure may be found in N. Kakwani, 1980, and A.W. Marshall and I. Olkin, 1979).
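The difference in tail sensitivity between the two measures can be seen in a small numerical sketch; the earnings values are made up for illustration. Adding a few near-zero earners moves the variance of logarithms far more than cv², while adding a few very high earners does the reverse.

```python
import numpy as np

def cv2(x):
    """Square of the coefficient of variation."""
    return x.var() / x.mean() ** 2

def var_logs(x):
    """Variance of logarithms (requires strictly positive earnings)."""
    return np.log(x).var()

# Hypothetical earnings for 1,000 workers with modest dispersion
base = np.array([8.0, 9.0, 10.0, 11.0, 12.0] * 200)
low = np.append(base, [0.1] * 10)     # ten workers with near-zero earnings
high = np.append(base, [120.0] * 10)  # ten workers with very high earnings

for name, x in [("base", base), ("low tail", low), ("upper tail", high)]:
    print(f"{name:10s} cv2={cv2(x):.3f} var_logs={var_logs(x):.3f}")
# The low-tail perturbation dominates var_logs; the upper-tail
# perturbation dominates cv2.
```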
Now let us consider how the choice of reservation wage on the part of the worker affects the distribution of earnings he or she faces. An increase in the reservation wage raises the expected wage
rate but reduces the expected employment. Earnings, the product of the two, will then change less than either alone. If the worker is located on the upward sloping part of the choice set frontier
(i.e., at point B in Figure 3.2), then the increase in the reservation wage will raise the expected earnings of the worker. This was the case generally found in the investigation of unemployment
valuations in Chapter 3. The reservation wage also generally has opposite effects on the employment and wage rate inequality facing an individual worker. While a lowering of the reservation wage
achieves some reduction in employment inequality for the worker, it also generally raises the wage rate inequality. Except for the Pareto distribution, an increase in the reservation wage reduces the
coefficient of variation of wage rates. The net effect of a change in the reservation wage on the inequality facing a worker may therefore be ambiguous, and the dispersion in earnings is more or less
unavoidable. Now consider the distribution of earnings among a group of otherwise identical workers with unequal reservation wages. The consequence of the fact that expected employment and wage rate
move in opposite directions when the reservation wage goes up is that choice, reflected in the dispersion in reservation wages, has a smaller effect on the product of employment and wage rate than it
has on either alone. The expressions for σ²(y) in (6.3) and for cv²(y) in (6.4) hold only when η and w are independent, as they are for a single worker with constant reservation wage and transition rates. When the two random variables are not independent, a more complicated expression arises involving the covariance between the two variables (see Mood, Graybill and Boes, 1974, p.180). Workers
with higher reservation wages will tend to have higher wage rates and lower employment. Therefore the covariance between wage rates and employment will be negative. The inequality in earnings for a
group of otherwise identical workers with unequal reservation wages will then be less than indicated by (6.4). The inequality for a larger, aggregated group (e.g., males aged 35 to 44) is the result
of pooling groups with different distributions of earnings, for example, groups with different educational levels. Depending upon the measure, the resulting inequality may be expressed as the sum of
inequality within groups and between groups. Differences in group means produce further inequality beyond that in the constituent groups. While an analysis of the decomposition of inequality within
and between groups would be desirable, the lack of a sufficiently fine breakdown of groups in some of the data used here precludes such a study (see A.F. Shorrocks, 1980, 1982; F. Cowell, 1980; and
F. Bourguignon, 1979, for other studies of the decomposition of inequality).
3. Observed Earnings Inequality This section describes empirically how the observed distribution of earnings is generated from the distributions of employment, reservation wages and wage offers. The
first task is to examine the distribution in question. Table 6.1 presents the joint distribution of employment and earnings, taken from the one in one thousand sample of the 1970 U.S. Census of the
Population. The observations are limited to those for workers in the labor force most of the year and aged 16 to 64. Using the household data in this set, it is possible to divide the observations
into 19 earnings brackets and 6 brackets of weeks worked. This is a substantially finer division than is
[Table 6.1: Joint Distribution of Workers by Employment and Yearly Earnings, 1970. Rows: earnings intervals in $1,000 (0 to 0.5, 0.5 to 1, 1 to 2, 2 to 3, 3 to 4, 4 to 5, 5 to 6, 6 to 7, 7 to 8, 8 to 9, 9 to 10, 10 to 12, 12 to 14, 14 to 16, 16 to 20, 20 to 25, 25 to 35, 35 to 50, over 50). Columns: weeks worked (0 to 13, 14 to 26, 27 to 39, 40 to 47, 48 to 49, 50 to 52, all workers). Entries are the percentages of the work force in each cell. Number of observations: 41,471. Data source: one in one thousand sample of the 1970 U.S. Census of the Population.]
available from other earnings or income data and allows a more accurate calculation of inequality measures. The entries in the table are the percentages in each cell. The marginal distribution of
earnings, i.e., for all brackets of weeks worked, is presented in the column on the far right. The marginal distribution of weeks worked is presented in the bottom row. This joint distribution
exhibits a characteristic pattern that also holds for narrowly defined groups. The distribution of employment affects the distribution of yearly earnings by adding to the number of workers in the
lower tail. The very lowest brackets, which have the strongest effects on the variance of logarithms, are dominated by workers with less than full employment. In contrast, the highest brackets are
nearly devoid of workers with less than full year employment. The upper tail is therefore determined by the upper tails of wage offer distributions. Tables 6.2 and 6.3 compare the employment and
earnings inequality for narrowly defined groups using the same sample of U.S. Census data. It is possible to sort the data by sex, race, age and education, thereby controlling for major determinants
of earnings differences. Only groups with 400 or more observations are reported; these are all groups of white workers. Within each group, it is then possible to compare the inequality in employment
and earnings. This is accomplished using both cv2 and the variance of logarithms to measure inequality. Columns 5 and 8 in addition present the inequality of earnings for full year workers. With
employment controlled for, these figures should be roughly the same as the inequality in the distribution of accepted wage rates (later tables show that full year workers' earnings inequality exceeds
weekly earnings inequality). The results of the previous section indicate that, if employment and wage rate were independently distributed, the figures in columns 4 and 5 should add up to
less than the figure in column 6, the difference being the product of the figures in 4 and 5. In fact, employment and wage rate will not be independent, since workers with higher reservation wages
will on average have lower employment and higher wage rates, everything else the same. Nevertheless the results indicate that the employment inequality in column 4 accounts for roughly the difference
between the earnings inequality for full year workers and the earnings inequality for all workers in the group. The inequality in employment, as measured by cv², is higher for women and for younger workers aged 20 to 24. It is very low for older, highly educated workers. Earnings inequality is highest for young workers, females and older, highly educated workers. The ratio of employment to earnings inequality varies considerably from group to group. At the lowest it is about 1.6 per cent for males 55 to 64 with 14 to 17 years of education. The overall proportion is best indicated by the figure in Table 6.3 for all workers, 8.5 per cent. The figures using the variance of logarithms tell a slightly different story. The variance of logarithms is more sensitive to workers in the
lowest brackets. The low earnings levels of workers with unemployment therefore makes a greater contribution to inequality. The employment inequality is greater and is also a greater proportion of
the earnings inequality for all workers in a group. In several instances the employment inequality approaches the inequality in earnings of full year workers and exceeds it in a few cases. According
to the previous section, the sum of the figures in columns 7 and 8 should equal the figure in column 9, if employment and wage rate are independent. This is again roughly the case, although there are
some substantial deviations when employment inequality is high. Using the variance of logarithms, the contribution of unemployment to inequality is unambiguously the ratio of the employment
[Table 6.2: Employment and Earnings Inequality, Males, 1970. For groups of white males defined by age (20 to 24 through 55 to 64) and years of education, the table reports, using both the square of the coefficient of variation and the variance of logarithms, the inequality in employment, in earnings of full year workers and in earnings of all workers in the group, together with the number of observations. Data sources: the one in one thousand sample of the 1970 U.S. Census of the Population.]
variance of logarithms (column 7) to the earnings variance of logarithms (column 9). For all workers, this ratio is about 20 per cent. Tables 6.4 to 6.7 present more detailed breakdowns of the source
of earnings inequality. The data are taken from Employment Profiles of Selected Low-Income Areas (1972) and are broken down by age and education for each race and sex group. However, the age groups
consist of workers with different education levels, and similarly for the educational level groups. Workers within a group will therefore exhibit more heterogeneity than for the groups used in Tables
6.2 and 6.3. The results are presented using cv² as the measure of inequality. Some results using the variance of logarithms will be presented later. Columns 1 and 2 report the cv² for the calculated and actual distributions of employment. The results for white males were previously reported in Table 4.13. The actual employment cv² is calculated from the published distribution, eliminating workers with no employment. The figure in column 1 is calculated from a distribution determined by the distribution function developed in Chapter 4 and the out of work rate calculated for the group. Columns 3, 4 and 5 present information on the wage offer distribution inferred from the median wage rate and reservation wage assuming a Pareto or an exponential wage distribution. Because the ratio of the reservation wage to the expected wage is constant for a Pareto distribution, it is possible to infer Pareto's α from that ratio and then calculate the coefficient of variation. Similarly, if an exponential distribution is assumed, the difference between the expected and reservation wage is constant, so that the parameter of this distribution can also be inferred. The implied cv² for an exponential wage offer distribution is presented in column 5. Because the exponential density declines faster than a Pareto density, the values of cv² for the Pareto distribution are greater, about twice as large as the inequality assuming an exponential distribution. Inequality in reservation wages is presented in column 6. The data are taken from the published statistics
for part year workers. The data indicate a surprisingly high inequality, in many cases exceeding the inequality in weekly earnings presented in column 7. The distribution of weekly earnings used in
calculating the figures in column 7 are for full time workers, so that the distribution may be taken as roughly equivalent to the distribution of accepted wage rates. As described in the previous
chapter, the distributions of reservation wages and wage offers combine to determine the distribution of accepted wage rates. The weekly earnings inequality exceeds the wage offer inequality assuming
either a Pareto or exponential distribution but as mentioned is not always greater than the reported reservation wage inequality. Presumably, the reservation wage distribution modifies the wage offer
distribution in the manner described in Chapter 5, thereby contributing to greater inequality. However, it is also possible with these data that heterogeneity of the groups contributes both to
greater reservation wage and weekly earnings inequality. Column 8 provides the inequality measures for annual earnings of full year workers. These figures are generally greater than the corresponding figures for weekly earnings, perhaps because multiple employment over the year increases dispersion. Finally, inequality in employment, column 2, and accepted wage rates, column 7, combine to produce inequality in the annual earnings of all workers in a group, given in column 9. Except for young adults aged 16 to 21, employment and wage rate inequality again roughly add up to annual earnings inequality, even though the actual employment inequality figures in column 2 exclude those without any employment. Tables 6.4 to 6.7 reveal a number of differences among the groups. The same statistics for major aggregated groups, including all workers, are collected and presented in Table 6.8 to facilitate comparison.
[Table 6.3: Employment and Earnings Inequality, Females, 1970. Same layout as Table 6.2, for groups of white females defined by age and years of education: for each group the table reports the square of the coefficient of variation and the variance of logarithms for employment, for earnings of full year workers and for earnings of all workers in the group, together with the number of observations (41,471 for all workers combined). Data sources: the one in one thousand sample of the 1970 U.S. Census of the Population.]
[Table 6.4: Sources of Inequality, White Males. For groups defined by age (16 to 21 through 55 to 64), family status (head, other member, unrelated individual) and education (7 years or less, 8 years, 1 to 3 years high school, 4 years high school, 1 year or more college), the columns report: (1) cv² for calculated employment; (2) cv² for actual employment; (3) estimated α for Pareto wage offers; (4) cv² for Pareto wage offers; (5) cv² for exponential wage offers; (6) cv² for reservation wages; (7) cv² for weekly earnings; (8) cv² for annual earnings of full year workers; (9) cv² for annual earnings of all workers. Asterisk indicates data are unavailable. Except for column (3), entries are the squares of the coefficient of variation, cv². Data sources and methods of calculation: see text.]
[Table 6.5: Sources of Inequality, White Females. Same columns as Table 6.4, for groups defined by age, family status (head, wife of head, other member, unrelated individual) and education. Asterisk indicates data are unavailable. Except for column (3), entries are the squares of the coefficient of variation, cv². Data sources and methods of calculation: see text.]
[Table 6.6: Sources of Inequality, Black Males. Same columns as Table 6.4, for groups defined by age, family status (head, other member, unrelated individual) and education. Asterisk indicates data are unavailable. Except for column (3), entries are the squares of the coefficient of variation, cv². Data sources and methods of calculation: see text.]
[Table 6.7: Sources of Inequality, Black Females. Same columns as Table 6.4, for groups defined by age, family status (head, wife of head, other member, unrelated individual) and education. Asterisk indicates data are unavailable. Except for column (3), entries are the squares of the coefficient of variation, cv². Data sources and methods of calculation: see text.]
First, young workers, aged 16 to 21, experience extremely high employment inequality compared to the rest of the work population. Employment inequality appears to be substantially higher for women
than for men. Pareto's α declines with education. Further, except for white males, α increases with age. A larger value of α corresponds to a smaller coefficient of variation of wage offers. The
distribution of reservation wages differs substantially among groups, according to the published data. Inequality in reservation wages is much higher for males and is considerably lower for young
workers aged 16 to 21. It is highest for the oldest workers and for those with the most education. Except for the age bracket 16 to 21, the inequality in annual earnings of all workers in a group is
greater for females than for males. This arises because of greater inequality in both the distribution of employment and in wage rates (weekly earnings) and in spite of lower inequality in wage
offers and reservation wages. The evidence collected in these tables is consistent with the relations among distributions that are described in previous chapters and section 2. Using the variance of logarithms, the earnings inequality is approximately equal to the sum of the employment inequality and the accepted wage rate inequality. The relation between the distributions of accepted wage rates
and the distributions of wage offers and reservation wages is substantially less clear. We have no firm knowledge on the functional form of the distribution of wage offers and there is no clear
quantitative relation among the levels of inequality. Nevertheless, it appears that the distribution of reservation wages modifies to a considerable extent the distribution of wage offers in the
direction indicated in Chapter 5. The distribution of accepted wage rates is much more unequal than the inferred distribution of wage offers.
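The modification of the wage offer distribution by reservation-wage dispersion can be sketched by simulation; the Pareto parameter and the lognormal reservation-wage distribution below are illustrative assumptions. Each worker accepts a draw from the offer distribution truncated at that worker's own reservation wage, and the resulting accepted wages are more unequal than the raw offer tail.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
alpha = 4.0  # illustrative Pareto offer parameter, offers w >= 1

# Heterogeneous reservation wages across workers (illustrative lognormal),
# held at or above the offer floor of 1.
w0 = np.exp(rng.normal(0.3, 0.25, n))
w0 = np.maximum(w0, 1.0)

# Accepted wage: a draw from the Pareto offer distribution truncated at
# each worker's w0, via the inverse CDF w = w0 * u^(-1/alpha).
u = 1.0 - rng.uniform(size=n)  # uniform on (0, 1], avoids u = 0
accepted = w0 * u ** (-1.0 / alpha)

def cv2(x):
    return x.var() / x.mean() ** 2

offer_cv2 = 1.0 / (alpha * (alpha - 2.0))  # cv^2 of the raw offer tail
print(offer_cv2, cv2(accepted))
# Dispersion in reservation wages makes accepted wages more unequal than
# the wage offer distribution alone would imply.
```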
4. The Role of Choice and Uncertain Outcomes Chapters 4 and 5 discuss the influence of choice, through the selection of different reservation wages, on the distributions of employment and wage rates,
respectively. Section 2 of this chapter discusses the distribution of the product of employment and the wage rate, i.e., earnings, but provides no means of determining the influence of choice on
earnings, since employment and wage rate are not independent for a population. This section presents an alternative way of representing earnings and an approximate method for finding the influence of
choice on earnings inequality. Instead of expressing earnings as employment times wage rate, write the earnings of worker i, y_i, as the expected earnings of the worker, y_ie, times a random outcome variable, ε. Assume that the mean of ε is one and that σ²(log ε), the inequality in outcomes facing a given worker, is constant and independent of the mean earnings or reservation wage of the worker. This assumption is unlikely to be strictly true for the data. Departure from the assumption will not influence the empirical estimates very much but will greatly simplify the analysis. When the worker chooses a higher reservation wage, inequality in employment goes up while inequality in wage rates generally declines. In the case of a Pareto distribution of wage offers, the wage rate inequality is unaffected by the reservation wage, so that the variance of logarithms of ε would in fact be larger for greater values of w₀. However, differences in the variance of logarithms of ε will play a secondary role in the determination of the variance of logarithms of earnings. Assume now that the logarithm of expected earnings for a worker is a linear function of the worker's reservation wage. Writing u = μ/(λ + μ), the slope of this relation, obtained by differentiation and substitution using (2.8), is as follows:
[Table 6.8: Sources of Inequality, Major Groups. For white males, white females, black males, black females and all workers, the columns report the same statistics as Tables 6.4 to 6.7: cv² for calculated and actual employment; estimated α for Pareto wage offers; cv² for Pareto and exponential wage offers; and cv² for reservation wages, weekly earnings, annual earnings of full year workers and annual earnings of all workers. Asterisk indicates data are unavailable. Except for the estimated α, entries are the squares of the coefficient of variation, cv². Data sources and methods of calculation: see text.]
∂ log y_e/∂w₀ = ∂ log[w_e λ/(λ + μ)]/∂w₀ = (1/w_e) ∂w_e/∂w₀ + [μ/(λ(λ + μ))] ∂λ/∂w₀

Assume that wage offers follow a Pareto distribution with w ≥ 1 and with parameter α > 2, so that w_e/w₀ = α/(α − 1). Write the transition rate λ as δξ, where δ is the rate at which job offers are received and ξ is the probability of acceptance of a job offer. Then the value of ξ for a reservation wage of w₀ is ∫ from w₀ to ∞ of αx^(−α−1) dx = w₀^(−α). Then:

∂w_e/∂w₀ = w_e/w₀

∂λ/∂w₀ = δ ∂ξ/∂w₀ = −δα w₀^(−α−1) = −αλ/w₀

where w_oe is the average reservation wage, at which these derivatives are evaluated. Thus under the assumption of a Pareto distribution of wage offers:

∂ log y_e/∂w₀ = (1 − uα)/w_oe   (6.7)

The parameter δ drops out of this expression. Under the assumption that y_ie and ε are independently distributed:

σ²(log y_i) = σ²(log y_ie) + σ²(log ε)   (6.8)

In the above:

σ²(log y_ie) = (1 − uα)² cv²(w₀)   (6.9)

This expression is the amount of inequality generated by choice and may be compared with the amount of inequality arising from uncertain outcomes, σ²(log ε), for a particular
group. Tables 6.9 to 6.12 use these results to calculate the contribution of choice and uncertain outcomes to inequality using Employment Profiles data. Column 1 presents the value of 1 − uα, which is w₀ times the slope of the linear relation between the logarithm of the worker's expected earnings and the reservation wage. This slope could be positive, negative or zero depending on the worker's valuation of unemployment and the location of the worker's optimal point on the choice set frontier. The empirical work from Chapter 3 indicates that the slope is in general positive, so that workers require an increase in earnings in order to be willing to accept an increase in unemployment. The slope generally increases with age, except that it drops off for black workers aged 55 to 64. It also tends to increase with education. These results are consistent with the unemployment premiums calculated in Chapter 3. The coefficient of variation for reservation wages is presented in column 2; the squares of these figures appeared in Tables 6.4 to 6.7. The product of (1 − uα)² and cv²(w₀)
Table 6.9  Choice and Inequality, White Males

Columns: (1) slope times w₀, S = 1 − uα; (2) cv for reservation wages; (3) choice inequality, S²cv²(w₀); (4) variance of logs, wage offers; (5) variance of logs, employment; (6) variance of logs, yearly earnings; (7) (3)/[(3) + (4) + (5)]; (8) (3)/(6); (9) [(4) + (5)]/(6). Rows: ages 16 to 21, 22 to 34, 35 to 44, 45 to 54, 55 to 64; family status (head, other member, unrelated individual); education (7 years or less, 8 years, 1 to 3 years high school, 4 years high school, 1 year or more college). [Numerical entries garbled in extraction; not reproduced.] Asterisk indicates that data are not available. Entries in columns (3), (4) and (5) are variance of logarithm measures of inequality. The table divides earnings inequality into components arising from choice and from random outcomes. Data and methods of calculation: see text.
appears in column 3; this is the contribution of choice to inequality. It depends positively on the inequality in reservation wages and negatively on the level of unemployment and the value of α for the Pareto distribution of wage offers. That is, a greater value of α, and a more equal wage offer distribution, produces a smaller contribution of choice to inequality. If the slope 1 − uα is zero, then dispersion in reservation wages will contribute nothing to inequality. Increases in wₑ from a higher reservation wage will be exactly compensated by a decrease in expected employment. The sign of the slope 1 − uα has no influence on the amount of inequality generated by choice since it is squared. The amount of choice inequality varies substantially from group to group. It increases with age and education, although there are reversals. It is substantially greater for white males and substantially smaller for black females.

Columns 4 and 5 present the uncertainty facing an individual worker. The figures in column 4 are the variance of logarithms of a Pareto wage offer distribution, calculated at the median reservation wage. Column 5 is the variance of logarithms of employment, calculated assuming constant transition rates and using the employment density functions of Chapter 5. Because the variance of logarithms is so sensitive to low values of the variable, the employment inequality is generally much larger than the wage offer inequality. For a worker with a given reservation wage, the employment and wage rate will be independently distributed. The variance of logarithms for the distribution of earnings facing an individual worker will then be the sum of the variance of logarithms for employment and for wage rates. The sum of the amounts in columns 4 and 5 therefore measures the dispersion of earnings facing an individual unemployed worker as a result of the random outcome of job search and hence measures the worker's uncertainty. Column 6 presents the variance of logarithms of yearly earnings; previous tables listed the value of cv² for the same distribution. Columns 7, 8 and 9 present the inequality from choice and uncertainty as proportions of the total amount of inequality. Column 7 considers the proportion of inequality that would be attributable to choice among a group of workers who are identical except for heterogeneous reservation wages. Following roughly the pattern in choice inequality, this proportion is greatest for white males. It rises with age and education and is greater for males than for females. The largest proportion is about 50 per cent for older white males. Column 8 considers the amount of choice inequality in relation to the total amount of inequality in the group, arising not only from uncertain outcomes but from differences in grades or skills among workers in the group. This proportion is also highest for males and rises to almost 50 per cent of total inequality. For other groups the proportion is much less and is negligible in the case of most females. Judging by the proportions in column 9, job search is a major source of inequality. About 30 to 50 per cent of inequality among groups is attributable to the employment and wage differences arising from job search. The proportion is higher for males and falls below 30 per cent for older black females.

These results are a powerful antidote to the point of view that search is a productive activity undertaken by a worker, and for the worker's own gain, in the normal course of operating in the labor market. Unemployment is not an evenly distributed investment expense, undertaken by the worker to improve his or her economic status. Instead, it is a gamble of substantial proportions imposed on the individual worker. We think of inequality as arising from differences among workers. But much of inequality does not arise from ex ante differences. As a result of job search, otherwise identical workers can end up with very different earnings outcomes. These unequal
Table 6.10  Choice and Inequality, White Females

Columns as in Table 6.9. Rows: ages 16 to 21, 22 to 34, 35 to 44, 45 to 54, 55 to 64; family status (head, wife of head, other member, unrelated individual); education (7 years or less, 8 years, 1 to 3 years high school, 4 years high school, 1 year or more college). [Numerical entries garbled in extraction; not reproduced.] Asterisk indicates that data are not available. Entries in columns (3), (4) and (5) are variance of logarithm measures of inequality. The table divides earnings inequality into components arising from choice and from random outcomes. Data and methods of calculation: see text.
Table 6.11  Choice and Inequality, Black Males

Columns as in Table 6.9. Rows: ages 16 to 21, 22 to 34, 35 to 44, 45 to 54, 55 to 64; family status (head, other member, unrelated individual); education (7 years or less, 8 years, 1 to 3 years high school, 4 years high school, 1 year or more college). [Numerical entries garbled in extraction; not reproduced.] Asterisk indicates that data are not available. Entries in columns (3), (4) and (5) are variance of logarithm measures of inequality. The table divides earnings inequality into components arising from choice and from random outcomes. Data and methods of calculation: see text.
Table 6.12  Choice and Inequality, Black Females

Columns as in Table 6.9. Rows: ages 16 to 21, 22 to 34, 35 to 44, 45 to 54, 55 to 64; family status (head, wife of head, other member, unrelated individual); education (7 years or less, 8 years, 1 to 3 years high school, 4 years high school, 1 year or more college). [Numerical entries garbled in extraction; not reproduced.] Asterisk indicates that data are not available. Entries in columns (3), (4) and (5) are variance of logarithm measures of inequality. The table divides earnings inequality into components arising from choice and from random outcomes. Data and methods of calculation: see text.
outcomes are a reflection of the uncertainty facing the individual worker. Undoubtedly worker reactions to differences created by random outcomes are not the same as reactions to differences
generated by unequal employment characteristics.
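The slope formula (6.7) underlying column 1 of Tables 6.9 to 6.12 can be checked numerically. The sketch below is a minimal Python illustration under the Pareto assumptions of this chapter; the parameter values for δ, μ, α and w₀ are hypothetical, not estimates from the data. It differentiates the logarithm of expected earnings wₑλ/(λ + μ) by finite differences and compares the result with (1 − uα)/w₀:

```python
import math

def log_expected_earnings(w0, delta, mu, alpha):
    """Log of expected earnings w_e * lambda/(lambda + mu) under Pareto
    wage offers with parameter alpha and support w >= 1, given a
    reservation wage w0 >= 1."""
    pi = w0 ** (-alpha)               # acceptance probability, w0**(-alpha)
    lam = delta * pi                  # transition rate lambda = delta * pi
    w_e = w0 * alpha / (alpha - 1.0)  # mean accepted wage, w0 * alpha/(alpha - 1)
    return math.log(w_e * lam / (lam + mu))

def slope_67(w0, delta, mu, alpha):
    """Analytic slope (1 - u*alpha)/w0 from equation (6.7)."""
    lam = delta * w0 ** (-alpha)
    u = mu / (lam + mu)               # expected proportion of time unemployed
    return (1.0 - u * alpha) / w0

# Illustrative (hypothetical) parameter values.
w0, delta, mu, alpha = 1.5, 2.0, 0.1, 3.0
h = 1e-6
numeric = (log_expected_earnings(w0 + h, delta, mu, alpha)
           - log_expected_earnings(w0 - h, delta, mu, alpha)) / (2.0 * h)
print(numeric, slope_67(w0, delta, mu, alpha))  # the two values agree
```

Note that δ enters the slope only through the unemployment proportion u, so once u is observed, δ is not separately needed, which is the sense in which the parameter drops out of (6.7).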
5. Unemployment-Compensated Wage Rates

The outcome of search for unemployed workers is determined by both the wage rate and the proportion of time employed. The expected wage wₑ describes only one aspect of the outcome facing a worker. Suppose we are interested in finding a measure of economic status that takes into account both the wage rate and unemployment prospects facing the worker. Using the unemployment valuations discussed in Chapter 3, it is possible to find out how much the expected wage rate for a worker must be adjusted to account for the amount of unemployment the worker faces. The wage rate and unemployment trade-off is (wₑ − w₀)/(u(1 − u)). Multiplying the trade-off times the expected proportion of the time unemployed, u, yields the amount a worker would be willing to pay per time period employed to avoid the threat of unemployment. Multiplying by the proportion of the time spent employed and subtracting from the expected wage rate yields the unemployment-adjusted wage rate:

wₑ − [(wₑ − w₀)/(u(1 − u))] · u(1 − u) = wₑ − (wₑ − w₀) = w₀
Of course, this is a highly indirect way of making an obvious point. The reservation wage is the flow of benefits from being unemployed in the labor market and therefore implicitly takes account of the unemployment prospects facing a worker. By combining wage rate and employment prospects into a single measure, the reservation wage conveys additional information on the economic status of a worker or group. This result suggests that, instead of using the wage rate or earnings to compare different groups or calculate inequality, we ought to use the reservation wage. Differences in likelihood of employment as well as wage rates during employment would then be incorporated into any comparison, providing a more accurate measure of relative economic status. However, differences in economic status also arise from the random outcomes of search. The reservation wage is more or less an average over possible outcomes, so that the actual economic outcome for a worker may differ from his or her reservation wage. Also, reservation wages differ not only because of the labor market conditions facing workers but also the nonemployment benefits they receive, i.e., what their opportunities or losses are outside the labor market. Nevertheless, the reservation wage reflects the worker's evaluation of the value of being unemployed in the labor market and is therefore the most accurate measure of economic status.

Tables 6.4 to 6.8 provide some limited information on the distribution of reservation wages among part year workers (employed workers have higher values of being in the labor market). In section 3, the inequality in reservation wages, measured by cv² in column 6 of Tables 6.4 to 6.8, was considered as contributing to earnings inequality. Here, we can regard the cv² for reservation wages as itself a measure of economic inequality. As mentioned in section 3, inequality in the reported reservation wages occasionally exceeds inequality in annual earnings of all workers. This occurs for older white male workers. The high level of inequality in reservation wages suggests that nonemployment benefits are also very unequally distributed. This is consistent with the results of Chapter 3, which showed a large amount of residual variation in the determination of reservation wages. Reservation wages are much more unequally distributed for males than for females, as previously noted. One may have expected the opposite, since females presumably have more varied opportunities outside the labor market. The measures of inequality for aggregate groups show that inequality in reservation wages is always less than inequality in annual earnings, as shown in column 9. The annual earnings inequality measure reflects two changes from the reservation wage measure. The first change is that it does not reflect unequal nonemployment benefits. The second change is that it adds differences in wage rate and employment outcomes to the differences in prospects facing individual workers. This second change is responsible for producing a higher inequality among annual earnings than among reservation wages.

In addition to the distributions of annual earnings and reservation wages, a third distribution may be distinguished. The distribution of annual earnings includes differences in actual outcomes (employment and wage rate) facing workers. However, unlike the distribution of reservation wages, it values unemployment at a level equal to the foregone earnings. That is, the implicit unemployment premium is zero, in the terminology of Chapter 3. If instead the differences in employment were weighted by the costs of unemployment, one would obtain a distribution of economic well-being which is an alternative to the distributions of either annual earnings or reservation wages. Like the reservation wage distribution, it would incorporate workers' valuations of unemployment. Like the annual earnings distribution, it would incorporate the actual outcomes of job search. Because the unemployment premium is generally positive, it will indicate a higher level of inequality than in annual earnings. Workers with unemployment would have lower levels of economic well-being than indicated by their earnings, thereby increasing inequality. The calculation of such a measure of inequality in economic well-being would require detailed data on the joint distribution of earnings and unemployment for subgroups in the population and the corresponding unemployment premiums.

Table 6.13 compares groups on the basis of reservation wages as well as median weekly earnings of full-time workers. For example, for white females, the ratio of median reservation wage for workers aged 35 to 44 to the median reservation wage for white males aged 35 to 44 is 0.692. The ratio for median weekly earnings of full-time workers is 0.671. Generally a lower ratio arises if one uses annual earnings of full year and part year workers because of greater employment of white males. Differences between the ratios show up for workers aged 55 to 64 and when workers are divided up by educational level. The results from the ratios for educational levels or all workers indicate that differences using reservation wages are narrower than differences using weekly earnings. One may have expected that because unemployment rates are greater for females and blacks, reservation wages would show wider differences since they adjust the wage rate for the higher levels of unemployment. However, the weights used in this adjustment are workers' valuations of unemployment in terms of earnings, which are also lower for females and blacks. Apparently the lower valuations more than compensate for the higher levels of unemployment, yielding narrower differences in reservation wages than in weekly earnings. Using reservation wages, then, there are smaller differences in economic status between males and females and whites and blacks than indicated by wage rates alone. This conclusion rests on the use of marginal subjective valuations of unemployment in terms of earnings. These valuations may not hold over the larger differences in unemployment levels facing different workers. Further, the reservation wages are only collected for part year workers, whereas the weekly earnings are collected for a different group of workers, those employed full-time. Ratios based on comparable groups may yield different results.
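The unemployment-compensation algebra of this section reduces, as shown, to the reservation wage. A minimal Python check (the wage and unemployment values are illustrative, not data from the tables):

```python
def unemployment_adjusted_wage(w_e, w0, u):
    """Adjust the expected wage w_e for unemployment, following the text:
    the trade-off is (w_e - w0)/(u*(1 - u)); multiplied by u it gives the
    amount a worker would pay per period employed to avoid unemployment;
    multiplied by the employed fraction (1 - u) it is subtracted from w_e."""
    trade_off = (w_e - w0) / (u * (1.0 - u))
    willingness_per_period_employed = trade_off * u
    return w_e - willingness_per_period_employed * (1.0 - u)

# Whatever the illustrative inputs, the result is the reservation wage w0.
print(unemployment_adjusted_wage(10.0, 7.0, 0.2))  # 7.0
```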
6. Inequality by Source

This monograph argues that not all inequality is equal. When inequality arises from choice, we should not treat it the same as inequality arising from differences in earnings capacity of workers or inequality arising from the random outcomes of search. This chapter has attempted to isolate the contribution of choice to inequality and to find the magnitude of inequality generated by job search. Reservation wages play a dual role in the study of inequality. The previous section shows that inequality may be measured using the distribution of reservation wages, since they incorporate the levels and valuations of unemployment. But they also generate inequality in earnings by modifying the distribution of wage offers to produce a more unequal distribution of accepted wage rates. The initial source of the inequality among a group of otherwise identical workers is dispersion in nonemployment benefits, the value of being unemployed in the labor market. A higher nonemployment benefit b leads a worker to choose a higher reservation wage. By the envelope theorem, the reservation wage goes up by (μ + i)/(λ + μ + i) for every dollar increase in b. Using reservation wages as a measure of economic well-being, inequality among a group of unemployed workers with identical labor market characteristics arises because of conditions and circumstances outside the labor market. Among all unemployed workers, differences in wage rate and employment prospects contribute further to inequality in reservation wages. To obtain a measure of inequality for all workers, including the employed, based on the value of being in the labor market, one would need values of iL, the value of being employed in the labor market, as well as reservation wages, which are equal to iM. Differences in economic status would then arise from current employment status as well as labor market characteristics of the worker and his or her nonemployment benefit. This chapter's evidence on inequality in the value of being in the labor market is restricted to data on reservation wages of part year workers. The evidence suggests that inequality in reservation wages is quite high for white males. For older males it exceeds inequality in annual earnings of all workers in a group. It is higher for males than for females, for whom the inequality in reservation wages is very low.

The second role of reservation wages is in generating inequality in annual earnings. A group of identical workers face a wage offer distribution in searching for a job. Workers with higher reservation wages reject some jobs and thereby face higher expected wages. The dispersion in reservation wages modifies the distribution of wage offers to yield the distribution of accepted wage offers. Inequality in reservation wages therefore generates greater inequality in annual earnings by raising the inequality in accepted wage offers. At the same time, inequality in reservation wages also modifies the distribution of employment. By inspection of the figures in Tables 6.4 to 6.8, the contribution to earnings inequality of choice via dispersion in reservation wages is not apparent. Tables 6.9 to 6.12 use assumptions on the relation between reservation wages and expected wages and on the distribution of wage offers to infer the contribution of choice to inequality. Inequality due to choice is the product of the square of the slope of a relation and the variance of logarithms of reservation wages. The tables suggest that for older white males, the contribution is substantial, approaching 50 per cent within a given group. For older white males, both the valuation of unemployment and dispersion in
Table 6.13  Ratios of Reservation Wages and of Weekly Earnings

Rows: ages 16 to 21, 22 to 34, 35 to 44, 45 to 54, 55 to 64; family status (head, other member, unrelated individual); education (7 years or less, 8 years, 1 to 3 years high school, 4 years high school, 1 or more years college); all workers. Columns: reservation wage and weekly earnings ratios for white females (columns 1 and 2), black males (columns 3 and 4) and black females (columns 5 and 6). [Numerical entries garbled in extraction; not reproduced.] Entries are the ratios of reservation wages (columns 1, 3 and 5) or weekly earnings of full-time workers (columns 2, 4 and 6) to the same variables for corresponding groups of white males. Data sources: Employment Profiles of Selected Low-Income Areas (1972).
reservation wages are high. But for other groups the contribution is less, and for females it is negligible. Another type of inequality that deserves separate analysis is the dispersion caused by the random outcomes of job search. From the worker point of view, this dispersion is a form of uncertainty. The amount of this uncertainty is substantial, generally greater than the amount of inequality caused by choice. Using the variance of logarithms in Tables 6.9 to 6.12, the random outcomes of job search are estimated to account for 30 to 50 per cent of all inequality for most groups. Studies of inequality using earnings functions usually find that up to 50 per cent of the variance of the dependent variable is unexplained by the individual worker's characteristics or dummy variables. This residual variance is attributed to unobserved differences or luck. Similarly, studies of twins find that luck, or noncommon environment, accounts for a large proportion of inequality. The results of this chapter indicate how luck enters into the determination of earnings. The random outcomes of job search influence earnings through the distributions of employment and wage offers. Not all employment inequality is the result of luck or random outcomes; it arises partly from dispersion in reservation wages and partly, for aggregated groups, from differences in transition rates arising from unequal grades of labor. But for a well-defined group, such as those used in Tables 6.2 and 6.3, the employment inequality arises mostly from the random outcomes of search. Further, the employment inequality for all workers is about the same as for individual groups of workers, because of the peculiar shape of the distribution of employment. For all workers in Table 6.3, employment inequality is about 8.5 per cent of earnings inequality using cv² and 20.4 per cent using the variance of logarithms, which weights observations in the lower brackets more heavily. From Table 6.8, the proportion of calculated employment inequality (assuming a constant transition rate from out of work to employment and using cv²) to total earnings inequality is 0.178 for all workers. But the random outcomes of job search affect inequality both through employment and the accepted wage rates. The inequality in wage offers is estimated in Tables 6.9 to 6.12 in column 4, assuming a Pareto distribution. The wage offer inequality is generally less than the employment inequality, since there are no observations below a worker's reservation wage. The sum of employment and wage offer inequalities, as measured by the variances of logarithms, is the amount of uncertainty faced by individual workers in the job market.

Inequality arising from uncertainty is a personal concern as well as a social concern. Workers are worse off when their future employment and earnings are subject to random fluctuations. It is not clear, however, that income differences which do not arise from differences in education, experience or age are perceived as less just by workers. The significance of the results concerning job search is that luck, previously an unexplained residual, is now subject to economic analysis. The magnitudes may be explained in terms of features of the labor market, even though particular values for individuals are random variables. Furthermore, inequality arising from job search is no longer outside the scope of public policy. Since the analysis reveals the way labor market conditions influence job search inequality, policies that would reduce this inequality may be formulated and examined. In particular, those policies which facilitate rapid employment of the unemployed will reduce the uncertainty facing individual workers and also the inequality in the entire labor force. Employment agencies which produce a more accurate matching of workers with jobs will also reduce inequality in wage offers, thereby promoting greater equality. Finally, the large amount of inequality arising from job search suggests that redistributional tax and benefit policies provide a substantial element of insurance which all participants would agree to beforehand.
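The wage-offer component of this uncertainty has a simple closed form under the Pareto assumption: for offers above a reservation wage w₀, log(w/w₀) is exponentially distributed with parameter α, so the variance of logarithms of accepted wages is 1/α². A Monte Carlo sketch in Python (the values of α and w₀ are illustrative):

```python
import math
import random

def var_logs(xs):
    """Variance of logarithms, the inequality measure used in the text."""
    logs = [math.log(x) for x in xs]
    mean = sum(logs) / len(logs)
    return sum((v - mean) ** 2 for v in logs) / len(logs)

random.seed(0)
alpha, w0 = 3.0, 1.5  # illustrative Pareto parameter and reservation wage
# Accepted wages: Pareto(alpha) offers conditioned on w >= w0, drawn by
# inverse transform from P(W > w) = (w0/w)**alpha, with U in (0, 1].
wages = [w0 * (1.0 - random.random()) ** (-1.0 / alpha) for _ in range(200_000)]
print(var_logs(wages), 1.0 / alpha ** 2)  # both close to 1/alpha**2
```

Under the independence assumption used in the tables, this wage component adds to the variance of logarithms of employment to give the total uncertainty measured by columns 4 and 5 of Tables 6.9 to 6.12.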
7. Summary

a. The probability density function for earnings is a combination, similar to a convolution, of the density functions for employment and wage rates, as shown in (6.1).
b. Inequality in earnings is related to inequality in employment and wage rates by (6.4), using the coefficient of variation, and by (6.5), using the variance of logarithms. These relations are generally observed in the one in one thousand U.S. Census sample, Tables 6.2 and 6.3, and in the Employment Profiles data, Tables 6.4 to 6.8.
c. Dispersion in reservation wages, or choice, has a smaller effect on earnings than on either employment or wage rates alone, since an increase in the reservation wage has opposite effects on the two variables.
d. The joint distribution of employment and earnings is presented in Table 6.1, which has the same features as tables for more disaggregated groups. The distribution of employment modifies the distribution of earnings by adding observations to the lower tail. The highest brackets have few workers employed less than full year and are therefore determined by the upper tails of wage offer distributions.
e. The contribution of employment inequality to earnings inequality is about 8.5 per cent using the square of the coefficient of variation and 20.4 per cent using the variance of logarithms.
f. Weekly earnings inequality exceeds wage offer inequality assuming either a Pareto or exponential distribution, presumably because the distribution of reservation wages modifies the wage offer distribution.
g. Under the assumption that wage offers have a Pareto distribution and that the relation between the logarithm of a worker's wage and the worker's reservation wage is linear, it is possible to estimate the contribution of choice using (6.9).
h. The ratio of choice inequality to total inequality is greatest for white males, rising to about 50 per cent for older workers. The ratio is much less for black males and is negligible in the case of most females.
i. The inequality generated by job search is about 30 to 50 per cent of total inequality for disaggregated groups, using the variance of logarithms. The random outcomes of job search therefore tend to contribute more to inequality than choice.
j. The reservation wage is an unemployment-compensated wage rate, i.e., the level of wage rates after compensating for the amount of unemployment a worker faces, using the worker's valuation of unemployment in terms of earnings.
k. Inequality in reservation wages among part year workers is substantial, exceeding inequality in earnings for older white males. Reservation wages are more equally distributed for females than for males. The high level of dispersion in reservation wages suggests very unequal nonemployment benefits.
l. Table 6.13 compares groups using both reservation wages and median weekly earnings of full-time workers. Differences using reservation wages are generally narrower than using median weekly earnings, apparently because lower valuations of unemployment in terms of earnings more than make up for the higher levels of unemployment some groups face.
Chapter 7
The Operation of Labor Markets

1. Introduction

Labor markets are characterized by a number of complexities that are absent in simple markets. Because of job search, unemployment arises which may or may not be efficient. Supply and demand behavior are no longer represented by simple curves but instead depend both on the expected wage prevailing in the market for individual workers and on the
level of unemployment or likelihood of getting a job. Also, there is not a single market; instead there are many overlapping markets, so that one worker's prospects depend on the behavior of
participants in the related markets. This chapter examines the qualitative conclusions concerning the operation of labor markets that arise from the theoretical and empirical results of this
monograph. In particular, the analysis undertaken here suggests that labor markets operate very differently for higher grade workers versus lower grade workers. These differences are described in the
next three sections.
2. Dual Labor Markets

The dual labor market hypothesis seeks to explain why some labor markets operate differently from others, and why workers face such different employment conditions. Originally
the hypothesis explained these differences by arguing that labor markets were segmented or separated and that the participants in the two markets behaved very differently. Lack of mobility between
the two markets then permits the generation of unequal wages, unemployment rates and working conditions. More recent versions of the dual labor market hypothesis do not rely on labor market
segmentation. Instead, the hypothesis states that behavior of participants and outcomes change systematically as one moves across the labor market spectrum (Michael J. Piore, 1979, p.xiii). This
monograph's labor market analysis supports this more sophisticated view of the dual labor market hypothesis. Differences in firm search strategy and worker supply behavior generate qualitative
differences in labor market conditions among workers of different grades. These differences are not generated by segmentation but by overlapping markets. In previous deterministic models involving
the assignment of workers to jobs, seeming segmentation arises in that workers of different types are employed at different jobs. But this division arises from worker selection of jobs based on
income or utility maximization in the presence of comparative advantage, the scale of resources effect or preferences. In the mutual search models developed in this monograph, workers do not directly
choose the jobs to which they would be assigned in a deterministic equilibrium. That is, there is no self-selection, as it is sometimes called. Instead, the minimum grade requirements and reservation
wages combine to limit the jobs at which a given worker could end up. The result is a seeming segmentation of labor markets in which workers with higher grades are found at firms with higher grade requirements.

This section summarizes some of the qualitative differences in labor markets for workers of different grades. The first area of difference is in firm search behavior. This behavior, described in
Chapter 2, differs significantly by technology. The grade differential, pQ₂/n, describes the value to the firm of an increase in the average grade of workers. Firms with higher grade differentials tend to hire workers with higher grades. But the grade differential also equals cm/(z(gₑ - g₀)), the search costs per worker divided by the difference between the average grade and the firm's minimum grade requirement. Firms with higher grade differentials therefore pursue a very different search strategy. They have high search costs per worker for a given rate of worker separation from jobs, m.
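As a small numerical illustration (all values hypothetical), the identity can be rearranged to show how selectivity varies with the grade differential: holding the search cost per worker, the separation rate and the grade gap fixed, a higher grade differential implies a lower acceptance rate, and hence more interviews per accepted hire.

```python
# Hypothetical numbers illustrating the identity D = c*m / (z * (ge - g0)).
# Solving for the acceptance rate z: a higher grade differential D goes with
# a lower z, i.e. a more selective firm that interviews more workers per hire.
c = 0.5          # search cost per worker (hypothetical units)
m = 0.2          # rate of worker separation from jobs
grade_gap = 5.0  # ge - g0: average grade minus minimum grade requirement

def acceptance_rate(D):
    """Acceptance rate consistent with grade differential D."""
    return c * m / (D * grade_gap)

z_low_D = acceptance_rate(0.05)   # low grade differential firm
z_high_D = acceptance_rate(0.20)  # high grade differential firm

assert z_high_D < z_low_D              # high-D firm is more selective
assert 1 / z_high_D > 1 / z_low_D      # and interviews more workers per hire
```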
Such firms interview more workers per position but extend fewer offers or have fewer accepted. Higher grade differential firms are therefore more selective, so that it is more difficult for a worker
to get an offer from such a firm. With more expenditure on finding a worker for high grade differential firms, such firms would be less likely to layoff workers in response to a decline in business
conditions. Low grade differential firms, on the other hand, make offers to more workers and spend less on search costs per worker. They are more likely to layoff workers in a downturn because the
worker's value of the marginal product is more likely to sink below the wage. From the worker's view, these differences produce unequal labor market conditions. Higher grade workers get more job
offers but turn down more offers to get the jobs they want. Low grade workers must search longer to find job offers but are more likely to take whatever is offered. In alternative terms, higher grade
workers are wage-searchers, while low grade workers are offer-searchers, these being the constraints on finding a suitable job. The differences in labor market conditions for workers are partly
revealed by the distribution of wage offers for different grades in Table 5.1. Another major difference in labor market conditions is generated by regression towards the mean, which will be discussed
in detail in the next chapter. The general effect of the search procedure on the assignment of workers to jobs is to push workers and firms closer to the middle values. Higher grade workers tend to
end up in firms with grade differentials closer to the average grade differential, and hence below the grade differential of the firm at which they would end up in a deterministic assignment. These
workers tend to have higher grades than their fellow workers. Lower grade workers, however, end up in firms that have higher grade differentials than they would get in a deterministic assignment.
These workers are generally below the average grade of their fellow workers. If probability of being laid off depends on position within the firm, then low grade workers are more likely to be laid
off. If workers are more likely to quit when they are above the average of their fellow workers, then higher grade workers are more likely to quit. Separation will then tend to be very different for
high versus low grade workers. Choice also varies from group to group. Choice is reflected in the ability of a worker to achieve a reduction in expected unemployment through a lower reservation wage
and lower expected wage. This ability is measured by the unemployment tradeoff, described in Chapter 3. Workers with high unemployment trade-offs must make a substantial sacrifice in the expected
wage in order to get a reduction in expected unemployment. The estimates in Chapter 3 reveal substantial differences in the valuation of unemployment among workers. Partly these differences reflect
the tastes and preferences of workers, since the valuations are the marginal rates at which workers are willing to substitute expected wages for expected unemployment. But the valuations also equal
the rates at which workers are able to trade off expected earnings for
expected unemployment in the labor market. Generally the unemployment trade-off increases with age and education. These results may be turned around. A worker with a high unemployment tradeoff has a
low wage trade-off. That is, if the worker must take a large expected wage sacrifice to get a reduction in unemployment, then the same worker can get a large expected wage increase with a relatively
small unemployment increase. From this point of view, the workers with the lowest unemployment valuations, who appear to have the greatest choice among unemployment rates, have the least choice in
wage rates. These two alternative statements of the position of workers with low unemployment valuations are contained in the statement that the slope of the choice set frontier for such workers, on
a graph of expected wage rate on the vertical axis versus expected unemployment on the horizontal axis, is low. Low apparent unemployment valuations also would arise for workers seeking jobs at the
minimum wage. Then workers would have no choice in the risk of unemployment or wage rate. The solution to the worker choice problem of finding a reservation wage occurs at the boundary value given by
the minimum wage. The major differences among workers arise over the business cycle and in response to long run price and wage adjustment in the economy; these will be discussed in the next sections.
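The choice set frontier can be traced numerically. The following sketch uses assumed functional forms, not the book's model: offers arrive at rate γ from an exponential wage distribution with mean β, jobs end at rate μ, and the worker accepts any offer at or above the reservation wage w₀. Raising w₀ buys a higher expected wage at the cost of higher expected unemployment, so the frontier slopes upward.

```python
# Stylized choice-set frontier under an assumed exponential offer distribution.
import math

gamma = 2.0   # interview/offer arrival rate (hypothetical)
mu = 0.1      # separation rate from jobs (hypothetical)
beta = 10.0   # mean of the exponential wage-offer distribution (hypothetical)

def frontier_point(w0):
    accept_prob = math.exp(-w0 / beta)   # P(offer >= w0)
    lam = gamma * accept_prob            # transition rate into employment
    u = mu / (lam + mu)                  # steady-state expected unemployment
    expected_wage = w0 + beta            # E[w | w >= w0] for exponential offers
    return u, expected_wage

points = [frontier_point(w0) for w0 in (0.0, 5.0, 10.0, 15.0)]
# Each step up in the reservation wage raises both expected unemployment and
# the expected accepted wage: the frontier slopes upward.
for (u1, w1), (u2, w2) in zip(points, points[1:]):
    assert u2 > u1 and w2 > w1
```

The slope between adjacent points is a discrete version of the unemployment trade-off: the expected-wage gain obtained per unit of additional expected unemployment.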
3. The Business Cycle

A major way in which groups differ is in their behavior and conditions over the business cycle. Typically a recession will hit some groups of workers harder than others, and
groups respond differently to the higher unemployment and reduced wages. These differences have been examined by a number of authors, but probably the most prominent analysis is Melvin Reder's theory
of occupational wage differences (1955). Reder's central point is that it is always possible for firms to convert low grade labor into high grade labor at a fixed cost of training. In response to an
across-the-board increase in the demand for all labor, wage rates for high grade workers will not rise. Instead, firms will promote lower grade workers into the higher positions. The numbers of
unemployed high grade workers will therefore decrease by a lower proportion than the numbers of unemployed low grade workers. Because of these changes, wages in the low grade labor market will rise
faster relative to wages in the higher grade labor market. Reder concludes that wage differentials will narrow during a peak period and widen during a recession. The results of this monograph suggest
that the labor markets do not behave as Reder concludes. As economic conditions improve, the unemployment rate of low grade workers declines relative to the unemployment rate of high grade workers,
as Reder concludes; but the wage differential increases instead of decreases. Even within the context of Reder's model, there are difficulties with Reder's conclusion. At any point in time, firms
have two sources of high grade workers. They can hire currently unemployed high grade workers at the prevailing wage rate or else they can promote and train lower grade workers. The decision as to
which source to use depends on the relative wages of high and low grade workers and the cost of training low grade workers to fill high grade positions, which is assumed constant in the model. Firms
will only switch from hiring high grade workers to promoting low grade workers if the wage differential increases. Therefore, the relative decline in unemployment of low grade workers cannot occur
unless the wage differential increases. The behavior of different groups over the business cycle may also be analyzed in terms of supply and demand using the results of Chapters 2 and 3. The supply
behavior of workers is determined by the response of the reservation wage to changes in economic conditions. Labor markets with search differ substantially from simple non-search auction markets in
that participants change their market behavior directly in response to overall conditions. With a simple market, the supply curve is determined by the numbers willing to work at each wage. With the
tatonnement process, an auctioneer adjusts the wage until quantity demanded equals quantity supplied. In a market with search and wage dispersion, workers set reservation wages in response to the
expected wage in the market and unemployment, so that any change in market conditions is reflected directly in the supply behavior of workers. There is therefore no supply curve in the ordinary
sense. At best there is a distribution of reservation wages, but this reservation wage distribution shifts in response to a shift in the demand for labor. Workers respond to a change in the
unemployment rate as follows (see Chapter 2, section 3):

∂w₀/∂u = [b - c - wₑ - u(wₑ + b - c)] / (λ + μ + i)
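A rough numerical reading of this response (the functional form below is an assumption made for illustration, not a quotation of the original equation): taking ∂w₀/∂u = (b - c - wₑ - u(wₑ + b - c))/(λ + μ + i), two worker types with the same value of wₑ + b - c but different values of b - c and wₑ show how a lower b - c together with a higher expected wage strengthens the response.

```python
# Stylized illustration; the functional form is an assumed reading of the
# worker-response expression, not the book's exact equation.
lam, mu, i, u = 1.0, 0.1, 0.05, 0.06   # hypothetical rates

def dw0_du(b_minus_c, we):
    return (b_minus_c - we - u * (we + b_minus_c)) / (lam + mu + i)

# Lower grade worker: higher b - c, lower expected wage.
low = dw0_du(b_minus_c=2.0, we=8.0)     # we + (b - c) = 10
# Higher grade worker: lower b - c, higher expected wage.
high = dw0_du(b_minus_c=-2.0, we=12.0)  # we + (b - c) = 10 (cancels out)

# Both responses are negative (higher unemployment lowers the reservation
# wage), and the higher grade worker's response is larger in magnitude.
assert low < 0 and high < 0
assert abs(high) > abs(low)
```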
The results of Chapter 3 show that higher grade workers (those with greater education or who are older) have lower values of b - c and of course also have higher expected wages. These differences tend to cancel out in the second expression, wₑ + b - c, but reinforce in the first expression. The response ∂w₀/∂u will therefore tend to be greater for those workers with higher unemployment valuations. They will adjust the reservation wage more in response to a change in unemployment, since the costs of unemployment are greater. Holding the transition rate λ constant, worker response to
a change in the expected wage is given by:

∂w₀/∂wₑ = λ / (λ + μ + i)

(See (2.22) for the more general worker response when λ also varies.) The term λ/(λ + μ + i) is less than 1 - u. Workers facing higher unemployment therefore adjust the reservation wage less in
response to a change in the expected or market wage rate. The firm response to changing market conditions is less clear because of the complicated comparative static results, cited in Chapter 2,
section 6. The response of the firm to a change in the product price is ambiguous. An increase in supply of labor to the firm shows up as an increase in the acceptance rate z. This may arise from
greater numbers of unemployed workers or lowered reservation wages of workers. The firm responds to these conditions by raising the minimum grade requirement. It would be highly interesting to find
out if the changes in grade requirement and wage offer differ between firms according to grade differential, but this result is not evident from the comparative static results. However, even an equal
shift in grade requirements for firms of all grade differentials would affect workers unequally. Workers with high grades would still be able to get job offers, although perhaps at lower wage rates,
while workers with low grades would find that job offers would decrease substantially. Essentially, a form of skill bumping in the labor market would occur, in which higher grade workers displace
lower grade workers. Lower reservation wages on the part of workers lead them to compete with workers with lower grades, reducing their opportunities. These results indicate that labor markets
operate very differently for workers with high versus low grades. In response to poor economic conditions, workers with high grades reduce their reservation wages more. They compete for jobs at firms with lower grade differentials and thereby stabilize their unemployment rates. In good economic times, unemployment among high grade workers is not reduced substantially. Instead, any reduction in
unemployment or increase in expected wages is translated into higher reservation wages. Over the business cycle, high grade workers face greater fluctuations in the wage rate rather than fluctuations
in unemployment. In contrast, low grade workers face larger fluctuations in employment and smaller fluctuations in wage rates. Their reservation wages change less in response to changes in
unemployment rates or expected wages. Fluctuations in economic activity are therefore mostly absorbed in fluctuations in unemployment. These observations remain true if the lower unemployment
valuations of lower grade workers are generated by minimum wages. If minimum wages put a lower bound on reservation wages, then lower grade workers cannot adapt to worsening economic conditions by
lowering their reservation wages. They must then face fluctuations in unemployment rates rather than fluctuations in wage rates. Over the course of the business cycle, we would expect to see an
increase in wage rate differentials as economic activity increases. At the same time, we would expect to see a decrease in unemployment rate differentials, as the unemployment rate of lower grade
workers declines relative to the rate for higher grade workers. Earnings are the product of wage rates and the proportion of the time employed, so earnings and wage rates will behave differently.
Earnings differentials will fluctuate more moderately than either wage rates or unemployment, and may increase or decrease over the business cycle. Since high grade and low grade workers experience
different mixes of wage rate and unemployment changes, their relative well-being over the business cycle depends on the valuation of unemployment in terms of earnings. We can use the worker's own
valuations by looking at the reservation wages. As economic activity increases, the reservation wage of higher grade workers increases relative to the reservation wage of lower grade workers. The
relative economic status of higher grade workers improves with an increase in economic activity. On this basis (rather than on the basis of earnings), inequality increases as economic activity
improves, although all workers are better off in terms of their economic prospects. Another consequence of these results is that an aggregate macroeconomic policy will always produce undesirable
effects for one group or the other. An antiunemployment policy will reduce unemployment among lower grade workers as intended but will boost wages for higher grade workers with little reduction in
their unemployment, causing eventual price increases in the economy. An anti-inflation policy will achieve lower wage rates among higher grade workers but at the cost of much higher unemployment
rates among lower grade workers, with only small reductions in their wage rates. This section has described the behavior of labor markets in disequilibrium over the business cycle. But in addition,
the labor markets for high and low grade workers differ in terms of long-run adjustment, as will be described in the next section. The general direction of adjustment is the same, though. Higher
grade labor markets tend to adjust through wage flexibility, while lower grade labor markets adjust through employment flexibility.
4. Wage Rigidity, Wage Resistance and the Aggregate Supply Curve

A central question in the study of macroeconomic systems is whether and how rapidly a system will eliminate excessive unemployment. In
a paper on the efficiency of search, Robert E. Hall (1979a) finds that the supply and demand for labor do not
influence the natural unemployment rate. The search procedures of workers and firms and the operation of a Walrasian system would return unemployment to the natural rate after a shift in demand or
supply. In this book, unemployed workers are also represented as engaged in search behavior. However, this section reaches a substantially different conclusion from Hall's. If the demand for labor
declines, the search behavior of workers (in particular their reservation wages) may not adjust sufficiently to return unemployment to its previous level. The results of this section may also be
related to early discussions of equilibrium unemployment. In the Keynesian macroeconomic system, a surefire way to get equilibrium unemployment is to assume wages are downwardly rigid. The labor
supply curve is then horizontal at the rigid wage, and wages fail to adjust downward sufficiently to eliminate any unemployment. This assumption produces an upward sloping aggregate supply curve, the
relation between national income and the price level it generates. The aggregate demand curve shows the relation between the price level and the level of national income consistent with equilibrium
in the commodity and financial markets. Under the Keynesian system, as the aggregate demand curve shifts left during a depression or recession, it moves to the left along the aggregate supply curve,
yielding lower national income. The price level fails to fall sufficiently to return the system to the previous level of employment and output. In contrast, the monetarist view is that there is
enough downward flexibility in prices and wages to return the economy eventually to a natural rate of employment and output, defined by the condition that the inflation rate does not change at those
levels. The long run aggregate supply curve is then vertical. The point of this section is that wage rigidity is unnecessary to obtain the Keynesian system. Instead, a much weaker condition, called
here wage resistance, is sufficient to produce an upward sloping aggregate supply curve. In a simple supply and demand model of the labor market in the absence of controls, an excess supply will lead
to unsatisfied sellers (workers) bidding down the wage. With wage rigidity, the horizontal section of the supply curve will prevent wages from being reduced further, even as unemployment increases.
The job search literature shows that a certain amount of unemployment can arise in a labor market without downward pressure on the wage rate. But search theory is also consistent with the following.
As labor demand declines, the wage rate falls, but perhaps not enough to reduce unemployment to its previous level. The labor market could therefore sustain high levels of unemployment without
eliminating the unemployment through wage reductions. The aggregate supply curve would be upward sloping in the lower region and price flexibility would be insufficient to return the system to a previous, lower level of unemployment. The positive but insufficient decline in wage rates as labor demand goes down may be called wage resistance. To understand how this wage resistance could come
about, note that the wage prevailing in a labor market is an average wage and not a single value. It is determined by the distributions of wage offers and reservation wages. A decline in the levels
of wage offers will bring about a decrease in the average wage, but complete adjustment and return to the previous level of unemployment require an approximately equal drop in the reservation wages
of unemployed workers. This may not happen. Workers may respond to a decline in the average or expected wage with a smaller decline in their reservation wage, so that unemployment increases. The
average wage and transition rate from unemployment to employment may be expressed in terms of the distributions of reservation wages and wage offers. Let v(w/θ₁)/θ₁ be the probability density function for wage offers, where the parameter θ₁ determines the level of offers. A decrease in θ₁ lowers the wage offers (a given percentile of wage offers occurs at a lower wage). Suppose the cumulative distribution function of the reservation wages among the unemployed is H(w/θ₂), where θ₂ is again a parameter. A decrease in θ₂ essentially shifts the distribution downward. If a worker's reservation wage were ten dollars and θ₂ goes from two to one, the worker will now accept any job paying at least five dollars. The transition rate from unemployment to employment is then:

λ = γ ∫₀^∞ H(w/θ₂) v(w/θ₁) dw/θ₁    (7.1)

where γ is the rate at which interviews take place and the remaining part of the expression on the right is the probability that an interview results in an accepted offer. The density of accepted wage rates is then:

H(w/θ₂) v(w/θ₁)/θ₁ · (γ/λ)

The average wage rate is:

∫₀^∞ w H(w/θ₂) v(w/θ₁)/θ₁ dw / (λ/γ)
Now suppose that the price level in the product market declines, so that wage offers decline. This is reflected by a decline in the parameter θ₁ in the wage offer density function. Complete adjustment to this decline with no change in unemployment requires that λ return to its former level (assuming no change in γ or the transition rate out of employment). This occurs if θ₁ = θ₂.
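This claim can be checked numerically with hypothetical functional forms (an exponential offer density v(x) = exp(-x) and reservation-wage distribution H(x) = 1 - exp(-x); both are illustrative assumptions, not the book's): scaling θ₁ and θ₂ by the same factor leaves λ unchanged and scales the average wage in proportion.

```python
# Numerical check of the scaling argument, using assumed exponential forms.
import math

def integrate(f, a, b, n=100000):
    """Simple trapezoid rule; accurate enough for these smooth integrands."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + k * h) for k in range(1, n)) + 0.5 * f(b))

gamma = 1.0

def v(x): return math.exp(-x)          # offer density at unit scale
def H(x): return 1.0 - math.exp(-x)    # reservation-wage cdf at unit scale

def lam_and_avg_wage(theta1, theta2):
    dens = lambda w: H(w / theta2) * v(w / theta1) / theta1
    lam = gamma * integrate(dens, 0.0, 60.0)
    avg = integrate(lambda w: w * dens(w), 0.0, 60.0) / (lam / gamma)
    return lam, avg

lam1, avg1 = lam_and_avg_wage(1.0, 1.0)
lam2, avg2 = lam_and_avg_wage(2.0, 2.0)
assert abs(lam1 - lam2) < 1e-4        # transition rate is unchanged
assert abs(avg2 - 2.0 * avg1) < 1e-3  # average wage scales with theta
```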
Substituting x = w/θ₁ into (7.1):

λ = γ ∫₀^∞ H(x) v(x) dx

The average wage in the market is then:

θ₁ ∫₀^∞ x H(x) v(x) dx / (λ/γ)

This is θ₁ times the former average wage level. Therefore a decline in wage offers and reservation wages of equal proportion yields a decline in the average wage of the same proportion with no
increase in unemployment. Now suppose that w₀, wₑ and λ all respond to a fall in the product price p. Then from (2.15), these responses are related as follows:

∂w₀/∂p = [λ/(λ + μ + i)] ∂wₑ/∂p + [(wₑ - w₀)/(λ + μ + i)] ∂λ/∂p

With complete adjustment, ∂λ/∂p is zero and, when θ₁ falls in proportion to the price level, the reservation wage and the expected wage change in the same proportion:

(∂w₀/∂p)/w₀ = (∂wₑ/∂p)/wₑ

But from the expression for ∂w₀/∂p when ∂λ/∂p = 0:

∂w₀/∂p = [λ/(λ + μ + i)] ∂wₑ/∂p

Therefore complete adjustment only occurs when:

w₀/wₑ = λ/(λ + μ + i)

By inspection of (2.15), this occurs if and only if b - c = 0. Then complete adjustment occurs and the labor market returns to the former unemployment level. Two other adjustment cases arise
depending on the value of b - c. If the term is positive, w₀ decreases by a smaller proportion than wₑ if ∂λ/∂p remains the same. Complete adjustment with a return to the former level of
unemployment is impossible. Instead, a decline in wage offers will be accompanied by a rise in unemployment, reflected by a decline in λ. Reservation wages will not decline sufficiently to return to the former level of unemployment. In this case, what prevents the system from reaching the former level of activity is not wage rigidity. Wages continue to decline as economic activity and employment
go down, but not enough. Instead, wage resistance prevents the adjustment. When wages decline in nominal terms, they also decline relative to the positive nonemployment benefits net of search costs,
b - c. This induces workers to keep the reservation wage up and risk a higher level of unemployment. Put differently, workers are unable to influence their expected unemployment except at the cost of
a large decline in the expected wage rate, so they choose to keep the reservation wage up. If b - c is positive, it follows further that an increase in wage rates will reduce the level of
unemployment; workers choose to raise their reservation wages by a lower proportion than expected wages go up. A genuine long run trade-off between the price level and economic activity would result
(assuming b - c stays fixed). In the other case, b - c is negative. The empirical evidence from Chapter 3 suggests that this occurs for males, older workers and workers with more education. In this
case, a decline in levels of wage offers makes unemployment relatively more costly to workers. They respond to the drop in wages by lowering their reservation wages by a greater proportion for a
given level of unemployment. The equilibrium level of unemployment must then decline in response to the fall in wage offers. This counterintuitive result has an equally counter-intuitive obverse. If
wage offers rise, unemployment will increase. The proportional change in reservation wages will be greater than the proportional change in expected wages if (J'A/ (Jp = 0, so that unemployment must
rise. In general, b - c will not be zero, nor will it have the same value for all groups. A shift in aggregate demand will therefore not leave the economy unaffected in the long run. Price and wage
flexibility will not return the economy to a state where all groups have their former levels of unemployment and real wages. A decline in the aggregate demand curve and a corresponding drop in wage
offers will eventually produce a lower proportional fall in wages and an increase in unemployment for those groups with positive values of b - c and will produce a higher proportional fall in wages
and decrease in unemployment for those groups with a negative value of b - c. These adjustments are similar to the changes that would occur over the business cycle. The above conclusions arise when
we limit consideration to changes in wage offer distributions that are simple proportional translations. That is, the density function of wage offers, drawn on a graph with a logarithmic horizontal
axis, is simply shifted to the right or left. With a change in aggregate demand, more complicated changes in the wage offer distribution could occur. Second, it is possible that the term b - c may
also change along with the general price level. If it has the same proportional change as the wage rates, then it may be shown that price and wage flexibility is sufficient to return
the economy to the former level of unemployment. This result holds whether b - c is positive or negative. The Keynesian system with wage rigidity arises because a nominal term stays fixed while real
terms change. Wage resistance similarly arises because a nominal term, b - c, stays fixed while real terms change. It differs from the Keynesian system in that the nominal variable is one stage
removed from observed economic variables.
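The three adjustment cases can be illustrated with a stylized reservation-wage rule, w₀ = ((μ + i)(b - c) + λwₑ)/(λ + μ + i). This form is an assumption chosen so that w₀/wₑ = λ/(λ + μ + i) exactly when b - c = 0; it is not necessarily the book's equation (2.15). Under it, a 10 percent fall in the expected wage produces a smaller proportional fall in the reservation wage when b - c > 0 (wage resistance), an equal fall when b - c = 0, and a larger fall when b - c < 0.

```python
# Hypothetical reservation-wage rule, assumed for illustration only.
lam, mu, i = 1.0, 0.1, 0.05   # hypothetical rates

def w0(we, b_minus_c):
    return ((mu + i) * b_minus_c + lam * we) / (lam + mu + i)

def prop_fall_in_w0(b_minus_c, we_hi=10.0, we_lo=9.0):
    """Proportional fall in w0 when the expected wage falls from we_hi to we_lo."""
    return 1.0 - w0(we_lo, b_minus_c) / w0(we_hi, b_minus_c)

wage_fall = 1.0 - 9.0 / 10.0  # the 10% fall in the expected wage

assert prop_fall_in_w0(+2.0) < wage_fall               # b - c > 0: wage resistance
assert abs(prop_fall_in_w0(0.0) - wage_fall) < 1e-12   # b - c = 0: full adjustment
assert prop_fall_in_w0(-2.0) > wage_fall               # b - c < 0: over-adjustment
```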
Chapter 8
Chronic Underemployment and Regression Towards the Mean

1. Introduction

The standard presumption regarding unemployment generated by search is that it is efficient (Robert Hall, 1979a; E. Prescott,
1975). But this conclusion is based on an incomplete notion of the role of search. Previous work on the subject, by not recognizing that search assigns workers to jobs, has neglected to study the
assignment brought about by search and has ruled out the phenomenon of regression towards the mean. This phenomenon distorts the assignment of workers to jobs in a systematic way and produces chronic
underemployment (excessive unemployment) of less skilled workers. Although the job matching literature (Boyan Jovanovic, 1979; Dale Mortensen, 1976) has incorporated some of the allocative role of
search, it has not specified the heterogeneous nature of the workers and firms to be assigned to each other, with the result that regression towards the mean cannot arise. This chapter explores the
nature of unemployment generated by search. The major question concerns whether this unemployment is greater or smaller than the economically efficient level. This question will be answered through
microeconomic analysis of the potential labor market trades that exist in the market. In conducting this analysis, an important distinction is introduced between direct and indirect trades. Direct
trades of labor for wages are generally exploited in the labor market. The possibility of underemployment arises in indirect trades of labor for labor, brought about by workers turning down current
jobs and thereby remaining available for other jobs. There are fundamentally two schools of thought regarding unemployment. There are those who are against it; and there are those who are for it.
Economists in the former group seek to show that equilibrium or sustained unemployment is possible and that appropriate policies exist that can reduce it. Economists in the latter group wish to
demonstrate that the unemployment generated by individual decision making is efficient and besides, no one can do anything about unemployment anyway. Corresponding to these two views, the discussion
often revolves around whether the unemployment is voluntary or not, this being regarded as a test of whether unemployment is optimal. But whether unemployment is voluntary or not depends on semantic
distinctions. In an important sense, unemployment generated by search is involuntary in spite of the presence of choice in the determination of the reservation wage. To confound the matter further,
choice can be present when there is too much unemployment (and in fact can cause the excessive unemployment), while involuntary unemployment can fall short of the optimal level. A sounder basis for
the consideration of over or underemployment is the existence of trades in the labor market that make some individuals better off while leaving other individuals no worse off. The absence of such
trades is of course a Pareto
criterion for an optimal allocation and is a natural principle to be used in organizing the discussion of various views of unemployment in the next section. Along with the question of what trades are
possible in the labor market, the next section also considers what role unemployment plays in bringing about those trades. Section 3 presents the definition of underemployment that will be used in
the investigation of chronic underemployment. This definition involves a comparison of a worker's contribution to production at a given firm with his or her potential contribution elsewhere (the
opportunity cost) if he or she remains unemployed. This definition is therefore directly founded upon the existence of advantageous trades in the labor market and can therefore be related to the
Pareto criteria for an optimal allocation of resources. Underemployment is viewed as a disaggregated phenomenon, applying to some groups in the economy but not others, rather than an aggregate
phenomenon holding for the whole labor market. Section 4 presents tests for underemployment that apply to groups defined by changes in marginal employment criteria, such as the reservation wage, wage
offer and grade requirement. Section 5 discusses regression towards the mean while section 6 relates the principle to labor market phenomena. Regression towards the mean implies that search
systematically distorts the assignment of workers to jobs in a way that leaves less skilled workers chronically underemployed according to the definition and tests developed in sections 3 and 4.
Section 7 compares the nature of the unemployment described in this chapter with previous theories of unemployment.
2. Advantageous Trades and the Economic Role of Unemployment

a. The simplest type of unexploited advantageous trade in the labor market is an exchange of labor for wages. In the standard Keynesian
definition, involuntary unemployment occurs when an increase in the prices of goods bought with wages, relative to current wage rates, leads to a state where both the quantity demanded of labor and
the quantity supplied exceed the current level of employment. That is, unemployment is involuntary when a greater trade of labor for wages can be brought about by raising prices relative to wages.
This notion of unemployment is firmly rooted in the microeconomic argument that both voluntary participants in an exchange are made better off by it. The existence of an alternative state of the
economy, perhaps brought about by moving up the Phillips curve towards more inflation, in which new exchanges of labor for wages take place, is evidence that the former state of the economy was
characterized by underemployment. The next question concerns the economic role unemployment plays in bringing about unexploited trades of labor for wages. It must be rather strange to think of
unemployment as playing a role in the economic system or solving a problem, especially since it entails such high costs to the individuals involved. But even in the simplest models, where it appears
only in disequilibrium, unemployment moves the economy back toward equilibrium and hence plays some role. The high costs of the unemployment are also consistent with a useful role for it: the costs
indicate who bears the burden of paying for this role and are not per se evidence of too much or too little unemployment. In a perfectly competitive system, unemployment acts to equate quantity of
labor demanded with quantity of labor supplied. That is, unemployment in a Walrasian general equilibrium system is an excess supply which leads to a downward adjustment in the wage rate and a
movement towards equilibrium, at which the quantity demanded equals the quantity supplied in all markets. Unemployment is therefore part of the
Chronic Underemployment
mechanism which keeps the economy operating close to equilibrium. In this kind of model, workers are homogeneous, as are goods in a perfectly competitive market. The relevant question in this kind of
a system is whether unemployment in fact performs the role assigned to it. In the neoclassical view, it does, just as the interest rate equates savings and investment in simple models. In the
Keynesian view, unemployment fails to move the economy towards equilibrium, or else the adjustment rate is too slow. The Keynesian and neoclassical systems have been so extensively analyzed that
there is little point in repeating the discussion (see recent work by D.F. Gordon, 1976; R.J. Gordon, 1976; A.G. Hines, 1980; and E. Malinvaud, 1977). b. The second possible type of trade in the
labor market is labor now for labor later. This type of trade arises in the model of aggregate labor supply developed by R.E. Lucas, Jr., and L.A. Rapping (1970). In their model, labor supply depends
on both current and expected future wage rates, as well as asset holdings and the expected real interest rate. The presence of both current and expected future wage rates reflects the authors' view
that laborers are engaged in the intertemporal transfer of work and leisure. That is, they seek to pattern their desired work and leisure episodes over time in a manner that maximizes their
consumption possibilities. If real wages now are greater than what is expected in the future, laborers will choose to work now and will schedule leisure for the future. Similarly, if no job shows up
now that matches what the worker expects in the future, the worker will remain unemployed, schedule the leisure now and the work later. Unemployment therefore allows the worker to take advantage of
current work opportunities to reallocate intertemporally the periods of work and leisure. Another aspect of this theory is that by remaining unemployed when no current job pays as much as the worker
expects in the future, the worker undertakes a form of investment. By doing so, the worker reserves his or her labor for future work opportunities where the value of the labor, in terms of its
marginal product, will presumably be greater. From the point of view of the economy as a whole, therefore, unemployment is necessary for the efficient intertemporal allocation of work and leisure. A
suitable definition of underemployment in the Lucas and Rapping model is that it occurs in a given period when employment opportunities are superior to those in some future period. That is, if the
value of a worker's production in the given period is greater than it will be in some future period, then an advantageous trade could be brought about in which labor now is increased and labor in the
future period is decreased. An implication of this theory is that overemployment can also take place, in which case more exchanges of labor for wages occur currently than are socially optimal.
Consider an economy in steady state with no inflation and a constant level of employment. Then government policy increases aggregate demand, prices of goods and wage rates. Workers respond by working
more currently, planning to take leisure in later periods. When later periods arrive, less labor is forthcoming even at previous real wage rates (the result of fewer acceptances on the part of
workers) and employment goes below its steady state level. The effect of the government policy is then to reallocate the production and output of the economy, without increasing the total long-run
output. If current production opportunities are worse than those in the future, the effect of the government policy is to reduce long-run total output. In this case, the increase in current labor
exchanges is no evidence that the previous state involved an inefficiently low level of employment. The current additional labor exchanges brought about by government policy are not created trade but
trade diverted
from the future. The cost of employing labor now is to reduce its availability in the future (in the form of higher unemployment) when production opportunities may be better; government policy would
then be inefficient. Unemployment also arises in the context of this model when workers' expectations of future earnings are incorrect. Suppose that workers falsely believe that future wage rates
will be greater than they are now. This misinformation or bias in expectations will lead workers to remain currently unemployed at a level which is unjustified by future production opportunities.
Socially inefficient underemployment is therefore possible in the model but disappears with rational expectations. This form of unemployment leaves little scope for government intervention. J.
Altonji and O. Ashenfelter (1980) examine the extent to which the difference between current and expected future real wages could account for fluctuations in unemployment. They find that rational
forecasts would produce only a constant difference, so that the fluctuations could not be attributed to that difference. There are several disturbing aspects of the Lucas and Rapping model. All
unemployment appears to arise from supply behavior of workers, although it is influenced by past and expected future levels of aggregate demand. Unemployment is relatively costless and appears as a
form of leisure. A spell of unemployment is not a loss of wages but a forced rescheduling of work and leisure episodes. Contrary to presumption, the proportion of the work force that does not seek to
work all the time, and is therefore in a position to reallocate intertemporally the periods of work and leisure, may be small. Although the theory may only describe a minority of the work force, it
captures the point that whether there is too little employment depends on an intertemporal comparison. c. Search models concern trades between employment now and higher wages or higher productivity
later. Following Stigler's early contributions, C.C. Holt (1970) and Dale Mortensen (1970) relate the theory of job search to unemployment and wage dynamics. According to the standard job search
model, workers stay unemployed rather than take any job that comes along in order to earn a higher wage. Unemployment therefore yields higher wages for workers. This makes sense from the individual
worker's point of view, but what is the source of the higher wages? The worker must somehow be more productive in the jobs that pay higher wages. The incorporation of firm behavior leads to the
theory of job matching developed by Mortensen (1976) and Jovanovic (1979). Unemployment then produces a more accurate matching of workers to jobs, which is the source of the greater productivity
necessary to pay for the higher wages. In the context of search, underemployment occurs when workers remain unemployed too long in looking for jobs. Since workers take into account the expected
duration of unemployment when they choose reservation wages, underemployment cannot occur for rational workers unless the increase in wages gained by remaining unemployed does not accurately reflect
the social gain in increased productivity, or unless workers generate externalities by remaining unemployed. In markets with search, unemployment plays the role of a productive activity that improves
the match of workers with jobs. Since search theory is the subject of Chapter 2, there is no need to go into it again here. The general view is that search unemployment is all "voluntary". Further,
since it results from optimizing behavior on the part of individuals, the resulting level of unemployment is presumed to be efficient. The efficiency of the natural rate (Prescott, 1975; Hall, 1979),
far from being the major result in search theory, disqualifies the approach from relevancy to the question of involuntary or excessive
unemployment, according to Hines (1980, p.144). This reputation for search theory is undeserved. Unemployment is voluntary only in a limited sense, and the worker's status is more accurately
portrayed by describing the choice set he or she faces. By raising the reservation wage, the worker can increase the expected wage and increase the expected level of unemployment. It is in this sense
that unemployment is voluntary; but this is really a rather limited sense of the term and is no indication of whether the unemployment is a matter for social concern more than private concern. First,
the worker has no choice over the trade-offs he or she faces. The alternative combinations of expected wage and expected unemployment level are determined by the labor market and are not subject to
individual choice. Second, only the expected levels are chosen; the actual levels of unemployment and earnings are the result of the outcome of the search process and are not directly under the
control or choice of the worker. To the extent that the worker always prefers work at the expected wage to continued unemployment, all actual unemployment is involuntary. Third, the actual times when
workers can exercise choice will be rare. These will occur when the worker faces a job offer that is slightly below the reservation wage. Here, the worker has the opportunity to reduce the
unemployment by accepting a wage which is substantially below the worker's expected wage. But job offers in that range may be unlikely occurrences, and the relevance of the choice open to the worker
will be limited to a minority of the time. Most of the time spent unemployed will be time spent waiting and will be an involuntary loss to the worker. Search unemployment is therefore not inherently
voluntary. More important than the involuntary nature of search unemployment, efficiency is not the inevitable conclusion from using the methods of search theory. Instead, this conclusion arises from
the particular way the theory has been developed. For example, Hall (1979), in his paper on the efficiency of the job-finding rate and unemployment, explicitly assumes that jobs and workers are
homogeneous. The consequence of this homogeneity is that the opportunity cost of a worker's employment at a particular firm is identical to the expected contribution to production at that firm, so
there is no divergence between private and social costs. The introduction of mutual search and systematic heterogeneity, as incorporated into the model in Chapter 2, leads to a model in which
unemployment plays the role of bringing about an assignment of workers to jobs. The notion of the unemployed as a reserve pool of workers can be made explicit: by withholding labor from a job with a
low wage, workers reserve their labor for employment at a firm where their contribution to production will be greater. In this type of model, excessive unemployment or underemployment is not
eliminated by the decentralized search decisions of firms and workers. This will be developed in the following sections. Another source of inefficiency in the level of search unemployment is
externalities generated by unemployed workers. In his presidential address to the American Economic Association, James Tobin (1972) suggests the possibility of search congestion (see also Phelps,
1972). Tobin argues that the entry of a worker into the condition of unemployment imposes externalities on other unemployed workers by lengthening the time it would take them to receive jobs, in
analogy to a queue at a counter. Such an argument neglects the reduction in search costs on the other side of the labor market brought about by unemployment. Search congestion is potentially present
in all labor markets, even without job rationing, but a market for interviews will tend to eliminate the congestion (see Sattinger, 1984, and also work on search congestion by P. Diamond, 1982,
Diamond and Eric Maskin, 1979, and Christopher Pissarides, 1983, 1984).
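The limited sense in which search unemployment is "voluntary" can be illustrated numerically: through the reservation wage, the worker chooses only the expected wage and the expected duration of search, not the realized outcomes. The uniform offer distribution and all numbers below are assumptions for illustration.

```python
import random

def search_outcomes(reservation_wage, wage_draws):
    """Expected accepted wage and expected number of offers sampled before
    acceptance, when every offer at or above the reservation wage is taken.
    The offer distribution is an illustrative assumption."""
    accepted = [w for w in wage_draws if w >= reservation_wage]
    p_accept = len(accepted) / len(wage_draws)
    expected_wage = sum(accepted) / len(accepted)
    expected_duration = 1 / p_accept       # geometric waiting time, in offers
    return expected_wage, expected_duration

random.seed(0)
offers = [random.uniform(0.5, 1.5) for _ in range(100_000)]

w_lo, d_lo = search_outcomes(0.8, offers)
w_hi, d_hi = search_outcomes(1.2, offers)
```

Raising the reservation wage raises the expected accepted wage and lengthens the expected spell of unemployment; the actual spell length and accepted wage remain outcomes of chance rather than objects of choice.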
3. Definitions of Underemployment

This section proposes some definitions of under- and overemployment that arise from the view of unemployment developed in the preceding section. Ideally, a definition
of underemployment should provide tests for the existence of possible trades that make some labor market participants better off without making anyone worse off. The tests should also identify which
decisions are suboptimal. These decisions include participation in the labor market, and choices of the reservation wage, firm grade requirement and wage offer. Basically, two kinds of trades are
possible in a labor market. The first is the direct trade of labor for wages. Workers may be employed more, and in exchange for the greater production receive higher incomes. Alternatively, more
individuals may decide to enter the labor market. Indirect trades arise when a worker declines to accept employment at a particular firm and thereby remains available for employment at some other
firm. Essentially, a trade takes place among firms, with the exchanged good being the worker's labor. There is of course no necessary contact between the two firms. The distinction is important to
the isolation of conditions under which underemployment takes place. Individual market behavior, reflected in the search procedures of workers and firms, tends to exploit all possible direct trades.
But individual decisions do not necessarily recognize or take account of indirect trades. This problem would not arise in a deterministic setting, where there would be only one possible value of the
marginal product for a worker. But in a labor market with search, there are many alternative employments for a given worker such that the grade and reservation wage requirements are satisfied.
Turning down a job where a worker's marginal product is low could then leave him or her available for employment where the marginal product is higher. For example, suppose a worker has an interview
at a firm where the grade requirement is substantially below the worker's grade but the wage offer is slightly below the worker's reservation wage. With the firm's search costs sunk and lost, it
would seem that the gains of the firm from the employment would exceed the slight loss to the worker of accepting the job at a wage below the value of being in the labor market. Then a direct trade
would seem to be desirable, although the labor market in this case would not bring it about. But the relevant comparison is between the value of the worker's marginal product at the firm in question
and the expected contribution of the worker to production at other firms, net of the search costs imposed by the worker. To make this comparison, we need a specific expression for this expected
contribution. Define PDV(w₀) as the expected present discounted value of the contribution of an unemployed worker's labor. This term includes two parts. When the worker is employed, the contribution of his or her labor to production is the value of the marginal product net of firms' search costs. When the worker is unemployed, the worker's labor contributes the nonemployment benefit net of the worker's search costs. This term, PDV(w₀), will be constructed in the same manner as M(w₀) in Chapter 2. The other necessary terminology is as follows. Let NMP, standing for the net marginal product, be the value of the marginal product of a worker at a job net of the firm's average search costs. Corresponding to the function L(w) in Chapter 2, let LMP(NMP) be the present value of future contributions of an employed worker's labor, given the worker's reservation wage w₀ and the value of NMP at the current
job. In analogy to (2.4) and (2.6), for a small period of time T, the following recursive relations hold:

    PDV(w₀) = (b - c)T + (1 - λT)PDV(w₀)e^(-iT) + λT·LMP(NMPᵉ)e^(-iT)        (8.1)

and:

    LMP(NMP) = NMP·T + (1 - μT)LMP(NMP)e^(-iT) + μT·PDV(w₀)e^(-iT)        (8.2)

In (8.1), NMPᵉ is the average value of NMP over all the firms at which the worker could be employed. The term LMP(NMPᵉ) in (8.1) may be found from (8.2) by rearranging and taking the limit as T approaches zero. Substituting LMP(NMPᵉ) into (8.1), rearranging and taking the limit as T approaches zero yields the following:

    iPDV(w₀) = [λ/(λ + μ + i)]·NMPᵉ + [(μ + i)/(λ + μ + i)]·(b - c)        (8.3)
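The limiting argument can be checked numerically: iterating the recursions (8.1) and (8.2) with a small period T converges to a fixed point whose flow value matches the closed form (8.3). The parameter values below (λ, μ, i, NMPᵉ, b, c) are arbitrary illustrations, not taken from the text.

```python
import math

def ipdv_closed_form(lam, mu, i, nmp_e, b, c):
    """(8.3): iPDV(w0) = [lam/(lam+mu+i)]*NMP^e + [(mu+i)/(lam+mu+i)]*(b-c)."""
    return (lam * nmp_e + (mu + i) * (b - c)) / (lam + mu + i)

def ipdv_by_iteration(lam, mu, i, nmp_e, b, c, T=1e-3, tol=1e-9):
    """Iterate the small-period recursions (8.1)-(8.2) to their fixed point
    and return the flow value i*PDV(w0)."""
    pdv = lmp = 0.0
    disc = math.exp(-i * T)
    while True:
        new_lmp = nmp_e * T + (1 - mu * T) * lmp * disc + mu * T * pdv * disc
        new_pdv = (b - c) * T + (1 - lam * T) * pdv * disc + lam * T * new_lmp * disc
        if abs(new_pdv - pdv) < tol and abs(new_lmp - lmp) < tol:
            return i * new_pdv
        pdv, lmp = new_pdv, new_lmp

# all parameter values below are arbitrary illustrations
params = dict(lam=2.0, mu=0.5, i=0.05, nmp_e=1.0, b=0.3, c=0.1)
flow_value = ipdv_by_iteration(**params)
target = ipdv_closed_form(**params)
```

The discrete fixed point approaches the continuous-time closed form as T shrinks, which is the substitution-and-limit step described in the text.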
This expression for iPDV(w₀) differs from the expression for iM(w₀) only in that the average net marginal product, NMPᵉ, replaces the average wage, wᵉ. This may be seen explicitly by reexpressing iPDV(w₀) as follows:

    iPDV(w₀) = iM(w₀) + [λ/(λ + μ + i)]·(NMPᵉ - wᵉ)        (8.4)
At first, it would seem that an appropriate test for underemployment would be to compare iPDV(w₀) for a worker with w₀, so that underemployment occurs when iPDV(w₀) > w₀. Then an additional worker's contribution to production would be more than adequate to pay for the worker's reservation wage, yielding a net gain. Also, from (8.4), the test would reduce to a comparison between NMPᵉ and wᵉ for the worker. This test would be analogous to the comparison between a worker's marginal product and his or her wage in a labor market without search. However, in a labor market without search, additional workers are available at the prevailing wage rate in the presence of unemployment. When search is present, there may not be additional workers available at the reservation wage w₀. Most of the workers in the labor market will be those for whom w₀ is strictly greater than the nonemployment benefit b. Additional workers will only be available at reservation wages that just equal their values of b, and these reservation wages will typically be greater than the average. Additional exchanges of labor for wages (a direct trade) are only possible for workers who are just indifferent as to whether to enter the labor market. Therefore the comparison between iPDV(w₀) and w₀, to see if additional workers should enter the labor market, will in general be irrelevant: in most cases there will not be any additional workers outside the labor force who would enter at that reservation wage. Instead of comparing iPDV(w₀) with w₀, it will be compared with the value of the marginal product
(hereafter the VMP) of a worker or workers at a particular firm. This comparison is intended to test for underemployment at the moment when the employment decision is made, when a firm, having
interviewed a given worker, must decide whether to extend an offer and the worker must decide whether to accept. Since the interview will already have taken place, the search costs are sunk and
irrelevant to the comparison. The relevant comparison for purposes of determining underemployment is therefore the VMP at the firm in question and the expected flow of contributions elsewhere net of search costs, iPDVᵉ. If for a worker or group of workers the average VMP exceeds iPDVᵉ, then workers are rejecting current offers too much (or too few offers are being extended to the group), too
many members of
the group in question are in the unemployed pool, and total production could be increased by having members of the group take current jobs rather than remain available for other jobs. An advantageous
trade can be brought about by reducing the unemployment of this group. In exchange for a reduction in employment of this group elsewhere, employment in the current job openings is increased. This
trade is therefore an indirect one. In most cases, the worker and firm in question will be indifferent to this trade. As will be demonstrated, the trade will be brought about by a reduction in the
worker's reservation wage or an increase in the firm's wage offer; since the derivatives of the respective objective functions with respect to these variables are zero, changes in profits or well-being are negligible to the firm and workers in question. Let us therefore define underemployment as occurring when the following condition holds for a group:

    VMPᵉ > iPDVᵉ
The value of the marginal product, VMP, used in the underemployment test may be expressed in terms of a firm's production function and search costs. From Chapter 2, section 5, the VMP of a worker with grade g at a given firm may be expressed in various ways as follows:

    VMP = pQ₁ + (pQ₂/n)(g - gᵉ)
        = w + cm/z + (pQ₂/n)(g - gᵉ)
        = w + (pQ₂/n)(gᵉ - g₀) + (pQ₂/n)(g - gᵉ)
        = w + (pQ₂/n)(g - g₀)        (8.5)
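The chain of equalities in (8.5) presupposes the identities pQ₁ = w + cm/z and cm/z = (pQ₂/n)(gᵉ - g₀). A quick numeric check, with all values hypothetical and chosen to satisfy those identities, confirms that the four forms coincide:

```python
def vmp_forms(p, Q1, Q2, n, g, g_e, g0, w, cmz):
    """The four expressions for VMP in (8.5); they coincide when
    p*Q1 = w + cm/z and cm/z = (p*Q2/n)*(g_e - g0)."""
    gd = p * Q2 / n                      # grade differential pQ2/n
    return (p * Q1 + gd * (g - g_e),
            w + cmz + gd * (g - g_e),
            w + gd * (g_e - g0) + gd * (g - g_e),
            w + gd * (g - g0))

# hypothetical numbers chosen to satisfy the two identities:
gd = 0.1                                  # pQ2/n with p=1, Q2=2, n=20
w, g0, g_e, g = 1.0, 3.0, 5.0, 6.0
cmz = gd * (g_e - g0)                     # cm/z = 0.2
forms = vmp_forms(p=1.0, Q1=w + cmz, Q2=2.0, n=20, g=g, g_e=g_e, g0=g0,
                  w=w, cmz=cmz)
```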
If a job offer is extended and accepted, the VMP for a worker must equal or exceed
the worker's reservation wage. The two alternative tests, iPDV versus w₀ and iPDV versus VMP, may now be compared. If the average reservation wage exceeds the average value of iPDV, then VMP > iPDV and underemployment must be present. But if the average reservation wage is less than the average value of iPDV, we cannot rule out underemployment. Therefore w₀ᵉ > iPDVᵉ is a sufficient but not necessary
condition for underemployment. The nature of the underemployment definition can now be partially indicated. It concerns indirect trades among alternative employers of a given group of workers. By
withholding their labor from a given employment, workers remain available for employment at other firms. In a labor market with search, a worker's VMP is not completely determined by his or her
grade. There will exist a range of firms at which offers would be extended and accepted; the VMP for the worker will differ among these firms. Therefore an advantageous indirect trade may exist if
the expected VMP elsewhere is large enough to cover the added search costs imposed on other firms by turning down a current offer. But these trades would never be brought about by bargaining between parties to an employment contract, since under the assumptions they will be indifferent to marginal changes in their employment criteria (wage offer, reservation wage and grade requirement). In a sense, the possibility of an advantageous indirect trade arises from an externality imposed on firms which are not party to the employment bargain between a worker and a firm at which the worker interviews. This externality arises from the search costs imposed by workers at various grade levels on prospective employers. Having set up the test comparison which will determine underemployment, the remaining question is to explain why the VMP of a worker at the current interviewing firm might systematically diverge from the NMPᵉ of the worker elsewhere.
4. Tests for Underemployment

Now let us consider the comparison between VMP and NMPᵉ for three distinct groups of workers. These groups are defined by marginal changes in the employment criteria of
workers and firms. For small changes in the reservation wage, wage offer or grade requirement, additional workers will be employed or not employed at particular firms. These additional workers will
constitute the groups for which the underemployment test will be conducted. a. The Reservation Wage. Suppose a given worker lowers slightly his or her reservation wage. Then among the firms at which the worker interviews, the worker will accept offers at the now lower reservation wage where before the worker rejected the offers. It is then possible to compare VMPᵉ at these firms with NMPᵉ of the worker elsewhere. Suppose the worker's grade is g. The general expression for the worker's VMP is given in (8.5). This VMP will vary among firms depending on the wage offer and grade requirement. Let VMP(g, w, y) be the average VMP for a worker of grade g at a firm with wage offer w and grade requirement y. Now VMPᵉ at the firms in question (the additional firms at which offers are accepted) is obtained by integrating VMP(g, w, y) over values of the grade requirement less than g and for values of the wage offer equal to w₀:

    VMPᵉ = [∫₀^g VMP(g, w₀, y) v(w₀, y) dy] / [-V₁(w₀, g)]        (8.6)

In this expression, v(w₀, y) is the density function of firms with wage offer w₀ (the given worker's reservation wage) and grade requirement y, and:

    -V₁(w₀, g) = -∂V/∂w₀ = ∫₀^g v(w₀, y) dy

Now let the average value of the grade differential pQ₂/n for firms with grade requirement y and wage offer w be expressed as GD(w, y). Then from (8.6):

    VMPᵉ = w₀ + [∫₀^g GD(w₀, y)(g - y) v(w₀, y) dy] / [-V₁(w₀, g)]        (8.7)

Comparing VMPᵉ with the expression for iPDV for the worker in (8.4), underemployment occurs whenever:

    [∫₀^g GD(w₀, y)(g - y) v(w₀, y) dy] / [-V₁(w₀, g)] > [λ/(λ + μ + i)]·(NMPᵉ - wᵉ)        (8.9)
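Test (8.9) can be evaluated numerically under simple assumptions: a constant grade differential GD and a uniform density of grade requirements below the worker's grade. Every number below is an assumption for illustration.

```python
def lhs_reservation_wage_test(GD, g, n_grid=10_000):
    """Average of GD*(g - y) over grade requirements y, assuming a uniform
    density of firms on [0, g] and a constant grade differential GD.
    This is the left side of test (8.9) under those assumptions."""
    ys = [(k + 0.5) * g / n_grid for k in range(n_grid)]
    return sum(GD * (g - y) for y in ys) / n_grid

def rhs_test(lam, mu, i, nmp_e, w_e):
    """Right side of test (8.9): [lam/(lam+mu+i)] * (NMP^e - w^e)."""
    return lam / (lam + mu + i) * (nmp_e - w_e)

# hypothetical numbers: if NMP^e = w^e for the group, the right side is
# zero, the positive left side dominates, and the test signals underemployment
lhs = lhs_reservation_wage_test(GD=0.1, g=6.0)
underemployed = lhs > rhs_test(lam=2.0, mu=0.5, i=0.05, nmp_e=1.0, w_e=1.0)
```

Whether the test signals under- or overemployment for a real group thus turns entirely on the sign and size of NMPᵉ - wᵉ relative to the grade-differential term.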
In this inequality, the left hand side is always positive. The test for underemployment for the worker is directly relevant to the employment variable subject to the decision of the worker. If
underemployment holds, then the worker's reservation wage should be lower. By lowering his or her reservation wage, the worker brings about an indirect trade between employment at firms which have
wage offers equal to the worker's reservation wage and at firms where the worker would end up if he or she continued searching. But the worker is indifferent to this trade. Since ∂M/∂w₀ = 0, a small change in the worker's reservation wage leaves the value of being unemployed in the labor market, M(w₀), unchanged. The potential beneficiaries of the indirect trades are the firms at which the
worker might end up if he
or she continues searching. But these potential beneficiaries are not even known when a worker considers accepting a job at or below the worker's reservation wage, are not involved in the decision to
accept the job offer, and have no way of influencing that decision. b. The Firm's Wage Offer. Now let us consider the changes in employment that are brought about by a change in the firm's wage
offer. By raising the wage offer w, a firm hires additionally those workers for whom the reservation wage just equals the wage offer. It is then possible to compare the VMPᵉ of these workers with their expected value of iPDV elsewhere. Integrating over grades of workers greater than or equal to the firm's grade requirement g₀, one obtains:

    VMPᵉ = w + [∫_{g₀}^∞ (pQ₂/n)(y - g₀) h(w, y) dy] / H₁(w, g₀)        (8.10)

where h(w, y) is the density function of workers with reservation wage w and grade y, and:

    H₁(w, g₀) = ∂H/∂w = ∫_{g₀}^∞ h(w, y) dy

For those workers for whom w₀ = w and g > g₀:

    iPDVᵉ = w + [∫_{g₀}^∞ [λ/(λ + μ + i)]·(NMPᵉ - wᵉ) h(w₀, y) dy] / H₁(w, g₀)

Hence underemployment for this group occurs when:

    ∫_{g₀}^∞ (pQ₂/n)(y - g₀) h(w, y) dy > ∫_{g₀}^∞ [λ/(λ + μ + i)]·(NMPᵉ - wᵉ) h(w₀, y) dy        (8.11)
In this test, the left hand side is always positive. Again, this underemployment test is directly relevant to the decision variable under the control of the firm. If underemployment exists for this
group, the firm should raise its wage offer. By doing so, it brings about an indirect trade between employment at the firm in question and potential employment elsewhere. In the case of
underemployment, the increase in the worker's VMP elsewhere by turning down the current job is insufficient to cover the added search costs imposed on other firms. c. The Firm's Grade Requirement. A
third way in which indirect trades may be brought about is by lowering the grade requirement for a firm. When the grade requirement declines by a small amount, the firm hires additional workers with
VMPs equal to w, the wage offer of the firm. To test for underemployment for this group of additional workers, it is necessary to find the expected value of iPDV for the group. Let iPDV(x, g₀) be the value of iPDV for workers with reservation wage x and grade g₀. The expected value of iPDV can then be derived by integrating over reservation wages less than or equal to the firm's wage offer w. Comparing VMP with iPDVᵉ, underemployment then occurs when:
    w - w₀ᵉ > [∫₀^w [λ/(λ + μ + i)]·(NMPᵉ - wᵉ) h(x, g₀) dx] / H₂(w, g₀)        (8.12)
In the above, w₀ᵉ is the average reservation wage for the group. It must be less than the firm's wage offer, w, so that the left side of this inequality is positive. When this test is satisfied, an
advantageous trade can be brought about by lowering the firm's grade requirement. For all three of the above tests, corresponding tests for overemployment arise by reversing the inequalities. The
implications for the employment criteria are also reversed. When overemployment occurs, the reservation wage should be raised, the wage offer should be lowered or the grade requirement should be
raised. Overemployment arises when current production opportunities for workers are inferior to the potential production opportunities for workers if they remain unemployed. Current unemployment is
therefore too low. All three tests for underemployment involve two terms. In (S.9), (S.11) and (S.12), the term on the left is positive. In (S.9) and (S.ll), the term on the left arises from the
excess of the worker's grade over the firm's grade requirement. It represents the contribution of the worker's above-minimum grade towards the sunk costs of search incurred by the firm. In (S .12),
the positive term arises from the difference between the wage offer of the firm and the average reservation wage of those workers with the minimum grade. These workers make no contribution towards
the firm's sunk search costs, so the difference W - Woe represents the average gain to workers of getting a job offer from the firm in question. In all three tests for underemployment, the term on
the right involves the difference NMPe - We. The next question concerns whether there is any reason to believe that NMPe - We is systematically positive or negative. At first, we might expect the
difference would turn out to be zero for all groups. Consider all workers at a given firm. From (8.6), the average VMP for all workers equals the wage offer plus the average search costs, so that
NMPe = We. Dividing workers up by firms, NMPe - We will be identically zero for all workers at a firm, whether the firm employs workers with a high grade or a low grade on average. If NMPe - We were
identically zero for all groups, then all three underemployment tests would hold, but only because of the presence of the fixed search costs. Then there would always exist some advantage to reducing
unemployment in order to reduce the search costs imposed on firms by continued unemployment of workers. But this is a rather weak argument for too high a level of unemployment. The major point to
make at this stage is that NMPe - We is not identically zero. This occurs because the labor force can be divided up in a different manner than by firm. For example, the tests for underemployment look
at groups of workers defined by reservation wage or by the grade, and these groups cut across firms. The next section examines the reasons why NMPe - We will not be identically zero for all groups,
even though it is zero for all workers employed at a particular firm.
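The regrouping point can be made concrete with a bare-bones numerical sketch (all numbers invented for illustration): at each of two firms NMPe = We holds exactly, yet regrouping the same workers by grade produces nonzero NMPe - We for the lowest- and highest-grade groups.

```python
# Two firms; at each firm the average net marginal product (VMP minus the
# average search cost) equals the firm's wage offer, so NMPe = We firm-by-firm.
# Regrouping the same workers by grade makes NMPe - We nonzero.

# Each worker: (grade, firm wage offer, VMP at that firm); search cost 2 each.
workers = [
    (1, 10, 10),  # low-grade worker at the low-wage firm
    (3, 10, 14),  # higher-grade worker at the low-wage firm
    (3, 20, 20),  # lower-grade worker at the high-wage firm
    (5, 20, 24),  # high-grade worker at the high-wage firm
]
SEARCH_COST = 2.0

def group_stats(group):
    """Average NMP (= VMP - search cost) and average wage for a worker group."""
    nmp = sum(vmp - SEARCH_COST for _, _, vmp in group) / len(group)
    wage = sum(w for _, w, _ in group) / len(group)
    return nmp, wage

# Grouped by firm: NMPe - We = 0 at both firms.
for wage_level in (10, 20):
    nmp, we = group_stats([x for x in workers if x[1] == wage_level])
    print(f"firm with wage {wage_level}: NMPe - We = {nmp - we:+.1f}")

# Grouped by grade: the lowest-grade group has NMPe < We, the highest NMPe > We.
for grade in (1, 3, 5):
    nmp, we = group_stats([x for x in workers if x[0] == grade])
    print(f"grade {grade}: NMPe - We = {nmp - we:+.1f}")
```

Grouping by firm both differences print +0.0; grouping by grade, the grade-1 group prints -2.0 (underemployed by the chapter's tests), the grade-3 group +0.0, and the grade-5 group +2.0.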
5. Regression Towards the Mean

The phenomenon that will explain why NMPe - We is not identically zero is regression towards the mean. The term refers to the behavior of conditional expectations in
bivariate distributions and is the basis for the use of the word regression in
Chronic Underemployment
linear estimation. The phenomenon is well-known in psychology and can be illustrated with the following example, cited by Christopher Jencks (1973, p.74). The husbands of women with IQ's of 120 have
average IQ's of 111. One might conclude that married women are smarter than their husbands, but it turns out that the wives with husbands of IQ 120 have average IQ's of 111 also. That is, we can get
a different comparison between IQ's of males and females depending on which group we look at. Alternatively, suppose that we can divide the population up by height. It might then happen that for each
height interval, the average IQ of males equals the average IQ of females. Despite this equality by height group, we can still find other groups (married couples for whom the wife's IQ is 120, or
married couples for whom the husband's IQ is 120) where the average IQ's are unequal. Clearly, the comparison between male and female IQ's, and in analogy the comparison between NMPe and We, depends
on how we slice up the population. To get closer to the relevance to labor markets, let us extend the foregoing example. Consider a male, George, with IQ of 120 married to a female, Marsha, of IQ of
111, corresponding to the expected IQ of spouses for him. Marsha, if she had not married George, would have expected to marry another fellow, in which case the average IQ of potential suitors would
have been below 111. George therefore observes that, comparing himself immodestly with Marsha's alternative suitors, he has an IQ which exceeds their average. Also, in dating prior to marriage,
George generally found that his IQ exceeded the average IQ of males courting the girl he was going out with. Only when the IQ of a girl exceeded, say, 130 would his IQ fall below the average of other
males wooing the girl in question. Similar observations hold for Ann, IQ 120, and her spouse Henry, IQ 111. (I hope I am not conveying a distorted view of American courtship in this example. Frankly,
George sounds like a loser.) Before returning to labor markets, let us consider the statistical phenomenon in more detail. Let N(μx, σx, μy, σy, ρ) be a bivariate normal distribution (Mood, Graybill and Boes, 1974, p.148). Then the conditional expectations for x and y are:

E(x|y) = μx + (ρσx/σy)(y - μy)

and:

E(y|x) = μy + (ρσy/σx)(x - μx)

Consider a particular value of x, say x₀, and let y₀ = E(y|x₀), the expected value of y given x₀. We want to compare x₀ with E(x|y₀) = x̄ = E(x|E(y|x₀)). From the expressions for the conditional expectations, we get x̄ = (1 - ρ²)μx + ρ²x₀. Therefore x̄ is a weighted average of the mean for all values of x and the original value x₀, with the weight depending on ρ. Let the relation between x̄ and x₀ be called the feedback line. This relation is shown in Figure 8.1. The slope of the feedback line is ρ². At the mean for x, x̄ = x₀. Above μx, x̄ regresses towards the mean, so that x̄ is below the value x₀ which generates it. Similarly, when x₀ is below the mean, x̄ again regresses towards the mean and lies above the original value x₀. If there is no correlation between x and y, ρ = 0 and x̄ = μx independent of what value x₀ takes. With complete correlation, ρ = 1 and x̄ = x₀.

[Figure 8.1: Regression Towards the Mean. The feedback line x̄ = E(x|E(y|x₀)), plotted against x₀, has slope ρ² and crosses the line of equality at the mean value of x.]

In the above statistical description, the x's may be taken to be the male IQ's, the y's are the female IQ's, and the bivariate normal distribution describes the joint distribution of husband and wife IQ's. With imperfect correlation, 0 < ρ < 1. The wife's expected IQ for a given husband's IQ is less than the husband's IQ when the husband's IQ is above average and is greater than the husband's IQ when the husband's IQ is below average. This is simply the consequence of regression towards the mean. Even though male and female IQ's may be identically distributed, their correlation in marriage makes it possible to identify groups for which the average IQ's are unequal, e.g., married couples where the wife's IQ is 120. More relevant to labor markets is the further result that the husband's IQ, x₀, will exceed the average for the wife's other suitors, x̄, when x₀ is above average and will fall below that average when x₀ is below average.
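The feedback relation can be checked directly by composing the two linear conditional expectations (a minimal sketch; means of 100 and equal standard deviations are assumed, and ρ = 0.55 is chosen because it reproduces the Jencks figures quoted above, 120 paired with an expected 111):

```python
# Composing the two conditional expectations of a bivariate normal reproduces
# the feedback line x_bar = (1 - rho**2)*mu_x + rho**2*x0.  Means and SDs are
# illustrative; rho = 0.55 matches the Jencks IQ figures (120 <-> 111).
mu_x = mu_y = 100.0
sigma_x = sigma_y = 15.0
rho = 0.55

def E_y_given_x(x):
    return mu_y + rho * (sigma_y / sigma_x) * (x - mu_x)

def E_x_given_y(y):
    return mu_x + rho * (sigma_x / sigma_y) * (y - mu_y)

assert abs(E_y_given_x(120.0) - 111.0) < 1e-9   # spouse of a 120 expects 111

for x0 in (80.0, 100.0, 120.0):
    x_bar = E_x_given_y(E_y_given_x(x0))        # feed y0 = E(y|x0) back
    closed_form = (1 - rho**2) * mu_x + rho**2 * x0
    assert abs(x_bar - closed_form) < 1e-9
    print(f"x0 = {x0:5.1f} -> x_bar = {x_bar:6.2f}")
# Above the mean x_bar < x0; below the mean x_bar > x0: regression to the mean.
```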
6. Chronic Underemployment

Now let us consider what implications regression towards the mean has for the operation of labor markets and in particular the comparison between NMPe and We for various
groups of workers. Instead of marriage between males and females, the labor market is characterized by matching between employees, differing in grade, and employers, differing in grade differential.
In the previous section, the variable x may be taken to correspond to the grade of a worker, whiley corresponds to the grade differential of the firm. Regression towards the mean then implies that a
worker with a high grade will tend to end up at a firm further down the list of firms, with a grade differential closer to the mean of firm grade differentials than the worker's grade is to the mean
of all workers' grades. Similarly, a worker with a low grade will tend to end up at a firm with grade differential closer to the mean for firm grade differentials. Regression towards the mean also
operates on the firm side. Each firm's work force will have an average grade. A firm with a high grade differential will have a work force with an average grade closer to the mean grade than the
firm's grade differential is to the mean of all firm grade differentials. Firms with low grade differentials will hire workers with an average grade closer to the mean for the whole labor force.
Putting together these two consequences of regression towards the mean, one obtains the feedback relation depicted in Figure 8.1. In that figure, Xo can now be taken to be the worker's grade, while x
represents the average grade of workers at the firm where the worker is employed. By the feedback relation, a worker with a high grade will tend to be employed at a firm where his or her grade
exceeds the average grade of the other workers. A worker with a low grade will tend to be employed at a firm where his or her grade is below the average of the other workers. These relations are
analogous to the comparison between a fellow's IQ and the average IQ of his competing suitors. For a given firm, the value of the marginal product for a worker with the firm's average grade level
will equal the wage offer plus the average search costs. A worker with a grade below the average at the firm will have a value of the marginal product which falls short of the sum of the wage plus
the average search costs, while a worker with a grade above the average will have a value of the marginal product which exceeds the wage plus the search costs. Since a worker with above average grade
will tend to be employed at a firm where his or her grade is greater than the firm's average grade, the average value of the marginal product, VMPe, will exceed the average search costs and average wage for the worker. For this worker, NMPe (the average value of the marginal product minus the average search costs) will exceed the average wage. Similarly, a worker with below average grade will tend to be employed where his or her grade is below the coworkers' average grade. Then the worker's NMPe will fail to cover the expected wage We. Returning to the results of section 4, the tests
for underemployment are all satisfied if NMPe - We is negative. Workers with grade levels below the average in the labor force therefore are on average underemployed in the economy, according to the
definitions presented in this chapter. This underemployment is not the result of any disequilibrium in the economy or any misinformation on the part of the workers or firms. Instead, it results from
the standard search behavior of firms and workers and can therefore be labeled chronic. Workers with grade levels above the average will have positive values of NMPe - We. The tests for
underemployment will therefore be ambiguous, since both sides of the test inequalities are then positive. Essentially, we are getting nonzero values of NMPe - We because we are dividing up the labor
force differently. Considering all workers at a given firm, the average grade is ge, the same as the average grade of coworkers at the firm, and NMPe = We. But taking a particular worker and
considering all firms at which the worker could end up, one reaches a different comparison. At some firms, the worker's grade will be below the average and the VMP for the worker will be less than
the average search costs plus the firm's wage. At other firms, the worker's grade will exceed the firm's average, and the VMP will exceed the search costs plus the wage. By looking at the firms at
which a given worker could end up, one can therefore obtain an inequality between NMPe and We. A more formal relation between NMPe and We can be derived as follows. Let (cM/Z)e be the average search costs and suppose the worker's grade is g. Using integration to obtain VMPe and (cM/Z)e, the difference NMPe - We can be shown to equal the following:

(8.13)    ∫∫_A (cM/Z) [(g - ge)/(ge - y)] v(x,y) dx dy

where the double integral extends over the set A of wage offers x and grade requirements y at which the worker could be employed, ge is the average grade of the firm's work force, and v(x,y) is the density of firms.
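The sign pattern of NMPe - We can be illustrated with a toy matching simulation. Everything here (the distributions, the band-matching rule, the parameter values) is invented for illustration and is not the book's model: workers match only firms whose grade level lies within a band of the worker's grade, and firm grade levels are concentrated near the mean.

```python
# Toy illustration of regression towards the mean in matching.  A worker can
# only match a firm whose grade level is within TAU of the worker's grade;
# firms near the mean are more numerous, so matches are pulled to the mean.
import numpy as np

rng = np.random.default_rng(0)
N, TAU = 200_000, 0.8

g = np.clip(rng.normal(size=N), -2.5, 2.5)   # worker grades

# Rejection sampling: redraw each worker's firm until it lies in the band.
y = rng.normal(size=N)
bad = np.abs(y - g) > TAU
while bad.any():
    y[bad] = rng.normal(size=int(bad.sum()))
    bad = np.abs(y - g) > TAU

# Coworker average grade ge: bin firms by grade level, average worker grades.
edges = np.linspace(-3.5, 3.5, 71)
bins = np.digitize(y, edges)
ge = np.empty_like(g)
for b in np.unique(bins):
    mask = bins == b
    ge[mask] = g[mask].mean()

low, high = g < -0.5, g > 0.5
print("mean(g - ge), below-average workers:", (g - ge)[low].mean())   # < 0
print("mean(g - ge), above-average workers:", (g - ge)[high].mean())  # > 0
```

In the chapter's terms a worker's NMP exceeds or falls short of the wage according as the worker's grade exceeds or falls short of the firm's average grade, so these group means reproduce the claimed under- and overemployment signs.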
Clearly, this term is not identically zero. When g exceeds ge the majority of the time, the term will come out positive, in which case NMPe > We. If instead g tends to be below ge most of the time,
which occurs when the worker's grade is below the average for the labor market, then the term is negative and NMPe < We. The expression in (8.13) indicates that under or overemployment of low grade
workers depends on their reservation wages. A particular low grade worker may have a very low reservation wage, below that of fellow workers of the same grade. The effect of the low reservation wage
is to increase the number of firms at which the worker could be employed and where his or her grade will be above the average grade for workers at that firm. Then the difference NMPe - We for this
worker could be positive and the worker might not be underemployed. Similarly, a high grade worker could have a high reservation wage relative to other workers with the same grade; the difference
NMPe - We for this worker could then be negative and the worker could be underemployed. On average, however, low grade workers will have negative values and high grade workers positive values of NMPe
- We. From (8.13), one can also view the underemployment of a worker or group of workers in terms of search costs. In looking for a job, an unemployed worker imposes search costs on firms that
interview him or her. The firm's hope to recoup this sunk cost lies in the possibility that if an offer is extended and accepted, the worker's grade exceeds the firm's grade requirement enough to pay
for the average search costs per employed worker. But a worker with grade below the average for all workers will tend to end up at a firm where his or her grade is below the firm's average grade.
Therefore the worker's contribution towards search costs will tend to fall short of the search costs imposed by the worker. In a sense, an externality occurs. An advantageous trade then takes place
if the unemployment rate among lower grade workers is reduced, by having firms that could employ these workers raise their wage offers or reduce their grade requirements. It is also possible to view
the phenomenon of regression towards the mean from the point of view of firms. Consider a firm employing workers with an average grade below the mean for the whole labor market. On average, these
workers, if they had not ended up at the firm in question, could have expected to end up at firms with higher wages, grade requirements, grade differentials and average grades. In other words, the
firm hiring in the low end of the labor market tends to be below the expectations of its workers. Similarly, firms with grade requirements above the average tend to gather, through the search
process, workers that otherwise would have ended up at firms with lower grade averages and wages. The alternatives open to the firm's employees, then, tend to be inferior to their current employment.
The general effect of regression towards the mean may be seen by comparing the assignment of workers to firms that comes about in a deterministic setting with the assignment that arises from search.
In the deterministic assignment studied in previous work, the k-th worker in order of decreasing grade is assigned to the k-th job in order of decreasing job characteristic, based on the assignment
principle that is relevant (comparative advantage, scale of resources or preferences). In the technology used in the search model, we may suppose that the job characteristic is the grade differential
and the assignment principle the scale of resources effect. But the exact assignment is impossible to achieve by search. If the k-th worker (out of, say, ten million) would only accept employment at
the k-th job, the duration of unemployment would be prohibitively long. Instead, there must be a range of jobs the k-th worker would be willing to take. In particular, the very first worker would be
unable to be assigned by the search procedures to the first job, and instead must accept a
spread of jobs extending further down the job list. For workers in the upper part of the list, the effect of search is to diffuse the assignment and push workers towards jobs closer to the mean of
the job list. At the bottom of the job list, search allows firms to have higher grade work forces than they would get with deterministic assignment. In order to have reasonably short job search,
workers at the middle to lower end of the worker list must choose reservation wages which would lead to their acceptance of jobs further down the job list. Since random search will lead some of these
workers into interviews at the firms low on the job list, these firms are able to attain higher grade work forces. For the same reasons, workers at the lower end of the labor list can expect to end
up higher than they would with deterministic assignment. Compared with the deterministic assignment, the general effect of search is to scrunch everybody towards the mean. Workers with grades above
and below the mean end up in jobs closer to the mean, while jobs with grade differentials above and below the mean tend to get filled with workers closer to the mean grade. The source of chronic
underemployment of workers with below average grades can now be understood in terms of the improvement in the assignment of workers to jobs that can be brought about. The reduction in these workers'
reservation wages tends to place them in jobs at firms with lower grade differentials, closer to the assignment brought about by a deterministic process. The increase in wage offers for firms
employing labor with below average grades has the same effect as a reduction in workers' reservation wages. The reduction in a firm's grade requirement lowers the average grade of the firm's work
force, bringing its employees more in line with the level corresponding to the deterministic assignment. The underemployment associated with the lower grade labor market arises because the random
search process leads workers, in their own self interest, to aim for jobs with higher grade differentials than they could get in a deterministic setting. Similarly, search leads firms with low grade
differentials to seek a work force with a higher grade average than they would get in the absence of search. But the search for better jobs and better workers is inefficient from the point of view of
the whole economy in that it worsens the assignment of workers to jobs. The policy changes which reduce the unemployment of the low grade workers also bring about an improvement in the accuracy of
the assignment and therefore lead to an increase in the overall level of production in the economy. In contrast, the consequence of search for firms and workers in the upper grade labor market is to
lower the matches they are able to achieve compared to a deterministic assignment. The effect of a reduction in unemployment is therefore to worsen the assignment rather than improve it. If instead
upper grade workers remained unemployed longer, they would be led to choose firms closer in grade differential to the level they would be assigned in a deterministic economy. By raising their
reservation wages, such workers would not take jobs at the lowest grade differentials but would remain available for employment at firms with higher grade differentials. Similarly, by reducing their
wage offers and raising their minimum grade requirements, firms simultaneously increase the level of unemployment and achieve a more accurate assignment of workers to firms. These changes raise the
average grade of workers at firms hiring upper grade workers. Whether this increase in unemployment is desirable depends on a comparison of the increased production arising from the more accurate
assignment, reflected in the positive value of NMPe - We, and the added search costs, reflected in the left-hand sides in the tests (8.9), (8.11) and (8.12).
An interesting feature of the policies that would reduce unemployment for low grade workers and raise unemployment for upper grade workers is that the wage offer and the reservation wage are required
to move in opposite directions. For low grade workers the wage offer goes up while the reservation wage goes down. One might think that raising a firm's wage offer would increase the average grade of
its work force, thereby worsening the assignment. While the firm's work force does get a higher average grade, it does so by hiring workers that otherwise would tend to be employed at firms with
still higher grade differentials. Apparently, the latter effect is more important in bringing about an improvement in the assignment. Also, for upper grade workers, the reservation wage goes up and
the wage offer goes down to bring about an increase in employment. Again, the decrease in the wage offer would seem to worsen the assignment by lowering the grade average of a firm's work force. But
in fact the policy turns away from the firm the better workers, who are then available to be employed at firms with higher grade differentials. At this point it is important to emphasize the
distinction between the size of the unemployment rates and whether these rates represent under or overemployment. The foregoing argument does not explain why unemployment rates would be higher for
low grade workers. The level of unemployment for such workers depends on the search strategy of firms hiring in low grade markets (i.e., the acceptance rate Z of such firms) and the unemployment
valuations of workers in the market, as reflected in the difference between their expected and reservation wages and in the job acceptance rate. Instead, the foregoing argument explains why the
resulting unemployment rate is too low or too high for particular well-defined groups in the labor market. It is clearly conceivable that the unemployment rate could be lower for the low grade
workers and yet constitute underemployment. Now let us consider the effects of public policy on the underemployment that has been described. First, suppose that in order to bring down the
unemployment rate of low grade workers, the government pursues a policy of increasing aggregate demand, either through fiscal or monetary policy. The effect would be to raise the level of the
marginal product across the board for workers. In the low grade labor market, wage offers rise while grade requirements probably decline, moving the labor market in the direction of the desired
assignment. However, in response to the higher average wages and the lower unemployment rates, workers raise their reservation wages, partially cancelling out any gain. In the high grade labor
market, the greater value of the marginal product will similarly lead firms to raise their wage offers and reduce their grade requirements, thereby worsening the assignment. Again, though, workers in
the high grade labor market will revise their reservation wages upward, preventing the assignment from worsening further. The overall effect of the aggregate policy is to reduce unemployment
differentially in both the high grade and low grade markets, depending on the response of workers in revising their reservation wages. But while the decline in unemployment is desirable in the low
grade market, the decline in the high grade market is probably not desirable. As in models with structural unemployment, the effect of the aggregate policy may be to reduce unemployment in one sector
(the low grade labor market) while simply raising the price and wage level in the other (the high grade labor market). An unavoidable trade-off between unemployment and inflation in the short run,
consistent with the Phillips curve, would then arise, to no one's surprise. In terms of labor market trades, the new exchanges of labor for wages brought about by an increase in aggregate demand are
not necessarily advantageous. The
reason is not that current reductions in unemployment rates will raise unemployment in the future, as in the Lucas and Rapping model. Instead, the current labor exchanges may bring about an
inefficient assignment of workers to jobs. By increasing the current level of employment, workers are locked into jobs for a while, and the jobs they are hired at may not be with the firms where
their value of the marginal product is the highest. That is, the reduction in the unemployment rate may bring about disadvantageous trades, whereby current output is increased at the cost of future
productivity levels. The trade-off brought about by an aggregate demand policy is therefore not between labor supply now and labor supply later but between output now and productivity later.
Nevertheless, the reduction in unemployment at the lower grade end of the labor market may outweigh the possible loss from the reduction in unemployment at the upper end. A superior labor market
policy would affect the lower and upper grade labor markets differentially. It would proceed by distorting the wage offer and grade requirement choices of the firms hiring in the low grade market in
such a way that unemployment is reduced, while leaving unaffected or having the opposite effect in the high grade labor market. The general conclusion from the disaggregated analysis that has been
conducted in this chapter is that macroeconomic policies, which affect labor markets in an undifferentiated manner, are an inappropriate response to the chronic underemployment that is generated by
search in ordinary economic times. The observations arising from regression towards the mean carry potential implications for quit and layoff behavior and the pattern of job changing for workers.
First, setting aside the search model as it has been developed, suppose workers are more likely to be laid off if their grade is closer to the minimum grade requirement of the firm at which they are
working. It follows that, everything else the same, workers with low grades will tend to be laid off more often, since they tend to be employed at firms where their grades are below the firm's
average. Similarly, workers with above average grades will be laid off less, since their grades will tend to be greater than the average for the firm at which they are working. Second, suppose
workers are more likely to quit if their alternatives elsewhere are on average better. Then firms with the expected grade of workers below the average for the whole labor market will find that they
have higher quit rates than firms with grade averages above the average for the labor market. The employees of the former firms will tend to have better alternatives elsewhere and employees of the
latter firms will have worse alternatives. Somewhat paradoxically, workers with above average grades will quit more often, since their grades will tend to be above the average for the firm at which
they are employed. So the greatest quits occur for firms hiring workers with below average grades and for workers with above average grades. These quits and layoffs will tend to improve the
assignment of workers to jobs over time. The quits of workers where their grade is above the average will eventually lead them to employment at firms with higher grade differentials, again closer to
the desired assignment.
7. Comparison with Previous Theories

Following the publication in Phelps' book (1970) of a collection of papers, economists have examined macroeconomic phenomena using microeconomic analysis,
demonstrating how unemployment and inflation arise from individual decision making. In particular, unemployment arises in a market with incomplete information and wage dispersion as workers search
for better jobs. But in promoting
search as an explanation for unemployment, economists have neglected the fundamental reason why search takes place: with both workers and firms heterogeneous, an arbitrary pairing of a worker with a
job will not in general produce a fruitful match. While job matching has been the subject of some work, the workers in such models have been assumed to be homogeneously heterogeneous, with the
likelihood of a match at a given firm being unrelated to any stated characteristic of the worker or firm. Matches then occur as a random variable at interviews. The irony of this development is that
the labor market is still being treated in an aggregate manner with no meaningful distinctions among various members of the labor force. The theory of chronic underemployment developed here differs
in nature from the Keynesian, Lucas and Rapping and efficient search unemployment that were discussed in section 1. First, chronic underemployment applies only to specific sections of the labor
market, with possible exceptions for particular workers. There is no test for under or overemployment of the labor force as a whole, and the problem of underemployment is not perceived as an
aggregate problem but a sectoral one. The tests for underemployment are derived directly from a consideration of what trades are advantageous in the economy. While direct trades of labor for wages
are generally exploited by the independent participants in the labor market, indirect trades are potentially advantageous. But the existence of these advantageous trades cannot influence the
on-going bargaining between firms and workers and can therefore persist in the labor market. Chronic underemployment is not generated by deficient aggregate demand as in the Keynesian system or
biased expectations of future earnings as in the Lucas and Rapping system. Instead, it is generated by the effect of search on the assignment of workers to jobs. Standard models of search behavior
have previously found the unemployment generated by search to be efficient and consistent with the maximization of output. But these search models have assumed ex ante homogeneity of workers and
firms, so that the phenomenon of regression towards the mean could not occur. With heterogeneous workers and firms, the assignment of workers to jobs is systematically altered by the operation of
search. Lower unemployment of low grade workers reduces the distortion in the assignment caused by search and also reduces the search costs. By reducing unemployment of low grade workers in the
current period, a direct trade of labor for wages is brought about. Workers and firms involved in this trade are indifferent to it, since it results from changes in employment criteria that maximize
their objective functions. Small marginal changes in the workers' reservation wages and in firms' wage offers and minimum grade requirements therefore leave the values of the objective functions
unchanged. However, these trades generate an externality for other firms in the low grade labor market, by improving the assignment of workers to jobs and reducing firm search costs. Search behavior
by firms and workers therefore does not bring about an efficient level of unemployment among low grade workers since the externalities are not taken into account.
Chapter 9

Summary

1. Ten Theoretical Conclusions

a. The derivation of the worker's reservation wage in Chapter 2, section 2, provides a nearly explicit expression for it:

iM = [(M + i)/(A + M + i)](b - c) + [A/(A + M + i)]We
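Taking the relation as reconstructed here at face value, it is a convex combination of the two payoffs. A minimal numerical sketch (all parameter values invented; A is the rate of leaving unemployment, M the rate of losing a job, i the discount rate):

```python
# The reservation-wage value iM as a weighted average of the unemployed
# payoff (b - c) and the employed payoff We.  All values are invented.
A = 0.40         # rate of leaving unemployment
M = 0.05         # rate of losing a job
i = 0.10         # discount rate
b_minus_c = 2.0  # net benefit of being unemployed in the labor market
We = 10.0        # expected wage when employed

w_unemp = (M + i) / (A + M + i)  # weight on the unemployed payoff
w_emp = A / (A + M + i)          # weight on the employed payoff
assert abs(w_unemp + w_emp - 1.0) < 1e-12  # a genuine weighted average

iM = w_unemp * b_minus_c + w_emp * We
print(f"iM = {iM:.3f}")

# With i = 0 the weights collapse to the long-run proportions of time spent
# unemployed, M/(A + M), and employed, A/(A + M); discounting shifts weight
# toward the unemployed state (lowering iM here, since b - c < We).
iM_no_discount = (M / (A + M)) * b_minus_c + (A / (A + M)) * We
print(f"iM with i = 0: {iM_no_discount:.3f}")
```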
The reservation wage is a weighted average of the expected benefits of being employed, We, and the benefits or losses of being unemployed in the labor market, b - c. The weights are close to the
long-run proportions of time spent in the two states but are modified by the presence of the discounting rate. The presence of discounting reduces the benefit of being employed and raises the weight
for the unemployed state because the benefits of being employed occur later. The expression for the reservation wage allows one to use the envelope theorem to calculate the impact of a change in a
parameter or in labor market conditions on either the well-being of the worker, as measured by iM, or the supply behavior of the worker, as reflected in the reservation wage Wo.

b. The trade-off between the expected wage and unemployment that the worker can achieve by varying the reservation wage is given in (2.11) and (3.1) as:

(∂We/∂Wo)/(∂A/∂Wo) = -(We - Wo)/A

or in terms of the unemployment rate u in (3.4) as:

(∂We/∂Wo)/(∂u/∂Wo) = (We - Wo)/(u(1 - u))
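Trade-off relations of this form, (∂We/∂Wo)/(∂A/∂Wo) = -(We - Wo)/A and (∂We/∂Wo)/(∂u/∂Wo) = (We - Wo)/(u(1 - u)), as reconstructed here, can be checked on a fully specified example. The sketch below uses exponential wage offers, so the truncated mean is We = Wo + 1/θ, and a steady-state unemployment rate u = M/(M + A); the functional forms and parameter values are invented for illustration:

```python
# Finite-difference check of the wage/unemployment trade-off identities
# with exponential wage offers (rate theta).  All values are invented.
import math

theta = 0.5   # wage-offer distribution parameter
k = 0.8       # offer arrival rate
M = 0.05      # job loss rate

def A(w0):   # rate of leaving unemployment: arrivals times acceptance prob
    return k * math.exp(-theta * w0)

def We(w0):  # truncated-exponential mean: memorylessness gives w0 + 1/theta
    return w0 + 1.0 / theta

def u(w0):   # steady-state unemployment rate
    return M / (M + A(w0))

w0, h = 3.0, 1e-6
dWe = (We(w0 + h) - We(w0 - h)) / (2 * h)   # central differences
dA = (A(w0 + h) - A(w0 - h)) / (2 * h)
du = (u(w0 + h) - u(w0 - h)) / (2 * h)

# (2.11)/(3.1):  (dWe/dWo)/(dA/dWo) = -(We - Wo)/A
print(dWe / dA, -(We(w0) - w0) / A(w0))
# (3.4):         (dWe/dWo)/(du/dWo) = (We - Wo)/(u(1 - u))
print(dWe / du, (We(w0) - w0) / (u(w0) * (1 - u(w0))))
# Both printed pairs agree to numerical precision.
```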
These results appear to arise as a general feature of truncated distributions. They permit one to find the rate at which workers are willing and able to substitute expected wage rates for expected
unemployment in the labor market in terms of a few magnitudes.

c. Mutual search by workers and firms provides a mechanism by which the assignment of workers to firms is brought about without
self-selection or complete information. Workers search for wages which exceed their reservation wages while firms seek workers with grades that exceed the firms' grade requirements. The result is
that a given worker can only be employed at a limited number of firms. Mutual search explains in a natural way the wage dispersion faced by workers. Workers face overlapping markets, so they could be
employed at a range of firms with different wage offers and with different values of the marginal product.

d. The firm search behavior is represented in the two conditions (2.34) and (2.35):

VMPe = pQe = w + cM/Z

and:

VMPmin = pQe - cM/Z = w
The first condition states that the firm's value of the marginal product for a worker with the average grade will equal the wage plus the average search costs per worker. The second condition states
that the value of the marginal product for a worker with the minimum requirement is just equal to the wage, so that for such a worker the firm gains no extra output to cover the sunk search costs.
With search, the firm therefore has two marginal productivity conditions. The firm satisfies the second condition at all times, probably by varying the grade requirement. The first condition only
holds on the average over time. At one point in time, the value of the marginal product for workers can therefore exceed or fall short of the wage rate for a neoclassical firm.

e. Chapter 3 describes three types of unemployment valuations. The first is the unemployment trade-off, noted in b above and given in (3.4). Next is the unemployment premium in (3.5):

(∂[(1 - u)We]/∂Wo)/(∂u/∂Wo) = ((1 - u)We - Wo)/u
This measures the amount that a worker must receive in order to be willing to experience an increase in unemployment. The unemployment cost, given in (3.6), is (We - wo)/u and is the cost to a worker
of a week's unemployment. These expressions allow us to describe simultaneously the trade-offs workers are able and willing to achieve and the various types of labor market behavior. The possible
values of the measures are represented in Figure 3.2. f. Chapter 4 presents the explicit expressions for the time spent in one state of a two state continuous time Markov process. The results appear
in (4.1) through (4.6). They provide the means of estimating the employment inequality arising from constant transition rates and the contribution of choice to inequality in the distribution of
employment. g. The ratio w_0/w_e for a truncated Pareto distribution is constant while the difference w_e - w_0 for a truncated exponential distribution is constant (Chapter 5, section 2). These are rather simple results, but they allow us to find the values of the parameters for wage offer distributions from w_0 and w_e and from these parameters the inequality in wage offers facing groups of
workers. h. The accepted wage rate density is given by H(w)v(w), where v(w) is the density function of wage offers and H(w) is the cumulative distribution function for reservation wages. The
distribution of accepted wage rates is therefore a mixture of the distributions of wage offers and reservation wages. The resulting distribution may resemble neither of the two distributions that
produce it. In particular, both the distributions of wage offers and reservation wages could be single-tailed, and yet the accepted wage rate distribution would be two-tailed. i. Because of wage
resistance, price flexibility may not move the economy back to the natural or full employment level of activity. Wage rigidity, which is sufficient to produce a Keynesian macroeconomic system, is an
unnecessarily strong condition. Wage resistance arises when reservation wages in a labor market do not fall sufficiently to return the labor market to the former level of unemployment. The aggregate
supply curve would then be upward sloping, yielding a Keynesian system. Wage resistance occurs when b - c, the net benefit of being unemployed in the labor market, is
positive; reservation wages then fall by a smaller proportion than market or expected wages. j. Regression towards the mean distorts the assignment of workers to jobs, as described in Chapter 8.
Higher grade workers end up in jobs at firms with grade differentials below where the worker would end up in a deterministic assignment. Higher grade differential firms get workers with a lower
average grade than they would get with a deterministic assignment. Regression towards the mean produces a chronic underemployment of lower grade workers, which is defined as occurring when the
average value of the marginal product for a group of workers exceeds the present discounted value of the workers' future contributions to production. Essentially, the definition compares the
contribution to production at a firm where a worker or group of workers has a job offer with the expected contribution if the worker remains unemployed. For lower grade workers, too few take current
job offers and remain unemployed, thereby imposing search costs on other firms. The definition of underemployment and overemployment in terms of opportunity costs identifies advantageous trades that
are present in the economy.
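Result g can be checked numerically. In the sketch below (illustrative parameters only, not those of Chapter 5: shape α = 5 and scale 1 for the Pareto offers, rate 1 for the exponential offers), the mean accepted wage above a reservation wage w_0 is w_e = αw_0/(α - 1) for Pareto offers, so w_0/w_e = (α - 1)/α for every w_0, while for exponential offers the memoryless property gives w_e - w_0 = 1/λ:

```python
import math
import random

def pareto_offer(alpha, k, rng):
    # Inverse-CDF draw from a Pareto distribution with scale k and shape alpha.
    return k / (1.0 - rng.random()) ** (1.0 / alpha)

def mean_accepted(offers, w0):
    # Mean of the wage offers at or above the reservation wage w0.
    accepted = [w for w in offers if w >= w0]
    return sum(accepted) / len(accepted)

rng = random.Random(0)
alpha, k = 5.0, 1.0
pareto_offers = [pareto_offer(alpha, k, rng) for _ in range(200_000)]
for w0 in (1.2, 1.5, 2.0):
    ratio = w0 / mean_accepted(pareto_offers, w0)
    print(f"Pareto: w0 = {w0:.1f}, w0/we = {ratio:.3f} (theory {(alpha - 1) / alpha:.3f})")

exp_offers = [-math.log(1.0 - rng.random()) for _ in range(200_000)]  # exponential offers, rate 1
for w0 in (0.5, 1.0):
    gap = mean_accepted(exp_offers, w0) - w0
    print(f"exponential: w0 = {w0:.1f}, we - w0 = {gap:.3f} (theory 1.000)")
```

The constancy of w_0/w_e (Pareto) and of w_e - w_0 (exponential) across different reservation wages is what permits the distribution parameters to be recovered from w_0 and w_e alone.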
2. Ten Empirical Results a. The direct estimates of Chapter 3, section 4, in Tables 3.1 to 3.3, show that the reservation wage works as predicted by the job search theory. Raising the reservation
wage increases expected wage rates while also reducing expected employment. Workers may therefore bring about a trade-off between the expected wage and the expected level of employment. b. The
various estimates in Chapter 3 indicate that for most workers the unemployment premium is positive. The cost of a week's unemployment exceeds foregone earnings, and workers are located at point A in
Figure 3.2. The unemployment valuations generally increase with age and education and are sharply higher for workers with four years of high school or one year or more of college. Negative
unemployment premiums occur for some, notably younger workers, but may be caused by the presence of the minimum wage. c. The procedure developed in Chapter 4, section 2, indicates substantial
heterogeneity in transition rates. The distribution of transition rates from unemployment to employment are shown in Tables 4.1 and 4.2, and transition rates from employment to unemployment are shown
in Tables 4.3 and 4.4. Heterogeneous transition rates could account for the evidence Clark and Summers cite against search and the Markov process as a description of labor market behavior. d.
Calculations using the employment density functions developed in Chapter 4, section 4, show that the Markov process generates substantial inequality in the distribution of employment. Also,
inequality is greater for higher levels of unemployment, though not much, and this inequality is unavoidable through decreases in a worker's reservation wage. e. Depending on the group, choice
accounts for 0.046 to 0.42 of the inequality in the distribution of employment, using the square of the coefficient of variation. Choice here shows up as heterogeneous transition rates. These figures
are obtained by comparing calculated and actual distributions of employment, reported in Table 4.13. In half of the cases, choice accounts for less than 20 per cent of the inequality. For all
workers, from Table 4.15, choice and the pooling of different demographic groups together contribute 23.7 per cent of inequality.
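The inequality generated by constant transition rates alone (results c through e) can be illustrated with a small simulation of the two-state continuous time Markov process; the rates, horizon and sample size below are arbitrary illustrations, not the estimates of Tables 4.1 to 4.4:

```python
import random

def employed_fraction(lam, mu, T, rng):
    # Fraction of [0, T] spent employed in a two-state continuous-time Markov
    # process with transition rates lam (unemployment -> employment) and
    # mu (employment -> unemployment); every worker starts employed.
    t, employed, time_employed = 0.0, True, 0.0
    while t < T:
        stay = min(rng.expovariate(mu if employed else lam), T - t)
        if employed:
            time_employed += stay
        t += stay
        employed = not employed
    return time_employed / T

def cv2(xs):
    # Squared coefficient of variation: variance divided by squared mean.
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs) / (m * m)

rng = random.Random(1)
T, n = 10.0, 5_000
low_u = [employed_fraction(2.0, 0.5, T, rng) for _ in range(n)]   # ~20% unemployment
high_u = [employed_fraction(0.5, 0.5, T, rng) for _ in range(n)]  # ~50% unemployment
print(f"low unemployment:  mean employment {sum(low_u)/n:.3f}, CV^2 {cv2(low_u):.4f}")
print(f"high unemployment: mean employment {sum(high_u)/n:.3f}, CV^2 {cv2(high_u):.4f}")
```

Even though every simulated worker faces identical rates, realized employment fractions disperse, and the dispersion is larger in the higher-unemployment configuration, in line with result d.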
f. Using a Pareto distribution of wage offers with α = 5 and a normal distribution of reservation wages with coefficient of variation 0.5, choice contributes 48 per cent of the inequality in
accepted wage rates. This result is obtained by comparing the square of the coefficient of variation of the wage offer distribution with the square of the coefficient of variation for the accepted
wage rate distribution. g. The joint distribution of employment and earnings, in Table 6.1, exhibits two characteristic features. The distribution of employment contributes to inequality by adding to
the lower tail a number of workers with less than full employment. In the upper tail, almost all workers are employed full time, so that the upper tail is determined by the upper tails of wage offer
curves. h. Inequality in earnings approximately equals the sum of inequality in wage rates and employment, using the variance of logarithms or the square of the coefficient of variation. This may be
seen in Tables 6.2 and 6.3 using data from the U.S. Census of the Population and in Tables 6.4 to 6.8 using data from the Employment Profiles. The contribution of employment inequality to earnings
inequality is about 8.5 per cent using the square of the coefficient of variation and 20 per cent using the variance of logarithms, which weights low earnings more heavily. i. Choice generates less
inequality in earnings than in either employment or wage rates, since an increase in the reservation wage raises the expected wage while lowering the expected employment. Inequality arising from
choice rises to about 50 per cent of all inequality for older white males. The proportion is much less for black males and is negligible for most females. These calculations are based on the
assumptions that there is a linear relation between the logarithm of a worker's expected wage and the worker's reservation wage and that wage offers have a Pareto density function. These results are
presented in Tables 6.9 to 6.12. j. Job search generates 30 to 50 per cent of inequality for disaggregated groups. This inequality would arise among otherwise identical workers and constitutes a type
of uncertainty facing workers. Job search, operating through the distributions of employment and wage rates, therefore contributes more to inequality than choice for almost all groups.
3. Six Remaining Tasks a. The treatment of transitions from employment to unemployment in the search model of Chapter 2 is inadequate. Movements between the two states are partly determined by
employers through layoffs and partly by workers through quits. Yet from either the firm's or the worker's point of view all movements are exogenous. This exogeneity is assumed in the calculation of
the worker's and firm's objective functions and in the use of a Markov model. Probably quits and layoffs could be handled by treating separations in the same manner as hires. That is, at some point
in time after employment commences, the firm offers to continue the employment if the worker satisfies certain minimum requirements, while the worker accepts if the wage offer to continue exceeds
some reservation level. Both firm and worker would then influence the separation decision. But given the firm's minimum performance requirement and the worker's reservation continuing wage, the
separation would be exogenous to both. b. This monograph places an enormous burden on reservation wages. They indicate a worker's flow of benefits from being in the labor market and can be used along
with other data to infer unemployment valuations, supply behavior, the
distribution of wage offers, impacts of unemployment and relative economic status. Chapter 3 uses data from Employment Profiles of Selected Low-Income Areas to calculate the unemployment valuations
for different groups. My experience with the household data from this source is that they are not entirely reliable. It would be desirable therefore to obtain accurate information on reservation
wages from another source and either confirm or contradict the results derived here. c. In Chapters 4 and 6, labor force participation decisions need to be separated from transitions between
employment and unemployment in the determination of the distributions of employment and earnings. This monograph has essentially combined the states of unemployment and out of the labor force for
previously employed workers, yet movements into and out of the two states make distinct contributions to inequality. Furthermore, measures of inequality remain uncorrected for changes in labor force
participation in response to labor market conditions. d. The study of wage offer distributions in Chapter 5 relies on conjecture, supposition and simulation. While some conclusions may be obtained
through such analysis, it is clearly incomplete without empirical estimation of structural parameters, as in the work of Kiefer and Neumann (1981a) and Flinn and Heckman (1982a). e. A sufficient
data base should allow the complete decomposition of inequality by choice, random outcomes, labor force participation decisions and ex ante worker differences. Chapter 6 provides only a partial
decomposition by choice and random outcomes for individual groups. f. A number of policy implications remain completely unexplored. Minimum wages interfere with the assignment of workers to jobs by
constraining firms in their wage offers and workers in the possible values of their reservation wages. Workers are unable to achieve a trade-off between expected earnings and expected unemployment.
Firms with low grade differentials end up with labor that has higher productivity elsewhere. Also, unemployment compensation needs to be studied in terms of its influence on the assignment of workers
to jobs. Unemployment compensation policies clearly influence the reservation wages of workers and thereby alter the trades that are brought about in the labor market. An optimal level of
unemployment compensation could be determined in terms of its influence on the welfare of the unemployed versus the underemployment it generates.
4. Three New Directions a. Job search theory explains how unemployment can arise in labor markets through the search behavior of workers. The theory seems to imply that this unemployment is
voluntary, since it is determined by worker choice. But the presence of choice does not make unemployment costless, nor does it imply that all unemployment is efficient. This monograph demonstrates
that, contrary to trivializing unemployment, job search theory may be used to estimate the costs of unemployment. By extending the theory of search to simultaneous search by firms and workers, this
monograph redirects the theory to explore how search assigns workers to jobs in a probabilistic economy. The extended theory provides a natural explanation for wage dispersion facing workers and the
source of the returns to continued search. b. Despite developments in the last fifteen years, the microeconomic foundations of unemployment have not been adequately developed. Search theory
supposedly provides a microeconomic explanation of unemployment. But economists have failed to make meaningful distinctions among workers or firms and so have failed to
describe the connection between unemployment and the assignment of workers to jobs. In particular, the phenomenon of regression towards the mean and its distortion of the assignment have not been
studied. This monograph provides solid microeconomic foundation for the study of unemployment. Underemployment is defined in terms of the opportunity costs of the employment of labor in current jobs.
This definition may be used to identify advantageous trades and find how current decisions must be altered to obtain an efficient level of unemployment for a group. c. Previously, the study of
inequality emphasized ex ante differences among workers as a source of inequality. This monograph instead studies the inequality that would arise among otherwise identical workers. Choice and the
random outcomes of job search appear to contribute more to inequality than differences in education or age. Yet these sources have very different implications for the nature of inequality and its
social costs. The monograph directs the study of inequality towards these neglected sources and develops a formal analysis of what was previously an unexplained residual. It supports previous work in
demonstrating that earnings differences play an allocative role in the economy, even when differences occur for otherwise identical workers.
Abowd, John and Orley Ashenfelter, 1979. Unemployment and Compensating Wage Differentials. Working Paper No. 120, Industrial Relations Section, Princeton University. Abowd, John and Orley
Ashenfelter, 1981. Anticipated Unemployment, Temporary Layoffs, and Compensating Wage Differentials. In: Rosen (1981), 140-170. Aitchison, J. and J.A.C. Brown, 1957. The Lognormal Distribution.
Cambridge University Press, Cambridge. Akerlof, G.A. and B.G.M. Main, 1981. Pitfalls in Markov Modelling of Labor Market Stocks and Flows. Journal of Human Resources, 16(1), 141-151. Altonji, A. and
O. Ashenfelter, 1980. Wage Movements and the Labour Market Equilibrium Hypothesis. Economica, 47 (August), 217-245. Andersen, Per Kragh, 1982. The Counting Process Approach to the Statistical
Analysis of Labour Force Dynamics. Unpublished paper, Statistical Research Unit, Danish Medical and Social Science Research Council. Also printed in Neumann and Westergaard-Nielsen (1984), 1-15.
Ashenfelter, Orley and James Blum, 1976. Evaluating the Labor Market Effects of Social Programs. Industrial Relations Section, Princeton University. Axell, Bo, 1974. Price Dispersion and Information:
An Adaptive Sequential Search Model. Swedish Journal of Economics, March, 55-76. Axell, Bo, 1976. Prices Under Imperfect Information: A Theory of Search Market Equilibrium. University of Stockholm,
Stockholm. Azariadis, C., 1975. Implicit Contracts and Underemployment Equilibrium. Journal of Political Economy, 83(6), 1183-1202. Baily, M.N., 1974. Wages and Employment Under Uncertain Demand.
Review of Economic Studies, 41, 37-50. Baily, M.N., 1979. Comments and Discussions. Brookings Papers on Economic Activity, (1), 67-70. Barro, R.J. and H. Grossman, 1971. A General Disequilibrium
Model of Income and Employment. American Economic Review, 61 (March), 82-93. Basmann, R. and G. Rhodes, 1982. Advances in Econometrics, Vol. I. JAI Press, Greenwich. Beach, Charles M., 1977. Cyclical
Sensitivity of Aggregate Income Inequality. Review of Economics and Statistics, 59(1), 56-66. Becker, Gary, 1962. Investment in Human Capital: A Theoretical Analysis. Journal of Political Economy, 70
(5), part 2, Supplement, 9-49. Bellman, Richard E. and Stuart E. Dreyfus, 1962. Applied Dynamic Programming. Princeton University Press, Princeton. Blumen, I., M. Kogan and P. McCarthy, 1955. The
Industrial Mobility of Labor as a Probability Process. Cornell Studies of Industrial and Labor Relations, Vol. 6, Cornell University Press, Ithaca.
Bourguignon, F., 1979. Decomposable Income Inequality Measures. Econometrica, 47(4), 901-920. Bowen, William G. and T. Aldrich Finegan, 1969. The Economics of Labor Force Participation. Princeton University Press, Princeton. Brunner, K. and A. Meltzer, 1976. Editors: The Phillips Curve and Labor Markets. North-Holland Publishing Co., Amsterdam. Budd, Edward C. and T.C. Whiteman, 1978. Macroeconomic Fluctuations and the Size Distribution of Income and Earnings in the United States. In: Griliches et al. (1978), 11-27. Burdett, Kenneth, 1978. The Theory of Employee Search and Quit Rates. American Economic Review, 68(1), 212-220. Burdett, Kenneth, 1981. A Useful Restriction on the Offer Distribution in Job Search Models. In: Studies in Labor Market Behavior: Sweden and the United States. Proceedings of a Symposium at the Institute for Economic and Social Research, Stockholm. Burdett, Kenneth and Dale Mortensen, 1978. Labor Supply Under Uncertainty. In: Ehrenberg (1978), 109-157. Burdett, K., N. Kiefer, D. Mortensen and G. Neumann, 1984. Earnings, Unemployment and the Allocation of Time over Time. In: Neumann and Westergaard-Nielsen (1984). Chiswick, Barry R., 1974. Income Inequality. Columbia University Press for the NBER, New York. Chiswick, Barry and Jacob Mincer, 1972. Time-Series Changes in Personal Income Inequality in the United States from 1939, with Projections to 1985. In: Schultz (1972). Clark, Kim and Lawrence Summers, 1979. Labor Market Dynamics and Unemployment: A Reconsideration. Brookings Papers on Economic Activity, (1), 13-60. Coleman, Thomas, 1983. A Dynamic Model of Labor Supply Under Uncertainty. Unpublished manuscript, University of Chicago. Cowell, Frank A., 1980. On the Structure of Additive Inequality Measures. Review of Economic Studies, 47(3), 521-531. Cox, D.R. and H.D. Miller, 1965. The Theory of Stochastic Processes. John Wiley and Sons, New York. Davies, J.B. and A.F. Shorrocks, 1978. Assessing the Quantitative Importance of Inheritance in the Distribution of Wealth. Oxford Economic Papers, 30(1), 138-149. Diamond, P.A., 1982. Wage Determination and Efficiency in Search Equilibrium. Review of Economic Studies, 49, 217-227. Diamond, P.A. and Eric Maskin, 1979. An Equilibrium Analysis of Search and Breach of Contract, I: Steady States. Bell Journal of Economics, (10), 282-316. Ehrenberg, Ronald G., 1977. Editor: Research in Labor Economics, Vol. 1. JAI Press, Greenwich. Ehrenberg, Ronald G., 1978. Editor: Research in Labor Economics, Vol. 2. JAI Press, Greenwich. Feldstein, Martin, 1976. Temporary Layoffs in the Theory of Unemployment. Journal of Political Economy, 84(5), 937-957. Feller, William, 1971. An Introduction to Probability Theory and its Applications, Vol. 2, 2nd edition. John Wiley and Sons, New York. Flinn, Christopher J. and James J. Heckman, 1982a. Models for the Analysis of Labor Force Dynamics. In: Basmann and Rhodes (1982).
Flinn, Christopher J. and James J. Heckman, 1982b. Comment on "Individual Effects in a Nonlinear Model: Explicit Treatment of Heterogeneity in The Empirical Job Search Model." Unpublished Paper,
University of Chicago. Flinn, Christopher J. and James J. Heckman, 1982c. New Methods for Analyzing Structural Models of Labor Force Dynamics. Journal of Econometrics, 18(1), 115-168. Friedman,
Milton, 1953. Choice, Chance and the Personal Distribution of Income. Journal of Political Economy, 61(4), 277-290. Gabriel, K.R., 1959. The Distribution of the Number of Successes in a Sequence of
Dependent Trials. Biometrika, 46, 454-460. Garfinkel, Irwin and Robert Haveman, 1977a. Earnings Capacity, Economic Status and Poverty. Journal of Human Resources, 12 (Winter), 49-70. Garfinkel, Irwin
and Robert Haveman, 1977b. Earnings Capacity, Poverty and Inequality. Academic Press, New York. Goldberger, Arthur S., 1980. Abnormal Selection Bias. Working Paper No. 8006, SSRI Workshop Series, University of Wisconsin. Goodman, Leo, 1961. Statistical Methods for the Mover-Stayer Model. Journal of the American Statistical Association, 56(296), 841-868. Gordon, Donald F., 1976. A Neo-Classical
Theory of Keynesian Unemployment. In: Brunner and Meltzer (1976). Gordon, Robert J., 1973. The Welfare Cost of Higher Unemployment. Brookings Papers on Economic Activity, (4), 133-195. Gordon, Robert
J., 1976. Aspects of the Theory of Involuntary Unemployment, a Comment. In: Brunner and Meltzer (1976). Gramlich, Edward M., 1974. The Distributional Effects of Higher Unemployment. Brookings Papers
on Economic Activity, (2), 293-336. Griliches, Z., Wilhelm Krelle, Hans-Jurgen Krupp and Oldrich Kyn, 1978. Income Distribution and Economic Inequality. Halstead Press, New York. Hall, Robert E.,
1972. Turnover in the Labor Force. Brookings Papers on Economic Activity, (3), 709-756. Hall, Robert E., 1979a. A Theory of the Natural Unemployment Rate and the Duration of Unemployment. Journal of
Monetary Economics, 5(2), 153-164. Hall, Robert E., 1979b. Comments and Discussions. Brookings Papers on Economic Activity, (1), 64-67. Hartog, Joop, 1981. Wages and Allocation Under Imperfect
Information. De Economist, 129(3), 311-323. Heckman, James J., 1979. Sample Selection Bias as a Specification Error. Econometrica, 47(1), 153-161. Heckman, James J., 1981. Heterogeneity and State Dependence.
In: Rosen (1981), 91-139. Heckman, James J. and G. Borjas, 1980. Does Unemployment Cause Future Unemployment? Definitions, Questions and Answers from a Continuous Time Model of Heterogeneity and
State Dependence. Economica, 47 (August), 247-283. Heckman, James J. and Burton Singer, 1982. The Identification Problem in Econometric Models for Duration Data. In: Hildenbrand (1982), 39-77.
Hildenbrand, W., 1982. Editor, Advances in Econometrics. Cambridge University Press, Cambridge. Hines, A.G., 1980. Involuntary Unemployment. In: Malinvaud and Fitoussi (1980). Holt, Charles C., 1970.
Job Search, Phillips' Wage Relation, and Union Influence: Theory and Evidence. In: Phelps (1970), 53-123.
Holt, Charles C., 1979. Comments and Discussions. Brookings Papers on Economic Activity, (1), 61-63. Howard, Ronald, 1960. Dynamic Programming and Markov Processes. Cambridge Technology Press of MIT,
Cambridge. Hurd, Michael, 1980. A Compensation Measure of the Cost of Unemployment to the Unemployed. Quarterly Journal oj Economics, 95(2), 225-243. Isaacson, Dean L. and Richard W. Madsen, 1976.
Markov Chains: Theory and Applications. John Wiley and Sons, New York. Jencks, Christopher, 1973. Inequality. Harper Colophon Books, New York. Johnson, Harry G., 1973. The Theory oj Income
Distribution. Gray-Mills Publishing Co., London. Jovanovic, Boyan, 1979. Job Matching and the Theory of Turnover. Journal of Political Economy, 87(5), 972-990. Kakwani, Nanak C., 1980. Income
Inequality and Poverty. Oxford University Press for the World Bank, New York. Karlin, Samuel and Howard M. Taylor, 1975. A First Course in Stochastic Processes, second edition. Academic Press, New
York. Kiefer, Nicholas and George Neumann, 1979a. An Empirical Job Search Model with a Test of the Constant Reservation Wage Hypothesis. Journal of Political Economy, 87(1), 84-108. Kiefer, Nicholas
and George Neumann, 1979b. Estimates of Wage Offer Distributions and Reservation Wages. In: Lippman and McCall (1979). Kiefer, Nicholas and George Neumann, 1981a. Individual Effects in a Non-Linear
Model: Explicit Treatment of Heterogeneity in the Empirical Job Search Model. Econometrica, 49(4), 965-979. Kiefer, Nicholas and George Neumann, 1981b. Structural and Reduced Form Approaches to
Analyzing Unemployment Durations. In: Rosen (1981), 171-185. Lancaster, Tony, 1979. Econometric Methods for the Duration of Unemployment. Econometrica, 47, 939-956. Lancaster, Tony and Andrew
Chesher, 1981. Simultaneous Equations with Endogenous Hazards. Economic Research Paper No. 84, University of Hull, England. Also printed in: Neumann and Westergaard-Nielsen (1984), 16-44. Lancaster, Tony and Andrew Chesher, 1983. An Econometric Analysis of Reservation Wages. Econometrica, 51(6), 1661-1676. Lippman, Stephen A. and James J. McCall, 1976. The Economics of Job Search, A Survey. Economic Inquiry, Part I, June 1976; Part II, September 1976. Lippman, Stephen A. and James J. McCall, 1979. Editors, Studies in the Economics of Search. North-Holland Publishing Co.,
Amsterdam. Lucas, Robert and Leonard Rapping, 1970. Real Wages, Employment and Inflation. In: Phelps (1970), 257-305. Lundberg, S., 1982. Household Labor Supply with Quantity Constraints. Paper
presented at the Workshop on Labour Market Dynamics, Sandbjerg, Denmark. Also printed in: Neumann and Westergaard-Nielsen (1984), 219-237. Lydall, Harold, 1959. The Distribution of Employment
Incomes. Econometrica, 27, 110-115. Malinvaud, E., 1977. The Theory of Unemployment Reconsidered. Basil Blackwell, Oxford. Malinvaud, E. and Jean-Paul Fitoussi, 1980. Editors, Unemployment in Western
Countries. Macmillan, London. Mandelbrot, Benoit, 1960. The Pareto-Levy Law and the Distribution of Income. International Economic Review, 1(2), 79-106.
Mandelbrot, Benoit, 1962. Paretian Distributions and Income Maximization. Quarterly Journal of Economics, 76(1), 57-85. Marshall, A.W. and I. Olkin, 1979. Inequalities: Theory of Majorization and its Applications. Academic Press, New York. Marston, Stephen T., 1976. Employment Instability and High Unemployment Rates. Brookings Papers on Economic Activity, (1), 169-203. Mendershausen, Horst,
1946. Changes in Income Distribution During the Great Depression. National Bureau of Economic Research, New York. Metcalf, Charles E., 1969. The Size Distribution of Personal Income During the
Business Cycle. American Economic Review, 59(4), 657-668. Mincer, Jacob, 1974. Schooling, Experience and Earnings. Columbia University Press for the National Bureau of Economic Research, New York.
Mine, H. and S. Osaki, 1970. Markovian Decision Processes. American Elsevier Publishing Co., New York. Mirer, Thad W., 1973a. The Distributional Impact of the 1970 Recession. Review of Economics and
Statistics, 55(2), 214-224. Mirer, Thad W., 1973b. The Effects of Macroeconomic Fluctuations on the Distribution of Income. Review of Income and Wealth, 19(4). Mirer, Thad W., 1979. The Utilization
of Earning Capacity. Review of Economics and Statistics, 61(3), 466-469. Mood, A.M., F.A. Graybill and D.C. Boes, 1974. Introduction to the Theory of Statistics, third edition. McGraw-Hill Publishing
Co., New York. Mortensen, Dale T., 1970. A Theory of Wage and Employment Dynamics. In: Phelps (1970), 167-211. Mortensen, Dale T., 1976. Job Matching Under Imperfect Information. In: Ashenfelter and
Blum (1976), 194-232. Mortensen, Dale T. and George Neumann, 1984. Choice or Chance? A Structural Interpretation of Individual Labor Market Histories. In: Neumann and Westergaard-Nielsen (1984),
98-131. Neumann, G. and N.C. Westergaard-Nielsen, 1984. Editors: Studies in Labor Market Dynamics. Springer-Verlag, Berlin. Oi, Walter, 1962. Labor as a Quasi-Fixed Factor. Journal of Political
Economy, 70(6), 538-555. Perry, George E., 1972. Unemployment Flows in the U.S. Labor Market. Brookings Papers on Economic Activity, (2), 245-278. Phelps, E.S., 1970. Editor, Microeconomic Foundations
of Employment and Inflation Theory. W.W. Norton and Co., New York. Phelps, E.S., 1972. Inflation Policy and Unemployment Theory: The Cost-Benefit Approach to Monetary Planning. Macmillan, London.
Piore, M.J., 1979. Editor, Unemployment and Inflation. M.E. Sharpe, Inc., White Plains, New York. Pissarides, Christopher, 1983. The Allocation of Jobs Through Search: Some Questions of Efficiency.
Centre for Labour Economics Discussion Paper No. 156, London School of Economics. Pissarides, Christopher, 1984. Search Intensity, Job Advertising and Efficiency. Journal of Labor Economics, 2(1),
128-143. Polachek, S., 1981. Occupational Self Selection: A Human Capital Approach to Sex Differences in Occupational Structure. Review of Economics and Statistics, 63(1), 60-68. Prescott, E., 1975.
Efficiency of the Natural Rate. Journal of Political Economy, 83(6), 1229-1236.
Reder, Melvin W., 1955. The Theory of Occupational Wage Differentials. American Economic Review, 45(5), 833-852. Reder, Melvin W., 1964. Wage Structure and Structural Unemployment. Review of Economic
Studies, 31 (October), 309-322. Ridder, G., 1982. The Statistical Analysis of Single-Spell Duration Data. Report AE 6/82, Faculty of Actuarial Science and Econometrics, University of Amsterdam. Also
printed in Neumann and Westergaard-Nielsen (1984), 45-73. Rosen, Sherwin, 1978. Substitution and the Division of Labour. Economica, 45, 235-250. Rosen, Sherwin, 1981. Editor, Studies in Labor Markets.
University of Chicago Press for the National Bureau of Economic Research, Chicago. Rothschild, M., 1973. Models of Market Organization with Imperfect Information: A Survey. Journal of Political
Economy, 81(6), 1283-1308. Salant, S.W., 1977. Search Theory and Duration Data: A Theory of Sorts. Quarterly Journal of Economics, 91(1), 39-57. Sattinger, Michael, 1975. Comparative Advantage and
the Distributions of Earnings and Abilities. Econometrica, 43 (May), 455-468. Sattinger, Michael, 1977. Compensating Wage Differences. Journal of Economic Theory, 16(2), 496-503. Sattinger, Michael,
1978. Comparative Advantage in Individuals. Review of Economics and Statistics, 60(2), 259-267. Sattinger, Michael, 1979. Differential Rents and the Distribution of Earnings. Oxford Economic Papers,
31(1), 60-71. Sattinger, Michael, 1980. Capital and the Distribution of Labor Earnings. North-Holland Publishing Co., Amsterdam. Sattinger, Michael, 1983. Distribution of the Time Spent in One State
of a Two State Continuous Time Markov Process. Memo 1983-0, Institute of Economics, Aarhus University, Denmark. Sattinger, Michael, 1984. Search Congestion, With a Numerical Example. Working Paper
No. 170, State University of New York at Albany. Schultz, T.W., 1972. Editor, Investment in Education. University of Chicago Press, Chicago. Shorrocks, A.F., 1980. The Class of Additively
Decomposable Inequality Measures. Econometrica, 48(3), 613-625. Shorrocks, A.F., 1982. Inequality Decomposition by Factor Components. Econometrica, 50(1), 193-211. Singer, Burton and S. Spilerman,
1976. Some Methodological Issues in the Analysis of Longitudinal Surveys. Annals of Economic and Social Measurement, 5(4), 447-474. Smith, Ralph E., 1977. A Simulation Model of the Demographic
Composition of Employment, Unemployment and Labor Force Participation. In: Ehrenberg (1977), 259-303. Smith, Ralph E., Jean E. Vanski and Charles C. Holt, 1974. Recession and the Employment of
Demographic Groups. Brookings Papers on Economic Activity, (3), 737-758. Spence, A.M., 1974. Market Signaling. Harvard University Press, Cambridge. Stigler, George J., 1961. The Economics of
Information. Journal of Political Economy, 68(3), 213-225. Stigler, George J., 1962. Information in the Labor Market. Journal of Political Economy, 70(5), Part 2, Supplement, 94-105.
Stiglitz, J.E., 1974. Equilibrium Wage Distributions. Cowles Foundation Discussion Paper No. 375. Takacs, L., 1960. Stochastic Processes. Methuen and Co., London. Telser, Lester G., 1978. Economic
Theory and the Core. University of Chicago Press, Chicago. Tinbergen, Jan, 1951. Some Remarks on the Distribution of Labour Incomes. International Economic Papers, 195-207. Tinbergen, Jan, 1956. On
the Theory of Income Distribution. Weltwirtschaftliches Archiv, 155-174. Tinbergen, Jan, 1975. Income Distribution: Analysis and Policies. North-Holland Publishing Co., Amsterdam. Tobin, James, 1972.
Inflation and Unemployment. American Economic Review, 62(March), 1-18. Toikka, R.S., 1976. A Markovian Model of Labor Market Decisions by Workers. American Economic Review, 66 (Dec.), 821-834. U.S.
Bureau of the Census, 1972. Employment Profiles of Selected Low-Income Areas, Final Report, PHC (3)-1, United States Summary. U.S. Government Printing Office, Washington, D.C. Warner, J.T., C.
Poindexter, Jr., and R.M. Fearn, 1980. Employer-Employee Interaction and the Duration of Unemployment. Quarterly Journal of Economics, 94(2), 211-233. Westergaard-Nielsen, Niels, 1980. Job Search in
a Professional Labor Market. Discussion Paper No. 606-80, Institute of Poverty Research, University of Wisconsin, Madison. Westergaard-Nielsen, Niels, 1981a. A Study of a Professional Labor Market:
Introduction and Data. Working Paper 81-2, Institute of Economics, Aarhus University, Denmark. Westergaard-Nielsen, Niels, 1981b. Estimation of the Reservation Wage. Working Paper 81-4, Institute of
Economics, Aarhus University, Denmark. Whittaker, E.T. and G.N. Watson, 1963. A Course of Modern Analysis, fourth edition. Cambridge University Press, Cambridge. Wilson, Charles A., 1980. A Model of
Job Search and Matching. Unpublished paper, Department of Economics, University of Wisconsin, Madison.
Author Index
Hall, R.E., 5, 16, 56, 57, 80, 132, 137, 140, 141 Hartog, J., 4 Haveman, R., 3, 55-56, 64, 74 Heckman, J., 9, 13, 37, 62, 86-88, 94-95, 160 Hines, A.G., 139, 141 Holt, C.C., 56, 80, 140 Howard, R.,
10 Hurd, M., 10 Isaacson, D., 9 Jencks, C., 148 Johnson, H.G., 5 Jovanovic, B., 13, 137, 140 Kakwani, N., 103 Karlin, S., 9, 10 Kiefer, N., 9, 62, 86-88, 95, 160 Kogan, M., 58 Lancaster, T., 15, 39,
62, 90 Lippman, S.A., 8 Lucas, R., 139-140, 155 Lundberg, S., 8 Lydall, H., 94, 101 Madsen, R.W., 9 Malinvaud, E., 8, 139 Mandelbrot, B., 93-94, 101 Marshall, A.W., 103 Marston, S., 59, 80 Maskin,
E., 141 McCall, J.J., 8 McCarthy, P., 58 Menderhausen, H., 4 Metcalf, C., 4 Miller, H.D., 64 Mincer, J., 3, 6, 72 Mine, H., 10 Mirer, T.W., 4, 55-56 Mood, A.M., 76, 93, 102, 103, 104, 148 Mortensen,
D.T., 9, 62, 82, 98, 137, 140 Neumann, G., 9, 62, 86-88, 95, 160 Oi, W., 4, 23 Olkin, I., 103 Osaki, S., 10 Perry, G.E., 80 Phelps, E.S., 141, 154 Piore, M.J., 128 Pissarides, C., 141 Poindexter,
Jr., C., 9, 34
Abowd, J., 8, 31, 33-34 Aitchison, J., 86, 89 Akerlof, G.A., 58 Altonji, A., 140 Andersen, P.K., 62 Ashenfelter, O., 8, 31, 33-34 Axell, B., 98 Azariadis, C., 4 Baily, M.N., 4, 56 Barro, R.J., 8
Beach, C.M., 4 Becker, G., 4, 23 Bellman, R.E., 10 Blumen, I., 58 Boes, D.C., 76, 93, 102, 103, 104, 148 Borjas, G., 13 Bourguignon, F., 74, 104 Bowen, W.G., 45, 48 Brown, J.A.C., 86, 89 Budd, E.C.,
4 Burdett, K., 9, 16, 62, 92 Chesher, A., 15, 39, 62, 90 Chiswick, B.R., 3 Clark, K., 55-58, 62 Coleman, T., 10 Cowell, F.A., 76, 104 Cox, D.R., 64 Davies, J.B., 103 Diamond, P.A., 141 Dreyfus, S.,
10 Ehrenberg, R.G., 59, 80 Fearn, R.M., 9, 34 Feldstein, M., 4 Feller, W., 65 Finegan, T.A., 45, 48 Fitoussi, J., 8, 139 Flinn, C.J., 9, 62, 86-88, 94-95, 160 Friedman, M., 5, 6 Gabriel, K.R., 64, 74
Garfinkel, I., 3, 55-56 Goldberger, A., 26-27, 92 Goodman, L., 58 Gordon, D.F., 139 Gordon, R.J., 31-32, 139 Gramlich, E.M., 4, 31, 32, 55 Graybill, F.A., 76, 93, 102, 103, 104, 148 Grossman, H., 8
Author Index
Polachek, S., 133 Prescott, E., 137, 140 Rapping, L., 139-140, 155 Reder, M.W., 4, 130 Ridder, G., 62 Rosen, S., 2 Rothschild, M., 98 Salant, S.W., 58 Sattinger, M., 2, 21, 33, 64, 66, 103, 141
Shorrocks, A.F., 76, 103, 104 Singer, B., 58, 62, 76 Smith, R.E., 59, 80 Spence, A.M., 4 Spilerman, S., 58, 76 Stigler, G., 9, 97 Stiglitz, J.E., 98 Summers, L., 55-58, 62 Takacs, L., 9 Taylor,
H.M., 9, 10 Telser, L.G., 98 Tinbergen, J., 2 Tobin, J., 141 Toikka, R.S., 9 Vanski, J.E., 80 Warner, J.T., 9, 34 Watson, G.N., 65 Westergaard-Nielsen, N., 9 Whiteman, T.C., 4 Whittaker, E.T., 65
Wilson, C., 17
Subject Index
Assignment (of workers to jobs), 9, 20, 100, 101, 141, 156 and regression towards the mean, 129, 137, 151-155 and segmentation, 128 in the study of earnings distributions, 2, 3, 85 Bessel functions
and coefficients, 65, 66 Brontosaurus theorem, 93 Business cycle, 23-24, 130-132 Choice, 1, 2, 5-7, 63 contribution to earnings inequality of, 104, 114-122 contribution to employment inequality of,
78-84 contribution to wage rate inequality of, 97, 101 in consumer choice problem, 25-26 in worker problem, 27-28 of reservation wage, 11-12 variation of, among groups, 129-130 Choice set frontier,
29-31, 104 Compensating wage differentials, 33-34 Discount rate, 13, 14, 15, 29 Distribution. See also Pareto distribution exponential, 90-92, 100, 108, 127 lognormal, 88-90, 100 mixture (or
contagious), 76, 92-93,94,97, 101 normal, 88-89, 94, 97, 101 one-tailed (single tailed), 93, 101, 157 Pareto-Levy, 93-94 truncated, 26-27, 85-92 uniform, 76 Distribution of accepted wages, 85, 134,
157 relation of, to wage offer and reservation wage distributions, 92-93 shape of, 93-97 Distribution of earnings. See also Inequality, Joint distribution of earnings and unemployment in relation to
employment and wage rate distributions, 102-103 observed, 102, 104-114 upper tail of, 93, 106 Distribution of employment compared with actual distribution, 80-82
cumulative function for, 65-66 effect of, on distribution of earnings, 104-108 inequality in, 70-76, 84 in Markov process, 64-70, 157 numerical examples for, 66-70 previous work on, 55-63 probability
density function for, 64-65 Distribution of new hires, 19-20 Distribution of reservation wages. See also Reservation wages, Inequality and grades of labor, 17 and nonemployment benefits, 38 evidence
on, 122-123 in relation to accepted wage rates, 92-93 Distribution of time spent unemployed. See Distribution of employment Distribution of wage offers. See also Pareto distribution and grade
requirements, 18-19, 92 estimation of, 86-88 generation of, 86 impacts of changes in, on reservation wage, 15-16 parameters inferred for, 108-114 relation of, to stocks and flows, 95-97 source of
dispersion in, 97-100 truncated, 85-92 upper tail of, 86, 93, 94 Distribution of well-being, 102, 123-124 Dual labor markets, 128-130 Duration dependence. See also Markov process. 13-14, 62, 76
Earning capacity, 3, 55, 124 Earnings as product of expected earnings and random variable, 114-116 Earnings differentials over business cycle, 131-132 Envelope theorem, 14-16, 124, 156 Expected
present discounted value, PDV, defined, 142-143 Expected wage and reservation wage, 15-16 expression for, 11, 19 Feedback line (or relation), 148, 150 Firm behavior, 20-24, 129, 156-157 Gini
coefficient. See Inequality measures
Subject Index
Grade (of labor), 17, 21 average, 21-22 marginal product for, 22-23 Grade differential, 23, 129, 131 and regression towards the mean, 149 Grade requirement, 17-18, 22, 23, 86 and assignment, 152 and
tests for underemployment, 146-147 Heterogeneity and decline in transition rates, 57-58 in nonemployment benefits, 38 in reservation wages, 98, 100 in transition rates, 58-62, 76-82 in workers and
firms, 98 Hierarchies, 94 Human capital, 3, 4, 6 Implicit labor contracts, 23, 56 Inequality, 102-127 as measured by reservation wages, 122-124 differences in importance of, by source, 2, 124, 126 in
earnings, observed, 104-114 in earnings, related to employment and wage rate inequality, 103 in reservation wages, 108-114 in unemployment benefits, 38 sources of, 2, 108-114, 124-127 Inequality
measures coefficient of variation, 76-78, 103, 106 coefficient of variation, decline in, 89 decomposable, 74-75, 104 Gini coefficient, 74-75 variance of logarithms, 89, 103, 106 Job market signaling,
4 Job matching, 13, 17, 137, 140, 155 Job search and assignment, 2-3, 9, 20, 144, 151-155 and choice, 6, 7, 26-27 and firm behavior, 20-24 and inequality, 1, 119, 122, 158 and nature of unemployment,
7, 8, 137, 138, 140, 141 and reservation wage property, 10 in Markov model, 9-14 in standard model, 8, 140 second order condition for, 12 Job vacancies, 9 distribution of, 18-20, 86 stocks and flows
of, 95-97 Joint distribution of employment and earnings, 1, 104-106, 158 Joint distribution of reservation wages and grades. See Distribution of reservation wages Joint distribution of wage offers
and grade requirements. See Distribution of wage offers Labor force participation, 4, 16, 84 and estimation of unemployment valuations, 45-49
full, as norm, 81 Labor market conditions, impacts of, 14-16 dual, 128-130 overlapping, 128, 156 segmentation, 62, 128 stocks and flows in, 95-97 submarkets in, 98 Lorenz curve, 74 Luck. See also
Random outcomes. 1, 2, 97, 126 Marginal product of worker, 22-23, 100, 144, 157 Markov process (or model), 7, 8, 24, 64-70, 74, 76, 80 and job search model, 9-14 and Markov property, 10, 13 and
stationarity, 10 bias in estimates for, 58 criticism of, 55, 57, 62, 63, 82 duration dependence in, 13-14, 62, 76 second order conditions for, 12 state dependence in, 13-14 Minimum wage, 31, 100,
130, 132 Mover-stayer model, 45, 58, 76 Natural unemployment rate, 7, 133, 140 Net marginal product, NMP, defined, 142-143 Nonemployment benefits, 62, 100, 122, 127 and inequality, 38, 124 and
reservation wage, 15, 25 and search intensity, 16 and unemployment valuations, 31, 51, 53 Offer rate fJ, 9, 19, 87 Out of work, 80, 108 Overemployment, 139 Pareto distribution and recoverability, 88
and wage offers, 90-92 arguments for, 93-94 constancy of ratio wo/we for, 90, 157 estimates of parameter α for, 108-114 variance of logarithms for, 119 Phillips curve, 138 Present value of future
contributions, LMP, defined, 142-143 Quits, 98, 129 and assignment, 154 Random outcomes, 2, 4, 84, 100, 126, 127 as variable E, 114-122 Regression towards the mean, 129, 137, 138, 147-149 Reservation
wage property, 10 Reservation wage. See also Distribution of reservation wages, Inequality and tests for underemployment, 145-146 as measure of inequality, 122-124 as source of inequality, 124-126
choice of, 10-12
Subject Index
expression for, 12, 14, 156 second order condition for, 12 Risk aversion. See also Random outcomes, uncertainty. 5, 16 Search congestion, 141 Search costs for firm, 21-24, 129 for worker, 11, 25,
100, 151 Search distortion, 100, 155 Search efficiency, 132, 137 Search intensity, 16 Self-selection, 37, 128, 156 Tastes and preferences (for work versus leisure), 52 Time spent unemployed,
distribution of. See Distribution of employment Trades, indirect, 142 Transition rates, 9-10, 70 and reservation wage, 14-15 and state dependence, 13-14 declining, 62 estimation of distribution,
using duration data, 58-62 heterogeneous, 55, 57-58, 64, 70, 76-78, 82 Uncertainty, 16, 114, 119, 126 Underemployment, 137, 138 chronic, 137, 149-155 definitions for, 142-144 in Lucas-Rapping model,
139 tests for, 145-147, 155 Unemployment. See also Distribution of employment, Inequality, Valuation of unemployment and inequality, previous estimates, 3-5 as constraint, 8-9, 33 benefit, 39 choice
in, 7 compensating wage differentials for, 33-34 compensation, 25, 34 duration of, 56, 82 economic role of, 138-141 effect of, on reservation wage, 15 efficient, 5 voluntary, 7, 137, 138, 140, 141
Unemployment trade-off, premium and cost. See Valuation of unemployment Vacancies. See Job vacancies Valuation of unemployment. See also Unemployment. 6, 12-13, 25-54 aggregate approach for, 38-45
and minimum wage, 31 calculation of, 39, 44 cost, defined, 28 cross-section estimates for, 49-50 different values of, among workers, 29 direct estimates for, 34-38 implicit, 25, 30 labor force
participation estimates for, 45-49 premium, defined, 28
previous estimates for, 31-34 reasons for high values of, 52-53 trade-off, defined, 28 Value of being employed, L(w), 10-12 Value of being unemployed, M(w.), 10-12 Wage, expected. See Expected wage
Wage offers. See also Distribution of wage offers and tests for underemployment, 146 previous estimates of, 86-88 stocks and flows of, 95-97 Wage rates, unemployment-compensated, 122-124, 127 Wage
resistance, 133 Wage rigidity, 132-133 Wages, reservation. See Reservation wages
Lecture Notes in Economics and Mathematical Systems Managing Editors: M. Beckmann, W. Krelle This series reports new developments in (mathematical) economics, econometrics, operations research, and
mathematical systems, research and teaching - quickly, informally and at a high level. A selection: Editors: S. Osaki, Hiroshima
University, Higashi-Hiroshima; Y.Hatoyama, Senshu University, Kawasaki, Japan
Volume 235
Stochastic Models in Reliability Theory
Proceedings of a Symposium Held in Nagoya, Japan, April 23-24, 1984 1984. VII, 212 pages. ISBN 3-540-13888-9
This book contains the proceedings of a symposium on "Stochastic Models in Reliability Theory" which was held in Nagoya, Japan, in April 1984. The 14 contributions to the volume deal with coherent
structure theory, maintenance and replacement problems, reliability and availability modeling, fault-tolerant computing systems, software reliability modeling and Markovian deterioration and
replacement modeling. Important stochastic models are developed from basic theory to practical applications. B.C. Eaves, Stanford University, Stanford, CA, USA
Volume 234
A Course in Triangulations for Solving Equations with Deformations 1984. III, 302 pages. ISBN 3-540-13876-5
This book offers, for the first time, an organized presentation of such constructions: it begins with a general theory of triangulations and a full development of the very important Freudenthal triangulation, before presenting a careful progression of triangulations and subdivisions leading to a variable rate refining triangulation. G. Wagenhals, University of Heidelberg, Germany
Volume 233
The World Copper Market Structure and Econometric Model 1984. XI, 190 pages. ISBN 3-540-13860-9 Contents: Introduction. - Structure of the World Copper Market: Production. Consumption. Trade and
Prices. Reserves and Resources. - Econometric Model of the World Copper Market: Copper Market Models. Mine Production and Capacities. Demand. Other Equations. Historical Dynamic Solution and
Sensitivity Analysis. - Appendices and Bibliography.
L. Bauwens, Louvain-La-Neuve,
Volume 232
Bayesian Full Information Analysis of Simultaneous Equation Models Using Integration by Monte Carlo 1984. VI, 114 pages. ISBN 3-540-13384-4
The author of this volume deals with Bayesian full information analysis of the simultaneous equation model (SEM) in econometrics. The coverage ranges as far as the design of automatic procedures which allow estimation of an SEM with an implemented, user-friendly computer package requiring little programming effort. G.F. Newell, University of
California, Berkeley, CA, USA
Volume 231
The M/M/∞ Service System with Ranked Servers in Heavy Traffic With a Preface by F. Ferschl 1984. XI, 126 pages. ISBN 3-540-13377-1
Contents: Introduction. - Limit properties for a i'> I. - Descriptive properties of the evolution. - The overflow distribution. - Joint distributions. - A diffusion equation. - Transient properties.
- Equilibrium properties of the diffusion equation. - Equivalent random method. - Index of Notation.
Please askfor more information
Springer-Verlag Berlin Heidelberg New York Tokyo
Lectures on Schumpeterian Economics Schumpeter Centenary Memorial Lectures, Graz 1983 Editor: C. Seidl, University of Graz, Austria With contributions by K. Acham, L. Beinsen, P. Hammond, P. Schachner-Blazizek, C. Seidl, P. Swoboda, G. Tichy 1984. X, 219 pages. ISBN 3-540-13290-2 What has Schumpeter, the most prominent supply-sider, to offer for the pressing problems of today? What does his economics offer, if translated into a modern garb? The present volume tries to answer such questions. Six distinguished economists and one sociologist from the universities of Graz and Stanford joined to deliver thirteen lectures covering all important facets of Schumpeterian Economics in the light of its modern relevance.
Studies in Labor Market Dynamics Proceedings of a Workshop on Labor Market Dynamics Held at Sandbjerg, Denmark, August 24-28, 1982 Editors: G.R. Neumann, N.C. Westergård-Nielsen 1984. X, 285 pages. (Studies in Contemporary Economics, Volume 11). ISBN 3-540-13942-7 This book contains a number of papers on theoretical and empirical aspects of the application of longitudinal data, presented at a conference arranged by the University of Aarhus. The papers include the development of various statistical survival-time models describing transitions between jobs and between states. They also include
econometric applications which are used to estimate transition rates, participation rates, and reservation wages for various groups of workers. Other applications cover an assessment of Swedish labor
market policies and an econometric analysis of the gains of job mobility. A thorough description of the Danish longitudinal data set constructed from public registers closes the volume.
The Economics of the Shadow Economy
Springer-Verlag Berlin Heidelberg New York Tokyo
Proceedings of the International Conference on the Economics of the Shadow Economy Held at the University of Bielefeld, West Germany, October 10-14, 1983 Editors: W. Gaertner, A. Wenig 1985. XIV, 401 pages. (Studies in Contemporary Economics, Volume 15). ISBN 3-540-15095-1 The twenty-four refereed papers in this volume address the most important issues, such as the various attempts to measure the shadow economy, welfare aspects of tax evasion, problems of economic policy in an economy with a large underground sector, the relative importance of household production, and the role the shadow
economy plays in socialist countries.
E-Book Information
• Year: 1985
• Edition: 1
• Pages: 178
• Pages In File: 186
• Language: English
• Identifier: 978-3-642-70549-6,978-3-642-70547-2
• Doi: 10.1007/978-3-642-70547-2
• Org File Size: 11,504,019
• Extension: pdf
• Tags: Economics general
• Toc: Front Matter....Pages I-XIV
Introduction....Pages 1-7
Search in Labor Markets....Pages 8-24
The Valuation of Unemployment....Pages 25-54
The Distribution of Employment....Pages 55-84
The Distribution of Wage Rates....Pages 85-101
Inequality....Pages 102-127
The Operation of Labor Markets....Pages 128-136
Chronic Underemployment And Regression Towards the Mean....Pages 137-155
Summary....Pages 156-161
Back Matter....Pages 163-175
John’s boat hire business generates
Question 1
1. John’s boat hire business generates $200,000 per annum for the next 10 years. Given an interest rate of 10% per year, would he be willing to sell the business today for $1,800,000?
2. While Andy was a student at Albury University, he borrowed $25,000 in student loan at an annual interest rate of 7%. If he repays $2,000 per year, calculate the period required (to the nearest
year) to pay off his debt.
3. Benjamin receives a payment of $120,000 from his grandmother’s estate. The entire amount is invested today at an interest rate of 8% per year. He expects to receive 100 equal monthly payments
from the investment; the first payment is expected in one year. Find the size of the payments.
Question 2
Your parents are interested in getting advice on what is the best outcome at the end of a six-year period for investing a sum of money in the following options.
1. Invest $6,000 as a lump sum today.
2. Invest $1,000 at the end of each of the next five years.
3. Invest a lump sum of $2,500 today and $800 at the end of the next five years.
4. Invest $1,200 at the end of year one, end of year four and end of year five.
Given an interest rate of 8% per year and compounding of interest occurs at the end of each year, what is the preferred option? Justify all relevant calculations.
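One way to compare the four options is to compound every cash flow forward to the end of year six. The Python sketch below is an illustration of that setup (assuming end-of-year payments and 8% annual compounding, as stated):

```python
# Future value of each option at the end of year 6.
RATE, HORIZON = 0.08, 6

def fv(cashflows):
    """cashflows: list of (year, amount); compound each to end of HORIZON."""
    return sum(amt * (1 + RATE) ** (HORIZON - yr) for yr, amt in cashflows)

options = {
    1: [(0, 6000)],                               # lump sum today
    2: [(t, 1000) for t in range(1, 6)],          # $1,000 at end of years 1-5
    3: [(0, 2500)] + [(t, 800) for t in range(1, 6)],
    4: [(1, 1200), (4, 1200), (5, 1200)],
}
values = {k: fv(cf) for k, cf in options.items()}
best = max(values, key=values.get)
```

Under these assumptions the $6,000 lump sum (option 1) comes out ahead at roughly $9,521, followed by option 3.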
Question 3
BTC Limited, an Australian-based air conditioner manufacturer is evaluating two overseas locations for a proposed expansion of production facilities at one site in Canada and the other in Thailand.
The likely future return from investment in each site depends to a great extent on future economic conditions. The scenarios are postulated, and the investment return from each investment is
estimated under each scenario. The returns with their estimated probabilities are shown below:
Rate of return for Canada (%) Rate of return for Thailand (%) Probability
10 25 0.5
20 15 0.3
25 25 0.2
a. Calculate the expected value and standard deviation of the investment return in each location. Discuss the relative dispersion of the returns. (5 marks)
b. Assuming a correlation coefficient of -0.3 between the returns from the two locations, what would be the expected return and the standard deviation of the following investment strategies?
i. Allocating 50 per cent of available funds to the site in Canada and 50 per cent to the Thailand site.(4 marks)
ii. Allocating 75 per cent of the funds to the site in Canada and 25 per cent to the Thailand site. (4 marks)
Which of the two strategies would you recommend for BTC? Discuss your recommendation. (2 marks)
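Parts (a) and (b) use the standard discrete-scenario mean/variance and two-asset portfolio formulas. The Python sketch below applies them to the figures given in the question (it is an illustration, not the purchased solution):

```python
# Scenario probabilities and returns from the question's table.
probs = [0.5, 0.3, 0.2]
canada = [10, 20, 25]
thailand = [25, 15, 25]

def mean_sd(returns):
    """Expected value and standard deviation over the scenarios."""
    m = sum(p * r for p, r in zip(probs, returns))
    v = sum(p * (r - m) ** 2 for p, r in zip(probs, returns))
    return m, v ** 0.5

mC, sC = mean_sd(canada)     # 16.0, ~6.24
mT, sT = mean_sd(thailand)   # 22.0, ~4.58

def portfolio(wC, rho=-0.3):
    """Two-asset portfolio mean and SD with weight wC in Canada."""
    wT = 1 - wC
    m = wC * mC + wT * mT
    v = (wC * sC) ** 2 + (wT * sT) ** 2 + 2 * wC * wT * rho * sC * sT
    return m, v ** 0.5

m50, s50 = portfolio(0.5)    # 19.0, ~3.27
m75, s75 = portfolio(0.75)   # 17.5, ~4.48
```

On these numbers the 50/50 mix both earns more (19% vs 17.5%) and carries less risk (σ of about 3.27 vs 4.48) than the 75/25 mix, thanks to the negative correlation between the two sites.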
Vectors Question #1 - The Culture SG
Since I’m on the topic of Vectors for most of my JC1 classes, I thought I’d share an interesting question regarding it. Yes, some students do tell me the questions aren’t really A’levels, but I think it does make you think, and that is very important in learning.
Question: Can you place 5 points in three-dimensional space such that they are pairwise equidistant?
The answer is no. Start with three pairwise equidistant points with a common distance $d$ between pairs. These are the vertices of an equilateral triangle, and they define a plane. Any fourth point at distance $d$ from all three must lie on the line through the triangle's centroid perpendicular to that plane, at a height of $\sqrt{2/3}\,d$ above or below it, so there are only two candidate positions. A fifth point must satisfy the same condition, so it can only be the other of these two candidates; but the two candidates are $2\sqrt{2/3}\,d \approx 1.63d$ apart from each other, not $d$. Hence five pairwise equidistant points are impossible.
If you figured this out, or understand the explanation, here is food for thought: in 4-dimensional space, the above becomes possible 🙂
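The argument is easy to verify numerically. The short Python sketch below (taking d = 1) builds the equilateral triangle, its centroid, and the two apex candidates:

```python
# Numerical check: four pairwise-equidistant points exist in 3D, but the
# two candidate positions for the apex are too far apart for a fifth point.
import math

d = 1.0
A, B = (0.0, 0.0, 0.0), (d, 0.0, 0.0)
C = (d / 2, d * math.sqrt(3) / 2, 0.0)            # equilateral triangle
centroid = (d / 2, d * math.sqrt(3) / 6, 0.0)
h = d * math.sqrt(2 / 3)                           # apex height above the plane
P_up = (centroid[0], centroid[1], h)
P_down = (centroid[0], centroid[1], -h)

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Each apex is at distance d from A, B and C, but the two apexes are
# 2*sqrt(2/3)*d ≈ 1.633d apart, so they cannot both join the configuration.
```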
concepts | Jeremy Aaron Heminger
This exercise is based on the solution found at Google Life's YouTube channel. Admittedly, I was pulling my hair out for a bit on this one.
The question asks the programmer to find an efficient way to identify two numbers in a sequence that add up to a given target number.
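For reference, the usual efficient approach is a single pass with a hash map from value to index, giving O(n) time. A Python sketch of that standard technique (my own, not necessarily the video's exact solution):

```python
# Single-pass two-sum: for each value, check whether its complement
# (target - value) has already been seen.
def two_sum(nums, target):
    seen = {}  # value -> index
    for i, v in enumerate(nums):
        if target - v in seen:
            return seen[target - v], i  # indices of the matching pair
        seen[v] = i
    return None  # no pair sums to target
```

For example, `two_sum([2, 7, 11, 15], 9)` returns `(0, 1)` because 2 + 7 = 9.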
Class GeometryEngine
Performs geometric operations such as spatial relationship tests, reprojections, shape manipulations, topological query and analysis operations on Geometry objects.
Assembly: Esri.ArcGISRuntime.dll
public static class GeometryEngine
GeometryEngine generally operates in two dimensions; operations do not account for z-values unless documented as such for a specific method (for example Project(Geometry, SpatialReference) transforms
z-values in some cases).
Geodetic methods are better suited to data that have a geographic spatial reference (see IsGeographic), especially for large-area, small-scale use, while planar methods are suitable for data that have a projected coordinate system, especially for local, large-scale areas. Geodetic methods indicate this in the name, for example BufferGeodetic(Geometry, Double, LinearUnit, Double, GeodeticCurveType).
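The planar-versus-geodetic distinction in the remarks can be illustrated without the ArcGIS API at all. The standalone Python sketch below computes a great-circle (haversine) distance, the kind of ground measurement the geodetic methods approximate; the 6,371 km mean Earth radius is an assumption of this illustration:

```python
# Great-circle distance between two lat/lon points (haversine formula).
import math

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# A quarter of the equator: (0, 0) to (0, 90). The raw coordinate difference
# is just "90 degrees", which says nothing about ground distance; the
# geodesic measurement gives roughly 10,008 km.
quarter = haversine_km(0, 0, 0, 90)
```

This is why, for data in a geographic spatial reference, the geodetic variants (DistanceGeodetic, LengthGeodetic, BufferGeodetic, ...) are preferred over their planar counterparts.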
Name Description
Area(Geometry) Gets the simple area for the Geometry passed in. This is a planar measurement using 2D Cartesian mathematics to compute the area. Use AreaGeodetic(Geometry, AreaUnit, GeodeticCurveType) for geodetic measurement.
AreaGeodetic(Geometry, AreaUnit, GeodeticCurveType) Gets the geodesic area of a polygon.
AutoComplete(IEnumerable<Polygon>, Fills the closed gaps between polygons using polygon boundaries and polylines as the boundary for the new polygons.
Boundary(Geometry) Calculates the boundary of the given geometry.
Buffer(Geometry, Double) Creates a buffer polygon at the specified distance around the given geometry. This is a planar buffer operation. Use BufferGeodetic(Geometry, Double, LinearUnit, Double, GeodeticCurveType) to produce geodetic buffers.
Buffer(IEnumerable<Geometry>, IEnumerable<Double>, Boolean) Creates and returns a buffer relative to the given geometries. This is a planar buffer operation. Use BufferGeodetic(Geometry, Double, LinearUnit, Double, GeodeticCurveType) to produce geodetic buffers.
BufferGeodetic(Geometry, Double, LinearUnit, Double, GeodeticCurveType) Calculates the geodesic buffer of a given geometry.
BufferGeodetic(IEnumerable<Geometry>, IEnumerable<Double>, LinearUnit, Double, GeodeticCurveType, Boolean) Creates and returns a geodesic buffer or buffers relative to the given collection of geometries.
Clip(Geometry, Envelope) Constructs the polygon created by clipping geometry by envelope.
CombineExtents(Geometry, Geometry) Returns the envelope of the two given geometries.
CombineExtents(IEnumerable<Geometry>) Returns the envelope of geometries in the given collection.
Contains(Geometry, Geometry) Returns true if geometry1 contains geometry2.
ConvexHull(Geometry) Calculates the minimum bounding geometry (convex hull) that completely encloses the given geometry.
ConvexHull(IEnumerable<Geometry>, Calculates the minimum bounding geometry (convex hull) for the geometries in the given collection.
CreatePointAlong(Polyline, Double) Returns the point at the given distance along the line.
Crosses(Geometry, Geometry) Returns true if geometry1 crosses geometry2.
Cut(Geometry, Polyline) Cut the 'geometry' with the 'cutter'
Densify(Geometry, Double) Densifies the input geometry by inserting additional vertices along the geometry at an interval defined by maxSegmentLength.
DensifyGeodetic(Geometry, Double, Densifies the input geometry by creating additional vertices along the geometry, using a geodesic curve.
LinearUnit, GeodeticCurveType)
Difference(Geometry, Geometry) Constructs the set-theoretic difference between two geometries.
Disjoint(Geometry, Geometry) Returns true if geometry1 is not within geometry2.
Distance(Geometry, Geometry) Measures the simple Euclidean distance between two geometries. This is a planar measurement using 2D Cartesian mathematics to calculate the distance in the same coordinate space as the inputs. Use DistanceGeodetic(MapPoint, MapPoint, LinearUnit, AngularUnit, GeodeticCurveType) for geodetic measurement.
DistanceGeodetic(MapPoint, MapPoint, LinearUnit, AngularUnit, GeodeticCurveType) Calculates the geodesic distance between two given points and calculates the azimuth at both points for the geodesic curve that connects the points.
EllipseGeodesic(GeodesicEllipseParameters) The function returns a piecewise approximation of a geodesic ellipse (or geodesic circle, if semiAxis1Length = semiAxis2Length). Constructs a geodesic ellipse centered on the specified point. If this method is used to generate a polygon or a polyline, the result may have more than one path, depending on the size of the ellipse and its position relative to the horizon of the coordinate system. When the method generates a polyline or a multipoint, the result vertices lie on the boundary of the ellipse. When a polygon is generated, the interior of the polygon is the interior of the ellipse; however, the boundary of the polygon may contain segments from the spatial reference horizon, or from the GCS extent.
Equals(Geometry, Geometry) Tests if two geometries are equal (have equivalent spatial reference systems, same geometry type, and same points).
Extend(Polyline, Polyline, Extends a polyline using another polyline as the extender.
FractionAlong(Polyline, MapPoint, Double) Finds the location on the line nearest the input point, expressed as the fraction along the line's total geodesic length, if the point is within the specified distance from the closest location on the line.
Generalize(Geometry, Double, Boolean) Generalizes the given geometry by removing vertices based on the Douglas-Peucker algorithm.
Intersection(Geometry, Geometry) Calculates the intersection of two geometries.
Intersections(Geometry, Geometry) Calculates the intersection of two geometries.
Intersects(Geometry, Geometry) Tests if two geometries intersect each other.
IsSimple(Geometry) Test if the geometry is topologically simple.
LabelPoint(Polygon) Calculates an interior point for the given polygon. This point can be used by clients to place a label for the polygon.
Length(Geometry) Gets the length for a specified Geometry. This is a planar measurement using 2D Cartesian mathematics to compute the length in the same coordinate space as the inputs. Use LengthGeodetic(Geometry, LinearUnit, GeodeticCurveType) for geodetic measurement.
LengthGeodetic(Geometry, LinearUnit, GeodeticCurveType) Gets the geodesic length for the Geometry passed in.
Move(Geometry, Double, Double) Moves the provided geometry by the specified distances along the x-axis and y-axis.
MoveGeodetic(IEnumerable<MapPoint>, Double, LinearUnit, Double, AngularUnit, GeodeticCurveType) Moves each map point in the read-only collection by a geodesic distance.
NearestCoordinate(Geometry, MapPoint) Determines the nearest point in the input geometry to the input point using a simple planar measurement.
NearestCoordinateGeodetic(Geometry, MapPoint, Double, LinearUnit) Determines the nearest point in the input geometry to the input point, by using a shape-preserving geodesic approximation of the input geometry.
NearestVertex(Geometry, MapPoint) Returns a ProximityResult that describes the nearest vertex in the input geometry to the input point.
NormalizeCentralMeridian(Geometry) Folds the geometry into a range of 360 degrees. This may be necessary when wrap around is enabled on the map. If Geometry is an Envelope then a Polygon will be returned, unless the Envelope is empty, in which case an empty Envelope will be returned.
Offset(Geometry, Double, OffsetType, Double, Double) Returns an offset version of the input geometry.
Overlaps(Geometry, Geometry) Returns true if geometry1 overlaps geometry2.
Project(Geometry, SpatialReference) Projects the given geometry from its current spatial reference system into the given spatial reference system.
Project(Geometry, SpatialReference, Projects the given geometry from its current spatial reference system into the given output spatial reference system, applying the datum transformation
DatumTransformation) provided.
Compares the spatial relationship of two geometries. Can compare Interior, Boundary and Exterior of two geometries based on a DE-9IM encoded string. This must
Relate(Geometry, Geometry, String) be 9 characters long and contain combinations only of these characters: TF*012
RemoveM(Geometry) Return a copy of the given geometry with its m-values removed.
RemoveZ(Geometry) Return a copy of the given geometry with its z-coordinate removed.
RemoveZAndM(Geometry) Return a copy of the given geometry with its z-coordinate and m-values removed.
Reshape(Multipart, Polyline) Reshape polygons or polylines with a single path polyline.
Rotate(Geometry, Double, MapPoint) Rotates the geometry by the specified angle of rotation around the provided origin point.
Scale(Geometry, Double, Double, Scales the given geometry by the specified factors from the specified origin point.
The function returns a piecewise approximation of a geodesic sector. If this method is used to generate a polygon or a polyline, the result may have more than
SectorGeodesic one path, depending on the size of the sector and its position relative to the horizon of the coordinate system. When the method generates a polyline or a
(GeodesicSectorParameters) multipoint, the result vertices lie on the boundary of the ellipse. When a polygon is generated, the interior of the polygon is the interior of the sector,
however the boundary of the polygon may contain segments from the spatial reference horizon, or from the GCS extent.
SetM(Geometry, Double) Return a copy of a geometry with the supplied M value.
SetZ(Geometry, Double) Return a copy of a geometry with the supplied z-coordinate.
SetZAndM(Geometry, Double, Double) Return a copy of a geometry with the supplied z-coordinate and m-value.
Simplify(Geometry) Simplifies the given geometry to make it topologically consistent according to its geometry type.
SymmetricDifference(Geometry, Performs the Symmetric difference operation on the two geometries.
Touches(Geometry, Geometry) Returns true if geometry1 touches geometry2.
Union(Geometry, Geometry) The union operation constructs the set-theoretic union of the geometries in the input array.
Union(IEnumerable<Geometry>) Calculates the union of a collection of geometries
Within(Geometry, Geometry) Returns true if geometry1 is within geometry2.
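The DE-9IM pattern accepted by Relate has semantics that are independent of ArcGIS: each of the 9 characters constrains one cell of the intersection matrix of the two geometries. The sketch below illustrates those pattern rules in plain Python; `de9im_matches` is a hypothetical helper written for illustration, not part of the ArcGIS Runtime API.

```python
def de9im_matches(matrix: str, pattern: str) -> bool:
    """Check a 9-character DE-9IM intersection matrix against a pattern.

    Each matrix cell holds the dimension of one intersection
    (Interior/Boundary/Exterior of geometry 1 vs geometry 2):
    'F' (empty), '0' (point), '1' (line), or '2' (area).
    Pattern cells: 'T' = any non-empty, 'F' = empty,
    '*' = don't care, '0'/'1'/'2' = exact dimension.
    """
    if len(matrix) != 9 or len(pattern) != 9:
        raise ValueError("DE-9IM strings must be exactly 9 characters")
    for cell, want in zip(matrix, pattern):
        if want == "*":
            continue  # this cell is unconstrained
        if want == "T":
            if cell == "F":
                return False  # required non-empty, but intersection is empty
        elif cell != want:
            return False  # exact dimension (or 'F') required
    return True

# "T*F**F***" is the classic 'within' pattern: the interiors intersect
# and no part of the first geometry lies in the second's exterior.
print(de9im_matches("0FFFFF212", "T*F**F***"))  # point inside a polygon -> True
print(de9im_matches("FF2FF1212", "T*F**F***"))  # interiors disjoint -> False
```

A Relate call with pattern "T*F**F***" is therefore equivalent in effect to the dedicated Within predicate listed above.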
Applies to
Target               Versions
.NET Standard 2.0    100.3 - 200.5
.NET                 100.13 - 200.5
.NET Windows         100.13 - 200.5
.NET Android         200.0 - 200.5
.NET iOS             200.0 - 200.5
.NET Framework       100.0 - 200.5
Xamarin.Android      100.0 - 100.15
Xamarin.iOS          100.0 - 100.15
UWP                  100.0 - 200.5
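As an aside on the Length vs. LengthGeodetic entries listed earlier: planar length is computed with 2D Cartesian math in the input coordinate space (so for WGS84 data the result is in decimal degrees), while geodesic length follows the curved surface of the Earth. The self-contained Python sketch below illustrates the gap using a spherical haversine approximation; it is a conceptual illustration only, not the ArcGIS Runtime implementation, which works on the ellipsoid using the requested GeodeticCurveType.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean spherical radius; ArcGIS uses an ellipsoid

def planar_length(points):
    """Sum of straight-line segment lengths in the input coordinate space."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def haversine_m(a, b):
    """Great-circle distance in meters between two (lon, lat) points."""
    lon1, lat1, lon2, lat2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(h))

def geodesic_length_m(points):
    """Approximate geodesic length in meters of a (lon, lat) polyline."""
    return sum(haversine_m(a, b) for a, b in zip(points, points[1:]))

line = [(0.0, 60.0), (10.0, 60.0)]  # 10 degrees of longitude at 60 N
print(planar_length(line))                    # 10.0 -- planar "length" in degrees
print(round(geodesic_length_m(line) / 1000))  # 555 (km): meridians converge,
                                              # so this is half the equatorial span
```

This is why the table steers callers toward LengthGeodetic whenever real-world distances are needed from geographic coordinates.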